Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved. Find more case studies here on Techdirt and on the TSF website.

Creating Family Friendly Chat More Difficult Than Imagined (1996)

from the the-kids-will-find-a-way dept

Summary: Creating family friendly environments on the internet presents some interesting challenges that highlight the trade-offs in content moderation. One of the founders of Electric Communities, a pioneer in early online communities, gave a detailed overview of the difficulties in trying to build such a virtual world for Disney that included chat functionality. He described being brought in by Disney alongside someone from a kids’ software company, Knowledge Adventure, which had built an online community in the mid-90s called “KA-Worlds.” Disney wanted to build a virtual community space, HercWorld, to go along with the movie Hercules. After reviewing Disney’s requirements for an online community, they realized chat would be next to impossible:

Even in 1996, we knew that text-filters are no good at solving this kind of problem, so I asked for a clarification: “I’m confused. What standard should we use to decide if a message would be a problem for Disney?”

The response was one I will never forget: “Disney’s standard is quite clear:

No kid will be harassed, even if they don’t know they are being harassed.”…

“OK. That means Chat Is Out of HercWorld, there is absolutely no way to meet your standard without exorbitantly high moderation costs,” we replied.

One of their guys piped up: “Couldn’t we do some kind of sentence constructor, with a limited vocabulary of safe words?”

Before we could give it any serious thought, their own project manager interrupted, “That won?t work. We tried it for KA-Worlds.”

“We spent several weeks building a UI that used pop-downs to construct sentences, and only had completely harmless words, the standard parts of grammar and safe nouns like cars, animals, and objects in the world.”

“We thought it was the perfect solution, until we set our first 14-year-old boy down in front of it. Within minutes he’d created the following sentence:

I want to stick my long-necked Giraffe up your fluffy white bunny.

In that initial 1996 project, chat was abandoned, but as they continued to develop HercWorld, they quickly realized that they still had to worry about chat, even without a chat feature:

It was standard fare: Collect stuff, ride stuff, shoot at stuff, build stuff… Oops, what was that last thing again?

“…kids can push around Roman columns and blocks to solve puzzles, make custom shapes, and buildings,” one of the designers said.

I couldn’t resist, “Umm. Doesn’t that violate the Disney standard? In this chat-free world, people will push the stones around until they spell Hi! or F-U-C-K or their phone number or whatever. You’ve just invented Block-Chat™. If you can put down objects, you’ve got chat. We learned this in Habitat and WorldsAway, where people would turn 100 Afro-Heads into a waterbed.” We all laughed, but it was that kind of awkward laugh that you know means that we’re all probably just wasting our time.
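The KA-Worlds failure is easy to reproduce: meaning lives in the combinations, not in the individual words, so vetting a vocabulary does not vet the sentences it can build. A minimal sketch (the word lists and template are entirely hypothetical, not KA-Worlds' actual UI) shows how fast the sentence space outgrows any pre-approval effort:

```python
from itertools import product

# A hypothetical "safe" vocabulary, in the spirit of the KA-Worlds pop-downs.
VERBS = ["pet", "ride", "stick", "push"]
ADJECTIVES = ["long-necked", "fluffy", "white", "little"]
NOUNS = ["giraffe", "bunny", "car", "block"]

# One fixed sentence template driven entirely by pop-down selections.
TEMPLATE = "I want to {v} my {a1} {n1} up your {a2} {a3} {n2}."

def all_sentences():
    """Enumerate every sentence the constructor can produce."""
    for v, a1, n1, a2, a3, n2 in product(VERBS, ADJECTIVES, NOUNS,
                                         ADJECTIVES, ADJECTIVES, NOUNS):
        yield TEMPLATE.format(v=v, a1=a1, n1=n1, a2=a2, a3=a3, n2=n2)

# Twelve harmless words yield 4,096 sentences -- and the infamous one
# is among them, even though every individual word passed review.
sentences = set(all_sentences())
print(len(sentences))
print("I want to stick my long-necked giraffe up your fluffy white bunny."
      in sentences)
```

The point of the sketch is the count: a pre-approval regime would have to vet every combination, and that space grows geometrically with each additional pop-down slot.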

Decisions for family-friendly community designers:

  • Is there a way to build a chat that will not be abused by clever kids to reference forbidden content (e.g., swearing, innuendo, harassment, abuse)?
  • Can you build a chat that does not require universal moderation and pre-approval of everything that users will say?
  • Are there ways in which kids will still be able to communicate with others even without an actual chat feature?
  • How much of a “community” do you have with no chat or extremely limited chat?

Questions and policy implications to consider:

  • Is it possible to create an online family friendly environment that will work?
    • If so, how do you prevent abuse?
    • If not, how do you handle the fact that kids will get online whether they are allowed to or not?
  • How do you incentivize companies to create spaces that actually remain as child-friendly as possible?
  • If “the kids will always find a way” to get around limitations, does it make sense to hold the companies themselves responsible?
  • Should family friendly environments require full-time monitoring, or pre-vetting of any usage?

Resolution: Disney eventually abandoned the idea of HercWorld due to all of the issues raised. However, the interview highlights the fact that they tried again a couple of years later, with an online chat where users could only pull from a pre-selected list of sentences, but it did not have much success:

“The Disney Standard” (now a legend amongst our employees) still held. No harassment, detectable or not, and no heavy moderation overhead.

Brian had an idea though: Fully pre-constructed sentences, dozens of them, easy to access. Specialize them for the activities available in the world. Vaz Douglas, our project manager working with Zoog, liked to call this feature “Chatless Chat.” So, we built and launched it for them. Disney was still very tentative about the genre, so they only ran it for about six months; I doubt it was ever very popular.

The same interview notes that Disney tried once again in 2002 with a new world called “ToonTown”, with pulldown menus that allowed you to construct very narrowly tailored speech within the chat to try to avoid anything that violated the rules.

As the story goes, Disney still had problems with this. To make sure people were only communicating with people they knew in real life, one of the restrictions in this new world was that you had to have a secret code from any user you wished to chat with. The thinking was that parents would print these out for kids who could then share them with their friends in real life, and they could link up and “chat” in the online world.

And yet, once again, people figured out how to get around the restrictions:

Sure enough, chatters figured out a few simple protocols to pass their secret code; several variants are of this general form:

User A:”Please be my friend.”
User A:”Come to my house?”
User B:”Okay.”
A:[Move the picture frames on your wall, or move your furniture on the floor to make the number 4.]
A:”Okay”
B:[Writes down 4 on a piece of paper and says] “Okay.”
A:[Move objects to make the next letter/number in the code] “Okay”
B:[Writes…] “Okay”
A:[Remove objects to represent a “space” in the code] “Okay”
[Repeat steps as needed, until…]
A:”Okay”
B:[Enters secret code into Toontown software.]
B:”There, that worked. Hi! I’m Jim 15/M/CA, what’s your A/S/L?”

Incredibly, there was an entire wiki page on the Disney Online Worlds domain that included a variety of other descriptions of how to exchange your secret number within the game, even though users were not supposed to be doing so:

For example, let’s say you have a secret code (1hh 5rj) which you would like to give to a toon named Bob.

First, you should make clear that you want to become their SF.
You: Please be my friend!
You: (random SF chat)
You: I can’t understand you
You: Let’s work on that
Bob: Yes
Now, start the secret.
You: (Jump 1 time and say OK. Jump 1 time because that is the first thing in your code. Say OK to confirm that was part of your secret.)
Bob: OK (Wait for this, as this means he has written down or otherwise recorded the 1)
You: Hello! OK (Say hello because the first letter of hello is h, which is the second part of your secret.)
Bob: OK (again, wait for confirmation)
Repeat above step, as you have the same letter for the third part of your secret.
Bob: OK (by now you should know to wait for this)
You: (Jump 5 times and say OK. Jump 5 times as this is the 4th part of your secret)
Bob: OK
You: Run! OK (The 5th part of your secret is r, and “Run!” starts with r)
Bob: OK
You: Jump! OK (Say this because j is the last part of your secret.)
Bob: OK
At this point, you have successfully transmitted the code to Bob.
Most likely, Bob will understand, and within seconds, you will be Secret Friends!
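Stripped of the game dressing, the walkthrough above is a hand-rolled channel code: each symbol of the secret maps onto an action the game does allow, with “OK” as the acknowledgement. A sketch of the sender’s side (the phrase list and the code format are illustrative, reconstructed from the walkthrough rather than taken from Disney’s actual system):

```python
# Whitelisted phrases the hypothetical game allows, keyed by first letter.
SAFE_PHRASES = {"h": "Hello!", "r": "Run!", "j": "Jump!", "o": "Over here!"}

def encode_secret(code):
    """Map each character of a secret code onto an allowed in-game action.

    Digits become jump counts, letters become whitelisted phrases starting
    with that letter, and spaces become object removal, as in the wiki
    walkthrough. Every step ends with "OK" so the receiver can confirm.
    """
    actions = []
    for ch in code.lower():
        if ch.isdigit():
            actions.append(f"jump {ch} time(s), say OK")
        elif ch == " ":
            actions.append("remove objects, say OK")
        elif ch in SAFE_PHRASES:
            actions.append(f'say "{SAFE_PHRASES[ch]} OK"')
        else:
            raise ValueError(f"no safe phrase starts with {ch!r}")
    return actions

# The example code from the wiki walkthrough:
for step in encode_secret("1hh 5rj"):
    print(step)
```

Any channel that can carry even one distinguishable signal per exchange can, with patience, carry the whole secret; restricting the vocabulary only lowers the bit rate, it never closes the channel.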

So even though Disney eventually did enable a very limited chat, with strict rules to keep people safe, it still left open many challenges for early trust & safety work.

Images from HabitChronicles

Companies: disney


Comments on “Creating Family Friendly Chat More Difficult Than Imagined (1996)”

29 Comments
Samuel Abram (profile) says:

Nintendo's experiments

Would Nintendo’s dabbling into Friend Codes be similar as far as content moderation goes? Because whereas in the past you could send drawings of penises over DS PictoChat, in today’s era of the Nintendo Switch there is no internet browser or chat function, but there are still friend codes.

Could Nintendo’s experiments be considered more successful or proof that "family-friendly chat" is as impossible as absolute zero or psychic powers?

Anonymous Coward says:

Re: Nintendo's experiments

Could Nintendo’s experiments be … proof that "family-friendly chat" is as impossible as … psychic powers?

What? I have VAST psychic powers! I can, with my psychic powers, cause you intense nausea, provoke anger, and even cause you to (try to) make my comments disappear. All I have to do, using my mind alone, is …

… cause my fingers to type: I still believe in Trump.

PatrickH says:

Re: Nintendo's experiments

At least according to a leaked presentation, friend codes came about because making users choose unique names, and having others type them in, wasn’t simple enough. Though I imagine avoiding offensive names is a benefit that, even if not realized during design, helps keep the system in place.

https://www.eurogamer.net/articles/2020-05-04-nintendo-chose-internet-friend-codes-because-using-real-names-was-not-simple-enough

Possibly; only Nintendo knows why they got rid of it. Though a chat so limited by design that it’s "safe" is probably not worth using, and human moderation is quickly overwhelmed. There might be hope if AI becomes advanced enough to rival humans in understanding human communication, and fast enough to moderate effectively, but that’s a pretty big if, and it seems highly unlikely in the foreseeable future.

This comment has been deemed funny by the community.
Lewdtoo says:

"How can we prevent people from making lewd/innapropriate comments?"

"Enough nukes in the centre of the earth core could do the trick"

some days later

"No dice, we had our people ran simulations"

"Where’s the problem?"

"They managed to have the debris spell out ‘F U C K you alien scum’ and form a penis"

This comment has been deemed insightful by the community.
Anonymous Coward says:

Task is insanity like DRM

Really, the very concept is as futile as DRM, because it requires a contradiction: to be able to communicate and not communicate at the same time. With arbitrary mappings, they can easily send dirty Morse code, to go to absurd extremes.

The closest that can be done is to make it more obtuse, like the passing of codes. Although it might work for covering their asses if the moderators remain oblivious to sexual RP done via items in trade windows and emotes, even if “large doll and eggplant for small doll” followed up by “adding blue eyedrops with the doll” means something deeply disturbing.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Here’s the start of the second paragraph of Shannon’s information theory paper:

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning

The meaning is, you’ll notice, separate from the message. If you can communicate anything then you can communicate anything because it’s only a matter of assigning different meaning to the information you can push through the channel.

PaulT (profile) says:

"Is it possible to create an online family friendly environment that will work?
If so how do you prevent abuse?"

Yes.
By having parents actually do their jobs and actively parent their children rather than using passive entertainment as a babysitter.

"Should family friendly environments require full-time monitoring, or pre-vetting of any usage?"

Yes.

This comment has been deemed insightful by the community.
nasch (profile) says:

Re: Re:

By having parents actually do their jobs and actively parent their children rather than using passive entertainment as a babysitter.

Requiring parents to sit by their children and watch the screen for everything going on the entire time they’re using the system does not violate any known laws of physics, so it’s not impossible. But it’s also not going to happen.

Scary Devil Monastery (profile) says:

Re: Re: Re: Re:

"Concerned parents should look into using them, rather than trying to get companies to ruin the Internet by making it totally child safe."

Ah, yes, parental filters. Children have great sport in circumventing those.

I mainly encourage the use of such because it means more children get a head start in understanding computer security.

PaulT (profile) says:

Re: Re: Re:2 Re:

"Children have great sport in circumventing those."

That’s one part of the problem. The other part is the "Scunthorpe problem", where over-filtering will remove access to perfectly innocent content, including things that might actually be necessary for the child’s education. So, either they bypass the filter, making it useless, or it blocks them from doing their homework; or perhaps both, as a kid willingly going in search of porn probably won’t mind being told he can’t do school work.
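The over-filtering half of that trade-off is easy to demonstrate: a filter that scans for banned substrings anywhere in the text flags innocent words that happen to contain them, while a stricter whole-word match is trivially evaded. A minimal sketch (the banned list is illustrative, deliberately short):

```python
import re

BANNED = ["cunt", "ass"]  # illustrative only

def naive_filter(text):
    """Block text if any banned string appears anywhere, even inside a word."""
    lowered = text.lower()
    return any(bad in lowered for bad in BANNED)

def word_filter(text):
    """Block only whole-word matches: fewer false positives, but easily
    bypassed with spacing, punctuation, or creative misspelling."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(bad)}\b", lowered) for bad in BANNED)

print(naive_filter("Scunthorpe is a town in Lincolnshire"))  # blocked: false positive
print(naive_filter("my classmate passed the exam"))          # blocked: "ass" inside words
print(word_filter("Scunthorpe is a town in Lincolnshire"))   # allowed
print(word_filter("a s s"))                                  # allowed: trivial bypass
```

Neither variant wins: the substring version blocks homework about Scunthorpe, the word-boundary version waves through any evasion a bored kid invents in minutes.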

Scary Devil Monastery (profile) says:

Re: Re: Re: Re:

…as in; "Never", you mean.

What really gets to me is that the same "concerned" parents dreading the moral hazard of the internet usually don’t have much trouble sending their children to congregate in vast herds of…well, other children.

At some point you’d think it’d be obvious that if you want your kids to be safe what you need to do is tell them of the dangers, how to recognize them, and how to get away from them. Instead too many parents just do their damnedest to keep their kids from learning that the world is full of dangers.

PaulT (profile) says:

Re: Re: Re:2 Re:

"…as in; "Never", you mean."

Both are "never" situations in the broader scheme of things. But, between "censorship of adults working without fail" and "parents taking an active interest in their child’s welfare", at least the latter will occasionally happen in some households.

"At some point you’d think it’d be obvious that if you want your kids to be safe what you need to do is tell them of the dangers, how to recognize them, and how to get away from them."

…but that takes effort! How is the parent who uses YouTube as a babysitter for their 3 year old meant to do that without lifting a finger?!?

Anonymous Coward says:

This is basically a given in most games. Even when they try to give baked-in communication phrases, there will always be a way to imply something bad-mannered.

Examples:
Hearthstone – “Well Played!” spam
League of Legends – lots of possible pings here, one of them is a literal question mark that appears on the map/mini-map
FPS games – teabagging

“The convoluted wording of legalisms grew up around the necessity to hide from ourselves the violence we intend toward each other. Between depriving a man of one hour from his life and depriving him of his life there exists only a difference of degree. You have done violence to him, consumed his energy.”


Anonymous Coward says:

I feel Club Penguin managed to do it "right". First a word filter for edgy kids who log in and type "fuck my bitch up" or whatever. After that, manual moderation of messages and a report function. It seems that until we have true AI, there is no alternative to just sitting people in front of the screen and having them moderate messages.

More philosophically however, what age is appropriate for letting a person figure out that the world is full of shitheads? 21? 15? 12? In real life I could say whatever shit I wanted to whatever other kid I wanted outside of school and believe me we said a LOT of shit. I get that parents want their kids to be safe, but isn’t online inherently safe? Give people the tools to block others, have a system to catch ban evaders (you need this anyway), have a filter that doesn’t let anything that looks like a credit card number or an address through (you also need this anyway), and I dare say that’s enough for a bunch of 12-16 year olds.

For younger kids, the Disney system seems ok. Yes, you can still transmit information through various means, but whatever you do, any sufficiently determined child will not be deterred. At some point you have to say "we made the best system we could, if you decide to break the rules, you’re on your own".

It’s weird this Internet thing. On one hand it sounds scary that you can just connect anyone with anyone at random with no overseers. On the other, you can always refuse to receive messages from anyone and services try to keep your personal info secure so strangers don’t have direct access to it. It sounds like just about anyone can just harm your kid because they’re online, but in reality it’s more like you can hear the yells of the crazy guy from across town if you listen carefully, and you can always just not listen.

PaulT (profile) says:

Re: Re:

Basically, some people think that the Internet should be Disney World, with everything cuddly and child safe and lots of helpful people to bow to your every wish should you have a problem with any aspect of life.

In reality, the internet is like any major city, with good and bad people of every kind out there, areas where some people are best advised not to go, certainly not a place you let young children wander around unsupervised, and for the most part you’re on your own with any decision you make.

The problem isn’t the nature of the city, it’s that some idiots think everyone should live in a theme park because they can’t supervise their own kids.

Scary Devil Monastery (profile) says:

Re: testing.

True that. Can you imagine how trigger-happy the average obscenity filter would get once a few ten thousand 14-year-olds had written their own version of what the long-necked giraffe wanted to do to the fluffy white bunny? I can envision it now, the average business mail going:

"Dear [REDACTED]. We are [REDACTED] to [REDACTED] your [REDACTED]…"
