Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Friendster Battles Fakesters (2003)

from the revenge-of-the-fakesters dept

Summary: While the social media/social networking space today is dominated by Facebook, it's interesting to look at how Facebook's predecessors dealt with content moderation challenges as well. One of the earliest social networks to reach mainstream recognition was Friendster, founded by Jonathan Abrams in 2002 and launched in early 2003, gaining millions of users who signed up to connect with friends. Originally built as a dating site, it expanded quickly beyond that.

One of the first big content moderation questions that the site faced was whether or not to allow "fakesters." As the site grew rapidly, one popular usage was to set up fake accounts: accounts for completely made-up fictional characters (e.g., Homer Simpson), concepts (e.g., Pure Evil), random objects (e.g., Giant Squid), or places (e.g., New Jersey). Researcher danah boyd catalogued the different types of fakesters and studied the phenomenon of fake accounts on the site.

However, Abrams quickly decided that "fakesters" went against the ethos of the site he envisioned. In a 2003 article in SF Weekly that discusses the "fakester" issue, Abrams makes it clear that such accounts do not belong on the site, even if some people find them amusing:

In early July, Friendster's affable chief operating officer, Kent Lindstrom, told me the only fakesters that the company would likely remove would be ones it received complaints about. (On Friendster, users can "flag" somebody's profile for the company to review, and write comments about why it offended them.) But Abrams shakes his head emphatically when I mention this.

"No. They're all going," he says, his voice steely. "All of them."

As the article notes, the fakesters were often the most active users on the site, but that did not change Abrams' mind about whether or not they belonged there:

Though they are some of Friendster's most ardent fans (many spend several hours a day on the site), fakesters do everything they can to create anarchy in the system. They are not interested in finding friends through prosaic personal ads, but through a big, surreal party where Jesus, Chewbacca, and Nitrous are all on the guest list. To fakesters, phony identities don't destroy the social experience of Friendster; they enrich it.

But fakesters aren't hosting this gig. Jonathan Abrams, the 33-year-old software engineer who founded Friendster to improve his own social life, is, and he abhors the phony profiles. He believes they diminish his site's worth as a networking tool and claims that fakesters' pictures (often images ripped off the Web) violate trademark law. Abrams' 10-person Sunnyvale company has begun ruthlessly deleting fakesters and plans to eventually eradicate them completely from the site.

A few months later, an article in Salon laid out the growing conflict between those who found the "fakesters" to be fun and Abrams, who remained adamantly against them.

Giant Squid is not alone: Among the “Fakesters” who’ve signed up for Friendster are Jackalope, God, Beer, Drunk Squirrel, Hippie Jesus, Malcolm X and more than a dozen Homer Simpsons. Just like regular users, they post their photos, blab on bulletin boards and collect friends like so many baseball cards. Some, staying in character, even write gushing testimonials about their friends: What higher endorsement could there be than a few complimentary words from Homer himself? “Better than a cold can of Duff beer … ”

But while it may be amusing to invite God himself into your pool of friends and get back the message, “God is now your friend,” the founder of the site says that such chicanery only distorts his system.

“Fake profiles really defeats the whole point of Friendster,” says entrepreneur Abrams, interviewed by cellphone as he waited to catch a plane in Los Angeles. “Some people find it amusing, but some find it annoying. And it doesn’t really serve a legitimate purpose. The whole point of Friendster is to see how you’re connected to people through your friends,” he says.
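Abrams' complaint points at the mechanism underneath the site: Friendster's core feature was computing and displaying the chain of friends connecting any two users, and a heavily-friended fakester acts as a hub that collapses those chains. The following minimal sketch (hypothetical data and a standard breadth-first search; Friendster's real implementation is not public) illustrates the distortion:

    # Hypothetical illustration; not Friendster's actual code or data.
    from collections import deque

    def shortest_path(graph: dict[str, set[str]], start: str, goal: str) -> list[str] | None:
        """Breadth-first search for the shortest chain of friends linking two users."""
        queue = deque([[start]])
        seen = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for friend in graph.get(path[-1], set()):
                if friend not in seen:
                    seen.add(friend)
                    queue.append(path + [friend])
        return None

    # Two users connected only through a chain of real-world friends:
    graph = {
        "alice": {"bob"},
        "bob": {"alice", "carol"},
        "carol": {"bob", "dave"},
        "dave": {"carol"},
    }
    print(shortest_path(graph, "alice", "dave"))  # ['alice', 'bob', 'carol', 'dave']

    # Now a popular fakester befriends everyone:
    for user in list(graph):
        graph[user].add("Giant Squid")
    graph["Giant Squid"] = {"alice", "bob", "carol", "dave"}
    print(shortest_path(graph, "alice", "dave"))  # ['alice', 'Giant Squid', 'dave']

Once the fakester is in the graph, every pair of users appears two hops apart, and the "how you're connected" display stops saying anything about real social ties, which is exactly the distortion Abrams was objecting to.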

Decisions to be made by Friendster:

  • Should it delete any and all "fakester" profiles? Should it only do so if users complain about a specific profile, or should it be proactive in removing such profiles? (A sketch of these two approaches follows this list.)
  • How should the company deal with accounts registered under nicknames rather than a person's real name? How would it distinguish between celebrities and those pretending to be celebrities?
  • Does removing "fakesters" harm some of the "fun" aspects that brought people to the site in the first place?
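The complaint-driven versus proactive distinction in the first bullet comes down to what feeds the moderation review queue. Here is a rough sketch of the two models (all names and structures are hypothetical illustrations, not Friendster's actual system):

    # Hypothetical sketch of the two review models; names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Profile:
        username: str
        is_fakester: bool  # stand-in for the judgment a human reviewer would make

    @dataclass
    class Flag:
        profile: Profile
        reporter: str
        comment: str  # users could "write comments about why it offended them"

    class ReviewQueue:
        def __init__(self) -> None:
            self.pending: list[Profile] = []

        def enqueue_flag(self, flag: Flag) -> None:
            # Complaint-driven (Lindstrom): only flagged profiles get reviewed.
            self.pending.append(flag.profile)

        def enqueue_sweep(self, all_profiles: list[Profile]) -> None:
            # Proactive (Abrams): every profile is a removal candidate,
            # whether or not anyone complained.
            self.pending.extend(all_profiles)

        def review(self) -> list[str]:
            # Either way, a reviewer makes the final call on each queued profile.
            removed = [p.username for p in self.pending if p.is_fakester]
            self.pending.clear()
            return removed

    profiles = [Profile("Giant Squid", True), Profile("jane_doe", False)]
    queue = ReviewQueue()
    queue.enqueue_flag(Flag(profiles[0], "annoyed_user", "not a real person"))
    print(queue.review())  # complaint-driven: ['Giant Squid']
    queue.enqueue_sweep(profiles)
    print(queue.review())  # proactive sweep: ['Giant Squid']

The tradeoff is the one the article documents: the complaint-driven model only touches profiles someone objects to, while the proactive sweep removes even the fakesters that users were actively enjoying.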

Questions and policy implications to consider:

  • Does forcing everyone to use their real name on the site lead to less participation from vulnerable and at-risk populations who are worried about putting their profiles online?
  • How much should websites cater to emergent behavior, like fakesters, which some users enjoyed and which drove more usage of the site?
  • Are there legal risks associated with allowing fake profiles?

Resolution: While Friendster continued its fight against "fakester" profiles, the company's vehement stance against such profiles apparently did not apply to monetization opportunities. In the summer of 2004, some people noticed an advertising campaign on Friendster for the movie "Anchorman," in which Will Ferrell's character, Ron Burgundy, was suggested as a "friend" to users.

When asked about this, Friendster tried to frame the situation as different from the "fakester" issue, saying that it was a "new paradigm":

"What Friendster is doing with these movie-character profiles is actually a brand-new paradigm in media promotion," Friendster spokeswoman Lisa Kopp said. "We are working directly with a number of production houses and movie studio partners to create film-character profiles, or 'fan' profiles, that allow our users to share their enthusiasm about the film with their friends."

The company also claimed that it wasn't "fake" because it was done in partnership with the movie studio DreamWorks, which had the rights to the character:

“The issue here is actually about consumer protection,” said Kopp. “We do, as a policy, strongly discourage fake profiles. A rogue user hiding behind a Jesus profile, for example, has the potential to abuse the service or users in many ways. In the case of the Anchorman characters, DreamWorks owns the rights to the characters and there is nothing fraudulent about it.”

Of course, many of the "fakester" profiles didn't involve anyone else's intellectual property, so this excuse wouldn't apply to accounts like "Pure Evil" and "Giant Squid."

Friendster ultimately struggled, in part because its own success overwhelmed its technical infrastructure, and the site was quickly overtaken by MySpace and then Facebook. Since then, other sites, including Facebook, have struggled with the question of whether or not accounts should have "real names." While many have argued that such policies discourage bad behavior, studies on this point have suggested otherwise.

Originally posted to the Trust & Safety Foundation website.

Companies: friendster


Comments on “Content Moderation Case Study: Friendster Battles Fakesters (2003)”

Anonymous Coward says:

Aside from insufficiently scaling to handle the massive traffic they were getting, Friendster’s biggest mistake was shutting down the fake accounts that people were using to build communities. The accounts people created to represent places and interests were working around shortcomings of the system. Both mistakes can be summarized as inflexibility.

Anonymous Coward says:

This is only tangentially related, but this brings back a memory of the old, old days of working at Blockbuster. There were commands in the computer system that could be used to copy a local account to the national database, and vice-versa. It was kind of a fun game punching in odd number combinations to find joke profiles people had uploaded – 66666, 77777, 69696, 00000, 54321, etc. I remember God and Darth Vader being among them. We also used to have chair races, and we’d shrink wrap each other’s stuff – not that management knew, of course. Good times!
