Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Amazon's Attempt To Remove 'Sock Puppet' Reviews Results In The Deletion Of Legitimate Reviews (November 2012)

from the real-and-fake dept

Summary: As is the case on any site where consumer products are sold, there’s always the chance review scores will be artificially inflated by bogus reviews using fake accounts, often described as “sock puppets.”

Legitimate reviews are organic, prompted by a buyer’s experience with a product. “Sock puppets,” on the other hand, are bogus accounts created for the purpose of inflating the number of positive (or — in the case of a competitor — negative) reviews for a seller’s product. Often, they’re created by the sellers themselves. Sometimes these faux reviews are purchased from third parties. “Sock puppet” activity isn’t limited to product reviews. The same behavior has been detected in comment threads and on social media platforms.
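To make the tradeoff concrete, here is a minimal, purely illustrative sketch of the kind of crude heuristic a marketplace might run against suspected sock-puppet reviews. The field names, thresholds, and sample data are hypothetical and do not describe Amazon’s actual system; the point is only to show how a blanket “competing seller” rule can sweep up a legitimate review along with the bogus ones.

```python
# Illustrative only: a naive sock-puppet heuristic. Thresholds, field names,
# and sample data are hypothetical, not Amazon's actual rules.
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    product_seller: str
    rating: int                # 1-5 stars
    reviewer_is_author: bool   # reviewer also sells books in this category
    verified_purchase: bool

def looks_like_sock_puppet(review: Review, reviews_by_reviewer: list[Review]) -> bool:
    """Flag reviews that match crude sock-puppet signals.

    Signal 1: several maximum-score reviews concentrated on one seller's catalog.
    Signal 2: the reviewer competes in the same category (e.g. an author
    reviewing another author's book), which the 2012 guideline treated as a
    financial interest regardless of intent.
    """
    same_seller = [r for r in reviews_by_reviewer
                   if r.product_seller == review.product_seller]
    concentrated_praise = len(same_seller) >= 3 and all(r.rating == 5 for r in same_seller)
    competitor_review = review.reviewer_is_author and not review.verified_purchase
    return concentrated_praise or competitor_review

# A legitimate, unpaid review by an author who simply liked a peer's book
# trips the competitor rule -- the kind of false positive described above.
history = [Review("alice", "acme_books", 5, True, False)]
print(looks_like_sock_puppet(history[0], history))   # True: swept up anyway
```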

In 2012 — apparently in response to “sock puppet” activity, some of it linked to a prominent author — Amazon engaged in a mass deletion of suspected bogus activity. Unfortunately, this moderation effort also removed hundreds of legitimate book reviews written by authors and book readers.

In response to authors’ complaints that their legitimate reviews had been removed (along with apparently legitimate reviews of their own books), Amazon pointed to its review guidelines, claiming they forbade authors from reviewing other authors’ books.

We do not allow reviews on behalf of a person or company with a financial interest in the product or a directly competing product. This includes authors, artists, publishers, manufacturers, or third-party merchants selling the product. As a result, we’ve removed your reviews for this title. Any further violations of our posted Guidelines may result in the removal of this item from our website.

Multiple authors sought to have their legitimate reviews reinstated (including reviews of their books written by readers), but Amazon refused, insisting that authors reviewing other authors’ books constituted a violation of its review guidelines, even if authors had no financial interest in the books they were reviewing.

Amazon’s handling of reviews in response to sock puppet activity continues to be criticized periodically, most recently over the mass removal of one-star reviews for Hillary Clinton’s 2017 book about her presidential election run.

Decisions to be made by Amazon:

  • What characteristics do “sock puppet” reviews have that make them distinct from legitimate reviews?
  • Do more steps need to be added to the process of verifying reviewers?
  • When targeting sock puppet activity, are options considered that might reduce the chance of negatively affecting legitimate reviews?
  • Would more flexibility in moderation decisions help or harm efforts targeting abusers of the review system?
  • Is the loss of sellers’ goodwill towards the company an acceptable tradeoff for moderation efforts that remove possibly legitimate reviews of their products?
  • Can moderation efforts be handled with more human interaction to reduce the number of legitimate reviews inadvertently targeted? (A rough triage sketch follows this list.)
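One way to read the “human interaction” question above (and the automation question in the next list) is as a triage problem: only high-confidence signals trigger automatic removal, while weaker signals, such as the “competing seller” rule that caught authors in 2012, go to a person instead. The sketch below is a hypothetical illustration of that split, not a description of Amazon’s actual workflow; the signal names and thresholds are assumptions.

```python
# Illustrative only: a triage step that, instead of deleting every flagged
# review outright, routes ambiguous cases to a human queue. Signal names and
# thresholds are hypothetical, not Amazon's actual workflow.
from enum import Enum

class Action(Enum):
    KEEP = "keep"
    REMOVE = "remove"
    HUMAN_REVIEW = "human_review"

def triage(signals: dict[str, bool]) -> Action:
    """Map detection signals to an action.

    Hypothetical signals:
      "paid_review_detected": strong evidence (e.g. a known review farm)
      "competing_seller":     weak evidence (an author reviewing another author)
      "verified_purchase":    counter-evidence
    """
    strong = signals.get("paid_review_detected", False)
    weak = signals.get("competing_seller", False)
    counter = signals.get("verified_purchase", False)

    if strong:
        return Action.REMOVE        # high-confidence abuse: automate removal
    if weak and not counter:
        return Action.HUMAN_REVIEW  # ambiguous: a person decides
    return Action.KEEP

print(triage({"competing_seller": True}))                             # Action.HUMAN_REVIEW
print(triage({"competing_seller": True, "verified_purchase": True}))  # Action.KEEP
```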

Questions and policy implications to consider:

  • As the number of vendors and products continues to expand, how reasonable is it to expect reviewers to avoid violating the rule forbidding reviews of products by someone offering a competing product?
  • How much moderation should be left to automatic mechanisms when dealing with suspected sock puppet activity?
  • Does the inevitable collateral damage of these efforts raise or lower the legitimacy of the remaining reviews in the eyes of potential customers?
  • Would more transparency on review moderation efforts lead to more or less abuse of the review system?
  • Do mishandled moderation efforts harm buyers or sellers more? Which harm is more acceptable?

Resolution: Amazon reacted to news reports about sock puppet activity involving major authors by engaging in mass removals of anything that appeared questionable to moderators. Legitimate reviews/reviewers were caught up in the sweep, resulting in several authors publicly criticizing the company for not being more careful with its moderation efforts.

Companies: amazon

Comments on “Content Moderation Case Study: Amazon's Attempt To Remove 'Sock Puppet' Reviews Results In The Deletion Of Legitimate Reviews (November 2012)”

SirWired says:

Meh; they deleted mine for no reason a couple years ago

A couple years ago, Amazon deleted all my reviews, a couple hundred going all the way back to 1997. I asked them why, and it was because they suspected I had been paid for one (or more?) of them. They didn’t say which one, and I have no idea what they were talking about. They specifically stated no appeal was possible.

I didn’t press the issue; if they don’t want my input available to customers, that’s their right I guess. (Many of my reviews were quite highly-rated.)
