Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Yelp Attempts To Tackle Racism On Its Platform (2020)

from the a-challenge dept

Summary: Running a site that relies on third-party content means having to deal with the underside of human existence. While most people engage in good faith, a small minority participate solely to disparage others.

Yelp is no exception. Designed to give potential customers useful information about goods and services, the site's popularity has made it a target for brigading (negative reviews delivered en masse in response to the outrage of the moment) and for the lowest common denominators of the general public: bigots.

The potential to ruin a business's reputation over its owner's views on immigration policy, its employment of minorities, or other perceived slights meant even the most-respected review site could be weaponized by racists.

Yelp recognized this inevitability. Moderators patrol the site to limit the spread of bigoted content that skews review scores toward the racist predilections of reviewers. In 2020, Yelp announced it would go further, describing the new policy this way:

Communities have always turned to Yelp in reaction to current events at the local level. As the nation reckons with issues of systemic racism, we've seen in the last few months that there is a clear need to warn consumers about businesses associated with egregious, racially-charged actions to help people make more informed spending decisions. Yelp's User Operations team already places alerts on business pages when we notice an unusual uptick in reviews that are based on what someone may have seen in the news or on social media, rather than on a first-hand experience with the business. Now, when a business gains public attention for reports of racist conduct, such as using racist language or symbols, Yelp will place a new Business Accused of Racist Behavior Alert on their Yelp page to inform users, along with a link to a news article where they can learn more about the incident.

This move may have seemed laudable, but it lent itself to subjective interpretations of decisions made by businesses, as well as of individual actions by employees. Employing a racist person is not the same as running a racist business, but Yelp's blanket policy seemed to indicate both were equally racist.

Further comments by Yelp clarified that some of its employees would make the final determination on alleged racism by businesses or business owners. Any company flagged for racist behavior would be sheltered from further comment until a determination was made.

Decisions to be made by Yelp:

  • Does allowing users to unilaterally declare businesses to be “racist” thwart monetization efforts by Yelp?
  • Is it wise to succumb to the “wisdom of the crowd,” especially when Yelp feels an interstitial warning is an acceptable replacement for due diligence?

Questions and policy implications to consider:

  • Does Yelp’s reliance on income from businesses seeking to expand their reach conflict with allegations of racism by business owners/employees?
  • Do policies like this actually encourage bad faith behavior by hiding reviews behind an ominous warning that suggests the complaints are legitimate?

Resolution: This use of warnings and the hiding of unverified reviews (at least temporarily) is still company policy. While Yelp's moderation efforts may eventually lead to a satisfactory resolution, its decision to flag businesses based on unverified claims has the potential to result in a lot of collateral damage.

Originally posted on the Trust & Safety Foundation website.

Companies: yelp

