Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Removing Nigerian Police Protest Content Due To Confusion With COVID Misinfo Rules (2020)

from the moderation-confusion dept

Summary: With the beginning of the COVID-19 pandemic, most of the large social media companies very quickly put in place policies to try to handle the flood of disinformation about the disease, the response to it, and possible treatments. How successful those new policies have been is subject to debate, but in at least one case, the effort to fact-check and moderate COVID information ran into conflict with people reporting on violent protests (totally unrelated to COVID) in Nigeria.

In Nigeria, there's a notorious police unit called the Special Anti-Robbery Squad, known in the country as SARS. For years there have been widespread reports of corruption and violence within the unit, including stories of how it often robs people itself (despite its name). Things came to a head in the fall of 2020, when a video was released showing SARS officers dragging two men out of a hotel in Lagos and shooting one of them in the street.

Protests erupted around Lagos in response to the video, and as the government and police sought to crack down on the protests, violence began, including reports of the police killing multiple protesters. The Nigerian government and military denied this, calling it "fake news."

Around this time, users on both Instagram and Facebook found that some of their own posts detailing the violence law enforcement brought against the protesters were being labeled as "False Information" by Facebook's fact-checking system. In particular, an image of the Nigerian flag, covered in the blood of shot protesters, which had become a symbolic representation of the violence at the protests, was flagged as "false information" multiple times.

Given the government's own claims that reports of violence against protesters were "fake news," many quickly assumed that the Nigerian government had convinced Facebook's fact checkers that the reports of violence at the protests were, themselves, false information.

However, the real problem turned out to be Facebook's policies to combat COVID-19 misinformation. At issue: the name of the police unit, SARS, matches the acronym in the technical name of the virus that causes COVID-19, SARS-CoV-2 (short for "severe acute respiratory syndrome coronavirus 2"). Many of the posts from protesters and their supporters in Lagos used the tag #EndSARS, referring to the police unit, not the disease. The collision between those two meanings, combined with some automated flagging, appears to have resulted in the Nigerian protest posts being mislabeled by Facebook's fact-checking system.
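
Facebook's actual systems go well beyond simple keyword matching (and, as the resolution below explains, the proximate cause here was an image-matching error), but a minimal, purely hypothetical sketch illustrates how easily the shared acronym can trip up naive automated flagging. The function name and term list below are illustrative assumptions, not Facebook's code:

```python
# Hypothetical sketch: naive keyword scanning that conflates the #EndSARS
# protest tag with COVID-related "SARS" terminology. This is NOT how
# Facebook's pipeline works; it only illustrates the name collision.

COVID_TERMS = ("sars", "covid", "coronavirus")

def naively_flags_for_covid_review(post_text: str) -> bool:
    """Return True if any COVID-related keyword appears anywhere in the post text."""
    text = post_text.lower()
    return any(term in text for term in COVID_TERMS)

# A protest post is swept in purely because of the shared acronym.
print(naively_flags_for_covid_review("#EndSARS protests in Lagos today"))  # True (false positive)
print(naively_flags_for_covid_review("Debunked SARS-CoV-2 cure claims"))   # True (intended match)
```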

Decisions to be made by Facebook:

  • How should the company review content that requires specific geographic, regional, or country-specific knowledge, especially when it might (accidentally) clash with other regional or global issues?
  • In dealing with an issue like COVID misinformation, where there's urgency in flagging posts, how should Facebook handle the possibility of over-blocking unrelated information, as happened here?
  • What measures can be put in place to prevent mistakes like this from happening again?

Questions and policy implications to consider:

  • While large companies like Facebook now go beyond simplistic keyword matching for content moderation, automated systems are always going to make mistakes like this. How can policies be developed to limit the collateral damage and false marking of unrelated information?
  • If regulations require removal of misinformation or disinformation, what would likely happen in scenarios like this case study?
  • Is there any way to create regulations or policies that would avoid the mistakes described above?

Resolution: After the incorrectly labeled content began to get attention, both Instagram and Facebook apologized and took down the "false information" flags on the content.

Yesterday our systems were incorrectly flagging content in support of #EndSARS, and marking posts as false. We are deeply sorry for this. The issue has now been resolved, and we apologize for letting our community down in such a time of need.

Facebook's head of communications for sub-Saharan Africa, Kezia Anim-Addo, gave Tomiwa Ilori, writing for Slate, some more details on the combination of errors that resulted in this unfortunate situation:

In our efforts to address misinformation, once a post is marked false by a third-party fact checker, we can use technology to "fan out" and find duplicates of that post, so if someone sees an exact match of the debunked post, there will also be a warning label on it that it's been marked as false.

In this situation, there was a post with a doctored image about the SARS virus that was debunked by a Third-Party Fact Checking partner.

The original false image was matched as debunked, and then our systems began fanning out to auto-match other images.

A technical system error occurred where the doctored image was connected to another, different image, which then also incorrectly started to be matched as debunked. This created a chain of fan-outs pulling in more images and continuing to match them as debunked.

This is why the system error accidentally matched some of the #EndSARS posts as misinformation.

Thus, it seems a combination of factors was at work here, including a technical error and the shared "SARS" name.
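
To make the described failure mode concrete, here is a minimal sketch of how a duplicate-matching "fan out" can propagate a debunked label, and how a single erroneous image-to-image match can pull unrelated content into the debunked set. The names, data structures, and match data are hypothetical assumptions for illustration, not Facebook's actual systems:

```python
# Hypothetical sketch of the "fan out" failure described above: once one image
# is debunked, anything matched to it (directly or transitively) inherits the
# label, so a single bad match can drag in unrelated posts.
from collections import deque

# image_id -> set of image_ids the matcher considers duplicates/near-duplicates
matches = {
    "doctored_sars_virus_image": {"sars_virus_copy_1", "unrelated_flag_image"},  # bad match here
    "sars_virus_copy_1": set(),
    "unrelated_flag_image": {"endsars_protest_photo_1", "endsars_protest_photo_2"},
    "endsars_protest_photo_1": set(),
    "endsars_protest_photo_2": set(),
}

def fan_out_debunked(seed_image: str) -> set:
    """Breadth-first walk over the match graph, labeling everything reachable as debunked."""
    labeled, queue = {seed_image}, deque([seed_image])
    while queue:
        current = queue.popleft()
        for duplicate in matches.get(current, ()):
            if duplicate not in labeled:
                labeled.add(duplicate)
                queue.append(duplicate)
    return labeled

# One debunked image plus one erroneous match labels the protest photos too.
print(fan_out_debunked("doctored_sars_virus_image"))
```

Run against this hypothetical match data, the single bad edge from the doctored virus image to the unrelated flag image is enough to sweep both protest photos into the debunked set, mirroring the chain of fan-outs Facebook described.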

Originally posted to the Trust & Safety Foundation website.

Companies: facebook


Comments on “Content Moderation Case Study: Removing Nigerian Police Protest Content Due To Confusion With COVID Misinfo Rules (2020)”

Anonymous Coward says:

Yes, it is easy to see why the people involved thought something nefarious might have been going on. It is not so easy to see, but it is as certain as sunrise, that they are nearly always wrong. Not everything is a global conspiracy: it’s mostly people trying to do the right thing (or at least, figure out what that is). And not every global conspiracy is aimed at you personally; there are eight billion other people to plot against, after all!

Anonymous Coward says:

Re: Re:

They clearly shouldn’t. The banner apparently means nothing more than "an image in this post was used by someone making a claim our fact-checkers found to be false; therefore this claim is also false." Never mind that the text might be totally different.

I feel like Facebook is setting themselves up for defamation lawsuits if they’re falsely marking random stuff as false.
