Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: YouTube's New Policy On Nazi Content Results In Removal Of Historical And Education Videos (2019)

from the godwin-in-effect dept

Summary: On June 5, 2019, YouTube announced it would be stepping up its efforts to remove hateful content, focusing on the apparent increase in white nationalist and pro-Nazi content being created by users. The accompanying algorithm change would limit views of borderline content and push viewers towards content less likely to contain hateful views. The company’s blog post specifically stated it would be removing videos that “glorified Nazi ideology.”

Unfortunately, when the updated algorithm went to work removing this content, it also took down content that educated and informed people about Nazis and their ideology, but quite obviously did not “glorify” them.
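The failure mode here is contextual: a rule keyed only to a video's subject matter cannot tell glorification apart from education or archival documentation. A minimal, hypothetical sketch of that difference (the classifier scores, thresholds, and function names below are illustrative assumptions, not a description of YouTube's actual system):

```python
# Hypothetical illustration (not YouTube's system): a topic-only rule removes
# documentaries and lectures along with genuinely hateful uploads, while a
# rule that also requires evidence of glorification routes borderline
# material to human review instead. All scores and thresholds are assumed.

from dataclasses import dataclass

@dataclass
class VideoSignals:
    nazi_topic_score: float      # assumed upstream classifier: "about Nazi ideology?"
    glorification_score: float   # assumed upstream classifier: "endorses that ideology?"

def topic_only_decision(v: VideoSignals) -> str:
    # The kind of rule that takes down archival footage and history lectures.
    return "remove" if v.nazi_topic_score > 0.8 else "keep"

def context_aware_decision(v: VideoSignals) -> str:
    # Removal requires evidence of glorification, not just subject matter.
    if v.nazi_topic_score > 0.8 and v.glorification_score > 0.7:
        return "remove"
    if v.nazi_topic_score > 0.8:
        return "human_review"    # educational/archival material gets a person
    return "keep"

if __name__ == "__main__":
    documentary = VideoSignals(nazi_topic_score=0.95, glorification_score=0.05)
    print(topic_only_decision(documentary))      # remove (the false positive above)
    print(context_aware_decision(documentary))   # human_review
```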

Ford Fischer — a journalist who tracks extremist and hate groups — noticed his entire channel had been demonetized within “minutes” of the rollout. YouTube responded to Fischer’s attempt to have his channel reinstated by stating multiple videos — including interviews with white nationalists — violated the updated policy on hateful content.

A similar thing happened to history teacher Scott Allsop, who was banned by YouTube for his uploads of archival footage of propaganda speeches by Nazi leaders, including Adolf Hitler. Allsop uploaded these for their historical value as well as for use in his history classes. The notice placed on his terminated account stated it had been taken down for “multiple or severe violations” of YouTube’s hate speech policies.

Another YouTube user noticed his upload of a 1938 documentary about the rise of the Nazi party in Germany had been taken down for similar reasons, even though the documentary was decidedly anti-Nazi in its presentation and had obvious historical value.

Decisions to be made by YouTube:

  • Should algorithm tweaks be tested in a sandboxed environment prior to rollout to see how often they flag content that doesn’t actually violate policies? (A shadow-mode sketch of this follows the list.)
  • Given that this sort of mis-targeting has happened in the past, does YouTube have a response plan in place to swiftly handle mistaken content removals?
  • Should additional staffing be brought on board to handle the expected collateral damage of updated moderation policies? 
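
One way to approach the first question above is a shadow-mode test: run the updated classifier over a sample of already human-reviewed videos, log what it would have removed, and measure the false-positive rate before it is allowed to take any action. A minimal sketch, assuming a labeled sample and a hypothetical new_classifier callable (none of this reflects YouTube's internal tooling):

```python
# Hypothetical shadow-mode evaluation: the updated policy classifier runs over
# a human-labeled sample *without* enforcing anything, so its false-positive
# rate can be measured before rollout. All names and thresholds are assumed.

from typing import Callable, Iterable

def shadow_evaluate(
    videos: Iterable[dict],
    new_classifier: Callable[[dict], bool],   # True = "would remove"
    max_false_positive_rate: float = 0.01,    # assumed rollout threshold
) -> bool:
    """Return True if the updated classifier looks safe to roll out."""
    false_positives = 0
    reviewed_ok = 0   # videos humans already judged as non-violating

    for video in videos:
        if video["human_label"] == "non_violating":
            reviewed_ok += 1
            if new_classifier(video):
                false_positives += 1
                print(f"Would wrongly remove: {video['id']}")

    if reviewed_ok == 0:
        return False  # not enough reviewed data to judge the change

    fp_rate = false_positives / reviewed_ok
    print(f"False-positive rate on reviewed sample: {fp_rate:.2%}")
    return fp_rate <= max_false_positive_rate
```

The point of the design is that the new rule's verdicts are logged and compared against existing human judgments rather than enforced, so channels like Fischer's or Allsop's would show up as a line in a report instead of a demonetization or termination.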

Questions and policy implications to consider:

  • Should there be a waiting period that allows users with flagged content to make their case before enforcement measures like demonetization or bans take effect?
  • Should YouTube offer some sort of compensation to users whose channels are adversely affected by mistakes like these? 
  • Should users whose content hasn’t been flagged previously for policy violations be given the benefit of the doubt when flagged by automated moderation efforts? (A sketch of such a history-aware gate follows the list.)
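
The last two questions above point toward an enforcement gate that checks an account's history before acting automatically. A minimal sketch, assuming hypothetical fields for prior confirmed strikes and an assumed grace period (purely illustrative, not an actual YouTube mechanism):

```python
# Hypothetical enforcement gate: accounts with no prior confirmed violations
# get human review and an appeal window instead of immediate demonetization
# or termination. Field names and the grace period are illustrative.

from dataclasses import dataclass

GRACE_PERIOD_HOURS = 72  # assumed appeal window before enforcement applies

@dataclass
class Channel:
    id: str
    prior_strikes: int          # confirmed past policy violations
    flagged_by_automation: bool

def enforcement_action(channel: Channel) -> str:
    if not channel.flagged_by_automation:
        return "no_action"
    if channel.prior_strikes == 0:
        # Benefit of the doubt: notify the creator and queue for human review;
        # nothing like demonetization applies until GRACE_PERIOD_HOURS pass
        # without a successful appeal.
        return "human_review_with_grace_period"
    return "immediate_enforcement"

if __name__ == "__main__":
    archival_channel = Channel(id="history_footage", prior_strikes=0,
                               flagged_by_automation=True)
    print(enforcement_action(archival_channel))  # human_review_with_grace_period
```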

Resolution: In most cases, content mistakenly targeted by the algorithm change was reinstated within hours of being taken down. In the case of Ford Fischer, reinstatement took longer. And he was again demonetized by YouTube in early 2021, apparently over raw footage of the January 6th riot in Washington, DC. Within hours, YouTube had reinstated his account, but not before drawing more negative press over its moderation problems.

Originally published to the Trust & Safety Foundation website.

Companies: youtube


Comments on “Content Moderation Case Study: YouTube's New Policy On Nazi Content Results In Removal Of Historical And Education Videos (2019)”

Lostinlodos (profile) says:

One alternative would be to simply have the community down-vote material to hide it.
YouTube has many working content filters that anyone who prefers gateway guards over outright deletion would find acceptable.

Be it age banners for violence or nudity, or offensive-content banners for things known to trigger specific groups, etc.

Relying on machine learning and less-than-specific algorithms for content removal rarely works, be it for copyright, politics, or, in this case, Nazis.

Purely, or mostly, relying on people for content removal causes the Twitter impasse where nearly half the country believes they ONLY moderate on political grounds and nearly half believe political takedowns NEVER happen.

YouTube’s pre-playback screens work quite well. Adult content is blocked from underage accounts, as is any inappropriate/triggering material.
Adults are (generally) considered wise enough to read the screen and decide whether they wish to view something before clicking on it.

Sure, the downside is that it takes a fair number of views to flag material if it isn’t tagged by the author/uploader, but for the majority of people it simply works.
