Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: YouTube's New Policy On Nazi Content Results In Removal Of Historical And Education Videos (2019)

from the godwin-in-effect dept

Summary: On June 5, 2019, YouTube announced it would be stepping up its efforts to remove hateful content, focusing on the apparent increase in white nationalist and pro-Nazi content being created by users. The change to its algorithm would limit views of borderline content and push viewers towards content less likely to contain hateful views. The company's blog post specifically stated it would be removing videos that "glorified Nazi ideology."

Unfortunately, when the updated algorithm went to work removing this content, it also took down content that educated and informed people about Nazis and their ideology, but quite obviously did not "glorify" them.

Ford Fischer -- a journalist who tracks extremist and hate groups -- noticed his entire channel had been demonetized within "minutes" of the rollout. YouTube responded to Fischer's attempt to have his monetization reinstated by stating that multiple videos -- including interviews with white nationalists -- violated the updated policy on hateful content.

A similar thing happened to history teacher Scott Allsop, who was banned by YouTube for his uploads of archival footage of propaganda speeches by Nazi leaders, including Adolf Hitler. Allsop uploaded these for their historical value as well as for use in his history classes. The notice placed on his terminated account stated it had been taken down for "multiple or severe violations" of YouTube's hate speech policies.

Another YouTube user noticed his upload of a 1938 documentary about the rise of the Nazi party in Germany had been taken down for similar reasons, even though the documentary was decidedly anti-Nazi in its presentation and had obvious historical value.

Decisions to be made by YouTube:

  • Should algorithm tweaks be tested in a sandboxed environment prior to rollout to see how often they flag content that doesn't actually violate policies? (A rough sketch of what such a test might look like follows this list.)
  • Given that this sort of mis-targeting has happened in the past, does YouTube have a response plan in place to swiftly handle mistaken content removals?
  • Should additional staffing be brought on board to handle the expected collateral damage of updated moderation policies? 
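
One way to read that first question is as an offline-evaluation problem: run the updated classifier against a held-out, human-labeled corpus before it touches live channels, and gate the rollout on how often it flags non-violating material. Below is a minimal, purely illustrative sketch in Python; the classifier interface, the labeled corpus, and every name in it are assumptions made for the sake of the example, not YouTube's actual tooling.

```python
# Hypothetical sketch: sandbox-test an updated moderation classifier against
# a labeled evaluation set before rolling it out. None of these names refer
# to real YouTube systems; they are placeholders for illustration only.

from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    transcript: str
    is_violating: bool  # ground-truth label assigned by human reviewers


def evaluate_policy_update(classifier, labeled_videos, max_false_positive_rate=0.01):
    """Run the new classifier over a held-out, human-labeled corpus and
    report how often it flags content that does not actually violate policy."""
    false_positives = 0
    non_violating = 0
    for video in labeled_videos:
        flagged = classifier(video.transcript)  # True means the model would remove it
        if not video.is_violating:
            non_violating += 1
            if flagged:
                false_positives += 1
    fp_rate = false_positives / non_violating if non_violating else 0.0
    print(f"False-positive rate on non-violating videos: {fp_rate:.2%}")
    return fp_rate <= max_false_positive_rate  # gate the rollout on this result


if __name__ == "__main__":
    # Tiny example corpus: one educational video that mentions Nazi ideology
    # and one genuinely violating video.
    corpus = [
        Video("doc-1938", "archival documentary examining the rise of the Nazi party", False),
        Video("hate-01", "video glorifying Nazi ideology and calling for violence", True),
    ]
    # Crude keyword-based stand-in for the real model.
    naive_classifier = lambda text: "nazi" in text.lower()
    print("Safe to roll out:", evaluate_policy_update(naive_classifier, corpus))
```

In this toy run the keyword-based stand-in flags the 1938 documentary as violating, so the rollout gate fails; that is exactly the kind of mistake a sandbox test is meant to surface before real channels are demonetized or removed.
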
Questions and policy implications to consider:
  • Should there be a waiting period on enforcement that would allow users with flagged content to make their case prior to being hit by enforcement methods like demonetization or bans?
  • Should YouTube offer some sort of compensation to users whose channels are adversely affected by mistakes like these? 
  • Should users whose content hasn't been flagged previously for policy violations be given the benefit of the doubt when flagged by automated moderation efforts?
Resolution: In most cases, content mistakenly targeted by the algorithm change was reinstated within hours of being taken down. In the case of Ford Fischer, reinstatement took longer. And he was again demonetized by YouTube in early 2021, apparently over raw footage of the January 6th riot in Washington, DC. Within hours, YouTube had reversed that decision as well, but not before drawing more negative press over its moderation problems.

Originally published to the Trust & Safety Foundation website.


Filed Under: content moderation, disinformation, education, educational videos, hate speech, history, nazis
Companies: youtube


Reader Comments



  • Anonymous Coward, 13 May 2021 @ 11:54am

    Makes sense.

    I mean YouTube being incredibly useless, of course, not what they're doing.


  • Darkness Of Course (profile), 13 May 2021 @ 6:03pm

    Ah YouTube, I hardly knew ya

    The ineptness radiates from YouTube control like a beacon of hubris as they insist they can solve all the hard problems with their AI.

    Which is not AI. Merely ML. They routinely generate bad press by using ML on data that a simple regex could give the thumbs up or down to.


    • Rocky, 13 May 2021 @ 7:02pm

      Re: Ah YouTube, I hardly knew ya

      The historical documentaries were created because the Nazis started WWII, and then those documentaries were mistakenly taken down as a result of Nazis behaving like Nazis.

      So in the end, it's the Nazis' fault regardless.


  • Lostinlodos (profile), 16 May 2021 @ 12:54pm

    One alternative would be to simply have the community downvote material to hide it.
    YouTube has many working content filters that anyone who prefers gateway guards over outright deletion would find acceptable.

    Be it age banners for violence or nudity, or offensive-content banners for things known to trigger specific groups, etc.

    Relying on machine learning and less-than-specific algorithms for content removal rarely works, be it for copyright, politics, or, in this case, Nazis.

    Purely, or mostly, relying on people for content removal causes the Twitter impasse, where nearly half the country believes they ONLY moderate on political grounds and nearly half believes political takedowns NEVER happen.

    YouTube's pre-playback screens work quite well. Adult content is blocked from underage accounts, as is any inappropriate/triggering material.
    Adults are (generally) considered wise enough to read the screen and decide if they wish to view something before clicking on it.

    Sure, the downside is that it takes a number of views to flag material if it's not author/uploader tagged, but for the majority of people it simply works.


