Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Social Media Services Respond When Recordings Of Shooting Are Uploaded By The Person Committing The Crimes (August 2015)

from the real-time-decision-making dept

Summary: The ability to instantly upload recordings and stream live video has made content moderation much more difficult. Uploads to YouTube have surpassed 500 hours of content every minute (as of May 2019), a volume no moderation system can review comprehensively.

The same goes for Twitter and Facebook. Facebook’s user base exceeds two billion worldwide. Over 500 million tweets are posted to Twitter every day (as of May 2020). Algorithms and human moderators are incapable of catching everything that violates terms of service.

When the unthinkable happened on August 26, 2015, Twitter and Facebook responded swiftly. But even their swift efforts weren’t enough. The videos posted by Vester Lee Flanagan, a disgruntled former employee of CBS affiliate WDBJ in Virginia, showed him tracking down a WDBJ journalist and cameraman and shooting them both.

Both platforms removed the videos and deactivated Flanagan’s accounts. Twitter’s response took only minutes. But the spread of the videos had already begun, leaving moderators to try to track down duplicates before they could be seen and duplicated yet again. Many of these ended up on YouTube, where moderation efforts to contain the spread still left several reuploads intact. This prompted an FTC complaint against Google, filed by the father of the journalist killed by Flanagan. Google responded by stating it was still removing every copy of the videos it could locate, using a combination of AI and human moderation.

Users of Facebook and Twitter raised a novel complaint in the wake of the shooting, demanding “autoplay” be opt in — rather than the default setting — to prevent them from inadvertently viewing disturbing content.

Moderating content as it is created continues to pose challenges for Facebook, Twitter, and YouTube — all of which allow live-streaming.

Decisions to be made by social media platforms:

  • What efforts are being put in place to better handle moderation of streaming content?
  • What efforts — AI or otherwise — are being deployed to potentially prevent the streaming of criminal acts? Which ones should we adopt?
  • Once notified of objectionable content, how quickly should we respond?
  • Are there different types of content that require different procedures for responding rapidly?
  • What is the internal process for making moderation decisions on breaking news over streaming?
  • While the benefits of auto-playing content are clear for social media platforms, is making this the default option a responsible decision, not just with respect to potentially objectionable content but also for users who may be on limited mobile data plans?

Questions and policy implications to consider:

  • Given increasing Congressional pressure to moderate content (and similar pressure from other governments around the world), are platforms willing to “over-block” content to demonstrate their compliance with these competing demands? If so, will users seek out other services if their content is mistakenly blocked or deleted?
  • If objectionable content is the source for additional news reporting or is of public interest (like depictions of violence against protesters, etc.), do these concerns override moderation decisions based on terms of service agreements?
  • Does the immediate removal of criminal evidence from public view hamper criminal investigations? 
  • Are all criminal acts of violence considered violations of content guidelines? What if the crime is being committed by government agents or law enforcement officers? What if the video is of a criminal act being performed by someone other than the person filming it? 

Resolution: All three platforms have made efforts to engage in faster, more accurate moderation of content. Live-streaming presents new challenges for all of them, which are being met with varying degrees of success. These platforms deal with millions of uploads every day, a volume that ensures some objectionable content will slip through and be seen by hundreds, if not thousands, of users before it can be located and taken down.

Content like this is a clear violation of terms of service agreements, making removal straightforward once it has been reported and located. But spotting it before dozens of users see it remains a challenge.


Comments on “Content Moderation Case Study: Social Media Services Respond When Recordings Of Shooting Are Uploaded By The Person Committing The Crimes (August 2015)”

84 Comments
Anonymous Coward says:

Re: Re:

There are far too many live streams, and there are even more than usual due to COVID. How many thousand churches are closed or semi-closed or serving self-isolating members via live streaming of services? How many amateur or wannabe-pro entertainers are streaming because live audiences are not available? College lectures … the list goes on and on. All of Google’s employees couldn’t monitor everything in real-time.

Anonymous Coward says:

Re: Re:

No. They can’t even get the moderation done correctly with regular videos. What makes you think that adding more time, when they can’t do it with infinite time, will fix the issue? Let alone fix the issue for raw and unedited content that the streamer may have no control over?

This isn’t an issue of time. It’s an issue of viewer sensibilities being violated on the internet. Which given the fact that it can’t be done in the real world reliably, shouldn’t surprise people.

Anonymous Coward says:

Re: Re:

Think about how much content that is. To put that 500 hours per minute into a different context: just 4 minutes of uploads to YouTube would provide an entire year of employment to a moderator who does absolutely nothing but spend 100% of their time at work reviewing videos. No meetings. No performance reviews. No professional development. Nothing except 8 hours of continuous review of uploaded videos, every working day.

Now, if you assume a more reasonable 4 hours of reviewing videos per day, YouTube would require over a quarter of a million employees whose only job is to review videos. Not gonna happen.
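A quick back-of-the-envelope sketch in Python bears out both of those estimates. It assumes the 500-hours-per-minute figure, real-time review, roughly 250 working days per year, and a 5-day work week against 7 days of uploads; the numbers are illustrative, not YouTube's own.

    # Back-of-the-envelope check of the moderation math above.
    # Assumptions (illustrative, not official YouTube figures): 500 hours of
    # video uploaded per minute, reviewers watch in real time, ~250 working
    # days per year, uploads arrive 7 days a week while reviewers work 5.

    UPLOAD_HOURS_PER_MINUTE = 500

    # Claim 1: 4 minutes of uploads is about one reviewer-year at 8 h/day.
    hours_in_4_minutes = 4 * UPLOAD_HOURS_PER_MINUTE     # 2,000 hours of video
    review_days = hours_in_4_minutes / 8                 # 250 eight-hour days
    print(f"4 minutes of uploads = {review_days:.0f} review days (~1 working year)")

    # Claim 2: headcount needed if each reviewer watches 4 hours per day.
    upload_hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24   # 720,000 hours/day
    reviewers_on_duty = upload_hours_per_day / 4                # 180,000 per day
    total_headcount = reviewers_on_duty * 7 / 5                 # cover weekends
    print(f"Reviewers needed: ~{total_headcount:,.0f}")         # ~252,000

Under those assumptions, both figures hold up: roughly 250 review-days from four minutes of uploads, and a bit over a quarter of a million reviewers.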


Stephen T. Stone (profile) says:

Re:

Moderation is a platform/service owner or operator saying “we don’t do that here”. Personal discretion is an individual telling themselves “I won’t do that here”. Editorial discretion is an editor saying “we won’t print that here”, either to themselves or to a writer. Censorship is someone saying “you won’t do that anywhere” alongside threats or actions meant to suppress speech.

Show me which one Twitter admins do when they boot someone from Twitter for violating the Terms of Service.

PaulT (profile) says:

Re: Re: Re: Re:

That is one of the more hilarious things that’s been reported recently. These people whine about people attacking them, yet the only credible report of actual bias recently is one where they’re being defended. In other words – if they were being treated equally, they’d be told to shut up even more than they are now. Sadly, it won’t enter their thick skulls that they are the problem.

That One Guy (profile) says:

Re: Re: Re:2 Re:

It does rather show that the whining about ‘persecution’ is, sadly, actually working, as you’ve got companies bending over backwards to give them special treatment lest they throw yet another childish tantrum. It also suggests that companies need to stop walking on eggshells trying to spare the feelings of the poor, put-upon conservatives and just flat out tell them that if they get hammered by moderation aimed at cutting down on assholes and the lies they tell, it’s because they are lying assholes.

PaulT (profile) says:

Re: Moderating = Censoring

No, leave your "moderation = censorship" BS at the door.

The main difference is that while it’s hard to move countries, you can take your whiny ass to any other website in seconds if you find that the one you use is telling people like you to STFU and GTFO too much for your tastes. Stop crying and do it, and tell the friends of yours who are also being told to shut their childish mouths to join you. Problem solved. Every one of these sites has a bunch of competition, the main problem is people insisting on trying to force sites that don’t want them to host them, rather than going to a place that actually wants them there.

Anonymous Coward says:

Re: Moderating = Censoring

"And leave your "only The Govt Can Censor" BS at the keyboard."

I do not recall the phrase you quoted as being common mantra on TD, perhaps you have an example.

I suspect you are complaining about the commonly required explanation of the First Amendment and your associated rights.

Yes, one can be censored by anyone else. It, by definition, does not have to be the government doing it, but that is what these fine people are trying to convey and you know it.

/twocents

Anonymous Coward says:

Re: Re:

YouTube is succeeding quite well, thank you. What is impossible, however, is getting everyone in town to agree on what should be moderated, never mind everyone in the whole world. YouTube is trying to moderate the world, and so it will have problems doing that.

If you prefer small platforms that are moderated closer to your tastes, find them, or create your own. However, do not complain about a lack of content if you do.
