Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs that result. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Handling Off Platform Harassment On Platform (June 2020)

from the how-far-does-it-go? dept

Summary: Dealing with harassment of users is always a challenge for internet platforms, and that is especially true for platforms focused on live streaming. For many years there have been accusations of sexual harassment directed at female Twitch streamers. Going all the way back to 2012, when a Twitch streamer argued that sexual harassment was part of the culture of streaming (saying “this is a community that’s, you know, 15 or 20 years old, and the sexual harassment is part of a culture”), there have been ongoing questions about how Twitch should deal with such behavior both on and off the platform.

Not surprisingly, there have been many reports of on-platform harassment of Twitch streamers, to the point that some reporters have noted how easy it is to find harassment on the platform. In 2018, Twitch put in place new rules for dealing with harassment on the platform, and it also provides a variety of tools for managing harassment within Twitch’s chat feature.

More recently, another issue has been raised: how should Twitch handle harassment that occurs off-platform? Some users started collecting reports of harassment and sexual abuse occurring in connection with Twitch, and it was notable that many of the incidents were not happening directly on the platform. According to an article at The Verge, many of the reported claims involved people who met via Twitch, or popular Twitch users who were accused of using their positions of power to harass others.

It is difficult enough to deal with harassment in real time on a streaming platform (where incidents come and go), but figuring out how to deal with harassment that happens off-platform is even more fraught. In June of 2020, many Twitch streamers engaged in a 24-hour blackout to protest what they felt was Twitch’s failure to act on many of the accusations.

Decisions to be made by Twitch:

  • How should it handle credible claims and accusations of abuse and sexual harassment that occur off-platform between users of Twitch?
  • How much responsibility should Twitch take in dealing with off-platform behavior?
  • What resources are necessary for investigating off-platform behavior?
    • How should serious claims be verified?

Questions and policy implications to consider:

  • Is it proper for conduct outside of an internet platform to impact usage of that platform?
  • Will serious accusations against “popular” users of a platform reflect poorly on the platform?
  • Will widespread accusations of sexual abuse among users of the platform create an unwelcoming environment?
  • Can on-platform policies impact off-platform behavior?

Resolution: After the issue received widespread attention, Twitch announced that it was investigating the various claims of harassment and prioritizing the most serious ones.

We want to provide an update on our investigations into the recent allegations of sexual abuse and harassment involving Twitch streamers and actions we’re taking. We are reviewing each case that has come to light as quickly as possible, while ensuring appropriate due diligence as we assess these serious allegations. We’ve prioritized the most severe cases and will begin issuing permanent suspensions in line with our findings immediately. In many of the cases, the alleged incident took place off Twitch, and we need more information to make a determination. In some cases we will need to report the case to the proper authorities who are better placed to conduct a more thorough investigation. For those who’ve come forward and would like to share additional information, and to anyone who hasn’t shared their experience and wants to do so, you can report confidentially through the reporting tools on each streamer’s channel page.

The company also promised to continue improving its on-platform tools for combating harassment and abuse. Twitch’s co-founder and CEO Emmett Shear also sent an email to the entire company about the issue and the importance of building “an experience that is community-centered, safe and positive for all.”

Days later, the company banned a popular user accused of harassing other Twitch streamers, along with several other users who had been similarly accused. At least one suspended user has since insisted that he was innocent and suggested he was working with lawyers to appeal the suspension.

Companies: twitch


Comments on “Content Moderation Case Study: Handling Off Platform Harassment On Platform (June 2020)”


This comment has been flagged by the community.

U. Little Cheaters says:

Yeah, Maz, we know: moderation is tough! But censorship is easy.

Moderation is tough because of edge cases and generalities.

Censorship is easy: ideas that you don’t want to spread are usually clear, and anyone who even spreads them is just permanently labeled "bad".

If I wasn’t censored here, disadvantaged visibly and in hidden ways, constantly HARASSED by off-topic little kids, I’d run the site, you all KNOW it! You have NO ideas that anyone reasonable will adopt. Proved by the steep decline in just the last year.

This comment has been flagged by the community.

U. Little Cheaters says:

Of course either is just deciding whether it suits your agenda

to allow certain speech.

Control over speech is now a KEY element of Silicon Valley, especially here on Techdirt; it’s nearly all that you write of. You’re alarmed that control could be taken away with changes to CDA S230.

Your tactic is very simple: try to keep the discussion on whether you’re censoring, while stating that you have every right to do so.

This comment has been flagged by the community.

U. Little Cheaters says:

As with Facebook and Twitter today, in a panic because

Truth about the Bidens came out. So stop it spreading and remove the original publishing, because it’s a clear danger to your/their leftist political agenda.

WAY back, you censoring weenies used to trot out the "may not agree with but will defend to the death your right to say it". You didn’t mean it then, just suited your purposes. But now, at best, you add: "just not anywhere it can be seen".

PaulT (profile) says:

Re: As with Facebook and Twitter today, in a panic because

"remove the original publishing"

Nobody’s done that. Yet again, you’re so interested in ranting rambling nonsense against phantoms of your own imagination, you can’t even get the basics of the story you’re whining about correct.

"WAY back, you censoring weenies used to trot out the "may not agree with but will defend to the death your right to say it""

We still say that. We just don’t agree with your attempt to hijack other people’s private property to do it.

This comment has been flagged by the community.

U. Little Cheaters says:

Leftists then go on to state it's GOOD to stop the spread

of dangerous disinformation! Meaning The Truth because you always LIE. At best, leftists never trust people to come to the correct conclusion, especially on questions about your globalist agenda.

Masnick has stated he’s not upset by "literal Nazis" being "de-platformed". He didn’t speak up when Alex Jones was taken off by a dozen platforms and payment systems all at once, in obvious coordination. Right here on the very "blog" over which he has total control, the "report" button is de facto censoring, disadvantaging certain viewpoints without EVER "moderating" the nasty fanboys. Masnick gives no details how that works, won’t even admit there’s an Administrator, just LIES that it’s somehow "the community" without any Administrator decision at all.

And then he has the sheer effrontery to keep claiming he’s for "free speech".

You leftists only win when you lie, cheat, and sneak around out of sight. Your ideas can’t stand dissent. Your actions can’t stand being openly stated. And you’ve long since overcome any "moral" qualms; besides, you think victory is within sight, so you keep getting worse.

PaulT (profile) says:

Re: Leftists then go on to state it's GOOD to stop the spread

"He didn’t speak up when Alex Jones was taken off by a dozen platforms and payment systems all at once, in obvious coordination"

No, he didn’t speak up when the man who has inspired multiple real-life acts of violence and harassment of the parents of murder victims was told he was no longer welcome to use some other people’s private property to do such things, especially since he has his own property to speak from. What’s your problem with this?

"Your ideas can’t stand dissent."

Yet, here you are, dissenting like a raving moron without reprisal.
