Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitch Allows Users To Enable Emote-Only Chats (2016)

from the can't-be-a-troll-when-you-can-only-emote dept

Summary: Moderating real-time chat has always presented an interesting challenge. Whether it's policing language as it happens or dealing with trolling and harassment, chat has been one of the most difficult spaces to moderate, going back to its earliest days.

In 2016, Twitch enabled a new feature for its users: an "emote-only" mode for chat. Emotes, on Twitch, are its version of what most other websites and platforms call emoji. Unlike standard emoji, though, they are almost entirely custom, and users at certain levels are able to add their own.

Emote-only mode is one of a number of modes and features that Twitch streamers can use to try to tame their chat. Twitch itself suggests it as a way to stop harassment.
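For a sense of the mechanics: Twitch chat runs over an IRC-based protocol, and each message carries an "emotes" tag listing which emotes appear and the character positions they occupy. Below is a minimal Python sketch, assuming that tag format, of how a chat bot might decide whether a message consists only of emotes. It illustrates the kind of check emote-only mode implies; it is not Twitch's actual enforcement code.

```python
# A minimal sketch (not Twitch's actual enforcement code) of the kind of
# check emote-only mode implies. Twitch's IRC-based chat attaches an
# "emotes" tag to each message, listing emote IDs and the character
# ranges they cover, e.g. "25:0-4,6-10" or "25:0-4/1902:6-9".

def emote_ranges(emotes_tag: str) -> list[range]:
    """Parse an 'emotes' tag value into the character ranges it covers."""
    ranges = []
    if not emotes_tag:
        return ranges
    for emote in emotes_tag.split("/"):           # one entry per emote ID
        _, positions = emote.split(":", 1)        # e.g. "25:0-4,6-10"
        for span in positions.split(","):
            start, end = span.split("-")
            ranges.append(range(int(start), int(end) + 1))
    return ranges

def is_emote_only(message: str, emotes_tag: str) -> bool:
    """True if every non-whitespace character falls inside some emote."""
    covered = set()
    for r in emote_ranges(emotes_tag):
        covered.update(r)
    return all(i in covered for i, ch in enumerate(message) if not ch.isspace())

# "Kappa hi Kappa" fails the check because "hi" is plain text:
print(is_emote_only("Kappa hi Kappa", "25:0-4,9-13"))  # False
print(is_emote_only("Kappa Kappa", "25:0-4,6-10"))     # True
```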

Turning the feature on or off is a choice for the streamers themselves, rather than for Twitch. It's just one of several tools Twitch users can enable to deal with potentially harassing behavior in the chat alongside their streams.
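Streamers and their moderators can flip the mode with the /emoteonly and /emoteonlyoff chat commands, and bots can do the same through Twitch's Helix "Update Chat Settings" endpoint. The sketch below shows the API route; the IDs and token are placeholders, and the token would need the moderator:manage:chat_settings scope.

```python
# A hedged sketch of toggling emote-only mode via Twitch's Helix
# "Update Chat Settings" endpoint. All IDs and credentials below are
# placeholders; the OAuth token needs moderator:manage:chat_settings.
import requests

def set_emote_only(broadcaster_id: str, moderator_id: str,
                   token: str, client_id: str, enabled: bool) -> None:
    resp = requests.patch(
        "https://api.twitch.tv/helix/chat/settings",
        params={"broadcaster_id": broadcaster_id,
                "moderator_id": moderator_id},
        headers={"Authorization": f"Bearer {token}",
                 "Client-Id": client_id,
                 "Content-Type": "application/json"},
        json={"emote_mode": enabled},  # True = emote-only on, False = off
        timeout=10,
    )
    resp.raise_for_status()

# e.g. a bot might turn emote-only on during a raid and off afterwards:
# set_emote_only("1234", "5678", "user-access-token", "my-client-id", True)
```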

Decisions for Twitch:

  • What tools should you provide to users to deal with abusive or harassing chat participants?
  • Do features like this give more power to Twitch users, or are they offloading moderation demands from the company itself?
  • Should there be any exceptions to emote-only mode?
  • Are there times where a custom emote would be considered a policy violation because it takes on a harassing meaning in a certain context? 
  • What sort of emote review processes should be put in place?

Questions and policy implications to consider:

  • Rather than offering entirely binary options (allow/disallow) are there more creative alternatives for dealing with harassing behavior?
  • Are there ways in which even emote-only mode might be abused for harassment?

Resolution: Emote-only mode launched quietly in 2016, with little fanfare from Twitch. While it may not be widely used, many streamers do find it useful. It is not just used for stopping harassment, but sometimes to stop people in a chat from revealing spoilers or other information that may impact what they're streaming (such as information about the video game they are playing).

Originally posted to the Trust & Safety Foundation website.

Companies: twitch


Comments on “Content Moderation Case Study: Twitch Allows Users To Enable Emote-Only Chats (2016)”

Anonymous Coward says:

That's why there's emote-only chat? Well, OK then.

I hardly think this is offloading moderation. Who else is supposed to moderate their own chat, anyway?

Now it's also an option to turn on emote-only chat briefly for viewers who have collected enough channel points (on channels that use them and have that "reward" left available). Now moderation is just more entertainment. Poggers.


Anonymous Coward says:

Re: Re: Re:

To go with a lame term that caught on – isn't that just a case of "milkshake duck"? Something features someone doing something fun, and then it comes out that they are a terrible person in a way that overshadows the original context.

The term comes from a Twitter joke describing the phenomenon: a duck drinks a milkshake, and then it turns out the duck is racist.

Anonymous Coward says:

It is not just used for stopping harassment, but sometimes to stop people in a chat from revealing spoilers or other information that may impact what they’re streaming (such as information about the video game they are playing).

Essentially, there is an inverse relationship between a streamer's audience size and the amount of effort they'll put into channel interactions. When they only have a few folks, or if their stream is only to hang out with close friends, there's more direct interaction with their audience. However, when a streamer reaches a certain size, it becomes impossible to meaningfully interact with that many people at once, so the chat is really only interacting with itself, if even that. Chat can be flooded and updating so rapidly that interaction is difficult and meaningless, so it turns into cheerleading at that point. If you need to resort to the emote-only chat feature, that's essentially what you've turned your chat into, and you're sending the message that you only want cheerleading.
