Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Experiences Problems Moderating Audio Tweets (2020)

from the content-moderation-in-new-areas dept

Summary: Since its debut in 2006, Twitter hasn’t changed much about its formula, aside from expanding its character limit from 140 to 280 in 2017 and adding useful features such as lists, trending topics, and polls. Twitter embraced images and videos, adding them to its original text-only formula, but seemed to have little use for audio. That changed in June 2020, when Twitter announced it would allow users to upload audio-only tweets. Remaining true to the original formula, audio tweets were limited to 140 seconds, although Twitter would automatically thread additional audio tweets if a user’s recording ran long.

With Twitter already engaged in a day-to-day struggle to moderate millions of text tweets, critics and analysts expressed concern that the platform would be unable to adequately monitor tweets whose content other users couldn’t discern at a glance. Audio couldn’t be pre-screened by human moderators at any meaningful scale, at least not without significant AI assistance. But that assistance might prove counterproductive if it overblocked legitimate content.
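AI-assisted pre-screening of speech is usually built as a transcribe-then-classify pipeline. Below is a minimal sketch of what that could look like; this is an illustration, not Twitter’s actual system, and `transcribe` and `score_toxicity` are hypothetical stubs standing in for a real speech-to-text model and a trained text classifier.

```python
# A minimal sketch of "transcribe, then classify" pre-screening for audio.
# `transcribe` and `score_toxicity` are hypothetical stubs; a real system
# would call an ASR model and a trained classifier. The two thresholds are
# where the overblocking tradeoff lives: set them too low and harmless
# audio gets blocked, too high and abuse slips through.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    action: str   # "allow", "hold_for_review", or "block"
    score: float  # estimated toxicity in [0.0, 1.0]

def transcribe(audio_bytes: bytes) -> str:
    """Hypothetical speech-to-text stub."""
    return ""

def score_toxicity(text: str) -> float:
    """Hypothetical text-classifier stub."""
    return 0.0

def prescreen_audio(audio_bytes: bytes,
                    block_at: float = 0.9,
                    review_at: float = 0.6) -> ModerationResult:
    transcript = transcribe(audio_bytes)
    score = score_toxicity(transcript)
    if score >= block_at:
        return ModerationResult("block", score)
    if score >= review_at:
        # Ambiguous cases go to humans rather than being auto-removed,
        # trading reviewer time against the overblocking risk noted above.
        return ModerationResult("hold_for_review", score)
    return ModerationResult("allow", score)
```

Note that transcription only catches spoken words: non-speech audio, like the porn recording discussed below, would sail past a text classifier entirely.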

There was also the potential for harassment. Because moderation of audio tweets relied heavily on user reports, an abusive audio tweet could be posted and remain up until someone noticed it and reported it. Audio tweets raised another issue as well, one unrelated to proactively flagging and removing unwanted content: the new offering excluded certain Twitter users from being a part of the conversation.

“Within hours of the first voice tweets being posted, deaf and hard-of-hearing users began to criticize the tool, saying that Twitter had failed to provide a way to make the audio clips accessible for anyone who can’t physically hear them.”

— Kiera Frazier, YR Media

The new feature debuted without auto-captioning or any other options that would have made the content more accessible to Deaf or hard-of-hearing users.

There were other potential problems as well, such as users being exposed to disturbing content with no warning from the platform.

“‘You can Tweet a Tweet. But now you can Tweet your voice!’ This was how Twitter introduced last week its new audio-tweet option. In the replies to the announcement [another user asked], ‘Is this what y’all want?’ … reposting another user’s audio tweet, which used the new feature to record the sounds of… porn.”

— Hanna Kozlowska, OneZero

Unlike other adult content on Twitter, the recording of porn sounds was not labeled as sensitive by Twitter or hidden from users whose account settings requested they not be shown this sort of content.

Company considerations:

  • Is it possible to proactively filter audio content so that it can be flagged, blocked before posting, or quickly removed?
  • If an audio tweet is reported, should Twitter suspend the user’s access to the feature immediately or wait until the tweet is reviewed? If the tweet violates Twitter’s content policy, should the feature be taken away temporarily or permanently?
  • Should reported audio tweets be labeled as sensitive until cleared by moderators or automated review?
  • Should users be given the option to hide or block all audio tweets?

Issue considerations:

  • What makes moderating audio different from moderating text, images, or video? Is audio-only moderation more challenging?
  • What are other proactive methods of moderating audio content? Would they be more or less effective than relying on users flagging abusive content?
  • Is AI reliable enough to handle most instances of unwanted content without the assistance of human moderators?
  • How can platforms ensure that audio-only or visual-only content is accessible to users with hearing or visual impairments?

Resolution: Twitter responded to the concerns of Deaf and hard-of-hearing users by apologizing for not considering the implications of an audio-only option.

“We’re sorry about testing voice Tweets without support for people who are visually impaired, deaf, or hard of hearing. It was a miss to introduce this experiment without this support.

Accessibility should not be an afterthought.”

— Twitter

The platform fixed some issues with visual accessibility and said it was implementing a combination of automated and human captioning to give Deaf and hard-of-hearing users a way to access this content.
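Twitter didn’t detail how that captioning pipeline would work. As a rough illustration of the general “machine draft, human verify” pattern, here is a minimal sketch; `auto_caption` is a hypothetical stand-in for a real speech-recognition model, not Twitter’s API:

```python
# A minimal sketch of combined auto- and human-captioning: a machine
# draft ships immediately, marked provisional, and a human reviewer later
# confirms or corrects it. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class Caption:
    text: str
    human_verified: bool  # False while only the machine draft exists

def auto_caption(audio_bytes: bytes) -> Caption:
    """Hypothetical ASR stub; a real system would run speech recognition."""
    return Caption(text="[auto-generated transcript]", human_verified=False)

def apply_human_review(draft: Caption, corrected_text: str) -> Caption:
    """Replace the machine draft with a reviewer's corrected transcript."""
    return Caption(text=corrected_text, human_verified=True)
```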

As for the porn-audio tweet, Twitter flagged it after it was reported, but did not appear to have any broader approach to handling adult content in audio tweets. Sensitive content is apparently harder to detect in audio form, which means that, for now, it’s up to users to report unwanted or abusive content so that Twitter can take action.
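One way to reconcile that reliance on user reports with the “label it sensitive until cleared” option raised in the company considerations above is to apply a sensitive-content interstitial automatically when a report arrives and lift it only after review. A minimal sketch, with illustrative names throughout:

```python
# A minimal sketch of "mark reported audio as sensitive pending review."
# This illustrates one of the options posed earlier in this case study,
# not a description of Twitter's actual behavior.

from enum import Enum, auto

class Visibility(Enum):
    NORMAL = auto()
    SENSITIVE = auto()  # shown behind a sensitive-content interstitial
    REMOVED = auto()

def on_report(state: Visibility) -> Visibility:
    """Hide reported audio behind an interstitial until a human reviews it."""
    return Visibility.SENSITIVE if state is Visibility.NORMAL else state

def on_review(violates_policy: bool) -> Visibility:
    """A reviewer either removes the tweet or restores normal visibility."""
    return Visibility.REMOVED if violates_policy else Visibility.NORMAL
```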

Originally published on the Trust & Safety Foundation website.

Companies: twitter


Comments on “Content Moderation Case Study: Twitter Experiences Problems Moderating Audio Tweets (2020)”

Pixelation says:

“Within hours of the first voice tweets being posted, deaf and hard-of-hearing users began to criticize the tool, saying that Twitter had failed to provide a way to make the audio clips accessible for anyone who can’t physically hear them.”

For those of you who are deaf or hard of hearing, just say "Thank you!" and move on.
