Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs their decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Briefly Bans Russian Parody Accounts (2016)

from the parody-in-action dept

Summary: Twitter allows parody accounts to remain live (often over the protests of those parodied), provided they follow a narrow set of rules — rules apparently intended to make sure everyone’s in on the joke.

Here’s everything Twitter users agree to do when creating a parody account:

  • Bio: The bio should clearly indicate that the user is not affiliated with the subject of the account. Non-affiliation can be indicated by incorporating, for example, words such as (but not limited to) “parody,” “fake,” “fan,” or “commentary.” Non-affiliation should be stated in a way that can be understood by the intended audience.

  • Account name: The account name (note: this is separate from the username, or @handle) should clearly indicate that the user is not affiliated with the subject of the account. Non-affiliation can be indicated by incorporating, for example, words such as (but not limited to) “parody,” “fake,” “fan,” or “commentary.” Non-affiliation should be stated in a way that can be understood by the intended audience.
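
Taken literally, these requirements amount to a keyword checklist. The sketch below (a minimal, hypothetical Python illustration; not Twitter's actual moderation tooling, and the sample account texts are invented) shows how a strictly literal reading of the rules behaves: it flags an account like @DarthPutinKGB even though the parody is obvious to human readers.

    # Hypothetical sketch of strictly literal enforcement of the parody-account
    # rules above; not Twitter's actual moderation tooling.
    DISCLOSURE_WORDS = {"parody", "fake", "fan", "commentary"}

    def has_disclosure(text: str) -> bool:
        # True if the text contains at least one required non-affiliation word.
        return any(word in text.lower() for word in DISCLOSURE_WORDS)

    def is_compliant(account_name: str, bio: str) -> bool:
        # The rules require the disclosure in BOTH the account name and the bio.
        return has_disclosure(account_name) and has_disclosure(bio)

    # Obvious parody to human readers, but fails the literal checklist:
    print(is_compliant("Darth Putin", "Russia's President for life."))        # False

    # Passes the letter of the rules:
    print(is_compliant("Putin (Parody)", "Fan account. Not the real Putin."))  # True

The gap between what a literal checklist like this catches and what readers instantly recognize as parody is exactly the tension this case study turns on.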

Unfortunately for the very popular Vladimir Putin parody account (@DarthPutinKGB), Twitter’s moderators decided the account didn’t strictly adhere to the “make it obvious” policies covering accounts like these.

In May 2016, Twitter suspended the account for its alleged violations.

This ban immediately resulted in backlash from other Twitter users who were fans of the account — one that made it clear (albeit without all the specifics demanded by Twitter) that it was a parody. Disappointed fans included Estonian president Toomas Hendrik Ilves and Radio Free Europe, which published a collection of the account’s best tweets.

While the ban was technically justified by the letter of Twitter’s rules, the end result was a lot of Twitter users wondering whether Twitter’s moderators were capable of recognizing obvious parody without account bios copying the platform’s parody guidelines word-for-word.

Decisions to be made by Twitter:

  • Is the banning of harmless parody accounts an acceptable tradeoff for protecting users from impersonation?

  • Should the parody guidelines be altered to make it easier to identify parody accounts?

  • Should moderators be allowed to make judgment calls if an account is clearly a parody but does not strictly adhere to the parody account guidelines?

Questions and policy implications to consider:

  • Should Twitter use more caution when moderating parody accounts whose parodic nature isn’t immediately clear?

  • Is impersonation too much of a problem on the platform to ever relax the standards governing this kind of humor?

Resolution: Twitter swiftly reinstated the account following the backlash. The account remains active, despite its new bio not explicitly following the Twitter Rules for parody accounts.

But this wasn’t the first time Twitter moderated accounts parodying Russian government officials. A similar thing happened roughly a year earlier, when Twitter blocked an account parodying powerful Russian oil executive Igor Sechin, apparently in response to a Russian government complaint that the satirical account “violated privacy laws.” This happened despite the fact that the user’s handle was IgorSechinEvilTwin, making it clear the account was a parody rather than an attempt to impersonate the real Igor Sechin.

Originally published on the Trust & Safety Foundation website.

Companies: twitter


Comments on “Content Moderation Case Study: Twitter Briefly Bans Russian Parody Accounts (2016)”

That Anonymous Coward (profile) says:

"Non-affiliation should be stated in a way that can be understood by the intended audience."

And therein lies the problem.

If moderators were not the intended audience, they misread things.

It’s like that one creepy guy at the office party who doesn’t laugh at the joke everyone else does, because he lacks the required frame of reference: he’s never watched The Simpsons.

It is impossible for humans to look at one tweet, removed from all context, & divine intent.

More than once, back when I was allowed to use the platform (who, me? pissed off?), I rooted for an ELE just so I could see if the next species nature lifted up would be more interesting to watch.

ELE – Extinction Level Event

A strict reading has me calling for genocide of the entire human race… that seems bad.

Add the piece of the puzzle that I am ‘an immortal sociopath tired of watching you hairless apes making the same mistakes since time started’.

Add the piece that these comments are often made in response to media stories of humans being human.

Man carries baby into elephant pen, nearly gets killed, drops kid, all to get a fscking selfie.

Man kills 9 yr old son when the sled HE TIED TO THE BACK OF THE CAR slammed into a parked car.

Mitch McConnell reelected after he rips out a 105 yr old woman’s heart and eats it to seal a dark pact for another 500 yrs of power.

Something horrific happens, people tut & fret, people demand change, 3 weeks later it happens again & people are shocked, just shocked.

I’m a go-getter sort of immortal: be the ELE you wish to see in the world.

Responses vary based on how much of the puzzle of TAC you are aware of.
We follow each other… oh, it’s a day that ends in Y.
Someone you know follows & RTs me… oh, it’s that weird guy again.
Someone clueless sees it & I am become death, destroyer of humanity. I must be reported and stopped.

Call for an ELE, no problemo
Call yourself faggot, GTFO

Twitter: where the rules are made up & aren’t followed anyway.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

Moderation is a platform/service owner or operator saying “we don’t do that here”. Personal discretion is an individual telling themselves “I won’t do that here”. Editorial discretion is an editor saying “we won’t print that here”, either to themselves or to a writer. Censorship is someone saying “you won’t do that anywhere” alongside threats or actions meant to suppress speech.

Now, which one of these applies to the incident in question?

This comment has been deemed insightful by the community.
That Anonymous Coward (profile) says:

Re: "Content Moderation" is newspeak for censorship.

LMFTFY

"Content Moderation" is newspeak for censorship, but only when it happens to me, when it happens to people who hold views I disagree with its perfectly fine because they shouldn’t be allowed to say those things ever.

That One Guy (profile) says:

Re: The conservative that cried 'censorship!'

By all means, keep pushing that dishonest definition. All you’re doing is watering the word down, so that on the off chance you actually are censored at some point and seek sympathy from those around you, all you’ll get is an indifferent shrug or support for the one who silenced you. You’ll have trained people to associate any claim of ‘censorship’ with ‘suffered a penalty for breaking the rules and/or acting like an ass on private property’, and you’ll have only yourself and your fellow ‘victims’ to blame for that response.
