Content Moderation Is Impossible: You Can't Expect Moderators To Understand Satire Or Irony

from the just-doesn't-work-that-way dept

The latest in our never-ending series of posts on why content moderation at scale is impossible to do well involves Twitter claiming that a tweet from the account @TheTweetOfGod somehow violates its policies:

If you’re unfamiliar with that particular Twitter account, it is a popular account that pretends to tweet pithy statements from “God,” attempting (often not very well, in my opinion) to be funny in a sort of ironic, satirical way. I’ve found it to miss a lot more than it hits, but that’s only my personal opinion. Apparently, Twitter’s content moderation elves had a problem with the tweet above, and it’s not hard to see why. Twitter has a set of rules declaring it a violation to mock certain classes of people, including making fun of people for their sexual orientation, which falls under Twitter’s rules on “hateful conduct.” And it’s not difficult to see how a random content moderation employee would skim a tweet like the one flagged above, not recognize the context, the fact that it’s an attempt at satire, and flag it as a problem.

Thankfully, in this case, Twitter did correct it upon appeal, but it’s just another reminder of how many things tend to trip up content moderators, especially when they have to moderate a huge amount of content, and that satire and irony are among the categories such systems most frequently get wrong.

Companies: twitter


Comments on “Content Moderation Is Impossible: You Can't Expect Moderators To Understand Satire Or Irony”

92 Comments
That One Guy (profile) says:

Too good to miss

Of all the times not to include the ‘If you can’t read/see the tweet it says’ bit…

The ‘offending’ tweet in question:

‘If gay people are a mistake, they’re a mistake I’ve made hundreds of millions of times, which proves I’m incompetent and shouldn’t be relied upon for anything.’

And it’s not difficult to see how a random content moderation employee would skim a tweet like the one flagged above, not recognize the context, the fact that it’s an attempt at satire, and flag it as a problem.

No, it really is hard to see how someone could read that and not realize it was satire/humor. The only way it could have been more obvious is if they opened it with ‘THIS IS SATIRE’ in bold.

Anonymous Coward says:

Re: Too good to miss

Worse, Mike wrote they "had a problem with the tweet above", and all I could think was that they didn’t like the word "fuck". They actually had a problem with the tweet depicted in the image linked from the above tweet (which is a shitty way to reference a tweet; not everyone can read images).

One Cheeseburger Away From Keeling Over says:

Content Moderation Is Impossible? You do it right here!

Or are you today claiming Techdirt doesn’t moderate at all? Your official position is that it’s "the community". But I get confused because "Gary" (who’s actually minion Timothy Geigner, aka "Dark Helmet") keeps claiming Techdirt DOES moderate.

Anyhoo, my comments here get hidden, so by whatever system and by whoever, you must claim that Techdirt has sound practice, right? So why haven’t you brought your method to attention of these "platforms" which keep flailing? — You wouldn’t charge them for it, either, with your notions of not protecting ideas and "sharing". You also have the IN to get attention. So I’m mystified why the Masnick system isn’t in place…

Now, all ya got yourself here is another anomaly of no importance, but you won’t stick up for those with substantive political views who get arbitrarily "de-platformed", so what good are you?

YOU state (in rare declaration) that plaforms have a totally arbitrary RIGHT to do so:

"And, I think it’s fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone."

https://www.techdirt.com/articles/20170825/01300738081/nazis-internet-policing-content-free-speech.shtml

You’re NOT against the act in principle, if it’s taken against those you view as political opponents, like Alex Jones or "conservatives" even when well within common law terms: you’re okay if it’s for "hate speech". It’s ONLY when YOUR goals are being thwarted that you object.

One Cheeseburger Away From Keeling Over says:

Re: Content Moderation Is Impossible? You do it right here!

Since you not only have no philosophical objection to the meat ax approach, but STATE repeatedly that "platforms" have a right to do so entirely arbitrarily, why are you wringing your hands, yet again, over remarks you state are of no importance?

Clearly your goal here is to try and prevent legislation that would require "platforms" to "moderate" in a neutral way.

Of course you also have the notion that these "platforms" are fundamental and absolutely necessary, cannot be regulated until they DO come up with "possible" system, let alone shut down, even though they are proven to be against societal interests.

As always your real purpose with irrelevant anomaly is to guarantee corporate profits AND corporate arbitrary control of ALL speech.

Stephen T. Stone (profile) says:

Re: Re:

why haven’t you brought your method to attention of these "platforms" which keep flailing?

What works for one platform won’t necessarily work for another.

all ya got yourself here is another anomaly of no importance

One is an anomaly. Two is a coincidence. Three or more is a pattern. Considering how often bans/suspensions like this one happen on Twitter (and on other platforms), they are not anomalies.

you won’t stick up for those with substantive political views who get arbitrarily "de-platformed"

What political views, dear sir, are they being banned for expressing?

plaforms have a totally arbitrary RIGHT to do so

They do. If you ran a blog for your political beliefs and had open comments on it, you could ban commenters for expressing beliefs that run contrary to yours. Nothing could stop you from doing so.

You’re [for] the act in principle, if it’s taken against those you view as political opponents

Prove it.

common law

Courts haven’t ever ruled that a platform must be forced to host every kind of legal speech. And you haven’t provided an argument for why a platform should be forced to host speech. Your citation of “common law” means nothing here.

you’re okay if it’s for "hate speech". It’s ONLY when YOUR goals are being thwarted that you object.

Again: Prove it.

Since you not only have no philosophical objection to the meat ax approach, but STATE repeatedly that "platforms" have a right to do so entirely arbitrarily, why are you wringing your hands, yet again, over remarks you state are of no importance?

Because it shows an example of the impossibility of “perfect” content moderation at the scale of a site like Twitter. Given how people think moderation can be “perfected”, even at that scale, showing examples that prove otherwise offers a meaningful rebuttal to a worthless argument.

your goal here is to try and prevent legislation that would require "platforms" to "moderate" in a neutral way

For what reason should a Black Lives Matter forum be forced by law to host White supremacist propaganda for the sake of “content neutrality”?

you also have the notion that these "platforms" are fundamental and absolutely necessary

Once more with feeling: Prove it.

cannot be regulated until they DO come up with "possible" system, let alone shut down

Any government regulation of content moderation, up to and including the shutdown of a platform, would constitute a violation of the First Amendment.

your real purpose with irrelevant anomaly is to guarantee corporate profits AND corporate arbitrary control of ALL speech

…says the absolute asshole who wholeheartedly supports copyright maximalism, which would also guarantee corporate profits and arbitrary corporate control of all speech.

btr1701 (profile) says:

Re: Re: Re: Re:

Courts haven’t ever ruled that a platform must be forced to host every kind of legal speech. And you haven’t provided an argument for why a platform should be forced to host speech. Your citation of “common law” means nothing here.

Even less than nothing, because common law in the area of free speech has been entirely supplanted in the US by constitutional law. Common law literally has no application in issues concerning the 1st Amendment.

Rocky says:

Re: CDA 230?

Uhm, CDA 230 doesn’t require that content gets moderated. Where did you get that idea from?

What you fail to grasp is that without CDA 230, a site either can’t accept UGC or must moderate EVERYTHING posted, and as Mike pointed out in the article, moderation isn’t just hard to do in general, it’s impossible at scale.

Anonymous Coward says:

Re: Re: CDA 230?

It was an implied understanding that tech companies would not abuse 230 protection as they have. They use it as a sword, not a shield. RipoffReport.com is a good example.

UGC would still exist without 230 as it does in other countries, due to the notice requirement similar to the DMCA.

Moderation isn’t necessary if people would grow up and start filtering stuff, though this would reveal that it’s not that they don’t want to read what they can easily avoid, they just don’t want OTHERS reading it.

Sites which spread anti-vaxxer information should be held liable for measles outbreaks but aren’t. Sites which allow online mobs to spill over into real violence should also be liable. Sites which allow people to be harassed as well. 230 enables all the horrible things we see online, plus it means you can’t trust what you read, including advertising.

Anonymous Coward says:

Re: Re: Re:4 Ziiiippppp

I dunno – That One Guy would probably call it a bad habit, but I personally find it entertaining as hell.

It’s funny seeing how horse with no name has devolved from a concerned Prenda fanboy into the farcical travesty he’s become, worrying about Masnick’s wife.

That One Guy (profile) says:

Re: Re: Re:5 Ziiiippppp

Eh, depends on a couple of factors.

Pre-emptive troll baiting? Yeah, don’t do that; it’s bad enough when they show up on their own, no need to mention them before that. Comments like that I flag just the same as the troll comments, as that’s just spamming the comment section in an attempt to draw in someone to fill it with rubbish.

Beyond that… so long as people keep in mind that they’re giving the trolls the very thing they want (attention), avoid sinking to the troll’s level by doing nothing but slinging insults, and at least try to keep the responses productive to avoid back-and-forths consisting of basically ‘Yes it is!’/‘No it isn’t!’, then while I consider it a waste of time and not the best use of effort, everyone needs a hobby and whack-a-troll can be entertaining at times, so knock yourself out.

Gary (profile) says:

Re: Re: Re:4 Ziiiippppp

I know. There’s no point in debating him. Therefore I just content my self to bait him. Bad habit, I know.

That is a bad habit and you are a terrible person for treating such a notorious scammer like that. Jhon has suffered at the hands of everyone here who thinks his unsubstantiated claims are fucking hilarious. He should be free to lie and rip people off without such contempt.

Anonymous Coward says:

Re: Re: Re: CDA 230?

plus it means you can’t trust what you read, including advertising

You already couldn’t trust advertising before the Internet was a thing. Actually, ads on the Internet aren’t usually an issue for the conscientious user, because Adblock exists. Fuck if I’m letting your RIAA track me.

Anonymous Coward says:

Re: Re: Re: CDA 230?

"It was an implied understanding that tech companies would not abuse 230 protection as they have"

Perhaps you could explain your point of view, providing examples and describing any extrapolation.

In addition, there are several unsupported accusations in your post. Perhaps you could elucidate.

"Moderation isn’t necessary if people would grow up and start filtering stuff"

  • There would be no need for police or prisons if everyone obeyed the law … LOL, are you really that daft?

"Sites which spread anti-vaxxer information"

  • Which sites might that be, and how are they spreading anything? Third party liability is stupid and will cause all sorts of damage to innocent parties. Why are you so cavalier about it?

"Sites which allow people to be harassed"

  • Do you need a safe space?

"230 enables all the horrible things we see online"

  • How does 230 enable the horrible things I see coming out of the mouths of asshole politicians and their wise guys?

btr1701 (profile) says:

Re: Re: Re: CDA 230?

Sites which spread anti-vaxxer information should be held liable for measles outbreaks but aren’t.

Such an imposition of liability wouldn’t meet the Brandenburg test, so no.

Sites which allow online mobs to spill over into real violence should also be liable.

Also wouldn’t meet the elements in Brandenburg even if applied to the mob members and violent actors themselves, let alone the internet platform that is one level removed from them.

btr1701 (profile) says:

Re: Re: Re:2 CDA 230?

A good example is the word "literally", some use the word as though it meant "figuratively".

Including the dictionary. (see definition 4 below)

The ignorant have done to ‘literally’ what they did to the word ‘decimate’, which actually means ‘to reduce by 10%’. This would leave 90% of whatever is being described intact, which means it really isn’t a good word to use when describing something that’s been all but wiped out. We already have a perfectly good word for that: ‘annihilate’. But so many people, including our industrious authors here on TechDirt, incorrectly insist on using ‘decimate’ to mean ‘annihilate’ that the dictionaries have changed the definition of the word.

literally – adverb

  1. in the literal or strict sense:
    She failed to grasp the metaphor and interpreted the poem literally.
  2. in a literal manner; word for word: to translate literally.
  3. actually; without exaggeration or inaccuracy:
    The city was literally destroyed.
  4. in effect; in substance; very nearly; virtually:
    I literally died when she walked out on stage in that costume.

That One Guy (profile) says:

Re: CDA 230?

Uh… no? I honestly have no idea where you could have gotten the idea that 230 requires moderation rather than allows it, as all it really says is that if a platform decides to moderate the act of doing so does not make them liable for the content on their platform that they didn’t create/post.

At most it encourages moderation that wouldn’t have otherwise happened if a platform had to worry that moderating any content would open them up to liability for all of it, by making clear that whether a site moderates or not they still aren’t liable for what they don’t create.

Éibhear (profile) says:

Why was it flagged

It would be nice, though I don’t know how it could be done in a way that wouldn’t be roundly criticised, for Twitter to specify how such a tweet came to the moderator’s attention.

I ask this because I see communities on Twitter encouraging each other to report tweets that could only marginally be considered offensive or abusive.

When I see such a patently satirical tweet being blocked, I often wonder whether part of the transaction is the moderator trying to weigh (in the 4 seconds she has available to her) the large number of “reports” a tweet has received against the likelihood that the tweet hasn’t objectively broken the rules.

btr1701 (profile) says:

Re: Re: Re:

When he says "You have a basic human right to your opinion… unless you disagree with me", he’s talking about Twitter users, not Twitter itself.

Many users and their followers make sport out of reporting anything that people they don’t like say. And if a tweet has 1500 complaints, the suspicion is the moderators don’t read it at all and just assume there must be something wrong with it if so many people have reported it.

PaulT (profile) says:

Re: Re: Re: Re:

"When he says "You have a basic human right to your opinion… unless you disagree with me", he’s talking about Twitter users, not Twitter itself."

That makes even less sense. Yeah, when you’re exposing yourself to a global audience, some people are going to tell you you’re a dick and to stfu. Some people might even genuinely wish to silence you. But, that’s human nature, not people violating your rights.

"Many users and their followers make sport out of reporting anything that people they don’t like say"

Yeah, that’s wrong. But, speaking as someone who pirated a lot of VHS tapes back in the day because "moral guardians" decided they shouldn’t be legally allowed, it’s hardly new. If you deal with the public some will be assholes, and if you’re big some will try to destroy you. Twitter could deal with this better, but at the end of the day this is like some guy getting you barred from a pub because he started a fight with you. It sucks, it shouldn’t happen, but it’s your own fault if that’s the only pub in your life.

Chris Clark says:

I find it laughable to see Twitter trying to enforce their "hate speech" policy. Every day, I see a lot of the high-profile people who were de-platformed elsewhere getting to keep their Twitter accounts active. Just like with Crowder on YouTube, if you have a lot of followers, you get the "rules for thee, not for me" treatment, because social media sites like the ad revenue they get by allowing known violators to keep their accounts until, possibly, public outrage rises to the point that they have to backpedal and come up with another reason that the account now violates policy. The platform will save face and say, "They totally didn’t violate policy before, but now they do, so they get the Über-ban."

PaulT (profile) says:

Re: Re: Re:

…and even if that were possible, it would be impossible to end up with moderated content that everybody in the world who saw it would find equally acceptable.

As evidenced by the Crowder case – some people are complaining that he didn’t really violate T&Cs so should never have been banned. Others are complaining that he was obviously breaking them for a long time and it’s unacceptable that it’s taken this long. Whatever they do or don’t do, someone will have an issue with it.

btr1701 (profile) says:

Re: Re: Re: Re:

As evidenced by the Crowder case – some people are complaining that he didn’t really violate T&Cs so should never have been banned.

More accurately, they’re complaining that he was demonetized when YouTube itself admitted that he hadn’t violated any of its rules; that it was done merely to placate a vocal critic with a significant platform from which he could shout at them.

That One Guy (profile) says:

Re: Re: Re:2 Re:

More accurately, they’re complaining that he was demonetized when YouTube itself admitted that he hadn’t violated any of its rules

… after the first pass, but as this article demonstrates quite nicely, just because a platform makes that initial judgement doesn’t necessarily mean they were right (or actually put any effort into the initial review).

If the finding from the original pass is to be treated as The Word Of God (pun absolutely intended), then that would mean that the tweet discussed in this very article was correctly found to be in violation of the rules, and that it was a mistake for the company to reverse course after it was brought to their attention, because the initial ruling couldn’t possibly have been wrong.

Also, if you’re going to make the ‘he didn’t actually violate the rules, YT was just pressured into bringing the hammer down on him’ argument, then it would seem trivial to turn that right around and suggest that the reason he wasn’t found in violation on that first pass is that he runs a popular channel and YT didn’t want to give the boot to someone who was good for business, violations or not, and that it was only when public backlash started costing them more than they got from him that they demonetized him.

PaulT (profile) says:

Re: Re: Re:3 Re:

Exactly. There’s no right to use YouTube; they can kick you off for whatever reason as long as it’s not discrimination against a protected class. T&Cs are just the excuse, and one isn’t really needed. He can get back every penny he paid to host his videos there if he wants, lol. PR might mean they can’t just say "too many people think you’re an asshole", but if the reason’s just "our support team are tired of being spammed by people who hate you", there’s no reason he shouldn’t be kicked off.

The good news is that instead of just whining, there should be enough people now motivated to create a competitor to YouTube who are OK with putting up with his shit, and he can take similarly minded people with him. Let’s see if these guys actually do that or if they just want to violate YouTube’s freedom of association.

btr1701 (profile) says:

Re: Re: Re:4 Re:

There’s no right to use YouTube

I never claimed there was. YouTube is, however, a business, and it’s perfectly legitimate and acceptable to point out when they’re being asshats to their customers and/or failing to apply the rules they themselves have established in a consistent manner.

When a restaurant gives a diner shitty service and the customer complains about it on Yelp or somewhere like that, no one says to the customer, "Well, there’s no right to eat at that diner so you shouldn’t bitch about it." But for some reason when it comes to social media platforms, the fact that you don’t have a right to use them means any complaints you have about how they treat you and others are somehow inappropriate.

PaulT (profile) says:

Re: Re: Re:5 Re:

Yes, and both establishments have the right to show you the door. If you feel they are being unfair, by all means express that; perhaps even organise to get others to join you in a boycott, or at least choose an appropriate time and method to complain to management.

The issue isn’t those people, it’s the people who believe they have a right to stay even after being told that they’re disturbing everyone else’s meal. Even if you’re ultimately in the right, unless they’re breaking other laws by asking you to leave, it’s their prerogative to do so.

Zof (profile) says:

Gentle Reminder

The same twitter that determined "learn to code" was nazi hate speech. Yeah. Those morons. People should just leave their echo chamber. How many times does Twitter have to be wrong and humiliate itself? I love how people are just suing now. Vic Mignogna got sick of it. He’s suing every media outlet that slandered him and is winning. I hope that becomes a popular trend. We need to take out the Gawkers and other trash.

PaulT (profile) says:

Re: Gentle Reminder

"The same twitter that determined "learn to code" was nazi hate speech"

Did they? I can’t keep track of the whining, there’s so much of it. If true, maybe the guys who were doing that can code their own platform so that they don’t fall foul of the one they were temporarily allowed to use for free. Or, at least, understand that there are a lot of other platforms that already exist and the best way to react would be to use one.

Anyway, you’re pretty dumb if you believe either that there was a human being involved in that action or that it’s possible to moderate a platform with 100% accuracy. Suing Twitter will not change this.

ANON says:

Lost in Translation

I recall a book, "If At All Possible, Involve a Cow," which details practical jokes over the years. It mentioned the story of a set of university recycling bins labelled "white paper" and "colored paper". Someone replaced the latter label with "paper of color", which elicited this note from administration: "If this is a joke I find it in poor taste; if someone has an actual problem come see me."

The book noted the pithy observation:

"Humor is lost on some people."
