It's Not Personal: Content Moderation Always Involves Mistakes, Including Suspending Experts Sharing Knowledge

from the it-happens dept

I keep pointing out that content moderation at scale is impossible to do well. There are always going to be mistakes. And lots of them. We’ve spent years highlighting the many obvious mistakes that websites are going to make every day as they try to make moderation decisions on thousands, hundreds of thousands, or even millions of pieces of content. It’s completely natural for those who are on the receiving end of obviously bogus suspensions to take it personally — though there does seem to be one group of people who have built an entire grievance complex on the false belief that the internet companies are targeting them specifically.

But if you look around, you can see examples of content moderation “mistakes” on a daily basis. Here’s a perfect example. Dr. Matthew Knight, a respiratory physician in the UK, last week tweeted out a fairly uncontroversial statement about making sure there was adequate ventilation in the hospitality industry in order to help restart the economy. At this point, the scientific consensus is very much that good ventilation is absolutely key in preventing COVID transmission, and that the largest vector for superspreader events is indoor gatherings with inadequate ventilation. As such, this tweet should be wholly uncontroversial:

And yet… despite this perfectly reasonable tweet from a clearly established expert, Twitter suspended his account for “spreading misleading and potentially harmful information related to COVID-19.” It then rejected Dr. Knight’s appeal.

Thankfully it appears that Twitter eventually realized its mistake and gave Dr. Knight his account back. Lots of people are (understandably) asking why Twitter is so bad at this, and it’s a fair enough question. But the simple fact is that the companies are all put in an impossible spot. When they weren’t removing blatant mis- and disinfo about COVID-19, they were getting slammed by plenty of people (also for good reason). So they ramped up the efforts, and it still involves a large group of (usually non-expert) reviewers having to make a huge number of decisions very quickly.

There are always going to be mistakes. As Harvard’s Evelyn Douek likes to note, content moderation is all about error rates. Each choice you make is going to have error rates. The biggest questions are what kinds of errors are preferable, and how many are you willing to deal with. Should the focus be on minimizing false positives? Or false negatives? Or somehow trying to balance the two? And the answers to that may vary given the circumstances and may change over time. But one thing that is clear is that no matter what choices are made, mistakes inevitably come with them, because content moderation at scale is simply impossible to do well.
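To make that trade-off concrete, here’s a deliberately oversimplified sketch. Every score and label below is invented for illustration; this is not how Twitter or anyone else actually scores content. It just shows how moving a removal threshold trades one kind of error for the other:

```python
# Purely illustrative: a toy "moderation score" threshold sweep showing that
# lowering the removal threshold trades false negatives for false positives.
# All numbers are made up; this is not any real platform's scoring system.

# (score the classifier assigned, whether the post actually violates policy)
posts = [
    (0.95, True), (0.90, True), (0.80, False),  # an expert post wrongly scored high
    (0.70, True), (0.60, False), (0.40, True),
    (0.30, False), (0.20, False), (0.10, False), (0.05, False),
]

def error_rates(threshold):
    """Return (false positive rate, false negative rate) at a given threshold."""
    n_ok = sum(1 for _, violates in posts if not violates)
    n_bad = sum(1 for _, violates in posts if violates)
    fp = sum(1 for score, violates in posts if score >= threshold and not violates)
    fn = sum(1 for score, violates in posts if score < threshold and violates)
    return fp / n_ok, fn / n_bad

for t in (0.25, 0.5, 0.75):
    fpr, fnr = error_rates(t)
    print(f"threshold {t:.2f}: {fpr:.0%} of fine posts removed, "
          f"{fnr:.0%} of violating posts missed")
```

Drop the threshold and you catch more genuinely bad posts but sweep up more perfectly fine ones, like Dr. Knight’s; raise it and the reverse happens. There is no setting that gets both to zero.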

Companies: twitter


Comments on “It's Not Personal: Content Moderation Always Involves Mistakes, Including Suspending Experts Sharing Knowledge”

37 Comments
Travis says:

The appeal rejection is the big problem

I can understand over-enthusiastic enforcement for dis/mis-info. It’s to be expected in the current climate. What I find troubling is the seemingly blanket rejection of appeals that all the social media companies are doing for “sensitive” subjects. If you appeal the automated moderation, it should be reviewed by a person. I don’t know if the rejections are literally automatic, or if the reviewers just reject everything, but many people have reported the rejection emails being near-instantaneous. If they are being “reviewed” by people, they must be paid per review or something.

Anonymous Coward says:

Re: The appeal rejection is the big problem

If you appeal the automated moderation, it should be reviewed by a person.

Nice idea, but difficult to do because of the number of appeals. You can’t allow a backlog to start, as it will grow without limit, and once the backlog starts it is too late to hire more people because it will continue to grow while you hire and train them.

Anonymous Coward says:

Re: Re: You still can't win, but that isn't the reason.

it will continue to grow while you hire and train them.

Point of order: You don’t just hire/train enough to handle the current rate, you hire enough to also dispose of the backlog.

The "excess judges" will eventually be absorbed as the appeals count increases. So yes, it can be done.

The issue isn’t "you can’t hire enough judges". The issue is "you can’t afford to hire enough judges".

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re: You still can't win, but that isn't the reason.

The problem is not as simple as "hire more moderators": you also need the buildings, managers, equipment, and support personnel to support them. Meanwhile it is taking longer and longer for an appeal to reach moderation. At the scale of Twitter and Facebook et al., it is not a case of hiring a few more people, but of hiring several thousand more and trying to train them before the problem grows even larger.

Overall, the problem is not how does Facebook moderate its users, but rather how do you moderate the whole of humanity.
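As a rough illustration of the back-of-the-envelope math in this thread, here is a toy queue simulation. Every number in it is invented, including the hypothetical intake rate, per-moderator throughput, and training lag; it simply shows how a backlog grows once appeals arrive faster than trained reviewers can clear them:

```python
# Toy queue model of the point made above: if appeals arrive faster than the
# review team can clear them, the backlog grows without bound, and new hires
# take weeks of training before they help. Every number here is invented.

APPEALS_PER_DAY = 12_000          # hypothetical intake
REVIEWS_PER_MOD_PER_DAY = 200     # hypothetical per-moderator throughput
TRAINING_DAYS = 30                # new hires contribute nothing until trained

def simulate(days, starting_mods, hires_per_day):
    backlog = 0
    trained = starting_mods
    pipeline = [0] * TRAINING_DAYS  # hires still in training, oldest first
    for day in range(days):
        trained += pipeline.pop(0)          # today's graduates start reviewing
        pipeline.append(hires_per_day)      # today's new hires enter training
        capacity = trained * REVIEWS_PER_MOD_PER_DAY
        backlog = max(0, backlog + APPEALS_PER_DAY - capacity)
        if day % 30 == 29:
            print(f"day {day + 1:3}: {trained} trained mods, backlog {backlog:,}")
    return backlog

simulate(days=180, starting_mods=40, hires_per_day=1)
```

With these made-up numbers the backlog keeps climbing for weeks even though hiring starts on day one, because nobody hired today reviews anything until they finish training.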

This comment has been flagged by the community.

Koby (profile) says:

They Hope You Will Forget

Lots of people are (understandably) asking why Twitter is so bad at this, and it’s a fair enough question. But the simple fact is that the companies are all put in an impossible spot.

It may be impossible to moderate at scale, but there will not be any improvement or accountability until the system stops being so opaque. Publish the algorithm, and explain why this one got censored.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: They Hope You Will Forget

Publish the algorithm, and then you’ll have people manipulating it to the point of worthlessness.

Don’t publish it, and then you have people claiming it’s biased in some way.

There’s no perfect solution.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Rigid and specific rules, a troll's wet dream

‘As the specific rules that were forced upon platforms in the name of ‘clarity’ note, saying the word ‘Green’ will get a comment flagged and removed; however, I very clearly did not say ‘Green’, I said ‘the primary color of plants on the planet Earth’, which is not prohibited, and therefore the removal of my comment is against the rules and it deserves to be put back in place until they are further clarified to cover my comment.’

Anonymous Coward says:

Re: Strict liability

In cases such as Dr. Knight’s, there should be strict liability for libel. Claiming untruthfully that a doctor of his credentials is "spreading misleading and potentially harmful information related to COVID-19" is libel per se, and considering the source of the libel, should be cause for substantial punitive damages as well.

christenson says:

The problem *is* the scale...

In any large ecosystem that supports any level of controversy, what to moderate out starts to depend heavily on which audience member happens to be looking.

For example, the mythical average Techdirt commenter hates what Koby or OOTB says, flags it as boring, and doesn’t want to hear it… but I’m pretty sure one of Mike Masnick’s friends has been studying those very same comments.

and that’s before we get to the idea that we can’t expect a large platform with huge numbers of commenters on X to also have huge numbers of experts on X, for all controversies X.

If Twitter (or Facebook) wants to do better, it’s going to need to establish some trusted public figures… and good luck, because we collectively can’t decide whether to trust Donald Trump or not, even if every Techdirt poster comes to the same decision.

That Anonymous Coward (profile) says:

"It’s completely natural for those who are on the receiving end of obviously bogus suspensions to take it personally"

I feel seen. GLARES

Another metric: how many suspensions go over 100 days with no response. There are too many competing goals in the mix & it’s making things much worse.

In response to a video of someone being ‘kidnapped’ for a party (the victim did not know until later that it was a prank), a friend of mine said that if someone grabbed her like that she would punch him.
12-hour timeout, having to remove the tweet, because it was wishing pain or harm on someone.
(HI KAT!)

"If someone assaults me I would hit them." – Banable tweet treated on the same level as someone threatening harm to all the Jews.

Zero tolerance policies still are bad ideas.
More timeouts for people threatening hypothetical imaginary people (who aren’t part of one of the 10K protected groups on twitter) doesn’t make the platform better or "safer".

The CoVid Misinfo AI is bugged. Senator Anti-Vaxx is still promoting all of his crazy ideas (you can still see them on the platform), while some sociopath who mocks morons who think there is a chip in the vaccines has the tweet hidden and is over 100 days in TwitMo waiting for assistance. I’ve offered to delete the tweet, but prying a phone number from my poor hands matters more than anything else, it seems.

I guess if you use a trendy avatar, call yourself an immortal sociopath, you are held to higher standards than members of Congress.

christenson says:

Re: Re:

@TAC, the problem is, let me take your "mistake" and now hold it up to ridicule with a word…

How you gonna teach an AI to know the difference between

"BS, Vaxxes work!"
and
"Vaxxes work!"

All it took was a word, or a small difference in context, and the meanings are opposite. It ain’t gonna happen!
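A deliberately dumb keyword rule makes this point concrete. The hypothetical filter below is not a claim about Twitter’s actual system; it simply shows that phrase matching flags the rebuttal and the mockery just as readily as the misinformation itself:

```python
# Toy illustration: a naive rule that flags any tweet containing a "debunked
# claim" phrase cannot tell endorsement from rebuttal. This is a deliberately
# dumb filter, not a description of how any real platform works.

FLAGGED_PHRASES = ["vaccines don't work", "there is a chip in the vaccine"]

def naive_filter(tweet: str) -> bool:
    """Flag the tweet if it contains any listed phrase, regardless of context."""
    text = tweet.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

tweets = [
    "Vaccines don't work, wake up!",                       # actual misinfo
    "People who say vaccines don't work are flat wrong.",  # a rebuttal
    "Imagine believing there is a chip in the vaccine.",   # mockery of misinfo
]

for t in tweets:
    print(naive_filter(t), "->", t)
# All three get flagged: the one word of context that flips the meaning
# is invisible to the rule.
```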

Anonymous Coward says:

Re: Re:

Sure they do. But as soon as they do, the groups they want to block change their strategy again. And again. And again.

And now you’re 15 AIs in, and the 3rd "mistake" is now being used by an entirely unrelated group for entirely unrelated reasons, and the 2nd and 7th directly contradict the 11th and 12th respectively, and the 9th is actually being simultaneously used by three different "sides" of the issue, and the 14th shouldn’t have been included at all because the human moderators messed it up…

and your AI remains just as useless as before

SpaceLifeForm says:

Bad Example of something that is purported to be moderation

https://www.theregister.com/2021/06/01/google_usenet/

The Usenet group comp.lang.tcl vanished from Google Groups for several hours before being restored on Tuesday.

Google took over the Usenet archive when it acquired Deja News in 2001.

Almost a year ago, comp.lang.forth and comp.lang.lisp were also removed from Google Groups. And they remain unavailable. Likewise, comp.lang.c and comp.lang.python are currently inaccessible via Google Groups.

The suppression of Usenet groups has cultural, academic, and technical consequences. Some active systems, for example, still rely on Forth.

Lily May says:

I agree with your analysis that moderation on the scale of the world is impossible to do right. But I can’t support your conclusion that it’s not personal, that moderation is always going to suck, and that people should accept it as being as inevitable as death and taxes. They should do it at a scale that they can do right.

If they’re doing too much moderation to do it right, then they’re doing too much moderation. At the point where the issue stops being humans disagreeing on what should be moderated (which truly is inevitable) and becomes instead humans being too busy to moderate so they let dumb algorithms auto-ban people and auto-deny appeals because it’s easier than doing their jobs, then they are moderating too much.

Despite what some seem to think, people saying stupid things on Twitter is not the end of the world. It’s not worth it to randomly ban normal people just for discussing a controversial topic in an effort to try to silence some conspiracy nuts’ inane BS.

Stephen T. Stone (profile) says:

Re:

They should do it at a scale that they can do right.

They literally can’t — not without spending far more money and hiring far more people than should ever be necessary for a moderation team.

The whole point of saying “moderation doesn’t scale” is to point out how moderating a small community is always going to be easier than moderating a large community. If you’re moderating a group of a couple dozen people or so, you’re gonna have an easier time of telling people to knock their bullshit off — mostly because a community that small will have its own quirks and contexts under which it can be moderated. But a community of a couple…oh, let’s say a couple hundred thousand people will require far more moderators and far harder calls to make because those people can branch into subcommunities and develop their own quirks and contexts that won’t parse universally. The ability to moderate all those people doesn’t scale well when compared to the ability to moderate a much smaller amount of people.

Even people who moderate small communities never get it right 100% of the time. How do you expect a company/service/“community” as large as Twitter to get it perfect?

Lostinlodos (profile) says:

The problem isn’t so much the moderation as the automated rejection of appeals and the blank, generic explanation of “why”.

TOS is nothing more than local civil law, at its most basic construct. When you are to be punished for breaking the law, you should be informed of exactly what you did. Not a generic ‘section 4 subsection b’ but the actual infraction:
“Posting false information, in your case ‘C19 is not contagious’”

Appeals should not be automated. If the poster can show a source, then the content, and the account, should be restored and marked as “disputed” once the poster adds a supporting link as an edit.

All of this can be avoided by a TOS clause saying “we can ban you at any time for any reason.” You’re using private property.
When such a clause exists, there’s nothing to do.
When it doesn’t, and a ban is based on violations, those violations need to be spelled out.
