Masnick's Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well

from the a-little-philosophy dept

As some people know, I’ve spent a fair bit of time studying economist Kenneth Arrow, whose work on endogenous growth theory and information economics influenced a lot of my thinking on the economics of innovation in a digital age. However, Arrow is perhaps best known for what’s generally referred to as Arrow’s Impossibility Theorem, which could be described most succinctly (if not entirely accurately) as arguing that there is no perfect voting system that adequately reflects the will of the public: no matter which voting system you choose, it will have some inherent unfairness built into it. The Wikipedia summary (linked above) is not the best, but if you want to explore it in more detail, I’d recommend this short description or this much longer description.

I was thinking about that theory recently, in relation to the ever present discussion about content moderation. I’ve argued for years that while many people like to say that content moderation is difficult, that’s misleading. Content moderation at scale is impossible to do well. Importantly, this is not an argument that we should throw up our hands and do nothing. Nor is it an argument that companies can’t do better jobs within their own content moderation efforts. But I do think there’s a huge problem in that many people — including many politicians and journalists — seem to expect that these companies not only can, but should, strive for a level of content moderation that is simply impossible to reach.

And thus, throwing humility to the wind, I’d like to propose Masnick’s Impossibility Theorem, as a sort of play on Arrow’s Impossibility Theorem. Content moderation at scale is impossible to do well. More specifically, it will always end up frustrating very large segments of the population and will always fail to accurately represent the “proper” level of moderation of anyone. While I’m not going to go through the process of formalizing the theorem, a la Arrow’s, I’ll just note a few points on why the argument I’m making is inevitably true.

First, the most obvious one: any moderation is likely to end up pissing off those who are moderated. After all, they posted their content in the first place, and thus thought it belonged wherever it was posted — so will almost certainly disagree with the decision to moderate it. Now, some might argue the obvious response to this is to do no moderation at all, but that fails for the obvious reason that many people would greatly prefer some level of moderation, especially given that any unmoderated area of the internet quickly fills up with spam, not to mention abusive and harassing content. There is the argument (that I regularly advocate) that pushing out the moderation to the ends of the network (i.e., giving more controls to the end users) is better, but that also has some complications in that it puts the burden on end users, and they have neither the time nor inclination to continually tweak their own settings. No matter what path is chosen, it will end up being not ideal for a large segment of the population.

Second, moderation is, inherently, a subjective practice. Despite some people’s desire to have content moderation be more scientific and objective, that’s impossible. By definition, content moderation is always going to rely on judgment calls, and many of the judgment calls will end up in gray areas where lots of people’s opinions may differ greatly. Indeed, one of the problems of content moderation that we’ve highlighted over the years is that to make good decisions you often need a tremendous amount of context, and there’s simply no way to adequately provide that at scale in a manner that actually works. That is, when doing content moderation at scale, you need to set rules, but rules leave little to no room for understanding context and applying it appropriately. And thus, you get lots of crazy edge cases that end up looking bad.

We’ve seen this directly. Last year, when we turned an entire conference of “content moderation” specialists into content moderators for an hour, we found that there were exactly zero cases where we could get all attendees to agree on what should be done in any of the eight cases we presented.

Third, people truly underestimate the impact that “scale” has on this equation. Getting 99.9% of content moderation decisions at an “acceptable” level probably works fine for situations when you’re dealing with 1,000 moderation decisions per day, but large platforms are dealing with way more than that. If you assume that there are 1 million decisions made every day, even with 99.9% “accuracy” (and, remember, there’s no such thing, given the points above), you’re still going to “miss” 1,000 calls. But 1 million is nothing. On Facebook alone a recent report noted that there are 350 million photos uploaded every single day. And that’s just photos. If there’s a 99.9% accuracy rate, it’s still going to make “mistakes” on 350,000 images. Every. Single. Day. So, add another 350,000 mistakes the next day. And the next. And the next. And so on.
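To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The 350 million/day figure and the 99.9% rate are the illustrative numbers from above; everything else is assumption, since no platform actually has a single "accuracy" number:

    # Absolute number of wrong moderation calls per day at a given volume and accuracy.
    # Purely illustrative -- "accuracy" is not a real, single number for any platform.

    def daily_mistakes(decisions_per_day: int, accuracy: float) -> int:
        return round(decisions_per_day * (1 - accuracy))

    print(daily_mistakes(1_000, 0.999))        # ~1 mistake/day on a small forum
    print(daily_mistakes(1_000_000, 0.999))    # ~1,000 mistakes/day
    print(daily_mistakes(350_000_000, 0.999))  # ~350,000 mistakes/day, for photos alone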

And, even if you could achieve such high “accuracy,” with so many mistakes it wouldn’t be difficult for, say, a journalist to go searching and find a bunch of them — and point them out. This will often come attached to a line like “well, if a reporter can find those bad calls, why can’t Facebook?” which leaves out that Facebook DID catch the other 99.9%. Obviously, these numbers are just illustrative, but the point stands that when you’re doing content moderation at scale, the scale part means that even if you’re very, very, very, very good, you will still make a ridiculous number of mistakes in absolute numbers every single day.

So while I’m all for exploring different approaches to content moderation, and see no issue with people calling out failures when they (frequently) occur, it’s important to recognize that there is no perfect solution to content moderation, and any company, no matter how thoughtful and deliberate and careful, is going to make mistakes. Because that’s Masnick’s Impossibility Theorem — and unless you can disprove it, we’re going to assume it’s true.



Comments on “Masnick's Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well”

109 Comments
This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Great article!

I always point this out to people who want Facebook/Twitter/Google to “do something about X”. Not because it shouldn’t be done but because it can’t be done. And yet all those platforms are criticized when they don’t take action, or when they do take action and still produce some false positives.

Damned if you do, damned if you don’t.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

That’s a ridiculous question. Only a pedo would have a problem with that. And we already have laws ("blanket bans") against it.

Mike’s point is that even if FB takes down 99.9% of such photos they’re still going to miss some. Just because you haven’t seen the missed photos does not mean others haven’t nor that they do not exist.

Anonymous Coward says:

Re: Re: Re:

Mike’s point is that even if FB takes down 99.9% of such photos they’re still going to miss some.

That’s still better than not even trying, isn’t it? No laws or regulations are 100% effective. Should health inspectors stop inspecting restaurants just because they don’t catch every problem?

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: Re:

"That’s still better than not even trying, isn’t it? "

Yes it is, which is why it’s important to encourage that. Whereas, the people who promote removing section 230 will be telling them it’s easier to not bother at all than be held legally liable for the small amount they happen to miss.

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: Re: Re: Re:

"That’s still better than not even trying, isn’t it?"

Not when "seriously trying" results in actual harm far greater than the potential benefits.

We could win the war on drugs tomorrow. All we need to do is abolish Habeas Corpus and Corpus Delicti.

"It’s still better than not trying", right?

Content moderation, if you go past a very well-defined boundary, becomes "government censorship" or "overblocking," which not only means you still don’t get everything blocked that you wanted to block, but you lose Free Speech in the process of the attempt. As collateral damage. Oops.

Anonymous Coward says:

Re: Re:

On a gut instinct level I have no problem banning photographs of naked children on the internet. I find them icky. I also think it invades the child’s privacy.

Philosophically I can think of lawful ways to obtain and use the photograph.

If someone tried to put the ban in place I wouldn’t actually oppose it.

Scary Devil Monastery (profile) says:

Re: Re: Re:

"On a gut instinct level I have no problem banning photographs of naked children on the internet. I find them icky. I also think it invades the child’s privacy. "

And there went any journalism covering events that caused harm or hurt to victims and that included undressed children. As collateral damage.

There are plenty of photographs which depict outright revolting and upsetting imagery – which still NEEDS to be available, because if not, the public will be left to assume that nothing is wrong while innocent people suffer horrible fates.

This comment has been deemed insightful by the community.
Wendy Cockcroft (profile) says:

Re: Re: Re: Re:

I see we both thought of that naked girl running from a napalm attack during the Vietnam war. That helped to turn the tide of public opinion against the war and put an end to it.

While I tend to be a bit of a prude I also believe a nuanced approach is better than a blanket ban. CP only exists because some people are evil and like abusing kids because it’s more difficult for them to assert themselves. It’s a power thing. Bearing that in mind, there’s a hell of a difference between a snap of Li’l Danny on the potty (his parents can embarrass him in front of his girlfriend when he’s older) and an explicit or suggestive pose.

As some people have correctly pointed out there’s a great deal of unwarranted panicking about this instead of careful thought and consideration, not to mention a good dollop of common sense. Blanket bans may be easier as less thought and consideration is required to decide whether it’s "good" naked or "bad" naked, but they would, as Scary Devil Monastery pointed out, suppress important news and cultural items. Old Masters paintings featuring Putti (naked baby angels) would be banned too, you know.

It’s the subjectiveness that makes moderation so hard. It’s not as hard-and-fast as some people seem to think.

Scary Devil Monastery (profile) says:

Re: Re: Re:2 Re:

"While I tend to be a bit of a prude I also believe a nuanced approach is better than a blanket ban. CP only exists because some people are evil and like abusing kids because it’s more difficult for them to assert themselves."

Actually I would say that for some 99% of the population CP doesn’t exist at all, because that’s the proportion of people who simply don’t find children sexually attractive.

"there’s a hell of a difference between a snap of Li’l Danny on the potty (his parents can embarrass him in front of his girlfriend when he’s older) and an explicit or suggestive pose."

There SHOULD be. Except that even Li’l Danny on the potty is fap material for SOME fetishist out there. In some jurisdictions, where the criterion for what constitutes CP now relies on whether anyone could find the image arousing, it has become fairly dangerous to own ANY imagery of your offspring.

"As some people have correctly pointed out there’s a great deal of unwarranted panicking about this instead of careful thought and consideration, not to mention a good dollop of common sense."

Common sense doesn’t even get a seat in this debate. What is arguably worse is that much of the "panicking" isn’t. A Swedish watchdog a few years ago made an investigation into the recent spate of "Anti-CP" legislation issued and came to the conclusion that most of the preparatory work justifying the new legislation was originally written by an American right-wing NGO as part of their strategy to reduce premarital intercourse among teenagers.

To me, the main issue with the ever-recurring hyperbole around CP isn’t that the main argument in favor of overreaching surveillance comes from a panicking population and politicians, but that it’s almost invariably used as a wedge to undermine legal protection against something quite different.

The copyright cult, for instance, has used CP as part of their rhetoric as to why ubiquitous surveillance of data communication should be necessary. One of them, Johann Schlüter, of the Danish IFPI, was even on record stating that CP was great, because it offered every justification they needed, and the very word stifled almost any criticism.

Today, every time I hear "For the children" or similar, I assume there’s someone trying to undermine common jurisprudence while linking their unacceptable proposal to a topic so toxic no one has the courage to oppose them.

Anonymous Coward says:

Re: Re: Re:3 Re:

A Swedish watchdog a few years ago made an investigation into the recent spate of "Anti-CP" legislation issued and came to the conclusion that most of the preparatory work justifying the new legislation was originally written by an American right-wing NGO as part of their strategy to reduce premarital intercourse among teenagers.

Can you remember who that was or where you found the paper? I can well believe it, but real evidence would be a fascinating read.

I’d seen the Schlüter quote before, in an article by Falkvinge years ago.

This comment has been deemed insightful by the community.
christenson says:

Re: Blanket bans on naked children

Why yes, I do have a problem with such a blanket ban.

What of the historically important photo of a naked, napalmed girl running from her village?

[And thus I reinforce Mike’s point: Whether I am OK with that photo depends on what mood I’m in, and how I got there, read my mind please, Mike!]

Anonymous Coward says:

Re: Re:

“Do you have a problem with blanket bans on things like posting pictures of naked children?”

Yes bro actually I do. There are plenty of times children have been photographed nude that are 100% innocent. Including the one mentioned before but also tons of family photos have a kid or three with their drawers down.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re:

"Do you have a problem with blanket bans on things like posting pictures of naked children?"

Define "naked". Define "children". Do they have to be photos, or is Renaissance cherubic art also banned?

"I believe Facebook has that in place and they seem to do a pretty good job moderating those kinds of photos"

They’re also notorious for over-censoring perfectly innocent content, including things that have nothing to do with the subject they were supposedly censored for.

Scary Devil Monastery (profile) says:

Re: Re: Re:

"Define "naked". Define "children". Do they have to be photos, or is Renaissance cherubic art also banned?"

Worse. I’m sure there’s SOME sick puppy out there turned on by the image of a naked child bleeding out in the streets in the aftermath of some riot or uprising. Who gets to confiscate the journalist’s camera and based on what criteria?

Content moderation is tricky because what is Excellent Journalism to some is Objectionable and Upsetting to others…and always Fap Material to at least some deeply disturbed individuals.

Wendy Cockcroft (profile) says:

Re: Re: Re: Re:

It’s the fapping, both real and imaginary, that’s the problem where the censorious are concerned.

I’m sure you’re all well aware that the more sexually repressive the population in a given area is, the higher the rate of porn watching taking place.

By that metric, addressing puritanical attitudes would go a long way towards solving the problem.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:5 Re:

"My scale is the crap I voluntarily allow on my personal computers so it’s easy."

Why is your scale OK but another person’s must be censored?

"The line is what I find to be icky or unduly risky."

What if you find something icky that I find acceptable? Why are you better than me?

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re:5 Re:

The line is what I find to be icky or unduly risky.

What’s amusing is that Facebook’s very first content policy person, Dave Willner, stated publicly that for the first few years of Facebook’s existence that literally was Facebook’s content policy: "Take down what we find icky." What he realized, though, was that that does not scale and does not work.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:6 Re:

"What he realized, though, was that that does not scale and does not work."

Not only does it not scale, it doesn’t work even if there are just two people involved, depending on the subject matter and how extreme the other party is. No matter what opinion you have on something you find utterly benign, someone out there will find it offensive.

Scary Devil Monastery (profile) says:

Re: Re: Re:5 Re:

" My scale is the crap I voluntarily allow on my personal computers so it’s easy. The line is what I find to be icky or unduly risky. I have managed IT and IT problems for charities and businesses but not to the extent I would be liable for anything on them."

What you find icky or unduly risky may not be what the law assumes to be outright illegal today. There are examples from multiple jurisdictions where possession of cartoons no sane person would consider erotic or even suggestive was deemed actual CP.

Nudes of Asian models in their 30s have been deemed depictions of teens and thus CP.

And let’s not start about art. Never keep anything from Picasso on your PC is all I’m saying.

As for "liability", multiple jurisdictions have now watered down legal protection to the point where in many cases, if the IT guy does find anything objectionable, THEY will by legal perversity become the first suspect the police must investigate in depth. Even if you come out with a clean bill of health having your record permanently stained with "investigated for possession and distribution of CP" isn’t a good thing.

Wendy Cockcroft (profile) says:

Re: Re: Re:4 Re:

Which, as I stated earlier, is the problem.

Fappers gonna fap to something. If the rest of us find it innocent, so be it.

The question should always be, "What’s the harm of allowing this image to be displayed?"

  • potential embarrassment of subject
  • encourages others to think badly of the subject
  • encourages violence or unfavourable actions or attitudes towards individuals or groups
  • controversial
  • graphic depiction of sex act or position or torture or gore that may cause distress to viewers

Followed by "What’s the good of allowing this image to be displayed?"

  • newsworthy
  • artistic
  • sentimental value
  • chronicle of event
  • instructional

Using these metrics ought to enable any reasonable person to tell the difference between what is or isn’t generally acceptable. I know we wouldn’t always get it right but that’s how I would do it.

Scary Devil Monastery (profile) says:

Re: Re: Re:5 Re:

"Using these metrics ought to enable any reasonable person to tell the difference between what is or isn’t generally acceptable. I know we wouldn’t always get it right but that’s how I would do it."

That speaks of your rationality, common sense, and merits as a person.

Unfortunately, observe your criteria for "harm".
Assume the one issuing the loudest and most persistent objection will be, say, the ultraorthodox religious right. Anyone with common sense certainly won’t be quite as vociferous.

So those metrics never come into play, because any politician with the moral courage to try to uphold or implement them will be shouted down by a crowd of mudslinging witch hunters screaming "He’s for CP!!!".

Scary Devil Monastery (profile) says:

Re: Re: Re:2 Re:

"I’m sure you’re all well aware that the more sexually repressive the population in a given area is, the higher the rate of porn watching taking place. By that metric, addressing puritanical attitudes would go a long way towards solving the problem."

I believe there was an old study on sexual abuse in Asia which had a direct correlation – The easier the access to pornography, the less abuse took place.

I can credit that. Repression of an urge only bottles it up until the pressure detonates the person trying to contain it.

It’s also pretty telling that today most organizations ostensibly against child abuse have become almost exclusively owned and operated by dedicated puritans. ECPAT did great work, once upon a time. Today they basically consist of the religious ultraorthodox and their line in the sand starts at "Sinful conduct".

Mike says:

Re: Re:

Do you have a problem with blanket bans on things like posting pictures of naked children?

As a parent, I would argue that there are no socially appropriate contexts for posting pictures of naked children online. First of all, most of the time it’s not consensual or implied consensual; it’s just some asshole parent posting naked pictures of their kid in the tub or whatever. Second, children do face a level of threat that comes from sexualization that simply doesn’t apply to adults. Third, the line of legality here is grey, and any self-respecting social media site will wield the ban hammer like a flaming sword in the hands of an avenging angel against accounts that put the platform at risk of adverse information reports to NCMEC, ICMEC, etc.

This comment has been deemed insightful by the community.
Chris-Mouse (profile) says:

As a matter of fact, I do have a problem with a blanket ban of pictures of naked children.
You might want to take a look at this picture. I’ll warn you, it has full frontal nudity of a pre-teen girl.
It also won a Pulitzer Prize and became the World Press Photo of the Year for 1973.

This comment has been flagged by the community.

Anonymous Coward says:

We don’t need moderation if platforms were liable for abuses and harm inflicted by their users, once they are put on notice of the harm. The notice part is what makes it possible. Right now the lives and businesses destroyed are considered an "acceptable loss" for the Greater Good of the internet.

I doubt history will agree with this.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

That sounds great… except that the only way to achieve a perfect response rate to such "notice of harm" is to automatically remove anything which gets such a notice. It is impossible to manually go through them at scale. So, effectively, everything posted online gets a "remove" button that anyone, anywhere can press. For any reason, as long as they claim "harm".

Anything short of a perfect response rate makes the company liable, meaning it will cost them money. At scale, it will cost them all their money. The only real alternative is to not allow users to post content at all.

CDA section 230 allows companies to grow to such vast scales exactly because it does not increase the risk proportionally.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:

John Smith was once asked how companies could prevent the inevitable mess of erroneous notices that would arrive without 230 protection, when machines fail to identify fair use and dead grandmothers.

His response was humans. Like librarians and card catalogs.

His response was to hire less efficient people to make up for the mistakes of efficient machines, which he swears are always 100% accurate and anyone who disagrees is a pirate.

You cannot make up the shit these IP fanatics dream up.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

We don’t need moderation if platforms were liable for abuses and harm inflicted by their users, once they are put on notice of the harm.

That is a form of moderation that would be abused by scammers, spammers, politicians and others who want to make their fictional view of things the truth.

This comment has been deemed insightful by the community.
Rocky says:

Re: Re:

Please define "notice" and "harm" but also who issues the "notice".

Unless you can do that in a way so it doesn’t infringe on 1A rights, what you are suggesting is pure hand-waving and not a solution.

Also, if you believe moderation isn’t necessary with your "solution" you are in for a rude awakening when platforms start closing down the ability to post anything unless they know exactly who you are.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

We don’t need moderation if platforms were liable for abuses and harm inflicted by their users, once they are put on notice of the harm.

Would that be like the DMCA, where even false accusations of copyright infringement can take down legal content so long as the DMCA process itself is followed to the letter?

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: Re:

"We don’t need moderation if platforms were liable for abuses and harm inflicted by their users…"

Principles heartily embraced by the Soviet Union, East Germany, North Korea and China…

…and absolutely no one else.

What has only ever been considered a staple in ultra-autocratic communist/fascist dictatorships should NOT be considered a viable and desirable mechanism in a nation which desires to retain and practice democratic values.

This comment has been deemed insightful by the community.
Wendy Cockcroft (profile) says:

Re: Re:

I’ve addressed this topic over and over again with regard to my own personal experience. You’re just imagining things. My experience was real and I’m still here and still using my real name because I’m telling the truth: it’s your own conduct more than what others say about you that affects your reputation.

The lives and businesses destroyed by comments on the internet are in your twisted mind.

RE: that Australian case, the individual’s own conduct was the cause of her problems, not Google indexing links to people complaining about it.

Therefore, if you screw up and it ends up going viral online, take this advice. You’re welcome.

Scary Devil Monastery (profile) says:

Re: Re: Re: Re:

"Stop trying to get us to agree that shaking platforms down for butthurt money is a good thing. It’s not."

Oh, it IS a good thing for old Baghdad bob/Blue/Bobmail or Hamilton. Not only would it provide a chilling effect for them and their vested cause to stifle any and all opposition, it allows a lot of breathing room for the poor copyright trolls who increasingly find courts and judges as unsympathetic to their business models as they were when that model consisted of chasing ambulances.

Anonymous Coward says:

Re: Masnick's Impossibility Conjecture [was Theorem]

This is… not that.

P != NP is a famous conjecture. Just for example. Most CS and EE folks have heard of that famous conjecture. Many probably believe it. But it’s still a conjecture.

Anyhow, though, the somewhat elliptically-referenced point that I’m arcing towards here is that a technical audience does get a fair amount of mathematical education somewhere along the line.

Is there still a technical audience at Techdirt? An audience that strongly and reflexively distinguishes between theorems and conjectures?

“Masnick’s Impossibility Conjecture.”

 

Anonymous Coward says:

Re: Re: "Theorem"

Sure. Heck, I bet that, given some effort, Masnick’s idea (Law? Theory?) could even be expressed as a restatement of Arrow’s Theorem: that it’s impossible to moderate using individual people’s moderation preferences in such a way that the moderation preserves the community’s preferences, with similar definitions of non-dictatorship, Pareto efficiency, etc.

But when you’re "not going to go through the process of formalizing the theorem," you shouldn’t present it as a theorem. Call it a Theory, or a Law: each of which has a suitably loose definition that something informal like this can fall into it.

Calling something a theorem while refusing to formally state it (let alone prove it) is missing the entire point of the word "theorem."

Anonymous Coward says:

Re: Re: "Theorem"

I get that. It’s still like saying, "While I’m not going to go through the process of freezing this ice cream…"

Until you freeze it, it’s not ice cream, it’s just cream.

Just like how, until you go through the process of formalizing it, it’s not a theorem, it’s just a (thank you red AC) conjecture.

This comment has been deemed insightful by the community.
TasMot (profile) says:

But AI!!!

Yeah, like that is the answer. I’m lucky if I can get one of the assistants (any of them) to understand my voice first, and then get me where I want to go. Maybe Siri could do it?

There are two battles over racist terms (one in court and one in society). Historically, the n—-r word and sla– eyed were derogatory racist words. Each has (in some cases/usages) come to be used by members of the respective races. However, others are not really allowed to use them unless they are specifically admitted members of a particular group.

The one in court is where a group of people of Asian descent named their band "The Slants" and suffered through an eight-year court battle to use their band’s name. It had to go all the way to the Supreme Court to get a judgment saying they could use the name they wanted for their band (http://www.theslants.com/statement-on-recent-scotus-ruling/).

The use of the n—- word is still restricted to use within a group. There is even a book about the topic. It may never be settled as to the general public being allowed to use it in a non-racially charged way.

Maybe one of these days there will be an AI that can determine context and moderate properly, but it is still a long way off. Especially if it has to know a speaker’s race before it can make a proper determination.

This comment has been deemed funny by the community.
timlash (profile) says:

Not again!

There goes Masnick again! Pushing another reasonable take on a current technology battle. Acknowledging that there are multiple viewpoints to an issue with no simple solutions. Sheesh, when will someone subscribe to the ‘Silence Techdirt’ level of support so we don’t have to hear his centrist schlock anymore. (/s for those who need it.)

Scary Devil Monastery (profile) says:

Re: Not again!

"Sheesh, when will someone subscribe to the ‘Silence Techdirt’ level of support so we don’t have to hear his centrist schlock anymore. (/s for those who need it.)"

Look through the posts by Jhon/Out-of-the-blue/Bobmail/Baghdad Bob in this thread for a few minutes…
…yes, the /s is always, ALWAYS needed because Poe’s Law applies in any thread where the resident copyright cult lunatic decides to take a dump.

This comment has been deemed insightful by the community.
Anonymous Coward says:

I like to think of people’s online behavior as being defined by two levels: the level of incivility they are willing to accept from others and the level of incivility they act at.

If the overall tone of a community becomes worse than what a user will accept, they eventually leave.
If the tone of a community is enforced to stay above a certain level, most users who act worse than that will be driven away, perhaps even by force (ban).

A highly tolerant and civil user will fit in anywhere.
Most people will have a much narrower band where they will both fit in and want to stay.

Moderation is what you do to keep bad behavior from making too many people leave, without also forcing away too many users. It’s an optimization problem, not an absolute.
There is no "perfect" moderation. Not even at smaller scales. You get the behavior you allow.

A highly tolerant and toxic user will be able to drive others away, without anyone being able to do the opposite. Those are the people you need to moderate. A community of only people like that is the end result of having no moderation.

And then there are the people who repeatedly act worse than what they accept (or at least what they silently accept) from others, usually arguing that in this particular case their own behavior was called for and rational, but the other people are just being unnecessarily rude and touchy. That’s where the drama is.
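For what it’s worth, the two-level model above is easy to turn into a toy sketch. This is a minimal illustration only, with made-up users and thresholds, not a claim about how any real community behaves:

    # Toy sketch of the two-level model: each user acts at one level of incivility
    # and tolerates another (usually worse) level from others. Moderation removes
    # anyone acting above a chosen ceiling; the remaining tone then drives others away.
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        acts_at: int     # 0 = saintly, 10 = maximally toxic
        tolerates: int   # worst ambient incivility they will put up with

    def remaining_users(users, ceiling):
        kept = [u for u in users if u.acts_at <= ceiling]    # not moderated away
        if not kept:
            return []
        tone = max(u.acts_at for u in kept)                  # worst behavior left standing
        return [u for u in kept if u.tolerates >= tone]      # not driven off by the tone

    community = [User("civil", 1, 4), User("average", 3, 6), User("toxic", 9, 10)]
    print([u.name for u in remaining_users(community, 10)])  # no moderation: only 'toxic' stays
    print([u.name for u in remaining_users(community, 4)])   # some moderation: 'civil' and 'average' stay

Pick the ceiling to maximize who stays and you have the optimization problem described above; in this toy example there is no ceiling that keeps everyone.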

Koby (profile) says:

"Now, some might argue the obvious response to this is to do no moderation at all"

I was thinking of something different, specifically "follow the established rules". Sometimes, this leads to content being banned that shouldn’t, which then leads to a process of refinement of the rules, thereby leading to better rules.

The problem occurs when some people want to make special exceptions to allow content that they like but that disregards the rules, and to ban content that they dislike but that follows the rules. This is part of why pushing the moderation system onto users is important, because we can probably never find someone with zero bias and zero preferences to do the moderation on our behalf. Someone will always try to break the rules.

This comment has been deemed insightful by the community.
James Burkhardt (profile) says:

Re: Re:

I was thinking of something different, specifically "follow the established rules". Sometimes, this leads to content being banned that shouldn’t, which then leads to a process of refinement of the rules, thereby leading to better rules.

We don’t need to rely on questioning the bias of the moderator to see the flaws in ever more complex, refined, centralized content moderation rules.
That process will never and can never produce a perfect set of rules, but let’s assume we achieved perfect rules and moderators were capable of applying the subjective rules without bias. Once content rules get sufficiently complex to approach perfection, the complexity of the system will lead to breakdowns in understanding of the rules, the exceptions, and their applicability. As well, content moderation at scale relies on speed, and speed is the enemy of complex rules for moderation. Note Masnick’s comments on the number of failures Facebook would see if we achieve 99.9% success in applying the content moderation rules. It doesn’t matter how much good faith the mods act in; 350,000 mistakes a day will create outrage. The more complex and nuanced the rules, the higher the likelihood a mistake will occur, dragging down that 99.9% correct application of the rules.

We don’t need to insert the concept of bad actors to understand the issues in your idea.

Anonymous Coward says:

"So while I’m all for exploring different approaches to content moderation, and see no issue with people calling out failures when they (frequently) occur, it’s important to recognize that there is no perfect solution to content moderation, and any company, no matter how thoughtful and deliberate and careful is going to make mistakes."

We should turn this process over to pharma and HHS – they never fail and have perfect solutions to everything.

This comment has been deemed insightful by the community.
christenson says:

Unstated assumption

Mike:

Moderation on the community we call Techdirt is pretty effective — we have enough like-minded people that for the purposes of the techdirt community, the gray areas are small, and the burden doesn’t fall heavily on everyone; the work is spread across the community.

Smaller communities, with many volunteers/regulars doing relatively light work, seem to be an effective route to moderation. That is, I pay for Techdirt by helping moderate a little.

Not that such models can’t go seriously awry, but by having many of them compete, we can optimize and minimize the total badness.

This comment has been deemed insightful by the community.
bob says:

Re: Unstated assumption

Yes, but the moderation attempts on Techdirt are not at the scale where the whole idea of moderating breaks down.

There is a reason manufacturers of chemical cleaning solutions say they are 99.9% effective: nothing you do in a non-atmosphere-controlled environment, like a home, will remove all the bacteria and germs from a surface.

Even on Techdirt the occasional spam or troll comment survives for a while or goes unnoticed. Also there are times comments are flagged that I, and others, think were flagged in error. So even a small community can’t perfectly moderate itself.

But I agree that the community here does a good enough job self moderating.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Unstated assumption

Yup. I agree with basically everything Bob said, with one addition. As you might notice, we have a few critics who insist that our own moderation is terrible/unfair/problematic etc. And that’s part of the point here. Someone will always find it unfair.

But, yes, also if Techdirt grew to a much bigger sized community, I certainly would not be confident that the moderation would continue to work as well as it does. Mistakes are still made today, and we’re not always able to catch all of the mistakes. With scale, that would get worse and worse.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

we have a few critics who insist that our own moderation is terrible/unfair/problematic

What’s funny is that if you didn’t moderate at all, and spam overrode the site, they’d complain about you not moderating enough. The troll brigade will always find a way to criticize you. That’s the whole point of their existence here: No matter what, you’re wrong and they’re right, even when they are clearly in the wrong according to facts and logic.

And if they want to claim otherwise, they’re being disingenuous pricks. Not that they care if we know, though. They’ll always claim they’re just “telling it like it is”…which, of course, is always code for “I’m a massive chode who wants to be as cruel as possible to someone”. Their cruelty is their point — because it is all they have left.

This comment has been deemed insightful by the community.
This comment has been deemed funny by the community.
Anonymous Coward says:

Re: Re: Re:2 Re:

The trolls we have here are so venomous and contrary that if Masnick put out an article extolling the benefits of breathing oxygen, the trolls would demand that everyone else hold their breath to prove Masnick wrong.

Scary Devil Monastery (profile) says:

Re: Re: Re:2 Re:

"What’s funny is that if you didn’t moderate at all, and spam overrode the site, they’d complain about you not moderating enough."

Neither Blue/Bobmail nor Hamilton, I think…driving people away from any site where the audience in general doesn’t sing from the copyright cult hymnsheet or pays tribute to a flaming cross seems to be in line with their actual agenda.

Anonymous Coward says:

Re: Re: Re:2

One of the things that I think is interesting and fun is that this community has given the troll brigade a name ‘blu’.

I’m not a psychologist or anthropologist, but I find that interesting.

I agree with Bob, and particularly his point that some posts are poorly flagged, and often that is because people fail to see the satire. And that point itself adds to MM’s Impossibility Theorem/Conjecture.

Finally, well done MM. It is nice to have a standing piece to which we can direct all of the people screaming for the Impossible, within which there is a call for ‘better’ and trying different options, but knowing that perfection is Impossible.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:3

this community has given the troll brigade a name ‘blu’

Not really? I mean, yes, sometimes we simply refer to an anonymous troll as "Blue" (or “Blue Balls”) because fuck it, we don’t know if it’s the poster formerly known as “out_of_the_blue”. But we generally have a good idea of who our trolls are thanks to their posting styles. To wit:

  • “Blue (Balls)” tends to post rants with ALL CAPS in random SPOTS that rage against MIKE and TechDIRT and the spam FILTERS he triggers so often. He also posts with ridiculous anonymized usernames. Sometimes he even complains about horizontal lines like he’s at a bad limbo contest.
  • “Hamilton” tends to rant about America’s greatness, kiss Shiva Ayyadurai’s ass (since he first showed up after Techdirt announced Shiva’s lawsuit against the site), and make wild/insane claims (e.g., being a descendant of Alexander Hamilton) that he can’t/won’t back up. He also does a nasty little rhetorical gimmick that most people see through nowadays. And he also has weird fantasies about Donald Trump, a Melania mask, and (I think) anal sex of some sort.
  • “Jhon Smith” (a.k.a. “Herrick”) tends to bitch about Section 230, claim platforms can be held legally liable for spreading defamation that other people wrote, and threaten Mike and his family with everything from rape to murder.

I’m sure there may be one or two other trolls with recognizable posting styles, but those are the three primary assholes. And they all have one thing in common: No matter how much they hate Techdirt (and Mike personally), no matter how much they hate the commenters here, they keep coming back like it’s a psychological compulsion they can’t escape. Rather than avoid a thing they hate, they constantly return to troll the site and piss themselves off. They’re sad little children, really.

This comment has been flagged by the community.

This comment has been deemed funny by the community.
Rocky says:

Re: Re: Re:4 Re:

Thank you very much for making me inhale some of my coffee while reading the limbo contest comment; it resulted in 5 minutes of hacking cough with bouts of distressed laughter.

I shall now clean my screen and throw a slightly coffee-stained shirt in the hamper…

Anonymous Coward says:

Re: Re: Re:4 Re:

There was a comment by Masnick recently which subtly hinted that the different troll identities may not necessarily be different individuals. Which I don’t actually agree with due to the difficulty in maintaining multiple unique personae, and the fact that Jhon, blue and Hamilton have simultaneously posted on threads before.

On the other hand, blue hasn’t shown his face since the anti-vaxxer outspammed him, while Herrick showed up like a battered wife to bitch about Section 230 at around the same time. I wonder…

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:3 Re:

"this community has given the troll brigade a name ‘blu’.

I’m not a psychologist or anthropologist, but I find that interesting."

You’d find it even more interesting if you bothered to learn the origins and understand that’s what he named himself, before he tried poorly obfuscating his identity when he kept getting called out for lying.

Anonymous Coward says:

350 million pictures.

1 million moderators look at 350 photos a day.

1 moderator for every 1000+ users.

Moderators come from the country they moderate and therefore the wages can be covered by the income.

Moderators look at 1 picture a minute for 6.5 hours each day.

Fb income £50+ million/day. Easily achievable when you consider the majority of the moderators will not be in Western countries.

So why is it impossible to moderate properly?

nasch (profile) says:

Re: Re:

So why is it impossible to moderate properly?

  1. Simple experiments have found a small group of a dozen or fewer do not agree on moderation decisions, and you want a million of them.
  2. Moderators look at horrible things all day long, so your turnover is probably going to be high. High turnover in a work force of a million is no small thing.
  3. You forgot managers. A million people are not going to all report to the CEO. You have to pay managers more than line workers so now your costs go way up.
  4. Hiring a million imperfect moderators just means you can get false negatives and false positives faster. At a 99.9% success rate (which is ridiculously, impossibly good) you get 350,000 mistakes from 350 million pieces of content. That’s 350,000 chances for someone to get upset, and 350,000 chances for bad publicity.

Or if that’s not enough:

https://www.techdirt.com/articles/20191111/23032743367/masnicks-impossibility-theorem-content-moderation-scale-is-impossible-to-do-well.shtml
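For what it’s worth, here is the arithmetic from the staffing proposal above and point 4 in one place. Every figure is the commenters’ own assumption, not real Facebook data:

    # Rough check of the staffing-and-error figures assumed in the comments above.
    photos_per_day = 350_000_000
    photos_per_moderator_per_day = 350     # roughly one photo per minute over a shift
    accuracy = 0.999                       # hypothetical, and almost certainly too generous

    moderators_needed = photos_per_day // photos_per_moderator_per_day
    mistakes_per_day = round(photos_per_day * (1 - accuracy))

    print(moderators_needed)   # 1,000,000 moderators, for photos alone
    print(mistakes_per_day)    # 350,000 wrong calls per day even at 99.9%

Even granting the staffing plan, the error math doesn’t move: a million human moderators at 99.9% still produce hundreds of thousands of daily mistakes.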

Andrew Kadel says:

Impossibility

This is an excellent article. Understanding the impossibility of perfection is the first step toward finding solutions that are good enough. The big platforms can certainly do much better, and much of the reason they don’t has nothing to do with "impossibility" but rather, may I say it? with filthy lucre. Facebook & Twitter, each in their own way, profit largely from having lots and lots of content that is "controversial," misleading, harassing, etc. It might be impossible to eliminate Nazi trolls from Twitter, for instance, but the traffic driven by their harassment & bizarre pronouncements is a big part of Twitter’s revenue model. So they don’t make a serious effort to reduce such harassment, which falls ridiculously short of impossible.

I think that your suggestion of more end-user involvement and discretion would likely be of help. And it’s possible for the platforms to make this easier to do – e.g., with check boxes for broad categories of screening, say 5 categories from "hyper safe" to "anything goes muthafucka," and/or preferences about what type of stuff you’re sensitive about, be it politics, sex, religion, or something specific like cashew farming.
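As a very rough illustration of what those check boxes might look like under the hood, here is a hypothetical sketch. The category names, levels, and logic are all invented for the example; no real platform feature is implied:

    # Hypothetical per-user screening preferences: a coarse sensitivity level plus muted topics.
    SENSITIVITY = ["hyper_safe", "safe", "default", "edgy", "anything_goes"]

    def visible(post_rating: str, user_level: str, post_topics: set, muted_topics: set) -> bool:
        """Show a post only if it sits within the user's chosen level and avoids muted topics."""
        if post_topics & muted_topics:
            return False
        return SENSITIVITY.index(post_rating) <= SENSITIVITY.index(user_level)

    # A user who wants fairly tame content and never wants to see cashew farming posts:
    print(visible("edgy", "safe", {"politics"}, {"cashew_farming"}))   # False: too edgy
    print(visible("safe", "default", {"cooking"}, {"cashew_farming"})) # True

The moderation question then shifts from "should this exist at all?" to "who has asked to see it?", which is the end-user-control idea the article mentions.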

In all of these, the moderation would never be perfect, and that’s more than fine with me. The reporting functions can be made to serve iterative improvement of the filters rather than banning people or telling them they don’t know what a threat is.

Thanks for the good article.

Anonymous Coward says:

When Mike Masnick writes articles like this he always seems to miss the very important point that Facebook is not a public forum but a forum owned by a private company, and that as a forum owned by a private company it is NOT protected by the First Amendment to the US Constitution. That is, Facebook cannot, does not, and will not ever make a law that deprives anyone of the right to say what they desire to say on their own forum.

All it can do is remove posts and stop people from posting on Facebook, and in that regard Facebook has the lawful authority to remove any post for any reason or whim that Facebook has.

nasch (profile) says:

Re: Re:

When Mike Masnick writes articles like this he always seems to miss the very important point that Facebook is not a public forum but a forum owned by a private company, and that as a forum owned by a private company it is NOT protected by the First Amendment to the US Constitution.

He frequently points that fact out, so I would be very surprised if you could provide an example of Mike making that mistake. Awaiting a link to back up your claim.

Mike Masnick (profile) says:

Re: Re:

When Mike Masnick writes articles like this he always seems to miss the very important point that Facebook is not a public forum but a forum owned by a private company, and that as a forum owned by a private company it is NOT protected by the First Amendment to the US Constitution.

Wait, what? When have I "missed" that? I’ve pointed that out dozens of times. Here’s just one example:

https://www.techdirt.com/articles/20190507/17323742161/while-trump-complains-about-facebook-takedowns-facebook-is-helping-trump-take-down-content-he-doesnt-like.shtml

I didn’t mention it in this post, because this isn’t a post about the 1st amendment, but about content moderation.

Anonymous Coward says:

Cheese! (non sequitur factoid: blue cheese kills rats!) Can nobody here think outside the box!?! The answer is simple. Everybody should have effective personal filters, maybe even algorithms, defining what they want to see and what they don’t want to see.
The ultimate benefit of the internet is that any citizen-scholar, WITHOUT having to be certified by any authority, government or university, can have access to the TRUTH. As hard as it may be to take. There used to be a story that certain H.P. Lovecraft horror stories were so traumatizing that they were kept locked in a back room at the library; you could ask to see them, but they would warn you. I probably wouldn’t want to read those. But there are horrible stories that I want, in fact need, to be able to verify with the actual documents, etc., in order to be able to form a vision of reality that conforms to actual reality. Taking away my access to truth is taking away my reality. What is left then? Why not just sign up to be the slave/robot for somebody else’s reality that they will deign to permit me?

nasch (profile) says:

Re: Re:

The answer is simple. Everybody should have effective personal filters, maybe even algorithms, defining what they want to see and what they don’t want to see.

That would be a good experiment, but it would be a mistake to think that it would be a complete solution to the problem. No filter or algorithm will ever be perfect, which is what some people demand of content moderation. As long as users understand that there will be both false positives and false negatives, it could work.

Nemo_bis (profile) says:

YouTube comment spam

The problems of scale are getting popular:

Marques Brownlee, “YouTube Needs to Fix This”
https://www.youtube.com/watch?v=1Cw-vODp-8Y

Which also discusses the very interesting YT-Spammer-Purge:
https://github.com/ThioJoe/YT-Spammer-Purge
(congratulations to the author of this GPL utility and to Google for providing an effective API to make it possible!).

Brownlee thinks the community-made tools prove that YouTube could do better, but in reality they only prove how hard it would be to perform such antispam work at scale. The repository is full of reports about false positives (which is normal), and we know how even 99.5% accuracy can result in a filter being a major net negative:
https://www.techdirt.com/2018/12/18/youtubes-100-million-upload-filter-failures-demonstrate-what-disaster-article-13-will-be-internet/

One depressing line in the video is that the comment section “is such a unique feature” of YouTube. Comment sections were ubiquitous 15-20 years ago, but for recent internet users they appear to be a rarity.
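To see why a filter that is "99.5% accurate" can still do more harm than good, a small base-rate sketch helps. The spam prevalence and per-class error rates below are assumptions chosen purely for illustration:

    # Base-rate sketch: when most content is legitimate, even a tiny false-positive
    # rate removes far more innocent comments than actual spam.
    comments = 1_000_000
    spam_rate = 0.001      # assume 1 in 1,000 comments is actually spam
    accuracy = 0.995       # catches 99.5% of spam, wrongly flags 0.5% of legit comments

    spam = comments * spam_rate
    legit = comments - spam

    true_positives = spam * accuracy
    false_positives = legit * (1 - accuracy)

    print(round(true_positives), round(false_positives))  # 995 spam removed vs. 4,995 legit removed

Under these assumptions roughly five legitimate comments are removed for every spam comment caught, which is the "major net negative" the linked article describes.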

