Top Myths About Content Moderation

from the so-many-myths-to-debunk dept

How Internet companies decide which user-submitted content to keep and which to remove (a process called "content moderation") is getting lots of attention lately, for good reason. Under-moderation can lead to major social problems, like foreign agents manipulating our elections. Over-moderation can suppress socially beneficial content, like negative but true reviews by consumers.

Due to these high stakes, regulators across the globe increasingly seek to tell Internet companies how to moderate content. European regulators are requiring Internet services to remove extremist content within an hour and to install upload filters to prospectively block copyright infringement; and U.S. legislators have proposed to ban Internet services from moderating content at all.

Unfortunately, many of these regulatory efforts are predicated on myths about content moderation, such as:

Myth: Content moderation can be done perfectly.

Reality: Regulators routinely assume Internet services can remove all bad content without suppressing any good content. Unfortunately, they can't. First, mistakes occur when the service lacks key contextual information about the content, such as details about the author's identity, other online and offline activities, and cultural references. Second, any line-drawing exercise creates mistake-prone border cases because users routinely submit "edgy" content. Third, a high-volume service will make many mistakes, even if it's highly accurate: 1 billion submissions a day at 99.9% accuracy still yields a million mistakes a day.
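The arithmetic behind that last point is easy to check. Here is a minimal sketch; the volumes and accuracy rates are the illustrative figures above, not measurements from any real service:

```python
# Illustrative only: expected number of wrong moderation decisions per day
# for a hypothetical service, given daily volume and decision accuracy.
def expected_mistakes(submissions_per_day: int, accuracy: float) -> float:
    return submissions_per_day * (1.0 - accuracy)

print(expected_mistakes(1_000_000_000, 0.999))   # ~1,000,000 mistakes/day
print(expected_mistakes(1_000_000_000, 0.9999))  # still ~100,000 mistakes/day
```

Even an order-of-magnitude improvement in accuracy leaves a six-figure daily error count at that scale.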

Myth: Bad content is easy to find and remove.

Reality: Regulators often assume every item of bad content has an impossible-to-miss flashing neon sign saying "REMOVE THIS CONTENT," but that's rare. Content is often obviously bad only in hindsight or with context unavailable to the service. Regulators' cherry-picked anecdotes don't prove otherwise.

Myth: Technologists just need to "nerd harder."

Reality: Filtering and artificial intelligence play important roles in content moderation. However, technology alone cannot magically solve the problem. "Edgy" and contextless content vexes the machines, too.

Myth: Internet services should hire more humans to review content.

Reality: Humans have biases and make mistakes too, so adding human reviewers won?t lead to perfection. Furthermore, human reviewers sometimes experience an unrelenting onslaught of horrible content to protect the rest of us.

Myth: Internet companies have no incentive to moderate content.

Reality: In 1996, Congress passed 47 U.S.C. 230, which says Internet services generally aren't liable for third-party content. Due to this legal protection, critics often assume Internet services won't invest in content moderation; and some companies have stoked that perception by publicly positioning themselves as "neutral" technology platforms. Yet, virtually every Internet service moderates content, and major services like Facebook and YouTube employ many thousands of content reviewers. Why? The services have their own reputation to manage, and they care about how content can affect their users (e.g., Pinterest combats content that promotes eating disorders). Furthermore, advertisers won't let their ads appear on bad content, which provides additional financial incentives to moderate.

Myth: Content moderation, if done right, will make everyone happy.

Reality: By definition, content moderation is a zero-sum game. Someone gets their desired outcome, and someone else doesn't, and those folks won't be happy with the result.

Myth: There is a one-size-fits-all approach to content moderation.

Reality: Internet services cater to diverse audiences that have different moderation needs. For example, an online crowdsourced encyclopedia like Wikipedia, an open-source software repository like GitHub, and a payment service for content publishers like Patreon all solve different problems for their communities. These services shouldn't have identical content moderation rules.

Myth: Imposing content moderation requirements will stick it to Google and Facebook.

Reality: Google and Facebook have enough money to handle virtually any requirement imposed by regulators. Startup enterprises do not. Increased content moderation burdens are more likely to block new entrants than to punish Google and Facebook.

Myth: Poor content moderation causes anti-social behavior.

Reality: Poorly executed content moderation can accelerate bad behavior, but often the Internet simply mirrors existing anti-social behavior or tendencies. Better content moderation can't fix problems that are endemic in the human condition.

Regulators are right to identify content moderation as a critically important topic. However, until regulators overcome these myths, regulatory interventions will cause more problems than they solve.

Reposted from Eric Goldman’s Technology & Marketing Law Blog.


Comments on “Top Myths About Content Moderation”

This comment has been deemed insightful by the community.
Anonymous Anonymous Coward (profile) says:

Nerds might impact, but don't create societies

"Myth: Technologists just need to “nerd harder.”

Reality: Filtering and artificial intelligence play important roles in content moderation. However, technology alone cannot magically solve the problem. “Edgy” and contextless content vexes the machines, too."

The issue isn't technical; the issue is social, and to that end, while there might be 'one' society, it is made up of many sub-societies. Those sub-societies might have similarities, but they are in fact often different. Society is also impacted by the form of governing happening around those sub-societies, and within each government there are likely several to many sub-sub-societies.

In authoritarian regimes there are probably supporters (those who endeavor to become authoritarian themselves) and opposers (those who wish a more democratic form of oppression).

In democratic regimes there are those who wish for, and work to impose, a more authoritarian government position, while those who actually enjoy liberty and freedom work to impede more rigid government control. Or are just complacent.

If the world wants to prevent bad, then they have to come to agreement about what bad is, and letting governments (or religions or for that matter any factions) decide for the populace what is or isn't actually bad hasn't gotten better with time, as theoretically 'good' societies seem to have tendencies to turn bad, and theoretically 'bad' societies seem to have tendencies to get worse. And just to keep things confused, sometimes the populace gets out of their complacent mode and vehemently opposes whatever regime is designing their current oppressions.

So whatever answers anyone comes up with, there will be opposers to those answers, and the subjective determination about whether the answer, or the content, is good or bad is merely a point of view. If people are concerned about influencing children with bad content, how about teaching 'parenting' without imposing ideologies? If I want my kids to play outside, unattended by adults (as I did as a child), then it is my business, not anyone else's. Keeping children from viewing shocking videos on the Internet is about me, as a parent, and how I control my children's Internet usage, as well as what I tell them when they stumble across something I consider bad, and not about society's ability to censor. There is no actual way to keep them entirely from seeing things I don't want them to (on the Internet or in the real world), but there are ways for me to help them understand and cope with those things when they run across them.

ECA (profile) says:

Re: Nerds might impact, but don't create societies

I love those that THINK they can control thought/ideas/…

To those that think they can moderate the net… LET THEM SPEND 1 day doing the job.

Let's look at this in a WIDE fashion.
Those countries that are asking the net to moderate? That say they are for free speech..
I don't think we need to think hard about this. They are asking the Net NOT to give free speech.

Basic moderation is great; monitoring everything is stupid.. Ask the FBI/CIA/others how well it's going, trying to track Everything on the net…
From private chat, public chat, in-game chats, forums and all the rest. The amount of data created is so large… it's like counting to the highest number created; there is always 1 more.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Filtering and artificial intelligence play important roles in content moderation. However, technology alone cannot magically solve the problem. “Edgy” and contextless content vexes the machines, too.

Even contextual content can vex a machine. To wit: The word “retard” can be used as a verb and an ableist slur — and basic content filters generally can’t tell the difference between the two.
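A toy example of the problem: a bare keyword filter has no notion of how a word is being used, so it treats both sentences below identically. The one-word blocklist here is made up for illustration, not any real filter's list:

```python
# Toy keyword filter: flags any post containing a blocklisted term,
# with no awareness of part of speech, intent, or context.
BLOCKLIST = {"retard"}  # hypothetical, for illustration only

def naive_filter(post: str) -> bool:
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

print(naive_filter("Cold weather can retard the growth of spring crops."))  # True
print(naive_filter("You absolute retard."))                                 # True
# Both are flagged, but only the second is a slur; the filter cannot tell.
```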

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:

Yes, humans will have emotional responses to words, some of which will depend on personal circumstances (for example, a parent with a learning disabled child will react to that word differently to someone with no family connection to such people).

Human moderation removes some problems associated with automation, but introduces other problems.

ECA (profile) says:

Re: Re:

Can we do it this way..

That every time the computer can't figure it out, it ships it to a human..
That human has the ability to send it to others to figure it out also..

How convoluted can written speech be??
The word "sink" has 29 meanings.
Even if we go to concepts and poems… how many persons have a hard time NOT seeing PORN and drugs in rock and roll songs…
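For what it's worth, the escalation idea described above is roughly how tiered review queues are often built: the machine acts only when it is confident, everything else goes to people, and hard cases can be escalated further. A rough sketch, with invented thresholds and queue names:

```python
# Sketch of confidence-threshold escalation: the machine decides only when
# it is confident; everything else is routed to a human queue, and the
# hardest cases go to more senior reviewers. Thresholds and queue names
# are made up for illustration.
def route(item_id: str, model_confidence: float, model_verdict: str) -> str:
    if model_confidence >= 0.95:
        return f"auto:{model_verdict}"      # machine acts on its own
    if model_confidence >= 0.60:
        return "queue:frontline-reviewers"  # human takes the first pass
    return "queue:senior-reviewers"         # hardest cases escalate further

print(route("post-1", 0.99, "remove"))  # auto:remove
print(route("post-2", 0.72, "keep"))    # queue:frontline-reviewers
print(route("post-3", 0.41, "remove"))  # queue:senior-reviewers
```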

Anonymous Coward says:

Re: Re:

"Moderation" in that case should earn them a Nobel prize /and/ a sainthood regardless of religion or other conduct, as they would have educated not only one nation but many in critical thinking, without the time commitment of schools or the staffing, funding, or power involved.

In short, that would be a miracle going far above the call of duty, akin to your high-school-dropout hotel receptionist inventing and giving you, for free, a side-effect-free panacea in the form of a delicious cake. While it would be very nice to have, expecting it would be insane.

Anonymous Coward says:

Re: gullible population

cc "The underlying problem is a gullible population"

but that’s the whole point of any government regulation, even if done imperfectly.

the public must be protected from its own ignorance, even if only partial protection is achievable currently.

regulation is everywhere in our economic and social lives. Though it always has flaws, society would be much worse off without it. Consider the FAA, FDA, FCC, FTC, SEC, CPSC, DEA, ATF, Copyright Office, etc.

Anonymous Coward says:

Re: Re: gullible population

"the public must be protected from its own ignorance"

Relative to consumer goods, there are things that the product user needs to be made aware of. Like, ummm, the required voltage or gasoline type, although it is a bit silly to put notices on hammers about striking one's own thumb.
An ignorant populace is easier to "govern", or so I’ve heard. It does make one wonder why the present administration is hell bent on killing public education, the place where non-rich kids learn.

Anonymous Coward says:

Context

First, mistakes occur when the service lacks key contextual information about the content—such as details about the author’s identity, other online and offline activities, and cultural references.

Does anyone else see where this is heading? We just need to give the machines more data about ourselves and our interlocutors—all the data—and everything will be fine.

Comboman says:

Myth: Ads are the same as user-created content

Reality: Ads are not "at scale" and are already handled manually by actual humans (called salespeople). Requiring these people to review the ads to ensure they meet ethical requirements (the way they are already reviewed for trademark and other rules) is not just technically possible but relatively easy. TV, radio and print publishers do it all the time.

This comment has been deemed insightful by the community.
Wyrm (profile) says:

Re: Myth: Ads are the same as user-created content

That might have been the case before ads were automated like most other content.
Newspapers had to make editorial choices on just about everything, including ads. Nowadays, things are not so simple. Websites just subscribe to an ad provider, who themselves have content submitted by advertisers. Neither the website nor the provider has full control over the content, although filtering (read "moderation") tools are provided.

Anonymous Coward says:

Different countries have different rules: is content extreme, defaming someone, supporting terrorism, false and fake news, or harmful or upsetting to teens or young people? Is it parody or political comment, which is legal, or just rude or ignorant? There's no AI or automatic filter that can block all content that may fall into those categories.
When social media websites have millions of users uploading images or comments, it will take human moderators to block content that might be illegal or extreme. Do Western societies want to become like China, where all content is screened and filtered?
We have seen thousands of videos on YouTube that should be fair use or parody removed by DMCA notices. And those categories do not even include images or video that may be infringing on IP holders under the new laws in Europe, where all user uploads will have to be screened by filters.


Anonymous Coward says:

Re: Re: Re:3 Xenophobic trash peddlers like you bro

Because unhiding a comment to see what was written is not, in itself, remotely close to "circle jerk" behavior. It’s to see what the comment was and decide for oneself if the comment was offensive, pointless, inciting or dumb enough to merit such treatment. Maybe I might respond by saying "I don’t think that message was offensive". Or maybe if I’m in the mood I’ll reply with "I agree, that was a stupid post". The latter would probably fit your definition of circle jerk.

Either way, what does it matter? What did you expect was the point accomplished by wading into a comment thread and calling everyone xenophobic?

bhull242 (profile) says:

Re: Re: Re: Xenophobic trash peddlers like Eric Goldman

That’s not in any way xenophobia. We elect people to represent our interests. Outsiders are perfectly fine, and I’m even okay with them moving here, but unless and until they become U.S. citizens and aren’t working on behalf of a foreign government, I don’t want them to be involved in our elections any more than they’d want me to interfere in their elections.


This comment has been deemed insightful by the community.
Anonymous Coward says:

Poor content moderation causes anti-social behavior.

Two points about this:

First, poor content moderation can certainly attract anti-social behaviour. There’s a reason why Gab, 8chan, YouTube comments, etc. are marked with "HERE BE MONSTERS" on the maps of the internet, where even Reddit and Twitter, themselves not exactly paragons of content moderation, aren’t regarded with the same disdain.

Second, specifically regarding YouTube videos: a lot of complaints I've heard haven't been regarding content moderation so much as the recommendation algorithm. And, while lax content moderation isn't going to cause anything, recommending content absolutely can, and I can certainly believe an increasingly strict diet of extremism and conspiracy theory can send someone down the hole of antisocial behaviour.

This comment has been deemed insightful by the community.
Wyrm (profile) says:

Summary

All these myths are based on a single misconception: that content can be evaluated objectively and by itself.
However, nearly all content is evaluated subjectively and requires context (including in-service context, poster-profile context, overall social and cultural context…). Denying this fundamental problem leads to being blind to all the aspects you mentioned. Virtually anything depends on context.

  • Violence is bad in real-life, but is fundamental to lots of entertainment products.
  • Sincere hate speech is bad, but can be quoted or parodied for criticism.
  • Criticizing someone is allowed, as long as you avoid libel/slander, but will make the target feel bad. (Particularly when they are thin-skinned, even more so when orange-skinned.) They might lash out, claim victim-status, pretend the critic is lying… or claim copyright violation.

Nothing is easy to judge, as the spin given to reports of an incident can sway public opinion regardless of the merits of the report. Something is often presented as an "obvious" case despite not being objectively obvious at all. This is done by several means, such as slightly misquoting the content, ignoring context or, inversely, adding false context, etc.

This makes for lots of cases presented as "black-and-white" issues, while the immense majority of edge cases are ignored because they are harder to spin as "obvious". This pattern, which is pretty common in the media landscape, leads to the biased myths above: in short, that "moderation is easy".

JoeCool (profile) says:

Re: Re:

Hiding anti-social behavior from the public internet will somehow make people stop being anti-social.

Heh – that’s the truth. Certain infamous commentators here are regularly down-voted enough to be hidden on virtually every post, but they constantly return, worse than ever.

But that’s why the posts are just hidden, not removed. Hiding them by popular consent is a very simple and straight-forward message that most of the people don’t like what they’re saying, but they’re still allowed to say it. Removing the posts would send the message that since most people don’t like it, they’re not ALLOWED to say it, which would be against Freedom of Speech. All content moderation should be like here: enough down-votes just hide the content, but it’s still there. And if it just happens to be illegal speech, it’s still there for the police as evidence.
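The hide-don't-remove mechanism described here is simple to express: community flags only toggle default visibility, and the content itself is never deleted. A minimal sketch, with an invented threshold value:

```python
# Minimal sketch of "hide, don't remove": community flags only toggle the
# default visibility of a comment; the text itself is never deleted.
# HIDE_THRESHOLD is an invented value for illustration.
HIDE_THRESHOLD = 5

class Comment:
    def __init__(self, text: str):
        self.text = text
        self.flags = 0

    def flag(self) -> None:
        self.flags += 1

    @property
    def hidden_by_default(self) -> bool:
        return self.flags >= HIDE_THRESHOLD

c = Comment("an unpopular opinion")
for _ in range(6):
    c.flag()
print(c.hidden_by_default)  # True: collapsed in the UI by default
print(c.text)               # still stored and readable for anyone who clicks through
```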

Wyrm (profile) says:

Re: Re: Re:

Removing the posts would send the message that since most people don’t like it, they’re not ALLOWED to say it, which would be against Freedom of Speech.

More seriously, this wouldn’t be against freedom of speech. It would be if it’s legally mandated, but as long as it’s voluntary moderation by the platform and/or its users, that’s not a free speech issue.
You could frame that as a snowflake issue, of people trying to make themselves a utopian "safe space" that doesn’t exist in the real world, but that would not be a free speech issue.

Stephen T. Stone (profile) says:

Re: Re:

Removing the posts would send the message that since most people don’t like it, they’re not ALLOWED to say it, which would be against Freedom of Speech.

Maybe the spirit of that principle, but not the legality of it. Your usage of a third party platform for speech is a privilege, not a right — and that privilege can always be revoked.

This comment has been flagged by the community.

Dark Shops Spotty Phones Rotting Fish in SF says:

Re: Re: Exulting in your little bit of fanboy power

Sheesh! You are exulting in your little bit of fanboy power without grasping the actual effect. "Hiding" infuriates and makes people leave the site, never to return. "Hiding" of reasonable on-topic comments is even worse, shows what normal people can expect here!

I don’t have to argue or show Alexa numbers, as that’s easily visible and you know it. Masnick’s little blog doesn’t pay for itself, has almost no rational discussion just ad hominem attacks. It’s not a "platform" getting his views to a wide audience, it’s just a couple dozen ultra-partisan fanboys echoing.

All that’s evident to the few new readers. I may have missed some of late, but are definitely NOT MANY new accounts: last I have listed appeared Sep 23rd, 2019!

This comment has been flagged by the community.

Dark Shops Spotty Phones Rotting Fish in SF says:

Re: Re: Re: Exulting in your little bit of fanboy power

Once Techdirt became a club and not a forum, it’s been shrinking and is guaranteed to shrink.

All as I predicted ten years ago. Oh, it’s still going but only because of Millionaire Masnick’s vanity. He’s not validated anywhere else!

Now, HIDE this, kids! It serves my purpose that you do!


bhull242 (profile) says:

Re: Re: Re: Exulting in your little bit of fanboy power

"Hiding" infuriates and makes people leave the site, never to return.

Outside of people who spam links, I can’t think of a single person whose comments were hidden and then actually left and never came back, though many have claimed that they would do so. In fact, not only do they all still read Techdirt, they keep writing stuff in the comments. Zof even still uses his account. And even if that’s true, how is that any different from removing comments? Also, if your comments keep getting hidden, you probably won’t be missed if you leave.

"Hiding" of reasonable on-topic comments is even worse, shows what normal people can expect here!

I might agree with that, but it’s actually pretty rare, and quite a few visible comments (i.e. not hidden) are actually not in favor of Techdirt’s views.

In particular, your comments are almost never both reasonable and on-topic, so you of all people don’t have any justification for complaining about that.

I don’t have to argue or show Alexa numbers, as that’s easily visible and you know it.

If you can’t be bothered to do the research, why should I? I for one know nothing about Alexa numbers at all. If it’s so easy, you should have no problems showing us them. As the one making the positive claim, you have the burden of proof.

Masnick’s little blog doesn’t pay for itself,

Most things don’t.

has almost no rational discussion just ad hominem attacks.

That hasn’t been my experience, but at any rate not everyone comes to a blog for the comments.

It’s not a "platform" getting his views to a wide audience, it’s just a couple dozen ultra-partisan fanboys echoing.

And you were criticizing us for ad hominem attacks? And actually, we often criticize both parties for a lot of stuff, so I don’t think “ultra-partisan” really applies here.

All that’s evident to the few new readers.

[Asserts facts not in evidence]

I may have missed some of late, but are definitely NOT MANY new accounts: last I have listed appeared Sep 23rd, 2019!

That’s less than a month ago. Also, most new readers don’t get accounts even if they stick around. You haven’t proven that new readers are few or don’t stick around.


Bruce C. says:

Mythbusters...

It’s great to see some level-setting in the back and forth debates on this issue.

It's a shame a lot of the big platforms for a long time took their section 230 immunity to mean that there was no need for moderation, and are still playing catch-up now that it has become clear the tide has turned.

It’s also a shame that the public discourse seems to be centered around a very narrow definition of acceptable speech. Advertiser/shareholder-censored content is something we already have plenty of on the airwaves and cable TV. When all of the platforms are run by publicly held companies that earn their revenue through ads, the internet becomes indistinguishable from old media.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Mythbusters...

It’s a shame a lot of the big platforms for a long time took their section 230 immunity to mean that there was no need for moderation

Eh, they had it right; you don't. Section 230 does not mean there is a need to moderate; it's only a "please do" provision.

Anonymous Coward says:

Re: Re: Re:2 Mythbusters...

There’s a difference between…

Nothing about Section 230 indicates a need for moderation.

and…

It’s a shame a lot of the big platforms for a long time took their section 230 immunity to mean that there was no need for moderation

Namely, the difference between, "Section 230 doesn’t indicate a need for moderation," and "Section 230 indicates that there isn’t a need for moderation."

Not indicating a need =/= indicating something isn’t needed.

ECA (profile) says:

A comment...

I love it sometimes when others look at my comments and Decide to correct me.
I have to suggest to them that the English language is a conglomerate of many languages consolidated into a menagerie of crap. We bring rules into English from other languages that have no use except on those certain words from that one language. And we expect kids and even adults to use those rules that have no use except for those few words. Instead of converting the word to an English/Anglo spelling, we just throw words into the mess and add more rules to cover them… "I before E" has been changed, as there are an equal number of words that don't follow it. And so others may know, we have removed letters from the English alphabet, because we found other ways to use a shorter alphabet.

http://mentalfloss.com/article/31904/12-letters-didnt-make-alphabet

and probably a few others… Let alone pronunciation and the inclusion of words from German, Russian, Italian, Spanish and other languages..
We have words that have so many meanings that unless you KNOW the language, you will be mistaken to even use the correct ones..
I'm not super educated, but my teacher gave me a dictionary because I like playing with words and meanings, even though I do have a few handicaps that restricted me when I was younger.

Good luck folks have fun trying to get a computer to understand all the connotations and convolutions of English.

tp (profile) says:

Stackexchange shows proper content moderation is possible

It seems there's a myth that content moderation is so difficult that no one can do it. But StackExchange has clearly succeeded at content moderation; there's only a very small amount of trolling or bad behaviour on their platform, even with users evaluating each other's work.

Of course, StackExchange has spent years perfecting their system to get content moderation working properly.

ECA (profile) says:

Re: Stackexchange shows proper content moderation is possible

Moderation can be cheap and easy, if you use SIMPLE rules..

But we have 2 groups..
1 that understands this idea.
1 that says TROLLS have rights, even if they are calling everyone by every name in the book and the subject is lost to rabble-rousing discussion..

The first group keeps asking the second for proof of what they are saying, and the second seems to think something they heard from the 18th century has any factual meaning, passed down from their GREAT-grandfather..

bhull242 (profile) says:

Re: Stackexchange shows proper content moderation is possible

Of course, that could just be a coincidence. StackExchange is pretty niche, and there’s not much reason to troll people on it.

Plus, StackExchange isn’t even close to being as large as, say, YouTube or Facebook. StackExchange gets thousands of uploads per year; others get millions per day. The scale is completely different.
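The gap looks even starker translated into reviewer time. A back-of-the-envelope sketch, assuming (purely as a guess) 30 seconds of human attention per item and an 8-hour reviewing day:

```python
# Back-of-the-envelope staffing estimate. The 30-second review time and
# 8-hour day are assumptions for illustration, not industry figures.
SECONDS_PER_ITEM = 30
REVIEW_SECONDS_PER_DAY = 8 * 3600

def full_time_reviewers(items_per_day: float) -> float:
    return items_per_day * SECONDS_PER_ITEM / REVIEW_SECONDS_PER_DAY

print(full_time_reviewers(10_000 / 365))  # thousands of items/year: ~0.03 reviewers
print(full_time_reviewers(1_000_000))     # a million items/day: ~1,042 reviewers
```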

Peter (profile) says:

How does China do it

Totally not saying I want the US or any country to be China, but it seems that saying "China controls content on its internet" and "content moderation can't scale, and here are some myths related to it" are at odds. Is it a matter of the liability being on the platform side? Liability in this sense meaning fear of the government… Just wondering, really; this may not be a unique thought at all, but the clash between these two beliefs just occurred to me.

That One Guy (profile) says:

Re: 'Collateral damage? Eh, we don't care.'

To the extent that China 'controls' the internet within their country, it appears to be two-fold: complete indifference to collateral damage, and having companies act as censors on their behalf.

So long as you don't care about collateral damage, and you're willing to impact ever-increasing amounts of 'good' content in your scramble to squash the 'bad' content, then moderation scales just fine; it's only when you aren't willing to have someone else pay those prices that it fails to.
