If You're Complaining About COVID-19 Misinformation Online AND About Section 230, You're Doing It Wrong

from the section-230-is-helping-quell-disinformation dept

I remain perplexed by people who insist that internet platforms “need to do more” to fight disinformation and at the same time insist that we need to “get rid of Section 230.” This almost always comes from people who don’t understand content moderation or Section 230 — or who think that, because of Section 230’s liability protections, sites have no incentive to moderate content on their platforms. Of course platforms have tons of incentive to moderate: much of it social pressure, but also the fact that if they’re just filled with garbage they’ll lose users (and advertisers).

But a key point in all of these debates about content moderation with regard to misinformation around COVID-19 is that, for it to work in any way, there needs to be flexibility — otherwise it’s going to be a total mess. And what gives internet platforms that flexibility? Why, it’s that very same Section 230. Because Section 230 makes it explicit that sites don’t face liability for their moderation choices, it enables them to ramp up efforts — as they have — to fight off misinformation without fear of facing liability for making the “wrong” choices.

Without Section 230, these businesses would have had to vet every single post’s truthfulness and legality. Not only would that have bogged down businesses’ response, it also would have been impossible — we knew little about coronavirus when it first hit and don’t know much more today.

Put simply, Section 230 helps make the internet safer, and that, in turn, has let us all rely on it to keep life moving, even while we’re stuck inside.

I’d argue it’s even more stark than that article lays out. Not only did we know little about the coronavirus at the beginning, we still don’t know very much, and many of the early messages from official sources turned out to be wrong. Indeed, one of the ways we’ve zeroed in on more accurate information is by being able to discuss ideas freely and figure out what makes the most sense.

This whole process involves experimentation on both sides of this market. The platform players get to experiment with different methods and ideas for content moderation, while users get to discuss and debate different ideas about COVID-19. But both of those only happen with the structural balance provided by Section 230. Platforms can experiment to figure out what works best to enable reasonable debate and move people towards more accurate analysis — while minimizing the impact of blatantly wrong information, misinformation, and disinformation. And users get to discuss and debate ideas to get closer to the truth themselves. Without the balance of Section 230, you create massive structural problems that prevent most of that from happening.

Without 230, companies face the classic moderator’s dilemma. Doing no moderation at all is one option — but then that lets disinformation flow freely, and companies might face liability for that disinformation. Alternatively, they could moderate very thoroughly, and pull down lots of information. But that might actually include good and useful information. For example, the discussion over whether or not people should wear masks as the pandemic began was all over the place, with the WHO and the CDC initially urging people not to wear them. However, in part because of widespread discussions and evidence presented on social media, the narrative shifted, and eventually the CDC and WHO came on board with the recommendation to wear masks.

Without 230, what would a platform do regarding the mask discussion? Someone at the company could unilaterally decide that masks are a good thing — but then face outrage from those who supported the WHO and CDC, who would argue that the platform is spreading dangerous misinformation that could lead to hoarding and fewer masks for medical professionals. And that alone might create lawsuits (in the absence of 230). Or they could follow what the WHO and CDC said initially… and then might feel obligated to silence and delete the conversations which argued, persuasively, why masks actually are valuable. And that would create all sorts of problems as well. At the same time, there is actual misinformation about what types of masks to wear and how — and there are strong arguments for why platforms should be able to moderate that.

But all of that becomes much trickier, and much riskier, without Section 230 — and the greatest likelihood is that platforms will seek to avoid liability, which will mean censoring plenty of good and important information (such as how to make or wear masks and why they’re so important). It’s Section 230 that has enabled both the platforms’ adjustments to their moderation techniques and the important public discussions that allow people to share, debate, and discuss as we figure out what is going on and how best to deal with it.

Comments on “If You're Complaining About COVID-19 Misinformation Online AND About Section 230, You're Doing It Wrong”

This comment has been deemed insightful by the community.
Aaron Wolf (profile) says:

Re: forcing politics

It seems far more likely to me that the anti-230 folks are just ignorantly grasping at the idea of regulation.

They may have their political views, but they are not so thoughtful about forcing their politics and hiding their actions etc. Rather, they just want magic. It’s like that classic YouTube skit about "the expert".

The anti-230 folks are just non-experts asking the experts to do magic.

Stephen T. Stone (profile) says:

Re: Re:

It seems far more likely to me that the anti-230 folks are just ignorantly grasping at the idea of regulation.

Possible, but unlikely. The anti-230 side, from what I can tell, seems more focused on the idea of “fairness”. They want to know why “liberal” speech is allowed on Twitter, Facebook, etc. but “conservative” speech (allegedly) isn’t. Such fools think 230 is a barrier to “fairness”. But they don’t stop to think about what speech is being banned, why it’s being banned, and how 230 allows any platform to make that decision for itself.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re: Re:

I think Aaron actually may have a good point. There are two groups of people attacking 230. He’s describing one, and you’re describing the other (of course, it’s too simplistic to say there are just two motivations… but… for simplicity’s sake we’ll run with it).

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: Re: forcing politics

"The anti-230 folks are just non-experts asking the experts to do magic."

Not usually, no. The most vocal anti-230 crowd stands out as having one of two common denominators.

The first is the type who holds opinions which would have them banned from most platforms where a sizeable audience exists, and who is incensed that the platforms in question are free to deny them use of the platform’s soapbox. That crowd sees the death of 230 as the guarantee that Twitter will henceforth be forced to let them spew bile on whatever minority they’ve a hateboner for.

The second type thinks getting rid of 230 will allow them to SLAPP any unfavorable mention of themselves out of existence – and here we find the by now rather plentiful industries of fraud and conmanship engaged in peddling whatever snake oil they can – whether that’s copyright/patent trolls, ambulance chasers, or purveyors of silver tonics with the lamentable side effect of coloring people permanently a vivid hue of zombie smurf.

Very few people today nagging about section 230, encryption, or any other legal or technical enabler of mass communication actually lack an agenda.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:

As I often say, it’s more that they have seen how much extra traffic (and thus revenue) they get by piggybacking on a general mainstream platform compared to what they could ever generate on their own. So, they want the platform to be forced to keep them there against its wishes.

There will be some true believers who genuinely don’t understand why their favoured politics are treated unequally (in the rare cases where that’s even remotely true), but a lot of them are grifters who don’t like their free ride on the gravy train coming to an end.

This comment has been flagged by the community.

tz1 (profile) says:

At what point does moderation become publisher editorial?

The original reason for Sec. 230 was to protect PLATFORMS. If people post errant or fraudulent classified ads or such, it makes no sense to hold the printer liable.

There can be UNIFORM moderation to remove offensive words (which should be listed – your post contains XXX which violates our ToS).

The error which is growing is that Trust and Safety is now the Ministry of Truth, picking not the most reasoned posts based on evidence with shown work, but the narrative. So they would censor lockdowns on 2/1/20, but censor anti-lockdown two months later. The truth didn’t change.

A “We Disagree (and/or disapprove)” note is different than outright censorship. If these companies wish to be editors for their publication, they are publishers (and have claimed 1st amendment protection!) and not platforms.

There is a line, but they crossed it a long time ago and wish to have it both ways. Protections both as publishers and platforms depending on the context. They should be forced to choose one or the other.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

If these companies wish to be editors for their publication, they are publishers (and have claimed 1st amendment protection!) and not platforms.

Insofar as it concerns 47 U.S.C. § 230, that distinction doesn’t matter. Any platform that moderates user-generated content is covered by 230. Twitter can ban basically any kind of speech it wants without legal consequence. So can any other platform.

Let’s say you run a small Mastodon instance. You decide that you don’t want racial slurs used on that instance, under any circumstances. What would you do, then, if the government said “you must allow people to say those things”? Because 230 stops the government from doing exactly that.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:

A "platform" is just that: a "dumb pipe" for speech that is NOT moderated based on political views. A PUBLISHER alters content to suit its editorial method, and selective censorship is one such way.

Masnick clearly censors postings that make sound attacks against Section 230, and each of his decisions is being logged for future reveal at a place he cannot censor.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re: Re:

A "platform" is just that: a "dumb pipe" for speech that is NOT moderated based on political views.

There’s a lot going on in this statement, none of which is correct. The law does not refer to "platforms," so your legal distinction between platform and publisher is not only wrong, but meaningless.

Second, if you actually meant an interactive computer service, which is what the law talks about, you’re still totally wrong. The law makes no reference to "dumb pipes," and the history and intent of the law show that its purpose is actually the exact opposite of what you claim. I mean, the law literally calls out that it’s designed to encourage the "development and utilization of blocking and filtering technologies." So at no point was it supposed to allow all things through. Indeed, the purpose behind the law was the reverse.

This is doubly reinforced by the parts of the law that specifically call out that an ICS may not be held liable for its moderation choices — and courts have long held that protection to be quite broad. So your claims are simply silly.

Masnick clearly censors postings that make sound attacks against Section 230, and each of his decisions is being logged for future reveal at a place he cannot censor.

I do no such thing, but if I did it would be perfectly legal and allowed under 230. The fact that your dumb posts sometimes get flagged by the community and/or caught by the spam filter does not change any aspect of that. We do not "censor" anyone, nor could we, since you have every right to post elsewhere — as you indicate you intend to do. And I eagerly await you revealing this "log" because it’s not going to show anything nefarious in the slightest, because it literally cannot.

Indeed, we’re not only obviously well within what 230 protects, we’re also far more open than most sites, allowing insanely ignorant takes such as yours to remain on the site unremoved. Plenty of sites would delete such delusional comments outright — again, with no legal consequences whatsoever.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: At what point does moderation become publisher editorial?

Your premise is wrong, and section 230 does not say what you think it says. But even if it did, the correct remedy is "use a competing service", not "continue to provide revenue for the service you dislike while whining that they’re not bowing to your opinions".

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: At what point does moderation become publisher editorial?

The original reason for Sec. 230 was to protect PLATFORMS.

No. The original reason for Section 230 was to protect free speech on the internet, and to enable internet services to feel free to create family-friendly spaces. I mean, this history is pretty well known.

There can be UNIFORM moderation to remove offensive words (which should be listed – your post contains XXX which violates our ToS).

Spoken like someone who has never had to moderate a platform. If you had a list of forbidden words, people would immediately figure out how to write the same thing using slight modifications. This is an extremely naive take.
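
To make that concrete, here’s a minimal Python sketch (a toy, not any real platform’s filter; the banned words are hypothetical placeholders) of exactly that failure:

```python
import re

# Hypothetical banned-word list -- a stand-in, not any platform's real list.
FORBIDDEN = {"scam", "quack"}

def naive_filter(post: str) -> bool:
    """Flag a post only if it contains a forbidden word spelled exactly."""
    words = re.findall(r"[a-z]+", post.lower())
    return any(word in FORBIDDEN for word in words)

print(naive_filter("This miracle cure is a scam"))     # True: caught
print(naive_filter("This miracle cure is a sc4m"))     # False: leetspeak slips through
print(naive_filter("This miracle cure is a s.c.a.m"))  # False: punctuation slips through
```

Every countermeasure you bolt on (strip punctuation, map digits to letters) just continues the arms race; it never ends it.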

The error which is growing is that Trust and Safety is now the Ministry of Truth, picking not the most reasoned posts based on evidence with shown work, but the narrative.

Oh, sorry. This is even more naive.

So they would censor lockdowns on 2/1/20, but censor anti-lockdown two months later. The truth didn’t change.

Actually, it did. "Truth" in this context is our collective understanding of what is likely to make everyone safest. And in a world where information is constantly changing and we don’t yet know what’s accurate, the way this works is that, as more information comes out, we adjust our ideas and theories about it.

That and, oh, no one ever censored news about lockdowns on 2/1/20. But whatever.

If these companies wish to be editors for their publication, they are publishers (and have claimed 1st amendment protection!) and not platforms.

First, there is no legal distinction, so this is a silly point. Second, yes, they are publishers for any content they create themselves, but they are not publishers of content they moderate.

Protections both as publishers and platforms depending on the context. They should be forced to choose one or the other.

Again, this is an incredibly naive take. Again, there is no legal distinction here. The issue is that they take some actions as interactive computer services (hosting content for others) and other actions as publishers (creating their own content). You seem to be confusing the two and assuming that one can’t be the other. It depends on the specific action — and moderating content is firmly within the arena of being an interactive computer service, not creating your own content.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: At what point does moderation become publisher editorial?

Spoken like someone who has never had to moderate a platform. If you had a list of forbidden words, people would immediately figure out how to write the same thing using slight modifications. This is an extremely naive take.

Also, you run into this issue: https://en.wikipedia.org/wiki/Scunthorpe_problem. A person would have to be very naive to think that such a filter has not already been tried, and must be pretty new to the internet to not have seen examples of these problems in the wild.

Also, any attempt to react to the ways people try to get around the blocks would inevitably also block perfectly legitimate content, and correctly identifying context is very difficult even for human readers at times.

Once again, someone who thinks that this is a simple problem to fix does not understand the issue, and that’s even without the false conflation of publisher and platform.
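
A minimal sketch of that problem in code (using "ass" as the blocked string rather than the word that actually trips the Scunthorpe filter):

```python
# Naive substring matching flags innocent words that merely contain a
# blocked string. The block list here is a hypothetical stand-in.
BLOCKED = {"ass"}

def substring_filter(text: str) -> bool:
    """Return True if any blocked string appears anywhere in the text."""
    lowered = text.lower()
    return any(blocked in lowered for blocked in BLOCKED)

print(substring_filter("Don't be an ass."))           # True: the intended catch
print(substring_filter("A classic assessment"))       # True: false positive
print(substring_filter("The ambassador's passport"))  # True: false positive
```

Loosen it to whole-word matching and you’re back to the evasion problem above; keep it as substring matching and you ban ambassadors.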

This comment has been deemed insightful by the community.
This comment has been deemed funny by the community.
Anonymous Coward says:

Re: Re: Re: At what point does moderation become publisher editorial?

On that topic, the TV Tropes page has a pretty hilarious example of how a boy got around the censors while he was testing it.

One of the developers of Toontown Online, wanting to get around this problem while at the same time allowing players to interact, suggested using a list of approved words and sentence fragments that a user could string together to form full sentences. This idea was shot down by one of the other developers who had tried the approach in another game. The 14-year-old boy who was testing the software was able to, within a minute, construct the following sentence: "I want to stick my long-necked Giraffe up your fluffy white bunny".
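
The failure mode is easy to reproduce. In this minimal sketch (an invented approved-word list, not Toontown’s actual one), every word in the tester’s sentence is individually innocuous, so word-by-word approval passes the whole thing:

```python
# Hypothetical whitelist: each word on its own is perfectly innocent.
APPROVED = {
    "i", "want", "to", "stick", "my", "long", "necked",
    "giraffe", "up", "your", "fluffy", "white", "bunny",
}

def whitelist_ok(sentence: str) -> bool:
    """Approve a sentence only if every word is on the approved list."""
    words = sentence.lower().replace("-", " ").split()
    return all(word in APPROVED for word in words)

print(whitelist_ok("I want to stick my long-necked giraffe up your fluffy white bunny"))
# True -- every word is approved; the innuendo lives entirely in the combination.
```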

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:2 At what point does moderation become publisher editorial?

Yeah, as the saying goes, "For every complex problem there is an answer that is clear, simple, and wrong." Euphemisms are deadly ground for this kind of thinking, and if the subject is sex or money the user will have all the incentives they need to bypass any such filter.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re: At what point does moderation become publisher editorial?

Also, any attempt to react to the ways people try to get around the blocks would inevitably also block perfectly legitimate content, and correctly identifying context is very difficult even for human readers at times.

An example that comes immediately to mind regarding the problems with blocks/filters, and one that has been covered on TD before, is the anti-vaxxer lunatics vs. real science. Since both of them are going to be using a lot of the same words and terminology, a platform is going to have a hell of a time creating a filter to block the plague cultists without also hitting legitimate science and/or people pointing out how dangerously wrong they are.
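
You can see the overlap directly. In this minimal sketch (both sentences are invented examples), a misinformation post and its debunking yield nearly identical keyword sets:

```python
STOPWORDS = {"the", "a", "is", "are", "do", "not", "no"}

def keywords(text: str) -> set:
    """Crude keyword extraction: lowercased words minus stopwords."""
    return {word for word in text.lower().split() if word not in STOPWORDS}

misinfo = "vaccines cause autism and the ingredients are toxic"
debunk = "vaccines do not cause autism and the ingredients are not toxic"

print(keywords(misinfo) & keywords(debunk))
# {'vaccines', 'cause', 'autism', 'and', 'ingredients', 'toxic'} (order varies)
# A keyword filter that blocks one inevitably blocks the other.
```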

This comment has been flagged by the community.

This comment has been deemed funny by the community.
Koby (profile) says:

Re: Re: At what point does moderation become publisher editorial?

In an excellent example, yesterday, conservative political commentator Candace Owens was suspended from Twitter. Basically for tweeting that she believes the governor of Michigan should allow people to go back to work.

https://thehill.com/blogs/blog-briefing-room/news/495814-candace-owens-twitter-account-suspended

There was nothing to moderate. There was no obscene language that needed to be toned down. This was pure political censorship. Every time one of the tech corporations censors a benign tweet like this, it is the corporations themselves who take a chip out of the Section 230 wall because of their willingness to engage in abuse.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re: 'My conservative views were censored!' 'Which?' 'Uhh...'

There was nothing to moderate.

Other than someone posting a wildly dangerous idea during a pandemic?

There was no obscene language that needed to be toned down.

Obscenity isn’t the only thing that gets moderated, so that point is moot.

This was pure political censorship.

No, it bloody well wasn’t, but have fun clutching that persecution complex if it makes you feel better rather than facing that they got the boot not because they were conservative but because they were a dangerous idiot.

If your ‘politics’ involve incredibly irresponsible and dangerous ideas, maybe take a look at your politics before you go complaining that someone else was ‘mean’ to you for presenting them.

Every time one of the tech corporations censors a benign tweet like this,

If ‘people should do something that is incredibly irresponsible and has the potential to get people killed‘ counts as benign in your mind I’d hate to see what counts as bad.

it is the corporations themselves who take a chip out of the Section 230 wall because of their willingness to engage in abuse.

Actions have consequences, if you don’t like a private platform’s rules for what they do and do not allow then make your own gorram platform and stop whining that they won’t let you say/do whatever you want on theirs.

It’s not ‘abuse’ for a platform to have rules that they enforce and/or choose who they let post simply because you don’t like those rules.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:2 'My conservative views were censored!' 'Which?' 'Uhh...'

"Actions have consequences"

The world would be a much better place if these people accepted that, in all sorts of ways.

"Obscenity isn’t the only thing that gets moderated, so that point it moot."

Yes, here’s Twitter’s current stated policy on the matter:

Under this guidance, we will require people to remove tweets that include

Specific claims around COVID-19 information that intends to manipulate people into certain behavior for the gain of a third party with a call to action within the claim

Specific and unverified claims that incite people to action and cause widespread panic, social unrest or large-scale disorder

The whined-about tweet clearly fits on multiple levels, and a lot of these loons will be making multiple violations of the TOS, hence the banhammer.

Why are "leftists" not being banned? Possibly because they’re not telling people to gather in large contagion vectors in order to specifically overwhelm the ability for the government to cope with it, and in Ms. Dumbass’s tweet.

Anonymous Coward says:

Re: Re: Re:2 'My conservative views were censored!' 'Which?' 'Uhh...'

"If your ‘politics’ involve incredibly irresponsible and dangerous ideas, maybe take a look at your politics before you go complaining that someone else was ‘mean’ to you for presenting them"

Perhaps if the person you are following suggests random drugs they have heard about on right-wing conspiracy shows, without any testing or vetting, or suggests things that are lethal, like drinking sanitizers or bleach, or perhaps putting yourself under or injecting "bright lights"… as a treatment option, you may want to re-evaluate your leadership candidate…

Just the good ol’ boys
Never meanin’ no harm
Beats all you never saw
Been in trouble with the law
Since the day they was born
Straightenin’ the curves
Flattenin’ the coronavirus – hills
Someday the Iranians might get ’em
But the law never will
Makin’ his way
The only way he knows how
That’s just a little bit more
Than the CIA will allow
Makin’ their way
The only way they know how yeah
That’s…

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: At what point does moderation become publisher editorial?

"Basically for tweeting that she believes the governor of Michigan should allow people to go back to work."

Yes, she was inciting people to take action against the current advice of the government, which is not only irresponsible and stands to get a lot of people needlessly infected or worse, but is DIRECTLY against Twitter’s stated policy on the matter.

Don’t like the rules? Go somewhere else or stop breaking them.

"This was pure political censorship."

So? They don’t owe you a platform no matter what politics you have.

That One Guy (profile) says:

Re: Re: Re:2 At what point does moderation become publisher editorial?

So? They don’t owe you a platform no matter what politics you have.

While I agree that they don’t owe any political party a platform, I feel it’s a mistake to grant them even that much for the sake of the argument. Unless they want to argue that gross irresponsibility is a conservative position, it most certainly wasn’t ‘political censorship’.

They were shown the door not because of their political leanings but because they were posting incredibly foolish and dangerous tweets in a time when that could very easily get people killed.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:3 At what point does moderation become publisher editorial?

"Unless they want to argue that gross irresponsibility is a conservative position it most certainly wasn’t ‘political censorship’."

Well, given recent evidence, they might not be far off?

Either way, as usual for this type, he misrepresented the original tweet. It didn’t read as "she believes the governor of Michigan should allow people to go back to work". It read as "if enough of us ignore the orders at once, the government loses the ability to control the pandemic, so let’s do that!".

That may or may not be what she meant, but it’s what she said, and it shouldn’t take a genius to work out why this is unacceptable even if it weren’t directly against Twitter’s stated rules.

David says:

Come again?

Of course platforms have tons of incentive to moderate: much of it social pressure, but also the fact that if they’re just filled with garbage they’ll lose users (and advertisers).

Uh, even before the Internet, the highest-circulation newspapers were not the ones known for quality journalism.

Readers crave garbage.

This comment has been deemed insightful by the community.
TFG says:

Re: Come again?

I’m not sure the analogy is sound. For one, tabloids are a purchased product, while Twitter is free to view.

Secondly, if you look at the actual social media platforms and compare moderated vs. unmoderated … Twitter and Facebook have the largest imprint. Reddit has various moderation policies depending on the specific subreddit, but moderation does, in fact, happen – and those subreddits where it’s superlax don’t seem to have a large population.

Meanwhile, 4chan, known for being a cesspool, is certainly famous but doesn’t have nearly the same widespread cultural saturation. 8chan is small potatoes next to 4chan. Nobody really cares about Gab.

Based on this, it seems that, yes, when the place gets filled with garbage, most people leave.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: Come again?

"the place being full of garbage is not necessarily the direct result of less moderation."

Not on its own, but these places are usually filled up with the garbage that comes from other places. 8chan was originally created by people who found 4chan too restrictive. Gab’s main user base is people who got kicked off Twitter. In both cases, it’s the relative lack of moderation that attracted them.

It may be possible to create a site with light moderation that is still a useful community, or a heavily moderated place with a bad one, but the general trend at the moment is clear: moderated sites are demonstrably better than those which are not moderated.

Anonymous Coward says:

Alternatively, they could moderate very thoroughly, and pull down lots of information. But that might actually include good and useful information.

I would replace "thoroughly" with "extensively." Thoroughness implies a quality of work. In fact, I would argue that thoroughness is precisely the thing which cannot be done at scale.

Apologies if this seems merely nit-picky.
