Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act

from the duty-calls dept

Hello! Someone has referred you to this post because you've said something quite wrong about Section 230 of the Communications Decency Act.

I apologize if it feels a bit cold and rude to respond in such an impersonal way, but I've been wasting a ton of time lately responding individually to different people saying the same wrong things over and over again, and I was starting to feel like this guy:

[Image: xkcd's "Duty Calls" comic]

And... I could probably use more sleep, and my blood pressure could probably use a little less time spent responding to random wrong people. And, so, for my own good you get this. Also for your own good. Because you don't want to be wrong on the internet, do you?

Also I've totally copied the idea for this from Ken "Popehat" White, who wrote Hello! You've Been Referred Here Because You're Wrong About The First Amendment a few years ago, and it's great. You should read it too. Yes, you. Because if you're wrong about 230, there's a damn good chance you're wrong about the 1st Amendment too.

While this may all feel kind of mean, it's not meant to be. Unless you're one of the people who is purposefully saying wrong things about Section 230, like Senator Ted Cruz or Rep. Nancy Pelosi (being wrong about 230 is bipartisan). For them, it's meant to be mean. For you, let's just assume you made an honest mistake -- perhaps because deliberately wrong people like Ted Cruz and Nancy Pelosi steered you wrong. So let's correct that.

Before we get into the specifics, I will suggest that you just read the law, because it seems that many people who are making these mistakes seem to have never read it. It's short, I promise you. If you're in a rush, just jump to part (c), entitled Protection for “Good Samaritan” blocking and screening of offensive material, because that's the only part of the law that actually matters. And if you're in a real rush, just read Section (c)(1), which is only 26 words, and is the part that basically every single court decision (and there have been many) has relied on.

With that done, we can discuss the various ways you might have been wrong about Section 230.

If you said "Once a company like that starts moderating content, it's no longer a platform, but a publisher"

I regret to inform you that you are wrong. I know that you've likely heard this from someone else -- perhaps even someone respected -- but it's just not true. The law says no such thing. Again, I encourage you to read it. The law does distinguish between "interactive computer services" and "information content providers," but that is not, as some imply, a fancy legalistic way of saying "platform" or "publisher." There is no "certification" or "decision" that a website needs to make to get 230 protections. It protects all websites and all users of websites when there is content posted on the sites by someone else.

To be a bit more explicit: at no point in any court case regarding Section 230 is there a need to determine whether or not a particular website is a "platform" or a "publisher." What matters is solely the content in question. If that content is created by someone else, the website hosting it cannot be sued over it.

Really, this is the simplest, most basic understanding of Section 230: it is about placing the liability for content online on whoever created that content, and not on whoever is hosting it. If you understand that one thing, you'll understand most of the most important things about Section 230.

To reinforce this point: there is nothing any website can do to "lose" Section 230 protections. That's not how it works. There may be situations in which a court decides that those protections do not apply to a given piece of content, but it is very much fact-specific to the content in question. For example, in the lawsuit against Roommates.com for violating the Fair Housing Act, the court ruled against Roommates, but not that the site "lost" its Section 230 protections, or that it was now a "publisher." Rather, the court explicitly found that some content on Roommates.com was created by 3rd party users and thus protected by Section 230, and some content (namely pulldown menus designating racial preferences) was created by the site itself, and thus not eligible for Section 230 protections.

If you said "Because of Section 230, websites have no incentive to moderate!"

You are wrong. If you reformulated that statement to say that "Section 230 itself provides no incentives to moderate" then you'd be less wrong, but still wrong. First, though, let's dispense with the idea that thanks to Section 230, sites have no incentive to moderate. Find me a website that doesn't moderate. Go on. I'll wait. Lots of people say things like one of the "chans" or Gab or some other site like that, but all of those actually do moderate. There's a reason that all such websites do moderate, even those that strike a "free speech" pose: (1) because other laws require at least some level of moderation (e.g., copyright laws and laws against child porn), and (2) more importantly, with no moderation, a platform fills up with spam, abuse, harassment, and just all sorts of garbage that make it a very unenjoyable place to spend your internet time.

So there are many, many incentives for nearly all websites to moderate: namely to keep users happy, and (in many cases) to keep advertisers or other supporters happy. When sites are garbage, it's tough to attract a large user base, and even more difficult to attract significant advertising. So, to say that 230 means there's no incentive to moderate is wrong -- as proven by the fact that every site does some level of moderation (even the ones that claim they don't).

Now, to tackle the related argument -- that 230 by itself provides no incentive to moderate -- that is also wrong. Because courts have ruled that Section (c)(1) immunizes moderation choices, and Section (c)(2) explicitly says that sites are not liable for their moderation choices, sites actually have a very strong incentive provided by 230 to moderate. Indeed, this is one key reason why Section 230 was written in the first place. It was done in response to a ruling in the Stratton Oakmont v. Prodigy lawsuit, in which Prodigy, in an effort to provide a "family friendly" environment, did some moderation of its message boards. The judge in that case ruled that since Prodigy did moderate the boards, that meant it would be liable for anything it left up.

If that ruling had stood and been adopted by others, it would, by itself, be a massive disincentive to moderation. Because the court was saying that moderation itself creates liability. And smart lawyers will say that the best way to avoid that kind of liability is not to moderate at all. So Section 230 explicitly overruled that judicial decision, and eliminated liability for moderation choices.

If you said "Section 230 is a massive gift to big tech!"

Once again, I must inform you that you are very, very wrong. There is nothing in Section 230 that applies solely to big tech. Indeed, it applies to every website on the internet and every user of those websites. That means it applies to you, as well, and helps to protect your speech. It's what allows you to repeat something someone else said on Facebook and not be liable for it. It's what protects every website that has comments, or any other third-party content. It applies across the entire internet to every website and every user, and not just to big tech.

The "user" protections get less attention, but they're right there in the important 26 words. "No provider or user of an interactive computer service shall be treated as the publisher or speaker...." That's why there are cases like Barrett v. Rosenthal where someone who forwarded an email to a mailing list was held to be protected by Section 230, as a user of an interactive computer service who did not write the underlying material that was forwarded.

And it's not just big tech companies that rely on Section 230 every day. Every news organization (even those that write negative articles about Section 230) that has comments on its website is protected thanks to Section 230. This very site was sued, in part, over comments, and Section 230 helped protect us as well. Section 230 fundamentally protects free speech across the internet, and thus it is more properly called out as a gift to internet users and free speech, not to big tech.

If you said "A site that has political bias is not neutral, and thus loses its Section 230 protections"

I'm sorry, but you are very, very, very wrong. Perhaps more wrong than anyone saying any of the other things above. First off, there is no "neutrality" requirement at all in Section 230. Seriously. Read it. If anything, it says the opposite. It says that sites can moderate as they see fit and face no liability. This myth is out there and persists because some politicians keep repeating it, but it's wrong and the opposite of truth. Indeed, any requirement of neutrality would likely raise significant 1st Amendment questions, as it would be involving the law in editorial decision making.

Second, as described earlier, you can't "lose" your Section 230 protections, especially not over your moderation choices (again, the law explicitly says that you cannot face liability for moderation choices, so stop trying to make it happen). If content is produced by someone else, the site is protected from lawsuit, thanks to Section 230. If the content is produced by the site, it is not. Moderating the content is not producing content, and so the mere act of moderation, whether neutral or not, does not make you lose 230 protections. That's just not how it works.

If you said "Section 230 requires all moderation to be in 'good faith' and this moderation is 'biased' so you don't get 230 protections"

You are, yet again, wrong. At least this time you're using a phrase that actually is in the law. The problem is that it's in the wrong section. Section (c)(2)(A) does say that:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

However, that's just one part of the law, and as explained earlier, nearly every Section 230 case about moderation hasn't even used that part of the law, instead relying on Section (c)(1)'s separation of an interactive computer service from the content created by users. Second, the good faith clause is only in half of Section (c)(2). There's also a separate section, which has no good faith limitation, that says:

No provider or user of an interactive computer service shall be held liable on account of... any action taken to enable or make available to information content providers or others the technical means to restrict access to material....

So, again, even if (c)(2) applied, most content moderation could avoid the "good faith" question by relying on that part, (c)(2)(B), which has no good faith requirement.

However, even if you could somehow come up with a case where the specific moderation choices were somehow crafted such that (c)(1) and (c)(2)(B) did not apply, and only (c)(2)(A) were at stake, even then, the "good faith" modifier is unlikely to matter, because a court trying to determine what constitutes "good faith" in a moderation decision is making a very subjective decision regarding expression choices, which would create massive 1st Amendment issues. So, no, the "good faith" provision is of no use to you in whatever argument you're making.

If you said "Section 230 is why there's hate speech online..."

Ooof. You're either The NY Times or very confused. Maybe both. The 1st Amendment protects hate speech in the US. Elsewhere, not so much. Either way, it has little to do with Section 230.

If you said "Section 230 means these companies can never be sued!"

I regret to inform you that you are wrong. Internet companies are sued all the time. Section 230 merely protects them from a narrow set of frivolous lawsuits, in which the websites are sued either for the content created by others (in which case the actual content creators remain liable) or in cases where they're being sued for the moderation choices they make, which are mostly protected by the 1st Amendment anyway (but Section 230 helps get those frivolous lawsuits kicked out faster). The websites can and do still face lawsuits for many, many other reasons.

If you said "Section 230 is a get out of jail card for websites!"

You're wrong. Again, websites are still 100% liable for any content that they themselves create. Separately, Section 230 explicitly exempts federal criminal law -- meaning that stories that blame things like sex trafficking and opioid sales on 230 are very much missing the point as well. The Justice Department is not barred by Section 230. It says so quite clearly:

Nothing in this section shall be construed to impair the enforcement of... any other Federal criminal statute

So many of the complaints about criminal activity are not about Section 230, but about a lack of enforcement.

If you said "Section 230 is why there's piracy online"

You again may be the NY Times or someone who has not read Section 230. Section 230 explicitly exempts intellectual property law:

Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.

If you said "Section 230 gives websites blanket immunity!"

The courts have made it clear this is not the case at all. In fact, many courts have highlighted situations in which Section 230 does not apply: in the Roommates case, the Accusearch case, the Doe v. Internet Brands case, and the Oberdorf v. Amazon case, judges found limits to Section 230's protections, showing that the immunity it conveys is not as broad as people claim. At the very least, the courts seem to have little difficulty targeting what they consider to be "bad actors" with regards to the law.

If you said "Section 230 is why big internet companies are so big!"

You are, again, incorrect. As stated earlier, Section 230 is not unique to big internet companies, and indeed, it applies to the entire internet. Research shows that Section 230 actually helps incentivize competition, in part because without Section 230, the costs of running a website would be massive. Without Section 230, large websites like Google and Facebook could handle the liability, but smaller firms would likely be forced out of business, and many new competitors might never get started.

If you said "Section 230 was designed to encourage websites to be neutral common carriers"

You are exactly 100% wrong. We've already covered why it does not require neutrality above, but it was also intended as the opposite of requiring websites to be "common carriers." Specifically, as mentioned above, part of the impetus for Section 230 was to enable services to create "family friendly" spaces, in which plenty of legal speech would be blocked. A common carrier is a very specific thing that has nothing to do with websites and less than nothing to do with Section 230.

If you said "If all this stuff is actually protected by the 1st Amendment, then we can just get rid of Section 230"

You're still wrong, though perhaps not as wrong as everyone else making these bad takes. Without Section 230, and relying solely on the 1st Amendment, you still open up basically the entire internet to nuisance suits. Section 230 helps get cases dismissed early, whereas using the 1st Amendment would require lengthy and costly litigation. 230 does rely strongly on the 1st Amendment, but it provides a procedural advantage in getting vexatious, frivolous nuisance lawsuits shut down much faster than they would be otherwise.

There seems to be more and more wrong stuff being said about Section 230 nearly every day, but hopefully this covers most of the big ones. If you see someone saying something wrong about Section 230, and you don't feel like going over all of their mistakes, just point them here, and they can be educated.

Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.

Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.

While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.

–The Techdirt Team

Filed Under: cda 230, free speech, intermediary liability, neutrality, platform, publisher, section 230, speech, wrong, you're wrong


Reader Comments

The First Word

In re: the “family friendly” stuff, I’m posting the on-the-Congressional-record words of Republican lawmaker Chris Cox, who helped craft 47 U.S.C. § 230, so we can all see his exact intent in that regard:

We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see.

[O]ur amendment will do two basic things: First, it will protect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet, let us say, who takes steps to screen indecency and offensive material for their customers. It will protect them from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem. Second, it will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the Internet because frankly the Internet has grown up to be what it is without that kind of help from the Government. In this fashion we can encourage what is right now the most energetic technological revolution that any of us has ever witnessed. We can make it better. We can make sure that it operates more quickly to solve our problem of keeping pornography away from our kids, keeping offensive material away from our kids, and I am very excited about it.

—Stephen T. Stone
