Explainer: How Letting Platforms Decide What Content To Facilitate Is What Makes Section 230 Work

from the Congress-got-this-right dept

There seems to be some recurring confusion about Section 230: how can it make a website immune from liability for its users' content while still letting the site decide whether and how that content is delivered? Isn't that inconsistent?

The answer is no: platforms don't lose Section 230 protection if they aren't neutral with respect to the content they carry. There are a few reasons for this, one of which is constitutional: the First Amendment protects editorial discretion, even for companies.

But another big reason is statutory, which is what this post is about. Platforms have the discretion to choose what content to enable, because making those moderating choices is one of the things that Section 230 explicitly gives them protection to do.

The key here is that Section 230 in fact provides two interrelated forms of protection for Internet platforms as part of one comprehensive policy approach to online content. It does this because Congress had two problems it was trying to solve when it passed the law. One was that Congress was worried about there being too much harmful content online. We can see this in the fact that Section 230 was ultimately passed as part of the "Communications Decency Act," a larger bill aimed at minimizing undesirable material online.

Meanwhile, Congress was also worried about losing beneficial online content. This latter concern was particularly acute in the wake of the Stratton Oakmont v. Prodigy case, where an online platform was held liable for its user's content. If platforms could be held liable for the user content they facilitated, then they would be unlikely to facilitate it, which would mean less beneficial online activity and expression, something that, as the first two subsections of Section 230 itself make clear, Congress wanted to encourage.

To address these twin concerns, Congress passed Section 230 with two complementary objectives: getting the most good content online, and the least bad. Section 230 was purposefully designed to achieve both ends by providing online platforms with two corresponding forms of protection.

The first is the one that people are most familiar with: the protection that keeps platforms from being held liable for how users use their systems and services. It's at 47 U.S.C. Section 230(c)(1):

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

It's important to remember that all this provision does is say that the platform cannot be held liable for what users do online; it in no way prohibits users themselves from being held liable. It just means that platforms won't have to be afraid of their users' online activity and thus feel pressured to overly restrict it.

Meanwhile, there's a second, lesser-known form of protection built into Section 230, at 47 U.S.C. Section 230(c)(2). What this provision does is make it safe for platforms to moderate their services if they choose to, because it ensures that they can choose to:

No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Some courts have even read subsection (c)(1) to cover these moderation decisions as well. But ultimately, the wisdom of Section 230 is that it recognizes that to get the best results (the most good content and also the least bad) it needs to ensure platforms can feel safe doing what they can to advance both of those goals. If they had to fear liability for how they chose to be platforms, they would be much less effective partners in achieving either. For instance, if platforms had to fear legal consequences for removing user content, they simply wouldn't remove it. (We know this from FOSTA, which, by severely weakening Section 230, has created disincentives for platforms to even try to police user content.) And if platforms had to fear liability for enabling user activity on their systems, they wouldn't do that either. They would instead end up engaging in undue censorship, or cease to exist at all. (We also know this from FOSTA, which, by weakening Section 230, has driven platforms to censor wide swaths of content, or even to stop providing platform services to lawful expression altogether.)

But even if Section 230 protected platforms against only one of these potential forms of liability, it would not only be far less effective at achieving Congress's overall goal of getting both the most good and the least bad content online than protecting them in both ways is; it would also be less effective at achieving even just one of those outcomes than the more balanced approach is. The problem is that whenever platforms find themselves needing to act defensively, out of fear of liability, it undermines their ability to deliver the best results on either front. The fear of legal liability forces platforms to divert resources away from the things they could be doing to best facilitate the most good, and least bad, content, and to spend them instead on whatever shields them from the particular legal threat demanding their outsized attention.

As an example, see what happens under the DMCA, where Section 230 is inapplicable and liability protection for platforms is much more conditional. Platforms are so fearful of copyright liability that they regularly over-delete lawful, and often even beneficial, content, despite such a result being inconsistent with Congress's legislative intent, or they waste resources weeding out bad takedown demands. It is at least fortunate that the DMCA expressly does not require platforms to actively police their users' content for infringement, because if they had to spend their resources policing content in this way, it would come at the expense of moderating their content in ways that would be more valuable to the user community and the public at large. Section 230 works because it ensures that platforms are free to devote their resources to being the best platforms they can be, enabling the most good and disabling the most bad content, instead of having to spend them on activities focused only on what protects them from liability.

To say, then, that a platform that monitors user content must lose its Section 230 protection is simply wrong, because Congress specifically wanted platforms to do this. Furthermore, even if you think that platforms, even with all this protection, still don't do a good enough job meeting Congress's objectives, it would still be a mistake to strip them of what protection they have, since removing it will not help any platform, current or future, do any better.

What tends to confuse people is the notion that curating the user content appearing on a platform turns that content into something the platform should now be liable for. When people throw around the imaginary "publisher/platform" distinction as a basis for losing Section 230 protection, this is the idea they are getting at: that by exercising editorial discretion over the content appearing on their sites, platforms somehow make that content something they should now be liable for.

But that's not how the law works, nor how it could work. And Congress knew that. At a minimum, platforms simply facilitate far too much content for them to be held accountable for any of it. Even when they do moderate content, they still do so at a scale at which it could never be fair or reasonable to hold them accountable for whatever remains online.

Section 230 has never required platform neutrality as a condition of benefiting from its protection. Instead, whether a platform can benefit from its protection against liability for user content has always turned on who created that content. So long as the "information content provider" (whoever created the content) is not the "interactive computer service provider" (the platform), Section 230 applies. Curating, moderating, and even editing that user content to some degree doesn't change this basic equation. Under Section 230 it is always appropriate to seek to hold responsible whoever created the objectionable content. But it is never ok to hold liable the platform they used, which did not create it.



Filed Under: balance, cda 230, congress, content moderation, free speech, section 230


Reader Comments



  1. Bruce C., 21 Jun 2019 @ 7:55am

    The bug in the system...

    The problem with section 230 is that under the provisions you describe above, a platform that simply allows users to upload content and then does post-hoc moderation is exempt from pretty much all forms of liability related to that content. An organization (like a traditional publisher) that does pre-publication moderation/curation and enters into a paid agreement with a creator does not gain the same immunity.

    This encourages a "slave labor" market for content creators. Or at best, an expansion of the "gig economy" where creators are dependent on the scraps of income they can get from ad revenue sharing, viewer donations and merch sales. We're encouraging freedom of the "press" at the expense of further hollowing out of incomes for the middle class. Only a select few get the views and clicks that earn them a steady income.

    While this "luck of the draw" has always been true in entertainment media, the time investment has changed to make the effort more risky for creators. In the old model, a creator has to audition or apply for projects. But once they get accepted for a project, they have a contracted rate of pay. Applying for a project requires time investment for updating your C.V., rehearsing audition scripts, or whatever, but if the projects are available, you can pretty much apply continuously until you get a gig.
    Under the platform model, a creator has to select a small number of projects and focus on them more intensely over a longer period of time to actually create the content, with no guarantee of reward. The risk increase comes from the fact that there are fewer eggs in the basket of opportunities under the new model. Some content creators will enjoy the new "lotto" economy: benefits to creators under the new system include a greater share of the income if they do score big. Others prefer less risk and more stability.

    There's an unresolved tension of benefits and trade-offs between the old and new models. I'm not saying section 230 needs to change, but a new publication model that mitigates the economic risks of pure "platform" publication while still maintaining the Section 230 immunities for the platform host would be useful here.

