Techdirt Podcast Episode 272: Section 230 Matters, With Ron Wyden & Chris Cox

from the celebrating-25-years dept

Last week, we hosted Section 230 Matters, a virtual Techdirt fundraiser featuring a panel discussion with the two lawmakers who wrote the all-important text and got it passed 25 years ago: Chris Cox and Senator Ron Wyden. It was informative and entertaining, and for this week’s episode of the podcast, we’ve got the full audio of the panel discussion about the history, evolution, and present state of Section 230.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.



Comments on “Techdirt Podcast Episode 272: Section 230 Matters, With Ron Wyden & Chris Cox”

23 Comments

This comment has been flagged by the community.

cynoclast (profile) says:

230 matters

And is great for sites unlike Facebook, Twitter and Reddit that don’t abuse it.

Either you’re responsible for all of the content or none of it.

This nonsense of censoring users and not being responsible at the same time is both an abuse of Section 230 and a flagrant circumvention of the First Amendment by Democrat-dominated companies.

Just look at their employee donations on opensecrets.org. Every single one of them, their employees and execs, donate 90%+ to Democrats and Democrat-run PACs.

They should be forced to carry a (D-CA) designation just like the ones the news likes to brand Congress with to avoid any nasty critical thinking.

Techdirt and their increasingly one-sided authoritarianism is making me regret buying a Techdirt t-shirt.

It used to be a company that revealed the dirt in tech. Now they’re just concealing it if it’s done for the Left’s reasons.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

You should read more about 230. Specifically, the rest of this comment.

[230] is great for sites unlike Facebook, Twitter and Reddit that don’t abuse it.

230 is what protects the right of Facebook, Twitter, Reddit, Gab, Parler, and every other interactive web service that accepts third party speech to moderate that speech however its admins wish. Saying a service can “abuse” 230 is like saying someone can abuse the right to decide who can and can’t enter their homes: The idea is absurd no matter the angle from which you look at it.

Either you’re responsible for all of the content or none of it.

Except that isn’t how 230 works. 230 holds services accountable for speech directly created/published by employees of said services. To wit: Twitter can only be held liable for someone defaming AOC via Twitter if that someone works for Twitter.

This nonsense of censoring users and not being responsible at the same time is both an abuse of Section 230 and a flagrant circumvention of the First Amendment by Democrat-dominated companies.

You put so much wrongness into this one sentence that it’s gonna take a list to address it all.

  1. Moderation isn’t censorship. Quit using loaded terms in an attempt to distort the argument with emotion.
  2. Moderation is about being responsible — responsible for curating a community and doing whatever is necessary to keep things running smoothly.
  3. Responsibility for a post should always lie with whoever made the post. Don’t blame Toyota when someone uses a Corolla to run someone over.
  4. As I said above, moderation is not an abuse of either 230 or the First Amendment. It is exercising the legal right of association, which the First Amendment grants and 230 protects.
  5. Whether a company is “dominated” by supporters of a given political party is irrelevant to the law. 230 protects the right of Gab admins to moderate with a conservative bent. Saying otherwise means you believe moderation should either have no political bias whatsoever (good fuckin’ luck with making that happen) or only have political bias if it favors conservatives. Speaking of which…

Just look at their employee donations on opensecrets.org. Every single one of them, their employees and execs, donate 90%+ to Democrats and Democrat-run PACs.

So what? Neither the First Amendment nor Section 230 dictate who can moderate websites based on political affiliation/bias.

They should be forced to carry a (D-CA) designation just like the ones the news likes to brand Congress with to avoid any nasty critical thinking.

…fucking what

Techdirt and their increasingly one-sided authoritarianism is making me regret buying a Techdirt t-shirt.

You’ll find no authoritarianism here. Nobody on this site thinks Twitter should run the country — or that the government should run Twitter, for that matter.

In general, Techdirt and the commentariat support the right of services to moderate as they see fit. We don’t always agree with how services moderate. That doesn’t change our support for their having that right. Twitter can ban antifascists to placate conservatives; we’ll say “that’s bullshit”, but we’ll also say “it’s still their right to be that shitty”. That isn’t supporting authoritarianism. It’s supporting the freedom of association — and freedom from the compelled association/hosting of speech.

It used to be a company that revealed the dirt in tech. Now they’re just concealing it if it’s done for the Left’s reasons.

Have you seriously not seen the articles that have shittalked Google, Facebook, Twitter, and other “Big Tech” companies for making horrible/ignorant/petty decisions that rarely benefit anyone other than themselves? Because I don’t think you have.

Leave if you’re dissatisfied with the political leanings of this site (and its commentariat). Whining about shit you can’t change in the hopes that your whining will make people suddenly see the light and bend over backwards to make you (and only you) happy is an offshoot of hatereading. Trust me when I say doing that shit only ever fucks you up.

…all of which is to get down to One Simple Question. Yes or no: Do you believe the government should have the legal right to compel any privately owned interactive web service into hosting legally protected speech that the owners/operators of said service don’t want to host?

This comment has been flagged by the community.

cynoclast (profile) says:

Re: Re: Re:

Moderation isn’t censorship. Quit using loaded terms in an attempt to distort the argument with emotion.

Moderation is censorship no matter how many times you say it isn’t.

Moderation is about being responsible — responsible for curating a community and doing whatever is necessary to keep things running smoothly.

Moderation is censorship no matter how many times you call censorship moderation.

Responsibility for a post should always lie with whoever made the post. Don’t blame Toyota when someone uses a Corolla to run someone over.

Completely agree. So why do companies remove posts that are entirely legal and aren’t spam? Section 230 protections do not extend to companies that act as both a service and a publisher. Pick one.

As I said above, moderation is not an abuse of either 230 or the First Amendment.

Moderation is censorship no matter how many times you say it isn’t.

It is exercising the legal right of association, which the First Amendment grants and 230 protects.

I award you gold for mental gymnastics. Censorship is, in no fucking sane world, protected by the First Amendment or 230.

Whether a company is “dominated” by supporters of a given political party is irrelevant to the law.

Ah, so when you’re morally wrong (engaging in censorship), you fall back on legality instead. You would not be saying the same thing if they were all biased against Democrats, I guarantee it.

230 protects the right of Gab admins to moderate with a conservative bent.

No it doesn’t. And they don’t.

Saying otherwise means you believe moderation should either have no political bias whatsoever (good fuckin’ luck with making that happen)

or only have political bias if it favors conservatives. Speaking of which…

What makes you think I’m a conservative? I’m a registered Pacific Green and have never voted Republican in my life.

You’ll find no authoritarianism here. Nobody on this site thinks Twitter should run the country — or that the government should run Twitter, for that matter.

That’s a cute strawman version of authoritarianism. You should be ashamed of using a strawman fallacy.

Have you seriously not seen the articles that have shittalked Google, Facebook, Twitter, and other “Big Tech” companies for making horrible/ignorant/petty decisions that rarely benefit anyone other than themselves? Because I don’t think you have.

None of them have attacked their authoritarian censorship. None. Which is precisely the problem I just complained about. Techdirt has done nothing but defend their censorship and misdefine it as "moderation" because they know censorship is bad.

Leave if you’re dissatisfied with the political leanings of this site (and its commentariat).

At least you admit to your bias, but not any of the wrongdoings because of it.

Whining about shit you can’t change in the hopes that your whining will make people suddenly see the light and bend over backwards to make you (and only you) happy is an offshoot of hatereading. Trust me when I say doing that shit only ever fucks you up.

Ah yes, criticism is whining when it’s directed at you, but it’s criticism when directed at the authoritarians.

In general, Techdirt and the commentariat support the right of services to moderate as they see fit. We don’t always agree with how services moderate. That doesn’t change our support for their having that right. Twitter can ban antifascists to placate conservatives; we’ll say “that’s bullshit”, but we’ll also say “it’s still their right to be that shitty”. That isn’t supporting authoritarianism. It’s supporting the freedom of association — and freedom from the compelled association/hosting of speech.

Nobody’s compelling anyone to interact on a website. This is just a ginned-up excuse to censor people who commit wrongthink. Calling banning and censoring "supporting the freedom of association" might just earn you the second gold medal in mental gymnastics.

…all of which is to get down to One Simple Question. Yes or no: Do you believe the government should have the legal right to compel any privately owned interactive web service into hosting legally protected speech that the owners/operators of said service don’t want to host?

Of course not. But we should stop pretending that censorship by other-than-government means isn’t still censorship. And companies that censor content by their users deserve no, and should receive no, Section 230 protections. Free speech is important, and most speech today is online; it’s becoming a huge problem that 4-5 companies with the same political leaning are targeting and censoring people who disagree with them, while rendering the First Amendment moot without the government having to do a thing.

If this doesn’t alarm you, you’re a fool.

Stephen T. Stone (profile) says:

Re: Re: Re:

Oh good, you came back. Good to see a “dissenter” with some nuts’n’guts for a change.

Moderation is censorship

Define “censorship”, then explain how moderation is the same exact thing.

why do companies remove posts that are entirely legal and aren’t spam?

As I said before: Moderation is about curating a community. What kind of community would you want Twitter to curate: one where the use of racial slurs is a ban-worthy offense or one where the use of racial slurs is allowed and thus implicitly encouraged?

Section 230 protections do not extend to companies that act as a service and a publisher.

At least one court has said otherwise: “Under section 230, interactive computer service providers have broad immunity from liability for traditional editorial functions undertaken by publishers—such as decisions whether to publish, withdraw, postpone or alter content created by third parties. Because each of Murphy’s causes of action seek to hold Twitter liable for its editorial decisions to block content she and others created from appearing on its platform, we conclude Murphy’s suit is barred by the broad immunity conferred by the CDA.” (from the ruling in Meghan Murphy v. Twitter, Inc. by the California Court of Appeal for the First District; source, relevant Techdirt link)

What makes you think I’m a conservative?

Because the people calling for the death of 230 and the compelled hosting of speech are largely the people who think Twitter, Facebook, etc. have an anti-conservative bias and should be forced to host speech from conservatives. (Those would be conservatives, by-the-by.)

I award you gold for mental gymnastics. Censorship is, in no fucking sane world, protected by the First Amendment or 230.

Thank God I didn’t say “censorship”, then. I said “the legal right of association”. Lemme do you another explain.

Twitter admins allow third party speech on the Twitter service. As a result, Twitter implicitly associates with whatever speech goes on Twitter. They also choose to explicitly disassociate Twitter with certain kinds of speech — legally protected, no less! — through the Terms of Service and moderation actions that back the TOS. That is virtually the same thing as you deciding who can enter your home and what they can say inside your home.

The First Amendment gives both you and Twitter admins the right to choose the speech and persons with which you/Twitter will and will not associate. If you can decide whether you’ll let people say racial slurs on your private property, so can Twitter admins. The government can’t compel you or Twitter to associate with people who use racial slurs — nor should it have the right to do that.

You should be ashamed of using a strawman fallacy.

You should be ashamed of stumping for compelled association, and yet…

None of them have attacked their authoritarian censorship. None.

Techdirt would do that…if any of those companies I mentioned were engaged in authoritarian censorship. But here’s the thing: They’re not.

Moderation isn’t censorship. Refusing to carry links isn’t censorship. Refusing to give someone a platform isn’t censorship. They’re all forms of telling someone “we don’t do that here”. Nobody owes you an audience, access to an audience, or a soapbox on/made from their private property. That goes as much for Twitter and Facebook as it does for you vis-à-vis your home.

At least you admit to your bias

I can’t admit to something I’ve never hidden.

criticism is whining when it’s directed at you

It isn’t criticism when all you’re doing is complaining. You “criticize” Techdirt for not writing to your liking, but what the fuck makes you think whining about that fact is going to make Techdirt turn 360°, moonwalk away from what it currently does, and write in a way that pleases you?

Nobody’s compelling anyone to interact on a website.

Trying to tear down 230 in favor of some kind of forced “neutrality” is literally the same thing as trying to compel association via the hosting of speech.

This is just a ginned up excuse to censor people who commit wrongthink.

Twitter admins can’t censor anyone. They can stop people from using Twitter, but nobody has a right to use Twitter, so that literally can’t be censorship. Don’t be an “I have been silenced” guy.

Of course not. But

Big shock, you opened a “but” and exposed an asshole…

we should stop pretending that censorship by other-than-government means isn’t still censorship

I know that censorship can be censorship when done by private parties. (“Don’t print this or ~you’ll be sorry~”, for example.) The difference between your definition of censorship and mine, however, is that I don’t think people being told to fuck off from private property for, say, referring to queer people as f⸻ts is censorship.

Moderation can’t be censorship because it isn’t forcing people to shut up everywhere. The First Amendment disallows that. But forcing the owner of private property to host speech/persons they don’t want to host is compelled association — which the First Amendment also disallows.

And companies that censor content by their users deserve no, and should receive no, Section 230 protections.

Yes or no: Does a Mastodon instance deserve 230 protections if it boots someone for directing racial slurs at other users on- and off-instance? Reminder: Racial slurs are legally protected speech, so you must apply the standard you’ve set for yourself in this discussion (which is “moderating legally protected speech is censorship”).

it’s becoming a huge problem that 4-5 companies with the same political leaning are targeting and censoring people who disagree with them

You have a right to speak your mind. You don’t have a right to make other people listen — or give you an audience/a soapbox. Losing access to Twitter and the potential audience therein is not censorship, no matter how much your fact-ignoring fee-fees tell you otherwise.

By the way: I am sincerely concerned about the power and trust society has given to “Big Tech”. And I’ll be concerned about their power to censor people when they start censoring people.

This comment has been flagged by the community.

cynoclast (profile) says:

Re: Re: Re:2 Re:

You have a right to speak your mind. You don’t have a right to make other people listen — or give you an audience/a soapbox.

Losing access to Twitter and the potential audience therein is not censorship, no matter how much your fact-ignoring fee-fees tell you otherwise.

Yes it is.

I’ll be concerned about their power to censor people when they start censoring people.

So you’re not concerned because you agree with their censorship.

Stephen T. Stone (profile) says:

Re: Re: Re:3

(Preface: I’mma do you one more long-ass explain; the tl;dr is in the last paragraph. I cannot and will not help you understand any of this if you commit to misunderstanding me/arguing in bad faith. Do not taunt Happy Fun Ball.)

No law, statute, or “common law” court precedent in the United States gives you the right to make people listen, make people force other people into listening, and make a soapbox out of private property you don’t own. That includes the First Amendment, which protects your right to speak freely only from the government.

Losing access to a third party platform — to a soapbox on private property you don’t own — isn’t censorship. You can go to any other platform/property that will have you and speak your mind. The government can’t stop that; neither can the person(s) who booted you from their property.

Losing access to an audience — real or potential — isn’t censorship. Nobody has an obligation to give you attention. You lack the legal power to make people give you attention. And if people want to keep giving you attention after you’ve been booted from one platform, they’ll find you. The government can’t stop that; neither can the person(s) who booted you from the platform on which you had an audience.

I won’t break out my usual spiel about censorship and moderation here. Suffice to say, “you aren’t gonna say that anywhere” is the driving ethos of censorship. Twitter giving someone the boot isn’t censorship because it doesn’t suppress speech in any way — except on Twitter. And Twitter admins, as I’ve said, have every right to decide what speech will and will not be allowed on Twitter. The law can’t, and shouldn’t, force Twitter into hosting speech its operators don’t want to host. You may consider that “censorship”, but you would be objectively wrong.

I understand the mindset that you’re bringing to this discussion: “If it feels like censorship, it must be censorship!” And I can understand how a Twitter suspension/ban might feel like censorship. But it isn’t. It never has been and it never will be. Twitter banning you for posting a racial slur, anti-queer propaganda, or COVID-19 disinformation can’t, doesn’t, and will never stop you from going to Gab, Parler, 4chan, or whatever Freeze Peach Website you can think of and posting there the exact speech that earned you a Twitter ban.

Long story short (too late!): Having an audience is a privilege. Having access to a platform you don’t own is a privilege. Nobody owes you such privileges; losing them doesn’t keep you from speaking your mind. Twitter banning someone is Twitter enforcing its terms of service, however uneven that enforcement may be. You don’t have to like that. But don’t call that act censorship when you know it isn’t. Arguing in bad faith and being intentionally ignorant won’t do you any good.

Bob Wyman (user link) says:

Re: Re: Re:2 The Freedom to Listen or read...

You wrote:

The First Amendment gives both you and Twitter admins the right to choose the speech and persons with which you/Twitter will and will not associate.

If we parse out some of the conjunctions here, we get:

The First Amendment gives … you .. the right to choose…

Our right to choose with whom we will associate, to whom we will listen, or whose words we will read, is a precious thing and should be protected as much as our right to speak to those who wish to hear us. The mirror image of a "Freedom to Speak" is the "Freedom to Hear."

But, an alternative parsing of your text produces:

The First Amendment gives … Twitter admins the right to choose the speech and persons with which you .. will and will not associate.

This reading is more problematic. I believe that the First Amendment does not give anyone the "right" to limit the freedom of association of any other person. However, sometimes individuals will choose to grant to others permission to act as their proxy and thus limit their associations or exposure to the speech of others. In such a case, any limitation of association is an exercise of the grantor’s right of association, not a right of the grantee. Twitter admins may be given my permission to limit my association with others, but they have no right to do so without my permission.

Of course, my right to grant moderation or censorship privileges to others should be as granular as the granularity of the association contexts in which I participate. So, had Prodigy, in an attempt to enhance the "civility" of discussion, offered to prevent my seeing posts that used abusive terms such as "bitch," I might have enthusiastically granted them permission to do so in general contexts. However, even if "bitch" was banned generally, I would have still wanted the term to be used and useful in forums dedicated specifically to the discussion of dog breeding. (A "bitch," after all, is a perfectly correct and neutral designation for a female dog.) But, even in a dog breeding forum, I would have gladly been spared exposure to pejorative use of the term in declarations such as "Alice, the dog breeder, is a bitch."

Certainly, when one or more persons "assemble" to create a discussion context, they should be free to establish and maintain ground rules which they believe will facilitate what they consider to be the desired discussion. Thus, one group of dog breeders might choose to permit the use of the term "bitch" when applied to dogs, but not people. Another group of breeders might decide that it is too hard to build algorithms to distinguish the uses of the term and thus ban all of its uses. New potential adherents to the various groups would then be able to choose which discussion to join, and thus the rules of discussion to which they would adhere.

Another example: I created and moderate a Facebook group for alumni of Berlin American High School (BAHS, now closed). I have strict rules concerning civility (i.e. "Act like adults, not high school students!") and about political discussion: You can talk about politics as it relates to the experience of attending the school (i.e. the impact of Kennedy’s or Reagan’s visits to Berlin) but if you even hint at discussing Trump’s attacks on Sec. 230, I will delete your post without notice or explanation… Those who don’t like my rules can join any one of several other similar groups. Those who choose to defer to my judgements appear happy to do so. I moderate others’ speech with their permission, not by right.

But, the ability of one to create a limited discussion context should itself be limited by the specificity and scope of that context. While I may be free to create a strictly moderated BAHS discussion group for a couple thousand people who have many other forums in which they can speak or listen, my moderation would be on weaker ground if I had been successful in creating a service like Facebook, Gmail, USENET, etc. that claimed to be and became useful for the discussion of essentially all legal topics by millions, or billions, of people. At some point, my once legitimate control over the content of others’ discussions passes beyond being a service whose benefits are freely chosen by others and becomes an illegitimate imposition of my personal beliefs on others’ rights to speak or to listen. As with antitrust regulation, at some point, dominance in the realm of speech or association (although not necessarily a monopoly) becomes unacceptable and is reasonably regulated.

Where I think this line of reasoning leads is the conclusion that, in broad and large discussion contexts, it is inappropriate to allow platform providers to impose content moderation without the consent of individual participants. Thus, while small forums for dog breeders or defunct high schools might legitimately impose rules through a contract of adhesion (one in which only one party has a choice), such rules are inappropriate for something like the general contexts provided by Twitter, Facebook, or Gmail. In these broader contexts, which dominate substantial portions of society’s capacity to discuss, individual participants should be free to choose whether and how what they see, hear or say will be filtered by others. Facebook, Twitter, etc. might still invest a great deal in content moderation and offer it as a service, but users should choose individually whether or not to accept such services.

I would go a step further and suggest that participants (in at least general and "dominant" contexts) should be able to choose from among a variety of content moderation services or methods of achieving moderation. I should, for instance, be able to choose to have ratings made by third-party factchecking organizations (NYTimes, WaPo, etc.) used either on their own or in conjunction with those offered by a platform. I should also be free to create my own content moderation service and add it to the menu of services available to others. Of course, I should be able to sell my service as well. (Note: I am not a fool. I recognize the difficulty of providing and financing such a variety of services. Here, I’m talking about what should be, not necessarily what can be easily provided. The minimal need should be to allow one to accept or decline one or more of a "dominant" platform’s moderation services.)

So, if I were to amend Sec. 230 in light of these thoughts on an individual’s freedom to "hear" or to associate, I think what I would argue for is that the language which now allows a service to do content moderation should be rewritten to actually forbid the non-optional imposition of such moderation in any but constrained and limited contexts. Clearly, defining the contexts in which choice is required would be a challenge — in much the same way that it is hard to decide which enterprises should be constrained by antitrust regulation. Nonetheless, I am confident that we could arrive at some reasonable definition. Certainly, Twitter, Facebook (outside topic-specific forums), Gmail (except for optional mailing lists), and a number of others would fall in the "should be regulated" category.

In summary: Twitter admins have no right to choose the speech and persons with which you or I will and will not associate. If we change anything in Sec. 230, it should be to ensure protection of our right to choose with whom we will associate.

Stephen T. Stone (profile) says:

Re: Re: Re:3

Oh, am I glad I review replies to my recent comments on a semi-regular basis.

I believe that the First Amendment does not give anyone the "right" to limit the freedom of association of any other person.

It does. It absolutely does. Just not in the broader sense you’re thinking of.

The First Amendment protects the right of association, in that you can choose the people with whom you want to associate. But it doesn’t give you the right to make them associate with you. It also doesn’t give you the right to associate with people while on private property you don’t own. You can associate with other users on Twitter, but if Twitter kicks you out, those users can (and must) find other ways to associate with you. To say otherwise would mark you as a believer in compelled association vis-à-vis the owner of that private property (in this example, Twitter). The First Amendment doesn’t let you do that any more than it lets Twitter stop you from associating with Twitter users outside of Twitter.

Certainly, when one or more persons "assemble" to create a discussion context, they should be free to establish and maintain ground rules which they believe will facilitate what they consider to be the desired discussion.

And if they’re assembling on property at least one of the persons owns, they’re more than free to do that. But on property they don’t own, the owner makes the rules — and can enforce them however they see fit. To say otherwise is, again, to believe in compelled association.

I moderate others’ speech with their permission, not by right.

That’s a lovely approach. But I hope you realize that Facebook can literally tell you to moderate another way. Because Facebook owns the property on which your group exists (the servers). Zuckerberg himself could tell you to moderate more heavily “or else” and you could risk losing that group by defying his orders. He can moderate the speech of your group (and your own speech) by right, not by permission. The same goes for any other owner of private property you don’t own.

my moderation would be on weaker ground if I had been successful in creating a service like Facebook, Gmail, USENET, etc. that claimed to be and became useful for the discussion of essentially all legal topics by millions, or billions, of people. At some point, my once legitimate control over the content of others’ discussions passes beyond being a service whose benefits are freely chosen by others and becomes an illegitimate imposition of my personal beliefs on others’ rights to speak or to listen.

Let’s say that’s true. You would still have the right to impose those limits on others because, hey, you would still own the property on which they’re congregating.

The law doesn’t limit the right of association based on the size of a property or an amount of people. Your site could be used by ten people and you would still have the unassailable legal right to kick one of them out for saying “bitch” (in any context).

You’re stepping into the moral and ethical issues without first considering the already-settled legal ones. You’re also close to advocating for forced association, which…doesn’t speak well of you.

it is inappropriate to allow platform providers to impose content moderation without the consent of individual participants

I now have a few yes-or-no questions for you:

  1. Should services such as Twitter be forced by law to receive permission for moderation from every individual user on an individual basis?
  2. Should that moderation apply generally to all content produced by the user, or only to content that the user marks as “moderate-able”?
  3. Should rules set forth by Twitter before they ask for permission to moderate — e.g., the current terms of service — still apply if the answer from a user is “no”? (In other words: Should the TOS and its consequences still be applied if a user says “don’t moderate my shit” and proceeds to post an anti-queer slur?)
  4. Should Twitter be held legally liable for unlawful/illegal speech if that speech comes from a user who said “no” to having their content moderated?

Facebook, Twitter, etc. might still invest a great deal in content moderation and offer it as a service, but users should choose individually whether or not to accept such services.

By accepting the terms of service and subsequently using those services, they have made that choice. They can go elsewhere if they disagree with the moderation efforts of Twitter. I hear Parler is up and running again, by the by.

I would go a step further and suggest that participants (in at least general and "dominant" contexts) should be able to choose from among a variety of content moderation services or methods of achieving moderation.

Your suggestion would be largely unworkable. Individual people might demand moderation methods so finely granulated that they can only be applied to an individual user. That would defeat the purpose of site-wide moderation efforts. (It would also make moderation a vastly more costly affair in terms of both man hours and money.)

I should, for instance, be able to choose to have ratings made by third-party factchecking organizations (NYTimes, WaPo, etc.) used either on their own or in conjunction with those offered by a platform.

And you can do that…by making your own service that aggregates such ratings, should they already exist. For what reason should the burden for doing that lie on the shoulders of Twitter or Facebook, other than “they’re big”?

I recognize the difficulty of providing and financing such a variety of services. Here, I’m talking about what should be, not necessarily what can be easily provided.

At least you admit that. Some people can’t even be bothered to do that much.

The minimal need should be to allow one to accept or decline one or more of a "dominant" platform’s moderation services.

People can do that — by choosing to use, or not use, that service. The service operators tailor their moderation efforts to what they believe works best for the broader community. Moderation is an effort at community curation. No one gets it right all the time, but trying to curate individual experiences on an individual level is far, far, far more trouble than any possible benefits could be worth.

if I were to amend Sec. 230 in light of these thoughts on an individual’s freedom to "hear" or to associate, I think what I would argue for is that the language which now allows a service to do content moderation should be rewritten to actually forbid the non-optional imposition of such moderation in any but constrained and limited contexts

And there we have it: You’re literally arguing for legally compelled association.

Your idea would end up doing the same thing all the other 230 reform ideas would do: It would make services either undermoderate or stop accepting third-party speech altogether. (Overmoderation would likely not be an option with your idea, given how much you seem to abhor the idea of any moderation whatsoever.) That means Twitter would either turn into 8kun or turn into a digital graveyard. Does either one of those two visions sound like the outcome you’ve been wishing for?

Clearly, defining the contexts in which choice is required would be a challenge

Try “impossible”. The United States has no law against “hate speech” for a similar reason: No one has come up with an objective definition for the term that doesn’t also whack away at legally protected speech. (And I’ll remind you here that racial slurs and what we would colloquially call “hate speech” are legally protected.) Trying to define specific contexts/categories for “non-optional” moderation efforts would hit a similar brick wall. For example: If “talk of eating disorders” could be moderated without “user permission”, what would stop a service from silencing proponents of eating disorders such as bulimia and survivors of that same disorder? For that matter, what would stop a service from silencing people who talk about anorexia and people who talk about ARFID, regardless of context? (Avoidant/Restrictive Food Intake Disorder, for the record.)

And far from providing ample opportunity for expression, such “regulated” moderation would stifle expression. After all, who would speak out against racism if their sharing of the racist words and actions flung at them would face instant moderation?

I am confident that we could arrive at some reasonable definition.

Your confidence in your idea is somewhat admirable, but ultimately foolish. Better people than you have tried and failed to thread this needle; you won’t accomplish what they couldn’t.

Twitter admins have no right to choose the speech and persons with which you or I will and will not associate.

They do have that right — on Twitter. Who the fuck are you to say they shouldn’t?

If we change anything in Sec. 230, it should be to ensure protection of our right to choose with whom we will associate.

Section 230 doesn’t prevent that. You can fuck around with Charlie Kirk or chill with Bernie Sanders, depending on your political preferences. 230 can’t stop you from doing that.

But 230 extends to cyberspace an important principle from meatspace: You can’t associate with people on private property you don’t own if the owners of that property tell you, the other people, or all of you to fuck every last mile of off. I can’t think of a good reason, and you have offered no good reason, to deny the admins of any social interaction network — Twitter, Facebook, Mastodon instances, even that fucking shitpit Gab — that right. I wish you luck in finding one.

…if you still think you can.

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: 230 matters

And is great for sites unlike facebook, twitter and reddit that don’t abuse it.

Um, Facebook, Twitter, and Reddit were using §230 as it was intended. Cox and Wyden made that clear over and over and over again.

Either you’re responsible for all of the content or none of it.

Which will be the case if §230 goes away.

This nonsense of censoring users and not being responsible at the same time is both an abuse of section 230 and a flagrant circumvention of the first amendment by Democrat dominated companies.

First of all, you’re confusing Moderation with Censorship.

Second of all, as I said, Cox and Wyden explicitly say that §230 is being used as intended.

Third of all, maybe you should look up what compelled speech is and that §230 is the only part of the CDA to survive Reno v. ACLU before you say that it violates the first amendment.

Just look at their employee donations on opensecrets.org. Every single one of them, their employees and excecs donate 90%+ to Democrats and Democrat run PACs.

Let’s see some Democrats that Mark Zuckerberg gave to:
Marco Rubio, Orrin Hatch, Paul Ryan. Yup, big group o’ donkeys they are! (In fairness, he does give to Dems like Majority Leader Schumer and Katie Porter, but it’s false that he only gives 10% to GOP members).

But what about Jack Dorsey? …Okay, that’s a fair point, he’s given entirely to Dems.

As for Steve Huffman, some disambiguation is needed, so I’ll assume it’s the one in San Jose. Also a fair point.

They should be forced to carry a (D-CA) designation just like the news likes to brand congress with to avoid any nasty critical thinking.

which is also how Fox News brands a Republican from California who is losing a statewide election.

Techdirt and their increasing on sided [sic] authoritarianism

Nice to know that believing in property rights and the first amendment not compelling speech is authoritarian and not a conservative virtue.

It used to be a company that revealed the dirt in tech.

Still does. But names are skin deep. Unless you’re one of those people who actually thinks Fox News does vulpine updates.

Now they’re just concealing it if it’s done for the Left reasons.

O RLY?


This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re: 230 matters

You do realize, don’t you, that if a provider of a space that allows user comment does no moderation (or censorship, if you prefer that term) of any kind, it will very quickly devolve into not much more than posts of spam, porn, and scams?

If a company removes such posts, your opinion is that they then lose section 230 protections (from above, "And companies that censor content by their users deserve no, and should receive no section 230 protections.").

Proposing that a company that moderates should be liable for their moderation decisions is Stratton Oakmont v. Prodigy. It’s exactly what section 230 was written to prevent.


Anonymous Coward says:

No, it shouldn’t! What really pees me off, though, is the number of ‘remove sec 230’ attempts there have been that simply want to stop the protection it offers! All the articles that have been written, all the reasons given why 230 is needed, yet the number of attempts keeps increasing. The number of attempts to explain why it’s needed also pees me off, because it isn’t us readers who need convincing, it’s those colleagues of Wyden and Cox who need it but refuse to accept it. It can only be because those hell-bent on removing it want to remove yet another vestige of privacy and/or are being paid by 3rd parties who want to be able to spy on everyone but not be held accountable!
