Section 230 Basics: There Is No Such Thing As A Publisher-Or-Platform Distinction

from the foundational-understanding dept

We’ve said it before, many times: there is no such thing as a publisher/platform distinction in Section 230. But in those posts we also said other things about how Section 230 works, and perhaps doing so obscured that basic point. So, just in case, we’ll say it again here, simply and clearly: there is no such thing as a publisher/platform distinction in Section 230. The idea that anyone could gain or lose the immunity the statute provides depending on which one they are is completely and utterly wrong.

In fact, the word “platform” does not even show up in the statute. Instead the statute uses the term “interactive computer service provider.” The idea of a “service provider” is a meaningful one, because the whole point of Section 230 is to make sure that the people who provide the services that facilitate others’ use of the Internet are protected in order for them to be able to continue to provide those services. We give them immunity from the legal consequences of how people use those services because without it they wouldn’t be able to; it would simply be too risky.

But saying “interactive computer service provider” is a mouthful, and it also can get a little confusing because we sometimes say “internet service provider” to mean just a certain kind of interactive computer service provider, when Section 230 is not nearly so specific. Section 230 applies to all kinds of service providers, from ISPs to email services, from search engines to social media providers, from the dial-up services we knew in the 1990s back when Section 230 was passed to whatever new services have yet to be invented. There is no limit to the kinds of services Section 230 applies to. It simply applies to anyone and everyone, including individual people, who are somehow providing someone else the ability to use online computing. (See Section 230(f)(2).)

So for shorthand people have started to colloquially refer to protected service providers as “platforms.” Because statutes are technical creatures it is not generally a good idea to use shorthand terms in place of the precise ones used by the statutes; often too much important meaning can be lost in the translation. But in this case “platform” is a tolerable synonym for most of our policy discussions because it still captures the essential idea: a Section 230-protected “platform” is the service that enables someone else to use the Internet.

Which brings us to the term “publisher,” which does appear in the statute. In particular it appears in the critically important provision at Section 230(c)(1), which does most of the work making Section 230 work:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

In this provision the term “publisher” (or “speaker”) refers to the creator of the content at issue. Who created it? Was it the provider of the computer service, aka the platform itself? Or was it someone else? Because if it had been someone else, if the information at issue had been “provided by another information content provider,” then we don’t get to treat the platform as the “publisher or speaker” of that information, and it is therefore immune from liability for it.

Where the confusion has arisen is in the use of the term “publisher” in another context as courts have interpreted Section 230. Sometimes the term “publisher” itself means “facilitator” or “distributor” of someone else’s content. When courts first started thinking about Section 230 (see, e.g., Zeran v. AOL) they sometimes used the term because it helped them understand what Section 230 was trying to accomplish. It was trying to protect the facilitator or distributor of others’ expression, or, in other words, the platform people used to make that expression, and using the term “publisher” from our pre-Section 230 understanding of media law helped the courts recognize the legal effect of the statute.

Using the term did not, however, change that effect. Or the basic operation of the statute. The core question in any Section 230 analysis has always been: who originated the content at issue? That a platform may have “published” it by facilitating its appearance on the Internet does not make it the publisher for purposes of determining legal responsibility for it, because “publishing” is not the same as “creating.” And Section 230, and all the court cases interpreting it, have made clear that it is only the creator who can be held liable for what was created.

There are plenty of things we can still argue about regarding Section 230, but whether someone is a publisher versus a platform should not be one of them. It is only the creator v. facilitator distinction that matters.


Comments on “Section 230 Basics: There Is No Such Thing As A Publisher-Or-Platform Distinction”

Anonymous Coward says:

Unfortunately, because the U.S. is in "silly season", the biggest motivation behind attacks on Sec230 has been completely overlooked: I haven’t seen it so much as alluded to in ANY post or discussion here for months. But we have the facts.

FACT: While posturers are politicking (and vice versa) about Sec230, all the laws proposed (so close to an election) won’t have time to wend their weary ways through the hallowed halls. They’re dead as the bills of very dead ducks. The legislators could have done something anytime these past two or four or six years if they’d taken time out from the ritual grimacing. But they didn’t.

FACT: The actual lawsuits derailed by Sec230, as discussed here, were civil lawsuits. Ambulance-chasing lawyers looking to blame VERY rich companies for every private sociopath’s action have been repeatedly frustrated in their attempts to win large settlements, or even to threaten those rich companies with such large legal expenses that the companies would settle to make them go away.

So what is really going on here? WHO is being hurt by Sec230? The politicians? They like to say so, but it is the proper purpose of the press to hurt politicians, and the First Amendment stands athwart politicians’ efforts to curb free speech. And their advisors know that perfectly well.

No, it’s the ambulance-chasing lawyers who are being hurt, and they are making large contributions to the lawmakers to purchase greater freedom to sue the wrong rich people.

We saw something very like this in Texas recently, where one very rich tortuous tort lawyer basically boasted that he would buy as many Supreme Court judges as he needed to keep his unconscionable jury awards from being overturned. (Streisand effect: his boast backfired.)

Sec230 is not your protection from the guv’mint–it is your protection from untrammelled runaway lawsuits.

Richard Gadsden (profile) says:

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

This raises a question for me. Does that mean that Techdirt (a provider of an interactive computer service) is not the publisher or speaker of this article because it was provided by Cathy Gellis (another information content provider)?

If so, does that mean that anyone libelled by an article in any online-only publication only has recourse against the author of the article?

Or will a court rule that Cathy Gellis was acting in her capacity as an employee of Techdirt and therefore Techdirt has liability?

If so, at what point does an identified person cease to be acting on behalf of their employer and start to be writing on their own behalf using their employer as a provider of an interactive computer service?

I’m guessing there is real caselaw on this question.

Stephen T. Stone (profile) says:

Re:

The question that matters most in this regard goes as follows: “Did Techdirt employees actively aid in the creation or publication of this article?” The answer to that question is also the answer to whether Techdirt holds liability for the article.

And speaking as someone who isn’t an employee of Techdirt but has two articles to his name on this site: Unlike my comments, I didn’t get those articles onto this site on my own. Perhaps that answers your question.

Mike Masnick (profile) says:

Re: Re: Re:

Unlike my comments, I didn’t get those articles onto this site on my own. Perhaps that answers your question.

Just to be clear, there are a few rulings on the books that disagree with that (perhaps two of the most controversial rulings). In Batzel someone forwarded an email, but was still found to be protected under 230. Also Barrett v. Rosenthal. In Dirty World the site chose to post a submission by a user (and added commentary) and was still found to be protected.

So, who gets it on the site (or to the list) is not the deciding factor under current law.

Appreciative user (profile) says:

Re: Re: Re: Re:

Sounds like possibly exactly the type of thing to point out as evidence of "taking section 230 too broadly". If I knowingly spread misinformation, I should be responsible for the misinformation. That seems like a different thing than just having an open forum and not being responsible for things people post openly and without moderation.

To be clear, the government has taken a stance like this in other places which are kind of parallel to this. For example, assume my company makes gizmos that do x, y, z but is not approved by the FDA to cure any disease. If someone writes a review about my company’s gizmo on a site I do not control (e.g. Amazon) and says it "cures" something, I am not responsible. If I take that review and all I do is post it on my website as "look what someone said about our product!", the FDA will knock on my door and ask me why I am selling a drug that has not been registered. And if it is not registered, the FTC will take up my case for false advertising. According to our company attorneys, you don’t even have to do the posting. If there is a review system on our website and someone posted a review that says it cures x, y, z, we could be responsible. If we have absolutely any moderation control (like being able to delete reviews from our website), we would be taking responsibility for claims made in the reviews by keeping them up there.

Stephen T. Stone (profile) says:

Re: Re: Re:2

If I knowingly spread misinformation, I should be responsible for the misinformation.

And you are. According to 230, as the author of the speech, you are legally liable for anything in said speech that may violate the law.

That seems like a different thing than just having an open forum and not being responsible for things people post openly and without moderation.

Should I be held legally liable if I welcome someone into my home and they open a window to yell something profane within earshot of children, regardless of whether I subsequently kick the asshole out of my home?

Appreciative user (profile) says:

Re: Re: Re:3 Re:

"Should I be held legally liable if I welcome someone into my home and they open a window to yell something profane within earshot of children, regardless of whether I subsequently kick the asshole out of my home?"

I’m not sure if things change when we are talking about a business with a profit motive. If someone comes to McDonald’s and yells outside the window that this burger cures cancer, and I continue allowing them to yell this, then I profit from the fact that people are coming in thinking it will cure cancer. I think I will be liable. That is the impression our attorneys have left (they didn’t use this exact example).

Stephen T. Stone (profile) says:

Re: Re: Re:4

That isn’t really the same thing as the complaints about social media. Twitter doesn’t profit from people posting tweets; it profits from people seeing ads (and companies buying those ads). If someone says “Big Macs cure cancer” on Twitter, Twitter shouldn’t be held liable for that speech. (Especially if it’s a @dril tweet. And if you believe a @dril tweet, you have bigger problems.)

R.H. (profile) says:

Re: Re: Re:3 Re:

In some industries, yeah, you would be liable. For example, FINRA doesn’t allow the use of testimonials in advertising for financial services. If a financial advisor has a Facebook profile and someone posts something like, "This guy helped me make such great returns I retired early!", FINRA will come down on that advisor like a ton of bricks. Given the size of the potential fines, I’m surprised that I haven’t seen any advisors try to argue that Section 230 protects them from liability.

Anonymous Coward says:

Re: Re: Re:4 Re:

For example, FINRA…

FINRA Regulatory Notice 10-06, “Guidance on Blogs and Social Networking Web Sites”, discusses “third-party posts” under Qs 8 through 10.

A8: As a general matter, FINRA does not treat posts by customers or other third parties as the firm’s communication with the public subject to Rule 2210. . . .

Now I certainly know next-to-nothing about FINRA rules, but the information that FINRA itself provides to the public quite simply does not appear to be consistent with what you’re telling us here.

PaulT (profile) says:

Re: Re: Re:5 Re:

That’s what I was thinking. I can certainly imagine that it’s something that would be monitored and punished if necessary, as allowing testimonials there and not in other types of media seems like a very big loophole, ripe for astroturfing. Big fines for people who regularly and willingly abuse that loophole would seem apt.

But equally if you can be randomly fined for comments on your Facebook page without warning, why would anyone allow comments, and why would any reputable association keep fining its members for actions they couldn’t control?

Anonymous Coward says:

Re: Re: Re:2 Re:

If there is a review system on our website and someone posted a review that says it cures x,y,z we could be responsible.

That is what 230 protects you from. If your company makes the claims, they get the blame. If a user posts a review with false claims, you are protected, and you can moderate the content without gaining liability for anything you miss. If, however, you encourage some user to make such a claim, and it can be proven, you once again get the blame.


Appreciative user (profile) says:

Re: Re: Re:3 Re:

According to our attorneys, you would be incorrect. In my example, we would in fact be responsible for not removing it, as leaving it up would imply our endorsement since we have the ability to remove it. They said they have dealt with the FDA/FTC in these exact types of scenarios.

I don’t know if things get muddied by the fact that we profit from this misinformation and I don’t know if there needs to be proof of us knowing the misinformation was there and we decided not to do something about it.

I also don’t know if their standard may be different from what is ultimately being discussed here. They aren’t looking for some theoretical liability that we could be vindicated from by taking a case to the Supreme Court. So maybe on further dissection it may turn out you are right. But we would have to go through the FDA/FTC investigations, lawsuits, appeals, etc. just to get to the bottom of it. Most would pay a penalty and fix whatever the infraction is.

PaulT (profile) says:

Re: Re: Re:6 Re:

"I was just giving the knowledge and experience I have (what our companies attorneys told us) in good faith."

Yes, which without context is an appeal to the authority of your attorneys.

Do you have a reason why your attorneys are correct, or why others are wrong, other than you paid those guys to give what you are assuming is good advice (an assumption you don’t give anyone enough information to verify against their own legal advisors)?

Scary Devil Monastery (profile) says:

Re: Re: Re:4 Re:

"I don’t know if things get muddied by the fact that we profit from this misinformation and I don’t know if there needs to be proof of us knowing the misinformation was there and we decided not to do something about it."

It does. If the platform itself posts misinformation likely to deceive with a profit motive in mind, then we aren’t talking about free speech anymore. The label most often applied is fraud.

Now, a basis of law you may want to take note of and google is mens rea. It basically means that if there is a law and you break it, your action must have been intentional. It’s why an accidental killing can never be considered murder but instead is manslaughter, and why there are laws regarding "casual negligence" in operating heavy machinery in an unsafe but not maliciously intended manner, for instance. Ignorance that an act may be unlawful is not an excuse, but there must be intent to perform it for it to be sanctionable.

The same holds true when it comes to defrauding the public. If Wal-Mart tries to astroturf Yelp with glorious and shining 5-star reviews, then Yelp isn’t liable for fraud for not removing those reviews, although Yelp will pretty soon drop in credibility once normal consumers start painting a more accurate picture.

But if Yelp themselves start publishing positive reviews about items sold by one of their hypothetical subsidiaries in full knowledge that they are being untruthful…that’s intentional fraudulent behavior with a direct profit motive.

So it can be broken down to:
Who performed the criminal or unlawful act?
Was the act performed with intent?
Is there a law exculpating the specific use of the act in the manner in which it was performed?

A good case study of where this can matter would be Prenda and ACS:Law who intentionally practiced fraud of a similar kind.

Now, I am not a lawyer, but these are fairly basic principles of law anyone should know about.

Appreciative user (profile) says:

Re: Re: Re:4 Re:

Thank you for the link. After reading through it, I admit I am not sure if you are agreeing or disagreeing with me! It seems the case was in fact decided against the defendants. The writer of the article did huff and puff about the decision, but ultimately it doesn’t matter, as the FTC prevailed.

To be fair, LeadClick did seem to go above and beyond my example (like telling affiliates to make changes, where in my example we wouldn’t have any contact with the customer leaving the review).

But anyway, this makes my point. I have been going back and forth with you, Stone and Masnick about this same topic across a few articles. Even if you guys believe there is current case law and the law is clearly settled one way, it doesn’t seem like that is a guarantee, and tomorrow it can easily change, as it is already applied differently in other circumstances (like with the FTC). There is always so much open to interpretation, and just like the article says, "Once this court makes this doctrinal cheat, LeadClick didn’t have a chance". All of a sudden things like knowledge of foul play come into play. Ability to control. Deception. Etc. Next thing you know, the legal standard will be just like the article discussed: "a defendant may be held liable for engaging in deceptive practices or acts if, with knowledge of the deception, it either directly participates in a deceptive scheme or has the authority to control the deceptive content at issue". Again, all this hinged on was the interpretation that knowingly ALLOWING the deceptive action to continue when they could have stopped it was direct participation, making them part of the content creation.

That is my layman’s understanding, at least. Our attorneys didn’t cite case law for us; they just let us know that the situation I presented opened us up to litigation and substantial risk.

Scary Devil Monastery (profile) says:

Re: Re: Re:2 Re:

"Sounds like possibly exactly the type of thing to point out as evidence of "taking section 230 too broadly"."

It really doesn’t. In fact, your following arguments make the case for the precise opposite.

"To be clear, the government has taken a stance like this in other places which are kind of parallel to this."

It really hasn’t, looking at your example:

"…the FDA will knock on my door and ask me why I am selling a drug that has not been registered. And if it is not, my case will be taken by the FTC for false advertising."

Four words: Commercial Entity, Fraud Law.
Also a false equivalence: if you yourself post a shady review on your own website, then you are in trouble with the government not because of your website but because you, yourself, were the poster of the message.

It’s pretty clear, really. If Zuckerberg starts posting actionable shit on Facebook then the company may be in the clear but Zuckerberg personally will not.

This should be self-evident and obvious and your presentation of a pair of moved goal posts and a false equivalence hypothesis makes me suspect you are not arguing this in good faith.

"According to our company attorneys, you don’t even have to do the posting."

Because what you are discussing has ceased to be "free speech" and has entered the domain of "deliberate fraud" where mens rea applies. After which it’s more or less up to a jury to determine whether you had the intent to defraud.

I’m not sure where you are going with trying to conflate the concept of "free speech" and "criminal accountability".

Anonymous Coward says:

Re: Re:

[A]t what point does an identified person cease to be acting on behalf of their employer and start to be writing on their own behalf using their employer as a provider of an interactive computer service?

Professor Eric Goldman’s paper, “An Overview of the United States’ Section 230 Internet Immunity” (Dec 2018), on p.4, states only:

Employee-authored content normally qualifies as first-party content [note 20]…

[Note 20] But see Delfino v. Agilent Technologies, Inc., 145 Cal. App. 4th 790 (Cal. App. Ct. 2006) (employer qualified for Section 230 immunity for employee activity).

In Delfino:

A series of anonymous messages were sent over the Internet that constituted threats to Michelangelo Delfino and Mary E. Day (collectively, plaintiffs). The messages consisted of electronic mail messages (e-mails) sent to Delfino and messages that were posted on Internet bulletin boards. These e-mails and postings were ultimately traced to Cameron Moore. Plaintiffs brought suit against Moore and his former employer, Agilent Technologies, Inc. (Agilent). . . .

According to the California Court of Appeals, the trial court granted summary judgment to Agilent on the basis of § 230(c)(1). That appellate court concluded:

[S]ummary judgment was properly granted.


I did note that at the outset you specified “an identified person”, and apparently Mr Moore was not quite “identified” yet when the “anonymous messages” were sent. Nevertheless, Mr Moore obviously did become “an identified person” later on, so I hope this is still quite responsive to your question.

Tim R (profile) says:

A hypothetical thought exercise for a Tuesday afternoon:

Let’s say I defamed random politician Joe Smith. Let’s say that I don’t do anything halfway, and that there is really no gray area here. I defamed him, and I defamed him good. I have all of $20 and a coupon for a free Slurpee as my assets, and Mr. Smith’s representation knows this. Without liability protection, which includes Section 230, other court precedent, and, well, you know, common fucking sense, Mr. Smith’s lawyers might file a suit naming:

  • Me (as a matter of course).
  • The social media site that displayed my defamatory content.
  • My broadband provider, whose copper carried my defamatory content.
  • The manufacturer of my computer, which I used to write the defamatory content.
  • The landlord of my apartment, where my computer is located.
  • The local electric utility, for providing me power to that computer.
  • My employer, for providing me a source of income to pay power and rent during the defamation.
  • My roommate, for subsidizing my living expenses incurred during the defamation (by splitting room and board).
  • Antifa for radicalizing me (not true and can’t be proved, but when has that stopped anybody in our gov’t).
  • John Does 1-25, because counsel is just absolutely convinced that I couldn’t have accomplished such a dastardly deed on my own.

I’m sure your first reaction to this is that nobody would ever try and go that far, that it’s an egregious use of judicial resources to even attempt it, and none of those associations would stick anyway.

I’m sure five years ago, you also would have agreed that somebody who was having to protect their very freedom and liberty in a court of law after voicing protected political opinions anonymously on the internet while pretending to be a cow was absurd, too. We are living in an era where nothing is too outrageous to make a buck off of.

No matter how absurd, every one of those named entities would have to expend money to defend themselves against baseless accusations, money that they wouldn’t be able to recover. Now multiply that by 100 a day. And the bottom feeders that represent Mr. Smith would just keep raking in the billable hours.

If the past twenty years have taught us anything, it’s that if you give unscrupulous opportunists even a little opening, they’ll try to use it to establish precedent. How about we just take care of it now and do the right thing?

Anonymous Coward says:

I can change my browser anytime, and I can change my search engine anytime. Google allows me to install 3rd-party apps, unlike Apple.

If you live in most of America, for broadband you have maybe 2 choices (AT&T, Comcast, etc.). Millions have no access to fast broadband.

People choose Google cos it works and it’s free. Android has a wide range of choice; I can buy a good Android phone for 100 euros. Meanwhile the customer is facing rising prices or reduced choice cos of mega mergers of telecom and old media corporations. Maybe politicians do not like Google cos it makes apps and services that are attractive to consumers based on merit and quality, and they don’t make big donations to politicians.

A good case could be made for breaking up AT&T, as it has almost a monopoly in certain areas. But it won’t happen, as it is almost fused to the US intelligence services at this point.

Anonymous Coward says:

Re: Re:

I can change my browser anytime, and I can change my search engine anytime.

Google forces degraded performance on their sites if you don’t use their browser. Google has a tendency to push web "standards" like MS did back during the IE vs Netscape era.

Most "other" search engines use Google’s results. At best, they anonymize the query before sending it to Google. At worst, they are nothing more than a reskinned google.com. Depending on your country, you may even be subject to Google’s ToS when using the "other" search engines.

Google allows me to install 3rd-party apps, unlike Apple.

Third-party apps that have Google’s seal of approval on them, just like Apple. Oh, were you talking about Android? Read on then…

Android has a wide range of choice; I can buy a good Android phone for 100 euros.

Is this the same Android that moves most of its critical services into a proprietary blob that can only be installed if you (or your device manufacturer) agree to Google’s ToS?

The same Android that without the Play Store (and its ToS) would be useless for most people? (No appy apps!)

The same Android that demands the device owner never be allowed to control or have a say over their device and its actions, or random crap stops working? (SafetyNet, KNOX, mobile payments, random games/apps)

The same Android that is either a Samsung / Pixel device or some rebadged MediaTek / Qualcomm SoC with vendor pre-installed, and often irremovable, shovelware that degrades the device over time, artificially disables functionality so it can be sold back to you as a subscription service, and adds even more invasive tracking and adverts everywhere?

Maybe politicians do not like Google cos it makes apps and services that are attractive to consumers based on merit and quality

The same apps that track everything the consumer does? That read all of their email/documents to sell info to third parties? That wind up as either half-baked betas that get axed or constantly changing alphas that annoy even their biggest financial supporters/content creators?

The Chromebooks that require login through Google and will not allow any other means of independent authentication? Chromebooks that force the use of GSuite for administration and forbid other MDMs? Chromebooks that deny schools the ability to monitor students’ online activities despite it being mandated by federal law?

There is a case to be made against Google here.

If you live in most of America, for broadband you have maybe 2 choices (AT&T, Comcast, etc.). Millions have no access to fast broadband.

Meanwhile the customer is facing rising prices or reduced choice cos of mega mergers of telecom and old media corporations.

A good case could be made for breaking up AT&T, as it has almost a monopoly in certain areas.

You’re right. Plenty of cases to be made there as well. Unfortunately, the current regulatory body that’s supposed to manage them is complicit with their wishes. Yet another case to be made…

BugMN (profile) says:

Re: Re: Re:

I have removed Google search from my Android phone, as well as most of Google’s bloatware.

Doing so required rooting the phone and engaging in relatively tortuous and risky behaviour, devoting hours of my time to the task and running the risk of bricking the phone, and I probably violated Terms of Service doing so. The process was not complete: I still use a Google account and some Google services to maintain functionality that is essential for me. Maybe I could find some ROM that goes beyond that, but it would involve additional work, risk and loss of functionality.

TL;DR: You can get rid of a lot of google stuff in your android phone. But it is hard enough and risky enough that I do not think it qualifies as an alternative to Google’s control of the Android OS. Yes, Apple is even worse, but that is not the issue.

PaulT (profile) says:

Re: Re: Re: Re:

"You can get rid of a lot of google stuff in your android phone. But it is hard enough and risky enough that I do not think it qualifies as an alternative to Google’s control of the Android OS"

This raises a number of questions. First off, which version of Android did you use? There are by now a great many forks and altered versions that make that process much easier, although of course that might mean trusting a Chinese provider. If your phone manufacturer chose not to make installing a non-supported OS easy, that might also not be Google’s fault, as the manufacturers are incentivised to keep their own Android mods intact.

The second is if people are willing to go through all this effort to use a Google-originated product without actually having to use Google, why not just use something not made by Google? There are numerous competing OSes out there.

This then reveals the real problem here. The free market has spoken, and the market says that people would rather piss around for hours installing a modified Google Android ROM than they would install another OS. Therefore, nobody else gets a foot in the door – not because Google are doing something bad, but because all potential customers are still using Google products by choice.

"Yes, Apple is even worse, but that is not the issue."

Actually, when the conversation is "Google are evil and need to be broken up because of the way they handle their phone OS", then "their main competitor does this but worse" certainly should be on the table at least.

Tech 1337 (profile) says:

Some section 230 hypotheticals

I have some hypothetical questions here about editorial control exercised by platforms and the implications of such decisions…

  1. Suppose person A posts something libelous about person B on a forum. Person B demands a retraction on the same forum. The platform’s owner or algorithms decide this is a flame war and ban person A from posting further, including banning their retraction and apology. Person B then sues A for libel, having seen no retraction.

  2. Suppose person A posts something about person B on a forum, but the platform’s algorithms censor that post by replacing some banned words with ****. Person B assumes the words are the worst possible ones (despite not knowing what precisely is on the banned list) and sues person A. Had they known the actual words used, it might not have come to that.

  3. Suppose person A posts something on a forum which has implemented (poorly) a shadowban technique for naughty words, such that the words don’t even appear for other readers. Person A’s post, instead of having **** where a word has been banned, omits that word entirely when displayed to other readers, changing the meaning of sentences. As a result, person A’s post becomes even more inflammatory and libelous to person B, who sues.

In each of these cases, the platform has omitted information (and they’re not compelled to allow all posts, right?) but by omitting information the platform has exerted editorial control of the conversation and changed the meaning.

In case 1, the conversation occurs over multiple distinct units ("posts"). By banning a follow-up post, the apology has not been posted (or has been shadowbanned, which is worse because person A believes they have sent an apology without realising person B can’t see it). As such, the conversation has been shaped by the platform’s decisions. How is the platform not in some way responsible for subsequent legal action, given they had a hand in shaping the conversation?

In cases 2 and 3, the distinct unit is now "words". Replacing a word with a visible marker "****" makes it clear a modification has occurred, but not what the original word was, thus distorting the meaning. Omitting a word (or worse, shadowbanning a word) distorts the meaning but does so silently. Again, how is the platform not in some way responsible for shaping the conversation?
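To make cases 2 and 3 concrete, here is a minimal sketch in Python (the banned-word list and the example post are invented for illustration; no real platform’s filter is this naive) showing how masking leaves a visible trace while silent omission rewrites the sentence:

    # Hypothetical word filter, illustrating cases 2 and 3 above.
    BANNED = {"allegedly"}  # invented banned-word list

    def mask_filter(post: str) -> str:
        # Case 2: replace banned words with a visible "****" marker,
        # so readers can at least see that something was removed.
        return " ".join("****" if w.lower() in BANNED else w
                        for w in post.split())

    def omit_filter(post: str) -> str:
        # Case 3: silently drop banned words; readers get no hint
        # that the post was altered.
        return " ".join(w for w in post.split()
                        if w.lower() not in BANNED)

    post = "Person B allegedly pocketed the funds"
    print(mask_filter(post))  # Person B **** pocketed the funds
    print(omit_filter(post))  # Person B pocketed the funds

Under the omitting filter the hedge disappears and the post now reads as a flat factual assertion, which is exactly the kind of silent meaning-change the hypothetical worries about.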

Is it valid to do this per post, but not per word? If so, why? Where’s the dividing line? If it’s valid to omit individual words, or posts, or any unit of division in a conversation, then how is the platform/facilitator not also a publisher/creator of meaning? Editorial control amounts to authorship.

Now scale up the units of conversation: instead of omitting words, sentences, or posts, we now omit news articles and interviews. During an election. If platforms (being companies or being run by individuals) are allowed to have an opinion (and the first amendment seems to say the government can’t interfere with such opinions), the platforms can shape conversations on their platforms any way they wish. Nothing compels the platform to host content they do not want to host (including down to individual words). But does that not consequently mean the platform is exerting editorial control and thus is acting as co-creator of the conversations it hosts?

Please consider and discuss.

That One Guy (profile) says:

Re: Some section 230 hypotheticals

How is the platform not in some way responsible for subsequent legal action, given they had a hand in shaping the conversation?

Because they didn’t; all they did was stop the conversation on their platform after having determined that it violated their rules. Keep in mind that in your hypothetical the ban happened after a violation of the rules; for all the moderators knew, A could have just doubled down and posted even more content of the sort that got them the ban, making a ban the safer choice. While keeping the platform flame-war free for other users is on the moderators, making sure that those engaged in flame wars are able to have their say is not, because if they went down that route they wouldn’t be able to ban anyone.

Omitting a word (or worse, shadowbanning a word) distorts the meaning but does so silently. Again, how is the platform not in some way responsible for shaping the conversation?

The following applies to #2: Because it would be just a wee bit absurd to hold a platform liable because someone thought that a blocked word was something bad when for all they knew it was utterly harmless or even positive. If someone sees a blocked word and assumes the worst that’s on them, not the platform.

And #3: Again, it’s not on the platform what their filters might end up doing by blocking certain words. Similar to #1, once you go down that route you’ve basically made moderation of that sort impossible, because there will always be the chance that in blocking certain words you end up ‘creating’ a problematic sentence where there might not have been one before, which would mean you couldn’t block words, and things would quickly turn even worse.

Is it valid to do this per post, but not per word? If so, why? Where’s the dividing line? If it’s valid to omit individual words, or posts, or any unit of division in a conversation, then how is the platform/facilitator not also a publisher/creator of meaning? Editorial control amounts to authorship.

In order:

Yes, because it’s their platform and they get to decide what’s on it.

Wherever they set the line.

Because unless they are intentionally and knowingly changing content to change its meaning, ultimately the responsible party is the one who posted the content, or at most dumb luck should a filter butcher a post and turn it into something it wasn’t intended to be.

And lastly no, blocking certain content does not authorship make; under that definition it could be argued that a spam filter is guilty of authorship because it chooses what to let through and as such is responsible for the content.

But does that not consequently mean the platform is exerting editorial control and thus is acting as co-creator of the conversations it hosts?

No, because they’re not creating the content; they are merely curating it by deciding what is and is not allowed on their platform. The idea that curation equals creation (or at least liability) is specifically what 230 was meant to address, because under that mindset any moderation at all poses a serious risk, such that platforms are likely to either not allow user-submitted content or not moderate at all and leave up things that very much should be removed.

Tech 1337 (profile) says:

Re: Re: Some section 230 hypotheticals

OK, I can see your arguments. I’m trying to understand where is the line between curation and creation. So, another hypothetical…

Let’s suppose a company with majority market share in online search or social media were to decide, over the course of several years, to lower the rank of any article mentioning your favourite politician or political party. Not too blatantly, but enough to drop top stories to third or fourth spot, or fourth-spot stories to eighth spot. Still on the front page, but it will perhaps affect viewership of those stories.
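As a minimal sketch of how that might work mechanically (all story titles, scores, and the 0.9 demotion factor below are invented for illustration), a small multiplicative penalty on relevance scores is enough to quietly drop a top story a few places without removing it from the page:

    # Hypothetical ranking tweak: stories are (title, relevance_score)
    # pairs; anything mentioning the disfavored topic gets a small,
    # silent score penalty before sorting. All values are invented.
    def rank(stories, disfavored_topic, demotion=0.9):
        def effective_score(story):
            title, score = story
            if disfavored_topic in title.lower():
                return score * demotion  # subtle shading, not removal
            return score
        return sorted(stories, key=effective_score, reverse=True)

    stories = [
        ("Smith unveils new policy", 1.00),  # would rank first untouched
        ("Local weather report",     0.95),
        ("Sports roundup",           0.93),
        ("Smith rally draws crowd",  0.92),
    ]
    for title, _ in rank(stories, "smith"):
        print(title)
    # Local weather report
    # Sports roundup
    # Smith unveils new policy   <- top story quietly demoted to third
    # Smith rally draws crowd

Nothing is deleted and every story is still "on the front page"; the only visible trace is a slightly different ordering, which is what makes this kind of shading hard to detect from the outside.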

Does that fall under your definition of curation?

Where is the line between spam filter and opinion filter? Does there need to be a line in law between those two, or not?

What accountability (if any) should apply for such algorithmic decisions?

Does the market share of the company impact your answers?

Scary Devil Monastery (profile) says:

Re: Re: Re: Some section 230 hypotheticals

"Where is the line between spam filter and opinion filter? Does there need to be a line in law between those two, or not?"

You mean for dictionary-definition purposes only? Because that’s the only place it makes sense to draw a line in the sand between the two. Which would be evident when you consider that the criteria for "spam" are themselves often based on subjective opinion, i.e. "We think our visitors do not want more commentary from that Nigerian prince and may be just fine with their current penis size, if applicable".

"What accountability (if any) should apply for such algorithmic decisions?"

None, unless the platform is government-owned and operated. That’s how free speech and property ownership both work; a bar owner applies similar opinion-regulated filtering when he hangs up a sign saying "No shoes, no shirt, no service".
The concept that legislative sanction needs to apply to either opinion or…other opinion…is in itself a false dichotomy.

"Does the market share of the company impact your answers?"

If the one you’re asking has a stake in the company, probably. Same as how the part-owner of a bar will not imply the watering hole he has a vested interest in is, for instance, a cruddy bodega staffed by KKK redneck yokels serving watered-down Bud Light at premium tap prices.

Tech 1337 (profile) says:

Re: Re: Re:2 Some section 230 hypotheticals

I agree, there’s no simple line between spam and opinion filtering, because spam is itself based on an opinion. My hypotheticals were intended to show how removing words can have the effect of distorting meaning, even without any intended malice, just as more subtle and deliberate rank-lowering can.

I’m not sure I agree with your example though. A bar owner can’t distort democracy the way a majority-market-share information service can. What patrons wear on their feet isn’t the same as the ability to censor what patrons can say in the establishment. What patrons can wear would probably also be posted on a visible sign, so the rules are known up front before patrons enter the establishment, whereas rank-lowering/de-listing/shadowbanning often occurs behind the scenes with no up-front rules given. Any given bar also has competition down the street just a few minutes’ walk away, and bars tend to be patronised on a casual basis, whereas information services may have either no practical alternatives (e.g. local broadband providers), user lock-in practices (contracts, difficulty extracting your data, etc.) and network effects (all your friends are on the same service), and tend to be used every day (sometimes for work, in the case of search), not on the same kind of casual basis. So I don’t feel transferring agreed norms from other kinds of companies enlightens much in this case.

And as for the government / company distinction, the purpose of enumerating what the government can’t do in a document like the constitution is to prevent tyranny. But is governmental tyranny the only kind of tyranny we must defend ourselves from? If a company wields power comparable to a government in practice, does that mean it gets a free pass, by virtue of being a company, or by having theoretical competitors (even if they run a distant second)?

PaulT (profile) says:

Re: Re: Re:3 Some section 230 hypotheticals

"I agree, there’s no simple line between spam and opinion filtering, because spam is itself based on an opinion."

So, why bring it up, since it disproves your point? If spam can’t be controlled objectively, how can other forms of mass communication?

"What patrons wear on their feet isn’t the same as the ability to censor what patrons can say in the establishment"

The bar owner still reserves the right to tell you to STFU or get out of his bar either way, whether he’s running a hole in the wall bar or a main hall at Oktoberfest. Why do you wish to deny the same right to a social network that wants to control the abusive customers they find on their property?

"Any given bar also has competition down the street just a few minutes walk away"

Whereas all competing social networks are available with less effort than that.

"If a company wields power comparable to a government in practice, does that mean it gets a free pass, by virtue of being a company, or by having theoretical competitors (even if they run a distant second)?"

There’s nothing theoretical about there being competition in social networks, unless you define things deliberately to pretend that, say, Facebook and Twitter don’t compete with each other. You also miss the fact that millions of users use both of these services. That’s part of the reason why these "monopoly" arguments ring so hollow – unlike physical services such as ISPs, not only do customers have the choice to use competitors, they often use them simultaneously.

Tech 1337 (profile) says:

Re: Re: Re:4 Some section 230 hypotheticals

The bar owner still reserves the right to tell you to STFU or get out of his bar either way, whether he’s running a hole in the wall bar or a main hall at Oktoberfest. Why do you wish to deny the same right to a social network that wants to control the abusive customers they find on their property?

A valid question, although I was talking about platforms shaping conversations by changing/banning speech, not banning people, though I think these are on a continuum of censorial approaches.

OK, another hypothetical, just to take a concrete example, are you saying that you’d be perfectly fine if Facebook were to decide that your preferred political leader (or even every member of their political party) is "abusive" and ban them from their platform or from making posts?

I’m choosing Facebook for this hypothetical on a market share argument, which I said earlier seems to be a factor in my thinking. If that company is dominant in social media, having a larger proportion of the US population signed up than any other social media company (which I think may be true), then my argument that size+influence matters should make that company a good example. (I’m also choosing the concrete example of Facebook here because I’m not finding the bar analogy particularly compelling and would rather talk about actual instances of companies who might be affected by the political debates ongoing in this area.)

I’m not saying that’s happening, by the way. I know politicians bleat about things like this all the time, and they’re really just complaining that the megaphone isn’t loud enough for them and that the same megaphone is also being used by those other guys who should clearly just shut up because they’re ne’er-do-wells.

But if it did happen, to your preferred politician, would you bat an eyelid? Would you hope a reason would be required, or would no reason be required for you (it’s the company’s right after all)?

PaulT (profile) says:

Re: Re: Re:5 Some section 230 hypotheticals

"A valid question, although I was talking about platforms shaping conversations by changing/banning speech, not people, although I think these are on a continuum of censorial approaches."

You have to define your boundaries if you want to say that platforms that step over those boundaries should be punished. On that continuum, you’re potentially supporting banning the platform from having their own speech and community standards, and forcing abuse upon the community.

"OK, another hypothetical, just to take a concrete example, are you saying that you’d be perfectly fine if Facebook were to decide that your preferred political leader (or even every member of their political party) is "abusive" and ban them from their platform or from making posts?"

Yes, I would, as I’m fine with Stormfront and Breitbart doing so. I don’t believe Facebook is a good platform for political speech, and they have many competitors to use for those types of conversation.

What I’m not happy with is people trying to co-opt platforms that have already told them that their behaviour is not welcome because they realised they can’t grift as much money on other platforms.

"But if it did happen, to your preferred politician, would you bat an eyelid? "

Actually, this is not hypothetical. Misinformation spread on Facebook had a major impact on the discussion surrounding Brexit, which I believe to be a horrific mistake. But the answer to that is not for Facebook to have been forced to host or ban certain types of speech arbitrarily at the will of the sitting government.

Tech 1337 (profile) says:

Re: Re: Re:6 Some section 230 hypotheticals

you’re potentially supporting banning the platform from having their own speech and community standards

Yes, there’s the potential to curtail a lot of speech here, depending on the details, so as you say the boundaries matter. What relationship does a company enforcing community standards have to company speech? Banning/shaping/curating conversations isn’t the same as directly stating an opinion, but there’s a relationship, which is what I find not as clear cut as I’d like about the "facilitator/creator" distinction in Section 230.

Yes, I would [be fine with Facebook banning a politician], as I’m fine with Stormfront and Breitbart doing so. I don’t believe Facebook is a good platform for political speech, and they have many competitors to use for those types of conversation.

OK, I think that’s an important point. My earlier argument was based on the idea that since Facebook has some dominant market share that different rules should apply. I can see an argument for narrowing the focus to its use in political speech (as that was the hypothetical), and it might not be seen or used by the same percentage of people who are seeking news or discussion of a political nature. In that domain, it may not have the largest market share, so it’s harder to argue that any kind of proposed political neutrality rule should apply there and not apply to the Breitbarts or Stormfronts too.

However, again, I think this is where some of this political argument that’s been happening gains traction with people. If a platform up front says "we’re political, we support party X", people have to like it or leave. But many platforms are seen as not political, e.g. Facebook, so because they don’t advertise themselves that way, if they do express a political opinion through whatever corporate speech or community standards they apply, that’s seen as changing the rules or violating some assumed social contract to be "neutral". And even if they haven’t expressed any overtly political opinion, politicians can always spin it as if they had, to apply pressure.

But, the answer to that is not for Facebook to have been forced to host or ban certain types of speech arbitrarily at the will of the sitting government.

Well, I agree with that. To be clear, I’m not arguing for the government to be able to force the hosting of certain speech or certain speakers on platforms. Rather, I think that government censorship is a problem for democracy, and that corporate censorship/spin/bias/conversation-shaping could (given sufficient scale and influence of the companies involved) also be a problem for democracy.

I think some valid points have been made that the nature of competition means the latter is less of a problem than the former, and that trying to fix any perceived corporate speech problem risks throwing out the baby with the bathwater by putting at risk speech of other companies and ultimately individuals. In other words, that curtailing moderation efforts of companies amounts to curtailing speech, which ultimately negatively impacts the very thing that these suggestions were intended to improve which was the ability of individuals to speak freely.

PaulT (profile) says:

Re: Re: Re:7 Some section 230 hypotheticals

"What relationship does a company enforcing community standards have to company speech?"

Same as they have offline.

"what I find not as clear cut as I’d like about the "facilitator/creator" distinction in section 230."

You might want to actually read section 230 at some point. It contains none of the words you just mentioned.

Maybe that’s why you’re having a problem here? You’re focussing on a version of the law that doesn’t exist?

"My earlier argument was based on the idea that since Facebook has some dominant market share that different rules should apply."

Which is not a good position to hold, and becomes really problematic when natural market forces change your own position in that market.

"If a platform up front says "we’re political, we support party X", people have to like it or leave, but many platforms are seen as not political, e.g. Facebook, so because the don’t advertise themselves that way"

My local bar isn’t advertised as a political venue, but they can tell people to STFU or GTFO if they start spouting a position that other customers don’t like. So?

"I’m not arguing for the government to be able to force the hosting of certain speech or certain speakers on platforms"

But, you are…

"In other words, that curtailing moderation efforts of companies amounts to curtailing speech, which ultimately negatively impacts the very thing that these suggestions were intended to improve which was the ability of individuals to speak freely."

The point is anyone can speak freely without interference from government – including the people who own the property you’re standing on when you choose to speak. The people currently saying this is a bad thing are the people who have found that when they spout their ignorant offensive nonsense on a platform often used for non-political speech, they’ll be told to get the hell off the owner’s property. This is not controversial, unless you want to pretend that the platform owner should lose the ability to control their own property because someone whines loud enough.

Scary Devil Monastery (profile) says:

Re: Re: Re:3 Some section 230 hypotheticals

"A bar owner can’t distort democracy the way a majority market share information service can. "

I think we can agree that it is never a good thing if democracy is distorted by lies and vested interests spinning facts. Hell, in my first real job as a database sysadmin I came up with the idea that if only I had my way, every system would be fixable. Just find the biggest idiots abusing it and subject them to draconian and inhumane punishment in public as warnings unto others, and all the problems will magically vanish.

However, I’m sure you can see the inherent problem with this. Idiots, liars and information system abusers are what we need to have to put up with if we are to preserve the core principles of the system.

In this case the bar owner and the massive platform owner are both covered by the same principle. A very basic and fundamental one, at that. I’m pretty sure we don’t want to rewrite freedom of speech to mean "except for the animals we consider to be more equal than others".

"What patrons wear on their feet isn’t the same as the ability to censor what patrons can say in the establishment."

Actually, the bar owner can indeed toss you out on your ass if your speech upsets him, his staff, or other patrons. No questions asked. The same as any major platform. False equivalence.

"What patrons can wear would probably also be posted on a visible sign so the rules are known up-front before patrons enter the establishment…"

And the same holds true for the ToS you agree to every time you make an account on your chosen digital gossipmonger. You imply this is not identical? In fact the rules are more likely to be far more transparent on the digital platform than for the bar.

"…whereas information services may have either no practical alternatives (e.g. local broadband providers), user lock-in practices (contracts, difficulty extracting your data, etc) and network effects (all your friends are on the same service), and tend to be used every day (sometimes for work in the case of search) not on the same kind of casual basis."

Sorry, are we discussing natural monopolies (ISPs, telcos), service platforms (Oracle, SAP, etc.), or digital gossipmongers (account-based services such as Facebook and Twitter)? They are all distinctly different. That argument is like sweeping your local bicycle repairman under the same roof as GM and FedEx freight forwarders regarding the difficulties of the transportation industry.

"So I don’t feel transferring agreed norms from other kinds of companies enlightens much in this case."

The principles involved are the fundamental principles supposed to be equal for everyone, which makes it a very risky business to tinker with them based on "Oh, but these guys are so popular we need to condition the first amendment in their case". Bluntly put, you either believe there are core principles everyone needs to respect…or you do not, in which case that’s it for the constitution and every other foundation of national integrity.

"But is governmental tyranny the only kind of tyranny we must defend ourselves from?"

Yes.
A corporation, even in the worst possible case, cannot legally send thugs to incarcerate you or seal your mouth. The government, which holds the monopoly on violence, can.

"If a company wields power comparable to a government in practice…"

That’s a nice hypothetical with no bearing on the real world. No corporation holds power comparable to a government’s, because no corporation can send the DHS to cart you off to Gitmo or Abu Ghraib, or put you on a blacklist that renders you a second-class citizen in every aspect of society.

There is a caveat here about corporations unduly influencing government and getting it to act or legislate on their behalf – Disney, AT&T, a number of oil, arms, and tobacco companies, Monsanto, etc. spring to mind – but the irony is that the online platforms we’re discussing here appear to have very little of that sway, being in direct conflict with much of the establishment on both sides of the aisle.

Your rhetoric here implies that "for the greater good" we cannot allow "freedom" because some animals must be more equal than others. And for that purpose a number of your arguments are… less than valid, as can be seen in your claim that bars are somehow clearer about expected behavior than a private platform that is guaranteed to put its detailed rules in your face upon entry.

Tech 1337 (profile) says:

Re: Re: Re:4 Some section 230 hypotheticals

"But is governmental tyranny the only kind of tyranny we must defend ourselves from?"

Yes.

OK, that’s interesting. I can see your argument that the threat of violence elevates governments in terms of what must be considered tyranny.

Yet governments don’t engage in tyranny on their own; they often have complicit citizens and companies doing their bidding too. So, from that point of view, government control of companies must have limits. But likewise, corporate power must also be limited.

I wasn’t talking about the power of violence, I was specifically talking about the power to speak and be heard. And in that domain, media conglomerates and internet companies hold sizeable power. And my question was: what distorting effects do corporate opinions have on democracy, and how can those corporations be held accountable?

I can see the argument that the internet companies we’re talking about are not in the same arena as tobacco and oil companies. But saying they seem to be good guys doesn’t make the threat of their power any less. What happens when large corporations decide what the populace does or does not need to know about, with or without urging from the government?

Governments have been toppled by news media cartels. To think this cannot happen via internet companies that promise not to be evil seems overly hopeful.

PaulT (profile) says:

Re: Re: Re:5 Some section 230 hypotheticals

"Yet, governments don’t engage in tyranny on their own, they often have complicit citizens and companies doing their bidding too"

So, you want to block the free speech of citizens as well, or is the silencing only for companies providing the platform they choose to speak from?

"But saying they seem to be good guys"

That’s not being said, only that removing their freedom of speech and free association won’t improve anything. In fact, it makes misinformation and election interference so much easier.

Tech 1337 (profile) says:

Re: Re: Re:6 Some section 230 hypotheticals

I can see your concern, but the risk I was thinking of was a certain social media founder who, because he sits at the top of a mountain of technology, can just decide to do what the government wants and censor millions of people’s posts, without needing to be compelled to do it by any law.

PaulT (profile) says:

Re: Re: Re:7 Some section 230 hypotheticals

"can just decide to do what the government wants and censor millions of people’s posts, without needing to be compelled to do it by any law"

Yes, and if he did that because someone in political office "encouraged" him to do so, that would be highly illegal.

But if he’s doing it because that’s his opinion, or because market research showed that a majority of customers want that type of "censorship", then it’s his freedom of speech and good business, respectively.

What’s your alternative? Should the government step in and remove his ability to speak, or his ability to control his own property for the benefit of customers?

I’ll also remind you that, despite your fantasies, Zuckerberg is not in a god/dictator position. Facebook is a publicly traded company and the CEO answers to the board of directors. If he does what the government wants in your scenario and it’s bad for business, he won’t be doing it for long. If he does what’s right for business, just who are you insisting should step in between a business’s management and its ability to manage its business?

PaulT (profile) says:

Re: Re: Re: Some section 230 hypotheticals

"Does that fall under your definition of curation?"

I’d personally say this depends on the context, but the counter-argument is: does the level of moderation in that case really affect how they should be dealt with? To give recent examples – it’s well known that openly biased right-wing forums will simply delete and block users with other political leanings and refuse to host stories with the wrong slant. But Facebook have recently been revealed to have made changes to their algorithms to throttle traffic from certain websites (usually preferring more right-wing ones, from my understanding).

Are these both "curation", or does the more subtle method need a different kind of treatment just because they’re not announcing that they’re suppressing dissent? Does the hard blocking get a pass just because they don’t allow for subtlety, or are sites to be encouraged not to allow grey areas lest they be accused of something?

"Where is the line between spam filter and opinion filter?"

Define "spam". Sometimes it’s obvious, sometimes it is less obvious. If it’s more subjective on certain types of post, then surely that’s an opinion being asserted as to whether the post is spam. The definition of spam is "irrelevant or unsolicited messages". Who makes the determination as to whether they’re relevant or solicited?

"Does the market share of the company impact your answers?"

Should it? At what level of users does a company go from being able to freely moderate their platform to being held liable for doing so?

Tech 1337 (profile) says:

Re: Re: Re:2 Some section 230 hypotheticals

Personally, I’m less concerned about smaller forums being single-minded and dedicated to a particular world view, especially if they’re up-front about it, than I am about larger, ubiquitous platforms subtly biasing what’s presented to members of the public, or creating filter bubbles that divide society along political lines.

Scale seems to have an impact on my assessment of the issue. Companies that are used by over 50% of the population have the ability to distort democratic society. The same is true of media moguls who own too many media outlets. I think that’s part of why a one-size-fits-all approach to companies sits poorly with me. It’s not just the size, of course, it’s their power, their reach, their influence. Too much consolidation of media sources or conduits into fewer hands seems problematic for democracy.

Perhaps what’s needed is more competition. The problem is that the first companies in a new field tend to become so large that they cannot be challenged. Breaking up companies along product lines wouldn’t solve the problem of undue influence; e.g. if a company that does search cannot also make a web browser, that wouldn’t stop the search business dominating.

PaulT (profile) says:

Re: Re: Re:3 Some section 230 hypotheticals

"Personally, I’m less concerned about smaller forums being single-minded and dedicated to a particular world view"

So, what’s the magical dividing line between "smaller forum where moderation is not an issue" and "larger forum where they cannot moderate without ushering in societal collapse"? At what point in the growth of Twitter should they have lost the ability to moderate?

"I am concerned with larger ubiquitous platforms subtly biasing what’s presented to members of the public, or creating filter bubbles that divide society along political lines"

If you’re concerned about bubbles, I have a large number of smaller sites to show you. 4chan isn’t large in the grand scheme of things, but it sure as hell caused some very damaging bubbles, especially when the worst scum filtered off to 8chan and others.

"Perhaps what’s needed is more competition"

There’s plenty of competition, unless you try redefining each company’s activities to such a granular degree that they become meaningless. Most people use multiple social networking sites. The fact that Facebook, Twitter, Instagram and TikTok (among others) have so many users is because they count the same people among their membership.

These services are not popular because they lack competition. They are popular because a lot of people like using them.

Also, size and competition are meaningless next to other factors. Fox News has never been watched by 50% of the population, yet the damage it has done to political discourse in the US is still incalculable.

"Breaking up companies along product lines wouldn’t solve the problem of undue influence, e.g. if a company that does search cannot also make a web browser, that wouldn’t stop the search business dominating"

Yes, so it’s an idiotic idea.

Anonymous Coward says:

Re: Re: Some section 230 hypotheticals

            3. Suppose… a shadowban technique for naughty words, such that the words don’t even appear for other readers.

And #3: Again, it’s not on the platform for what their filters might end up doing by blocking certain words…

Judge Kozinski’s opinion for the en banc 9th Circuit in Fair Housing Council of San Fernando Valley v Roommates.com (2008) discusses a hypothetical somewhat along these lines—

[A] website operator who edits in a manner that contributes to the alleged illegality — such as by removing the word "not" from a user’s message reading "[Name] did not steal the artwork" in order to transform an innocent message into a libelous one — is directly involved in the alleged illegality and thus not immune.

(Footnote omitted.)

That opinion goes on to clarify—

Our opinion is entirely consistent with that part of Batzel which holds that an editor’s minor changes to the spelling, grammar and length of third-party content do not strip him of section 230 immunity. None of those changes contributed to the libelousness of the message, so they do not add up to "development" as we interpret the term.

It seems to me that Judge Kozinski’s hypothetical imagines some degree of, if not malice, then at least negligence, or some culpable or blameworthy conduct on the part of the website operator.

You, on the other hand, don’t blame the website for poorly-implemented code. It’s just an automated fuck-up with no one at fault. Happens all the time. Get over it. God-damn computers.

Is there some other way you want to distinguish the hypotheticals?
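To make that distinction concrete, here is a minimal Python sketch – the blocklist, the sample message and the function name are all hypothetical, invented purely for illustration – of the kind of naughty-word filter the hypothetical imagines, showing how purely mechanical word removal can change what a message asserts, not just how it is phrased:

import re

# Hypothetical blocklist for a naive "shadowban" word filter.
BLOCKLIST = {"bullshit", "hell"}

def naive_filter(message: str) -> str:
    """Silently strip blocked words from the message shown to other readers."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, sorted(BLOCKLIST))) + r")\b",
        re.IGNORECASE,
    )
    # Drop the blocked word, then tidy up the leftover whitespace.
    return re.sub(r"\s{2,}", " ", pattern.sub("", message)).strip()

print(naive_filter("That review is not complete bullshit"))
# Displays: "That review is not complete"
# No human removed the word "not", yet the sentence now asserts
# something the author never wrote.

In that sketch no one deliberately edited the message, yet the displayed text ends up saying something its author did not – which is the grey area the question above is probing when it distinguishes Kozinski’s culpable editor from a blameless automated fuck-up.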
