josh.king's Techdirt Profile

Posted on Techdirt - 22 May 2020 @ 01:43pm

Why Content Moderation Codes Are More Guidelines Than Rules

Also, following on my last post: since the First Amendment protects site moderation and curation decisions, why all the calls to get rid of CDA 230's content moderation immunity?

Having listened carefully and at length to the GOP Senators and law professors pitching this, the position seems to be a mix of bad-faith soapboxing ("look at us take on these tech libs!") and the idea that sites could be better held to account — contractually, via their moderation codes — if the immunity wasn't there.

This is because the First Amendment doesn't necessarily bar claims that various forms of "deplatforming" — like taking down a piece of content, or suspending a user account — violate a site's Terms of Use, Acceptable Use Policy, or the like. That's the power of CDA 230(c)(2): it lets sites be flexible, experiment, and treat their moderation policies more as guidelines than rules.

Putting aside the modesty of this argument (rallying cry: "let's juice breach-of-contract lawsuits against tech companies") and the irony of "conservatives" arguing for fuller employment of trial attorneys, I'll make two observations:

First of all, giving people a slightly easier way to sue over a given content moderation decision isn't going to lead to sites implementing a "First Amendment standard." Doing so — which would entail allowing posts containing all manner of lies, propaganda, hate speech, and terrorist content — would make any site choosing this route an utter cesspool.

Secondly, what sites WOULD do in response to losing immunity for content moderation decisions is adopt much more rigid content moderation policies. These policies would have less play in them, less room for exceptions, for change, for context.

Don't like our content moderation decision? Too bad; it complies with our policy.

You want an exception? Sorry; we don't make exceptions to the policy.

Why not? Because some asshole will sue us for doing that, that's why not.

Have a nice day.

CDA 230's content moderation immunity was intended to give online forums the freedom to curate content without worrying about this kind of claim. In this way, it operates somewhat like an anti-SLAPP law, by providing the means for quickly disposing of meritless claims.

Though unlike a strong anti-SLAPP law, CDA 230(c)(2) doesn't require that those bringing such claims pay the defendant's attorney fees.

Hey, now THERE's an idea for an amendment to CDA 230 I could get behind!

Reposted from the Socially Awkward blog.

Posted on Techdirt - 21 May 2020 @ 03:33pm

Let's Talk About 'Neutrality' — And How Math Works

So if the First Amendment protects site moderation and curation decisions, why are we even talking about "neutrality"?

It's because some of the bigger tech companies — I'm looking at you, Google and Facebook — naively assumed good faith when asked about "neutrality" by congressional committees. They took the question as asking whether they apply neutral content moderation principles, rather than as Act I in a Kabuki play in which bad-faith politicians and pundits would twist the answer into a promise of "scrupulous adherence to political neutrality" (with Act II, described below, consisting of cherry-picked anecdotes meant to show that Google and Facebook were lying, and are actually bastions of conservative-hating liberaldom).

And here's the thing — Google, Twitter, and Facebook probably ARE pretty damn scrupulously neutral when it comes to political content (not that it matters, because THE FIRST AMENDMENT, but bear with me for a little diversion here). These are big platforms, serving billions of people. They've got a vested interest in making their platforms as usable and attractive to as many people as possible. Nudging the world toward a particular political orthodoxy? Not so much.

But that doesn't stop Act II of the bad-faith play. Let's look at how unmoored from reality it is.

Anecdotes Aren't Data

Anecdotes — even if they involve multiple examples — are meaningless when talking about content moderation at scale. Google processes 3.5 billion searches per day. Facebook has over 1.5 billion people looking at its newsfeed daily. Twitter suspends as many as a million accounts a day.

In the face of those numbers, the fact that one user or piece of content was banned tells us absolutely nothing about content moderation practices. Every example offered up — from Diamond & Silk to PragerU — is but one little greasy, meaningless mote in the vastness of the content moderation universe.
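
To put that "mote" in concrete terms, here's a back-of-envelope sketch in Python. The suspension volume is the figure cited above; the anecdote count and time window are generous assumptions made up purely for illustration:

```python
# Back-of-envelope: what share of moderation activity do the anecdotes cover?
# DAILY_SUSPENSIONS is the Twitter figure cited above; ANECDOTES and DAYS are
# deliberately generous assumptions, not real data.

DAILY_SUSPENSIONS = 1_000_000   # Twitter account suspensions per day
ANECDOTES = 200                 # suppose critics compile 200 examples
DAYS = 365                      # gathered over a full year

yearly_decisions = DAILY_SUSPENSIONS * DAYS
share = ANECDOTES / yearly_decisions
print(f"{ANECDOTES} anecdotes out of {yearly_decisions:,} suspensions")
print(f"= {share:.10f} of decisions (about 1 in {yearly_decisions // ANECDOTES:,})")
```

Even a year's worth of cherry-picked examples amounts to roughly one decision in 1.8 million, and that's counting only one platform's suspensions.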

"'Neutrality'? You keep using that word . . ."

One obvious reason that any individual content moderation decision is irrelevant is simple numbers: a decision representing 0.00000001 (one in a hundred million) of all decisions made is of absolutely no statistical significance. Random mutations — content moderation mistakes — are going to produce vastly more erroneous postings or deletions than even a compilation of hundreds of anecdotes can capture. And mistakes and edge cases are inevitable when dealing with decision-making at scale.
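
And that's the flip side of the same arithmetic: even near-perfect moderation produces an enormous absolute number of mistakes. A minimal sketch, assuming a purely hypothetical 99.9% accuracy rate:

```python
# Back-of-envelope: even a tiny error rate swamps any pile of anecdotes.
# ERROR_RATE is a hypothetical assumption for illustration; DAILY_DECISIONS
# reuses the Twitter suspension volume cited earlier.

DAILY_DECISIONS = 1_000_000   # moderation decisions per day
ERROR_RATE = 0.001            # assume moderators get 99.9% of calls right

mistakes_per_day = DAILY_DECISIONS * ERROR_RATE
mistakes_per_year = mistakes_per_day * 365
print(f"{mistakes_per_day:,.0f} mistakes per day")
print(f"{mistakes_per_year:,.0f} mistakes per year")
# -> 1,000 per day; 365,000 per year. Random error alone generates more
#    "outrageous" examples than any anecdote compilation could ever hold.
```

At that rate, pure noise hands every side of every argument thousands of fresh grievances a day; a few hundred of them prove nothing about bias.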

But there's more. Cases of so-called "political bias" are, if that's even possible, even less determinative, given the amount of subjectivity involved. If you look at the right-wing whining and whinging about their "voices being censored" by the socialist techlords, don't expect to see any numerosity or application of basic logic.

Is there any examination of whether those on "the other side" of the political divide are being treated similarly? That perhaps some sites know their audiences don't want a bunch of over-the-top political content, and thus take it down with abandon, regardless of which political perspective it's coming from?

Or how about acknowledging the possibility that sites might actually be applying their content moderation rules neutrally — but that nutbaggery and offensive content isn't evenly distributed across the political spectrum? And that there just might be, on balance, more of it coming from "the right"?

But of course there's not going to be any such acknowledgement. It's just one-way bitching and moaning all the way down, accompanied by mewling about "other side" content that remains posted.

Which is, of course, also merely anecdotal.

Reposted from the Socially Awkward blog.

Posted on Techdirt - 19 May 2020 @ 01:37pm

No, CDA 230 Isn't The Only Thing Keeping Conservatives Off YouTube

Over the last year or so, there's been a surge of claims that Google, Twitter, YouTube, etc. are "biased against conservatives."

The starting point of this bad-faith argument is a presumption that sites should be "neutral" about their content moderation decisions — decisions like which accounts Twitter suspends, how Google or Facebook rank content in search results or news feeds, or how YouTube promotes or obfuscates videos.

More about this "neutrality" nonsense in a later post, but let's move on to how this performative mewling works.

So after setting up the strawman standard of "neutrality," these self-styled "conservatives" turn to anecdotes showing that their online postings were unpublished, de-monetized, shadow-banned, or otherwise not made available to the widest audience possible.

These anecdotes are, of course, offered as evidence that sites haven't been "neutral."

And it's not just some unfocused wingnut whining. This attitude is also driving a number of legislative proposals to amend and scale back CDA 230 — the law that makes the internet go.

Conservative Senators like Josh Hawley, Ted Cruz, and Lindsey Graham — lawyers all, who surely know better — bitch and moan about CDA 230's content moderation immunity. If only sites didn't have this freebie, they say — well, then, we'd see some neutrality and fair treatment, yessiree.

This is total bullshit.

Sure, CDA 230(c)(2) makes sites immune from being sued for their content moderation decisions. But that's only important to the extent it keeps people from treating "community guidelines" and "acceptable use policies" as matters of contract that can be sued over.

Moderation? Curation? Promotion? All of that stuff is fully protected by the First Amendment.

Really, I can't stress this enough:

CONTENT MODERATION DECISIONS ARE PROTECTED BY THE FIRST AMENDMENT.

Eliminating content moderation protections from CDA 230 doesn?t change this fact.

It can?t change this fact. Because CDA 230 is a statute and not the FIRST AMENDMENT.

So why all the arguing for CDA 230 to be carved back? Some of it is surely just bad-faith angst about "big tech," misplaced in a way that would unduly harm small, innovative sites. But a lot of it is just a knee-jerk reaction from those who actually think that removing the immunity-for-moderation found in CDA 230(c)(2) will usher in a glorious new world where sites will have to publish everything.

Which, by the way, would be awful. Any site that just published virtually everything users posted (that's the true "First Amendment standard") would be an unusable hellhole. No site is going to do that — and, again . . .

They don?t have to BECAUSE THE FIRST AMENDMENT PROTECTS CONTENT MODERATION DECISIONS.

Reposted from the Socially Awkward blog.
