House Democrats Decide To Hand Facebook The Internet By Unconstitutionally Taking Section 230 Away From Algorithms
from the this-is-not-a-good-idea dept
We’ve been pointing out for a while now that mucking with Section 230 as an attempt to “deal with” how much you hate Facebook is a massive mistake. It’s also exactly what Facebook wants, because as it stands right now, Facebook is actually losing users of its core product, and the company has realized that burdening competitors with regulations — regulations that Facebook can easily handle with its massive bank account — is a great way to stop competition and lock in Facebook’s dominant position.
And yet, for reasons that still make no sense, regulators (and much of the media) seem to believe that Section 230 is the only regulation to tweak to get at Facebook. This is both wrong and shortsighted, but alas, we now have a bunch of House Democrats getting behind a new bill that claims to be narrowly targeted to just remove Section 230 from algorithmically promoted content. The full bill, the “Justice Against Malicious Algorithms Act of 2021,” is poorly targeted, poorly drafted, and shows a near total lack of understanding of how basically anything on the internet works. I believe it’s well meaning, but it was clearly drafted without talking to anyone who understands either the legal realities or the technical realities. It’s an embarrassing release from four House members of the Energy & Commerce Committee who should know better (and at least three of the four have done good work in the past on important tech-related bills): Frank Pallone, Mike Doyle, Jan Schakowsky, and Anna Eshoo.
The key part of the bill is that it removes Section 230 for “personalized recommendations.” It would insert the following “exception” into 230.
(f) PERSONALIZED RECOMMENDATION OF INFORMATION PROVIDED BY ANOTHER INFORMATION CONTENT PROVIDER.—
‘‘(1) IN GENERAL.—Subsection (c)(1) does not apply to a provider of an interactive computer service with respect to information provided through such service by another information content provider if—
‘(A) such provider of such service—
‘‘(i) knew or should have known such provider of such service was making a personalized recommendation of such information; or
‘‘(ii) recklessly made a personalized recommendation of such information; and
‘‘(B) such recommendation materially contributed to a physical or severe emotional injury to any person.
So, let’s start with the basics. I know there’s been a push lately among some — including the whistleblower Frances Haugen — to argue that the real problem with Facebook is “the algorithm” and how it recommends “bad stuff.” The evidence to support this claim is actually incredibly thin, but we’ll leave that aside for now. But at its heart, “the algorithm” is simply a set of recommendations, and recommendations are opinions and opinions are… protected expression under the 1st Amendment.
Removing Section 230 protections from algorithms cannot change this underlying fact about the 1st Amendment. All it means is that rather than getting a quick dismissal of the lawsuit, you’ll have a long, drawn-out, expensive lawsuit on your hands, before ultimately finding out that of course algorithmic recommendations are protected by the 1st Amendment. For much more on the problem of regulating “amplification,” I highly, highly recommend reading Daphne Keller’s essay on the challenges of regulating amplification (or listen to the podcast I did with Daphne about this topic). It’s unfortunately clear that none of the drafters of this bill read Daphne’s piece (or if they did, they simply ignored it, which is worse). Supporters of this bill will argue that in simply removing 230 from amplification/algorithms, this is a “content neutral” approach. Yet as Daphne’s paper detailed, that does not get you away from the serious Constitutional problems.
Another way to think about this: this is effectively telling social media companies that they can be sued for their editorial choices of which things to promote. If you applied the same thinking to the NY Times or CNN or Fox News or the Wall Street Journal, you might quickly recognize the 1st Amendment problems here. I could easily argue that the NY Times’ constant articles misrepresenting Section 230 subject me to “severe emotional injury.” But of course, any such lawsuit would get tossed out as ridiculous. Does flipping through a magazine and seeing advertisements of products I can’t afford subject me to severe emotional injury? How is that different than looking at Instagram and feeling bad that my life doesn’t seem as cool as some lame influencer?
Furthermore, this focus on “recommendations” is… kinda weird. It ignores all the reasons why recommendations are often quite good. I know that some people have a kneejerk reaction against such recommendations but nearly every recommendation engine I use makes my life much better. Nearly every story I write on Techdirt I find via Twitter recommending tweets to me or Google News recommending stories to me — both based on things I’ve clicked on in the past. And both are (at times surprisingly) good at surfacing stories I would be unlikely to find otherwise, and doing so quickly and efficiently.
Yet, under this plan, all such services would be at significant risk of incredibly expensive litigation over and over and over again. The sensible thing for most companies to do in such a situation is to make sure that only bland, uncontroversial stuff shows up in your feed. This would be a disaster for marginalized communities. Black Lives Matter? That can’t be allowed as it might make people upset. Stories about bigotry, or about civil rights violations? Too “controversial” and might contribute to emotional injury.
The backers of this bill also argue that the bill is narrowly tailored and won’t destroy the underlying Section 230, but that too is incorrect. As Cathy Gellis just pointed out, removing the procedural benefits of Section 230 takes away all the benefits. Section 230 helps get you out of these cases much more quickly. But under this bill, everyone will now add a claim under this clause that the “recommendation” caused “emotional injury,” and you have to litigate whether or not you’re even covered by Section 230. That means no more procedural benefit of 230.
The bill has a “carve out” for “smaller” companies, but again gets all that wrong. It seems clear that they either did not read, or did not understand, this excellent paper by Eric Goldman and Jess Miers about the important nuances of regulating internet services by size. In this case, the “carve out” is for sites that have 5 million or fewer “unique monthly visitors or users for not fewer than 3 of the preceding 12 months.” Leaving aside the rather important point that there really is no agreed upon notion of what a “unique monthly visitor” actually is (seriously, every stats package will give you different results, and now every site will have an incentive to use a stats package that lies and gives you lower numbers to get beneath the threshold), that number is horrifically low.
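To see why “unique monthly visitors” is not a well-defined number, consider this toy sketch (all of the log data here is hypothetical, and this isn’t any real analytics package): the same month of traffic produces three different “unique visitor” counts depending on what you deduplicate on.

```python
# Toy illustration: one set of hits, three different "unique monthly
# visitor" counts, depending on the deduplication key. Hypothetical data.

hits = [
    # (ip, cookie_id, user_agent)
    ("1.2.3.4", "c1", "Firefox"),
    ("1.2.3.4", "c2", "Firefox"),   # same IP, second browser profile
    ("1.2.3.4", None, "curl"),      # cookie-less client
    ("5.6.7.8", "c3", "Chrome"),
    ("5.6.7.8", "c3", "Chrome"),    # repeat visit, same cookie
]

by_ip     = len({ip for ip, _, _ in hits})                 # dedupe on IP -> 2
by_cookie = len({c for _, c, _ in hits if c is not None})  # dedupe on cookie -> 3
by_ip_ua  = len({(ip, ua) for ip, _, ua in hits})          # IP + user agent -> 3

print(by_ip, by_cookie, by_ip_ua)  # prints: 2 3 3
```

Every real analytics product picks some variant of one of these heuristics (plus bot filtering, session windows, and so on), which is exactly why no two packages agree, and why a legal threshold keyed to this number invites gaming.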
Earlier this year, I suggested a test suite of websites that any internet regulation bill should be run against, highlighting that bills like these impact way more than Facebook and Google. And lots and lots of the sites I mention get way beyond 5 million unique monthly visitors.
So under this bill, a company like Yelp would face real risk in recommending restaurants to you. If you got food poisoning, that would be an injury you could now sue Yelp over. Did Netflix recommend a movie to you that made you sad? Emotional injury!
As Berin Szoka notes in a Twitter thread about the bill, this bill from Democrats actually gives Republican critics of 230 exactly what they wanted: a tool to launch a million “SLAM” suits — Strategic Lawsuits Against Moderation. And, as such, he notes that this bill would massively help those who use the internet to spread baseless conspiracy theories, because THEY WOULD NOW GET TO SUE WEBSITES for their moderation choices. This is just one example of how badly the drafters of the bill misunderstand Section 230 and how it functionally works. It’s especially embarrassing that Rep. Eshoo would be a co-sponsor of a bill like this, since this bill would be a lawsuit free-for-all for companies in her district.
10/ In short, Republicans have long aimed to amend #Section230 to enable Strategic Lawsuits Against Moderation (SLAMs)
This new Democratic bill would do the same
Who would benefit? Those who use the Internet to spread hate speech and lies about elections, COVID, etc
— Berin Szóka (@BerinSzoka) October 14, 2021
Another example of the wacky drafting in the bill is the “scienter” bit. Scienter is basically whether or not the defendant had knowledge that what they were doing was wrongful. So in a bill like this, you’d expect that the scienter would require the platforms to know that the information they were recommending was harmful. That’s the only standard that would even make sense (though it would still be constitutionally problematic). However, that’s not how it is in the bill. Instead, the scienter is… that the platform knows it recommends stuff. That’s it. In the quote above, the line that matters is:
such provider of such service knew or should have known such provider of such service was making a personalized recommendation of such information
In other words, the scienter here… is that you knew you were making personalized recommendations. Not that they were bad. Not that they were dangerous. Just that you were recommending stuff.
Another drafting oddity is the definition of a “personalized recommendation.” It just says it’s a personalized recommendation if it uses a personalized algorithm. And the definition of “personalized algorithm” is this bit of nonsense:
The term ‘personalized algorithm’ means an algorithm that relies on information specific to an individual.
“Information specific to an individual” could include things like… location. I’ve seen some people suggest that Yelp’s recommendations wouldn’t be covered by this law because they’re “generalized” recommendations, not “personalized” ones, but if Yelp is recommending stuff to me based on my location (kinda necessary), then that’s now information specific to me, and thus no more 230 for the recommendation.
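The point is easy to illustrate with a sketch (hypothetical restaurant data, not Yelp’s actual system): the most “generalized” recommender imaginable, one that only sorts by distance, still “relies on information specific to an individual” under the bill’s definition, because the individual’s location is the input.

```python
# Hypothetical sketch of a "generalized" recommender that would still
# count as a "personalized algorithm" under the bill's definition:
# its only ranking signal is *this* user's location.
from math import dist

# Made-up restaurants with (latitude, longitude) coordinates.
restaurants = {
    "Taqueria A": (37.77, -122.41),
    "Noodles B":  (37.80, -122.27),
    "Diner C":    (37.33, -121.89),
}

def recommend(user_location, n=2):
    # Rank by straight-line distance from the user's own coordinates --
    # i.e., by "information specific to an individual."
    return sorted(restaurants,
                  key=lambda name: dist(restaurants[name], user_location))[:n]

print(recommend((37.78, -122.42)))  # prints: ['Taqueria A', 'Noodles B']
```

Nothing here models the user’s tastes, history, or identity, yet the definition as drafted appears to sweep it in anyway.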
It also seems like this would be hell for spam filters. I train my spam filter, so the algorithm it uses is specific to me and thus personalized. But I’m pretty sure that under this bill a spammer whose emails are put into a spam filter can now sue, claiming injury. That’ll be fun.
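A minimal sketch makes the spam filter point concrete (this is a toy word-count classifier of my own construction, not how any real mail provider’s filter works): because each user trains the filter on their own mail, the same code makes different, user-specific recommendations for each person, which is the definition of “personalized” in the bill.

```python
# Toy per-user spam filter: behavior depends entirely on which messages
# *this individual* marked as spam, i.e. "information specific to an
# individual." Not any real mail client's implementation.
from collections import Counter

class PerUserSpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def train(self, text, is_spam):
        words = text.lower().split()
        (self.spam_words if is_spam else self.ham_words).update(words)

    def is_spam(self, text):
        words = text.lower().split()
        spam_score = sum(self.spam_words[w] for w in words)
        ham_score = sum(self.ham_words[w] for w in words)
        return spam_score > ham_score

# Two users train on the same message but label it differently...
alice, bob = PerUserSpamFilter(), PerUserSpamFilter()
alice.train("limited time crypto offer", is_spam=True)
bob.train("limited time crypto offer", is_spam=False)  # Bob wants these

# ...so the "same" algorithm now filters differently for each of them.
msg = "new crypto offer"
print(alice.is_spam(msg), bob.is_spam(msg))  # prints: True False
```

Under the bill’s definition, that per-user divergence is what makes the filter a “personalized algorithm,” and filtering a message is arguably a “recommendation” not to read it.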
Meanwhile, if this passes, Facebook will be laughing. The services that have successfully taken a bite out of Facebook’s userbase over the last few years have tended to be the ones with a better algorithm for recommending things: like TikTok. The one Achilles heel Facebook has — its recommendations aren’t as good as those of newer upstarts — gets protected by this bill.
Almost nothing here makes any sense at all. It misunderstands the problems. It misdiagnoses the solution. It totally misunderstands Section 230. It creates massive downside consequences for competitors to Facebook and to users. It enables those who are upset about moderation choices to sue companies (helping conspiracy theorists and misinformation peddlers). I can’t see a single positive thing that this bill does. Why the hell is any politician supporting this garbage?
Filed Under: algorithms, anna eshoo, frank pallone, intermediary liability, jan schakowsky, mike doyle, news feeds, personalized recommendations, recommendations, section 230
Companies: facebook, yelp