Berin Szoka's Techdirt Profile


Posted on Techdirt - 4 May 2022 @ 12:00pm

Musk, Twitter, Bluesky & The Future Of Content Moderation (Part II)

In Part I, we explained why the First Amendment doesn’t get Musk to where he seemingly wants to be: If Twitter were truly, legally the “town square” (i.e., public forum) he wants it to be, it couldn’t do certain things Musk wants (cracking down on spam, authenticating users, banning things equivalent to “shouting fire in a crowded theatre,” etc.). Twitter also couldn’t do the things it clearly needs to do to continue to attract the critical mass of users that make the site worth buying, let alone attract those—eight times as many Americans—who don’t use Twitter every day. 

So what, exactly, should Twitter do to become a more meaningful “de facto town square,” as Musk puts it?

What Objectives Should Guide Content Moderation?

Even existing alternative social media networks claim to offer the kind of neutrality that Musk contemplates—but have failed to deliver. In June 2020, John Matze, Parler’s founder and then its CEO, proudly declared the site to be “a community town square, an open town square, with no censorship,” adding, “if you can say it on the street of New York, you can say it on Parler.” Yet that same day, Matze also bragged of “banning trolls” from the left.

Likewise, GETTR’s CEO has bragged about tracking, catching, and deleting “left-of-center” content, with little clarity about what that might mean. Musk promises to avoid such hypocrisy:

Let’s take Musk at his word. The more interesting thing about GETTR, Parler, and other alternative apps that claim to be “town squares” is just how much discretion they reserve to moderate content—and how much content moderation they actually do.

Even in mid-2020, Parler reserved the right to “remove any content and terminate your access to the Services at any time and for any reason or no reason,” adding only a vague aspiration: “although Parler endeavors to allow all free speech that is lawful and does not infringe the legal rights of others.” Today, Parler forbids any user to “harass, abuse, insult, harm, defame, slander, disparage, intimidate, or discriminate based on gender, sexual orientation, religion, ethnicity, race, age, national origin, or disability.” Despite claiming that it “defends free speech,” GETTR bans racial slurs such as those posted by former Blaze TV host Jon Miller, as well as white nationalist codewords.

Why do these supposed free-speech-absolutist sites remove perfectly lawful content? Would you spend more or less time on a site that turned a blind eye to racial slurs? By the same token, would you spend more or less time on Twitter if the site stopped removing content denying the Holocaust, advocating new genocides, promoting violence, showing animals being tortured, encouraging teenagers to cut or even kill themselves, and so on? Would you want to be part of such a community? Would any reputable advertiser want to be associated with it? That platforms ostensibly starting with the same goal as Musk have reserved broad discretion to make these content moderation decisions underscores the difficulty in drawing these lines and balancing competing interests.

Musk may not care about alienating advertisers, but all social media platforms moderate some lawful content because it alienates potential users. Musk implicitly acknowledges this imperative on user engagement, at least when it comes to the other half of content moderation: deciding which content to recommend to users algorithmically—an essential feature of any social media site. (Few Twitter users activate the option to view their feeds in reverse-chronological order.) When TED’s Chris Anderson asked him about a tweet many people have flagged as “obnoxious,” Musk hedged: “obviously in a case where there’s perhaps a lot of controversy, that you would not want to necessarily promote that tweet.” Why? Because, presumably, it could alienate users. What is “obvious” is that the First Amendment would not allow the government to disfavor content merely because it is “controversial” or “obnoxious.”

Today, Twitter lets you block and mute other users. Some claim user empowerment should be enough to address users’ concerns—or that user empowerment just needs to work better. A former Twitter employee tells the Washington Post that Twitter has considered an “algorithm marketplace” in which users can choose different ways to view their feeds. Such algorithms could indeed make user-controlled filtering easier and more scalable. 
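
To make the idea concrete, here is a minimal sketch of what an “algorithm marketplace” might look like under the hood (every name and interface below is hypothetical, not anything Twitter has announced): each ranking algorithm is just a pluggable function, and the user picks which one orders the feed.

```python
# Hypothetical sketch of an "algorithm marketplace": each feed-ranking
# algorithm is a pluggable function the user selects. None of these names
# come from Twitter; they are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Tweet:
    author: str
    text: str
    timestamp: float
    likes: int

RankingAlgorithm = Callable[[List[Tweet]], List[Tweet]]

def reverse_chronological(feed: List[Tweet]) -> List[Tweet]:
    # The option few users actually enable today: newest first, no curation.
    return sorted(feed, key=lambda t: t.timestamp, reverse=True)

def engagement_ranked(feed: List[Tweet]) -> List[Tweet]:
    # A crude stand-in for an engagement-based recommendation ranking.
    return sorted(feed, key=lambda t: t.likes, reverse=True)

MARKETPLACE: Dict[str, RankingAlgorithm] = {
    "latest": reverse_chronological,
    "most-engaging": engagement_ranked,
}

def render_home_timeline(feed: List[Tweet], user_choice: str) -> List[Tweet]:
    # The user, not the platform, decides which algorithm orders the feed.
    return MARKETPLACE[user_choice](feed)
```

Third parties could publish their own ranking functions into such a marketplace, which is what would make user-controlled filtering easier and more scalable.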

But such controls offer only “out of sight, out of mind” comfort. That won’t be enough if a harasser hounds your employer, colleagues, family, or friends—or organizes others, or creates new accounts, to harass you. Even sophisticated filtering won’t change the reality of what content is available on Twitter.

And herein lies the critical point: advertisers don’t want their brands associated with repugnant content even if their ads don’t appear next to that content. Likewise, most users care what kind of content a site allows even if they don’t see it. Remember, by default, everything said on Twitter is public—unlike the phone network. Few, if any, would associate the phone company with what’s said in private telephone communications. But every Tweet that isn’t posted to the rare private account can be seen by anyone. Reporters embed tweets in news stories. Broadcasters include screenshots in the evening news. If Twitter allows odious content, most Twitter users will see some of it one way or another—and they’ll hold Twitter responsible for deciding to allow it.

If you want to find such lawful but awful content, you can find it online somewhere. But is that enough? Should you be able to find it on Twitter, too? These are undoubtedly difficult questions on which many disagree; but they are unavoidable.

What, Exactly, Is the Virtual Town Square?

The idea of a virtual town square isn’t new, but what, precisely, that means has always been fuzzy, and lofty talk in a recent Supreme Court ruling greatly exacerbated that confusion. 

“Through the use of chat rooms,” proclaimed the Supreme Court in Reno v. ACLU (1997), “any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Court wasn’t saying that digital media were public fora without First Amendment rights. Rather, it said the opposite: digital publishers have the same First Amendment rights as traditional publishers. Thus, the Court struck down Congress’s first attempt to regulate online “indecency” to protect children, rejecting analogies to broadcasting, which rested on government licensing of a “‘scarce’ expressive commodity.” Unlike broadcasting, the Internet empowers anyone to speak; it just doesn’t guarantee them an audience.

In Packingham v. North Carolina (2017), citing Reno’s “town crier” language, the Court waxed even more lyrical: “By prohibiting sex offenders from using [social media], North Carolina with one broad stroke bars access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge.” This rhetorical flourish launched a thousand conservative op-eds—all claiming that social media were legally public fora like town squares. 

Of course, Packingham didn’t address that question; it merely held that governments can’t deny Internet access to those who have completed their sentences. Manhattan Community Access Corp. v. Halleck (2019) essentially answered the question, albeit in the slightly different context of public access cable channels: “merely hosting speech by others” doesn’t “transform private entities into” public fora.

The question facing Musk now is harder: what part, exactly, of the Internet should be treated as if it were a public forum—where anyone can say anything “within the bounds of the law”? The easiest way to understand the debate is the Open Systems Interconnection model, which has guided the understanding of the Internet since the 1970s:
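
Roughly, the model divides networking into seven layers (a simplified sketch of the standard stack; the annotations anticipate the distinction between ISPs and applications discussed below):

```python
# The seven OSI layers, top to bottom. A simplified reference sketch; the
# parenthetical notes reflect the ISP-vs-application distinction discussed
# in the text, not anything specific to Twitter's architecture.
OSI_LAYERS = {
    7: "Application (sites and apps like Twitter: full visibility into content)",
    6: "Presentation",
    5: "Session",
    4: "Transport",
    3: "Network (ISPs route packets here...)",
    2: "Data link",
    1: "Physical (...and carry raw bits here, largely blind to their meaning)",
}

for number in sorted(OSI_LAYERS, reverse=True):
    print(f"Layer {number}: {OSI_LAYERS[number]}")
```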

Long before “net neutrality” was a policy buzzword, it described the longstanding operational state of the Internet: Internet service (broadband) providers won’t block, throttle or discriminate against lawful Internet content. The sky didn’t fall when the Republican FCC repealed net neutrality rules in 2018. Indeed, nothing really changed: You can still send or receive lawful content exactly as before. ISPs promise to deliver connectivity to all lawful content. The Federal Trade Commission enforces those promises, as do state attorneys general. And, in upholding the FCC’s 2015 net neutrality rules over then-Judge Brett Kavanaugh’s arguments that they violated the First Amendment, the D.C. Circuit noted that the rules applied only to providers that “sell retail customers the ability to go anywhere (lawful) on the Internet.” The rules simply didn’t apply to “an ISP making sufficiently clear to potential customers that it provides a filtered service involving the ISP’s exercise of ‘editorial intervention.’”

In essence, Musk is talking about applying something like net neutrality principles, developed to govern the uncurated service ISPs offer at layers 1-3, to Twitter, which operates at layer 7—but with a major difference: Twitter can monitor all content, which ISPs can’t do. This means embroiling Twitter in trying to decide what content is lawful in a far, far deeper way than any ISP has ever attempted.

Implementing Twitter’s existing plans to offer users an “algorithm marketplace” would essentially mean creating a new layer of user control on top of Twitter. But Twitter has also been working on a different idea: creating a layer below Twitter, interconnecting all the Internet’s “soapboxes” into one, giant virtual town square while still preserving Twitter as a community within that square that most people feel comfortable participating in.

“Bluesky”: Decentralization While Preserving Twitter’s Brand

Jack Dorsey, former Twitter CEO, has been talking about “decentralizing” social media for over three years—leading some reporters to conclude that Dorsey and Musk “share similar views … promoting more free speech online.” In fact, their visions for Twitter seem to be very different: unlike Musk, Dorsey saw Twitter as a community that, like any community, requires curation.

In late 2019, Dorsey announced that Twitter would fund Bluesky, an independent project intended “to develop an open and decentralized standard for social media.” Bluesky “isn’t going to happen overnight,” Dorsey warned in 2019. “It will take many years to develop a sound, scalable, and usable decentralized standard for social media.” The project’s latest update detailed the many challenges facing the effort, but also significant progress.

Twitter has a strong financial incentive to shake up social media: Bluesky would “allow us to access and contribute to a much larger corpus of public conversation.” That’s lofty talk for an obvious business imperative. Recall Metcalfe’s Law: a network’s impact is the square of the number of nodes in the network. Twitter (330 million active users worldwide) is a fraction of the size of its “Big Tech” rivals: Facebook (2.4 billion), Instagram (1 billion), YouTube (1.9 billion) and TikTok. So it’s not surprising that Twitter’s market cap is a much smaller fraction of theirs—just 1/16 that of Facebook. Adopting Bluesky should dramatically increase the value of Twitter and smaller companies like Reddit (330 million users) and LinkedIn (560 million users) because Bluesky would allow users of each participating site to interact easily with content posted on other participating sites. Each site would be more an application or a “client” than a “platform”—just as Gmail and Outlook both use the same email protocols.

Dorsey also framed Bluesky as a way to address concerns about content moderation. Days after the January 6 insurrection, he defended Trump’s suspension from Twitter yet noted those concerns:

Dorsey acknowledged the need for more “transparency in our moderation operations,” but pointed to Bluesky as a more fundamental, structural solution:

Adopting Bluesky won’t change how each company does its own content moderation, but it would make those decisions much less consequential. Twitter could moderate content on Twitter, but not on the “public conversation layer.” No central authority could control that, just as with email protocols and Bitcoin. Twitter and other participating social networks would no longer be “platforms” for speech so much as applications (or “clients”) for viewing the public conversation layer, the universal “corpus” of social content.
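
To make the architecture concrete, here is a rough, hypothetical sketch (the types and function names below are ours for illustration, not the actual Bluesky or AT Protocol API) of how participating apps could act as clients of a shared conversation layer, each applying its own moderation rules only to what its own users see:

```python
# Hypothetical sketch: social apps as "clients" of a shared conversation
# layer. The Post type, publish function, and moderation hook are
# illustrative only; they are not the real Bluesky/AT Protocol API.
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Post:
    author: str
    text: str

# The shared "public conversation layer": every participating app can read
# and write here, and no single app controls it.
PUBLIC_CONVERSATION_LAYER: List[Post] = []

def publish(post: Post) -> None:
    PUBLIC_CONVERSATION_LAYER.append(post)

class SocialApp:
    """One app (Twitter, Gab, Parler, ...) viewing the same shared corpus."""

    def __init__(self, name: str, allows: Callable[[Post], bool]):
        self.name = name
        self.allows = allows  # this app's own moderation policy

    def timeline(self) -> Iterable[Post]:
        # Moderation happens at the app layer: a filtered post still exists
        # in the shared layer, where other apps may choose to show it.
        return (p for p in PUBLIC_CONVERSATION_LAYER if self.allows(p))

# Example: one app bans an author, another does not; both read the same layer.
publish(Post("alex", "conspiracy post"))
publish(Post("jane", "hello world"))

twitter = SocialApp("twitter", allows=lambda p: p.author != "alex")
gab = SocialApp("gab", allows=lambda p: True)

print([p.text for p in twitter.timeline()])  # ['hello world']
print([p.text for p in gab.timeline()])      # ['conspiracy post', 'hello world']
```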

Four years ago, Twitter banned Alex Jones for repeatedly violating rules against harassment. The conspiracy theorist par excellence moved to Gab, an alternative social network launched in 2017 that claims 15 million monthly visitors (an unverified number). On Gab, Jones now has only a quarter as many followers as he once had on Twitter. And because the site is much smaller overall, he gets much less engagement and attention than he once did. Metcalfe’s Law means fewer people talk about him.

Bluesky won’t get Alex Jones or his posts back on Twitter or other mainstream social media sites, but it might ensure that his content is available on the public conversation layer, where users of any app that doesn’t block him can see it. Thus, Jones could use his Gab account to seamlessly reach audiences on Parler, GETTR, Truth Social, or any other site using Bluesky that doesn’t ban him. Each of these sites, in turn, would have a strong incentive to adopt Bluesky because the protocol would make them more viable competitors to mainstream social media. Bluesky would turn Metcalfe’s Law to their advantage: no longer separate, tiny town squares, these sites would be ways of experiencing the same town square—only with a different set of filters.

But Metcalfe’s Law cuts both ways: even if Twitter and other social media sites implemented Bluesky, so long as Twitter continues to moderate the likes of Alex Jones, the portion of the “town square” enabled by Bluesky that Jones has access to will be limited. Twitter would remain a curated community, a filter (or set of filters) for experiencing the “public conversation layer.” When first announcing Bluesky, Dorsey said the effort would be good for Twitter not only for allowing the company “to access and contribute to a much larger corpus of public conversation” but also because Twitter could “focus our efforts on building open recommendation algorithms which promote healthy conversation.” With user-generated content becoming more interchangeable across services—essentially a commodity—Twitter and other social media sites would compete on user experience.

Given this divergence in visions, it shouldn’t be surprising that Musk has never mentioned Bluesky. If he merely wanted to make Bluesky happen faster, he could pour money into the effort—an independent, open source project—without buying Twitter. He could help implement proposals to run the effort as a decentralized autonomous organization (DAO) to ensure its long-term independence from any effort to moderate content. Instead, Musk is focused on cutting back Twitter’s moderation of content—except where he wants more moderation. 

What Does Political Neutrality Really Mean?

Much of the popular debate over content moderation revolves around the perception that moderation practices are biased against certain political identities, beliefs, or viewpoints. Jack Dorsey responded to such concerns in a 2018 congressional hearing, telling lawmakers: “We don’t consider political viewpoints—period. Impartiality is our guiding principle.” Dorsey was invoking the First Amendment, which bars discrimination based on content, speakers, or viewpoints. Musk has said something that sounds similar, but isn’t quite the same:

The First Amendment doesn’t require neutrality as to outcomes. If user behavior varies across the political spectrum, neutral enforcement of any neutral rule will produce what might look like politically “biased” results.

Take, for example, a study routinely invoked by conservatives that purportedly shows Twitter’s political bias in the 2016 election. Richard Hanania, a political scientist at Columbia University, concluded that Twitter suspended Trump supporters more often than Clinton supporters at a ratio of 22:1. Hanania postulated that this meant Trump supporters would have to be at least four times as likely to violate neutrally applied rules to rule out Twitter’s political bias—and dismissed such a possibility as implausible. But Hanania’s study was based on a tiny sample of only reported (i.e., newsworthy) suspensions—just a small percentage of overall content moderation. And when one bothers to actually look at Hanania’s data—something none of the many conservatives who have since invoked his study seem to have done—one finds exactly those you’d expect to be several times more likely to violate neutrally-applied rules: the American Nazi Party, leading white supremacists including David Duke, Richard Spencer, Jared Taylor, Alex Jones, Charlottesville “Unite the Right” organizer James Allsup, and various Proud Boys. 

Was Twitter non-neutral because it didn’t ban an equal number of “far left” and “far right” users? Or was the real problem that the “right” was incensed by endless reporting in leading outlets like The Wall Street Journal on a study purporting to show that “conservatives” were being disproportionately “censored”?

There’s no way to assess Musk’s outcome-based conception of neutrality without knowing a lot more about objectionable content on the site. We don’t know how many accounts were reported, for what reasons, and what happened to those complaints. There is no clear denominator that allows for meaningful measurements—leaving only self-serving speculation about how content moderation is or is not biased. This is one problem Musk can do something about.

Greater Transparency Would Help, But…

After telling Anderson “I’m not saying that I have all the answers here,” Musk fell back on something simpler than line-drawing in content moderation: increased transparency. Musk said that if Twitter makes “any changes to people’s tweets, if they’re emphasized or de-emphasized, that action should be made apparent so anyone can see that action’s been taken, so there’s no behind the scenes manipulation, either algorithmically or manually.” Such tweet-by-tweet reporting sounds appealing in principle, but it’s hard to know what it will mean in practice. What kind of transparency will users actually find useful? After all, all tweets are “emphasized or de-emphasized” to some degree; that is simply what Twitter’s recommendation algorithm does.
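
What that disclosure might contain is anyone’s guess. As a purely illustrative sketch (the field names are invented, not a Twitter proposal), a per-tweet transparency record might capture something like this:

```python
# Purely illustrative: one possible shape for a per-tweet transparency
# record. The field names are hypothetical, not an actual Twitter schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModerationAction:
    tweet_id: str
    action: str          # e.g. "downranked", "labeled", "removed"
    basis: str           # which written policy was applied
    automated: bool      # algorithmic or human decision
    timestamp: str

@dataclass
class TransparencyLog:
    actions: List[ModerationAction] = field(default_factory=list)

    def record(self, action: ModerationAction) -> None:
        self.actions.append(action)

    def for_tweet(self, tweet_id: str) -> List[ModerationAction]:
        # What a user could query to see whether, and why, a tweet was
        # "emphasized or de-emphasized."
        return [a for a in self.actions if a.tweet_id == tweet_id]
```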

Greater transparency, implemented well, could indeed increase trust in Twitter’s impartiality. But ultimately, only large-scale statistical analysis can resolve claims of systemic bias. Twitter could certainly help to facilitate such research by providing data—and perhaps funding—to bona fide researchers.

More problematic is Musk’s suggestion that Twitter’s content moderation algorithm should be “open source” so anyone could see it. There is an obvious reason why such algorithms aren’t open source: revealing precisely how a site decides what content to recommend would make it easy to manipulate the algorithm. This is especially true for those most determined to abuse the site: the spambots on whom Musk has declared war. Making Twitter’s content moderation less opaque will have to be done carefully, lest it foster the abuses that Musk recognizes as making Twitter a less valuable place for conversation.
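
A toy example illustrates the worry (the scoring formula and weights below are invented for illustration, not Twitter’s actual algorithm): once exact weights are public, a spammer knows precisely which signals to inflate.

```python
# Toy illustration of why publishing exact ranking weights invites gaming.
# The formula and weights are invented; they are not Twitter's algorithm.
def score(tweet: dict) -> float:
    # A spammer who can read these weights knows which signals to inflate,
    # e.g., coordinated bot replies, to win placement in feeds.
    return (
        2.0 * tweet["replies"]
        + 1.0 * tweet["likes"]
        + 0.5 * tweet["follower_count"] ** 0.5
    )

organic = {"replies": 20, "likes": 300, "follower_count": 10_000}
spam = {"replies": 400, "likes": 10, "follower_count": 100}  # botted replies

# The account optimized against the published formula outranks the organic
# one: 815.0 vs. 390.0.
print(score(spam), score(organic))
```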

Public Officials Shouldn’t Be Able to Block Users

Making Twitter more like a public forum is, in short, vastly more complicated than Musk suggests. But there is one easy thing Twitter could do to, quite literally, enforce the First Amendment. Courts have repeatedly found that government officials can violate the First Amendment by blocking commenters on their official accounts. After then-President Trump blocked several users from replying to his tweets, the users sued. The Second Circuit held that Trump violated the First Amendment by blocking users because the interactive space of his account, to the extent he controlled it, functioned as a public forum. The Supreme Court vacated the Second Circuit’s decision—Trump left office, so the case was moot—but Justice Thomas indicated that some aspects of government officials’ accounts seem like constitutionally protected spaces. Unless a user’s conduct constitutes harassment, government officials likely can’t block them without violating the First Amendment. Whatever courts ultimately decide, Twitter could easily implement this principle.

Conclusion

Like Musk, we definitely “don’t have all the answers here.” In introducing what we know as the “marketplace of ideas” to First Amendment doctrine, Justice Holmes’s famous dissent in Abrams v. United States (1919) said this of the First Amendment: “It is an experiment, as all life is an experiment.” The same could be said of the Internet, Twitter, and content moderation. 

The First Amendment may help guide Musk’s experimentation with content moderation, but it simply isn’t the precise roadmap he imagines—at least, not for making Twitter the “town square” everyone wants to participate in actively. Bluesky offers the best of both worlds: a much more meaningful town square where anyone can say anything, but also a community that continues to thrive.

Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.

Posted on Techdirt - 4 May 2022 @ 09:30am

Musk, Twitter & Why The First Amendment Can’t Resolve Content Moderation (Part I)

“Twitter has become the de facto town square,” proclaims Elon Musk. “So, it’s really important that people have both the reality and the perception that they’re able to speak freely within the bounds of the law.” When pressed by TED’s Chris Anderson, he hedged: “I’m not saying that I have all the answers here.” Now, after buying Twitter, his position is less clear: “I am against censorship that goes far beyond the law.” Does he mean either position literally?

Musk wants Twitter to stop making contentious decisions about speech. “[G]oing beyond the law is contrary to the will of the people,” he declares. Just following the First Amendment, he imagines, is what the people want. Is it, though? The First Amendment is far, far more absolutist than Musk realizes. 

Remember the neo-Nazis with burning torches screaming “the Jews will not replace us!”? The First Amendment required Charlottesville to allow that demonstration. Some of the marchers were arrested and prosecuted for committing acts of violence. One even killed a counter-protester with his car. The First Amendment permits the government to punish violent conduct but—contrary to what Musk believes—almost none of the speech associated with it.

The Constitution protects “freedom for the thought that we hate,” as Justice Oliver Wendell Holmes declared in a 1929 dissent that has become the bedrock of modern First Amendment jurisprudence. In most of the places where we speak, the First Amendment does not set limits on what speech the host, platform, proprietor, station, or publication may block or reject. The exceptions are few: actual town squares, company-owned towns, and the like—but not social media, as every court to decide the issue has held.

Musk wants to treat Twitter as if it were legally a public forum. A laudable impulse—and of course Musk has every legal right to do that. But does he really want to? His own statements indicate not. And on a practical level, it would not make much sense. Allowing anyone to say anything lawful, or even almost anything lawful, would make Twitter a less useful, less vibrant virtual town square than it is today. It might even set the site on a downward spiral from which it never recovers.

Can Musk have it both ways? Can Twitter help ensure that everyone has a soapbox, however appalling their speech, without alienating both users and the advertisers who sustain the site? Twitter is already working on a way to do just that—by funding Bluesky—but Musk doesn’t seem interested. Nor does he seem interested in other technical and institutional improvements Twitter could make to address concerns about arbitrary content moderation. None of these reforms would achieve what seems to be Musk’s real goal: politically neutral outcomes. We’ll discuss all this in Part II.

How Much Might Twitter’s Business Model Change?

A decade ago, a Twitter executive famously described the company as “the free speech wing of the free speech party.” Musk may imagine returning to some purer, freer version of Twitter when he says “I don’t care about the economics at all.” But in fact, increasing Twitter’s value as a “town square” will require Twitter to keep striking a careful balance between what individual users can say and cultivating an environment that many people want to use regularly.

User Growth. A traditional public forum (like Lee Park in Charlottesville) is indifferent to whether people choose to use it. Its function is simply to provide a space for people to speak. But if Musk didn’t care how many people used Twitter, he’d buy an existing site like Parler or build a new one. He values Twitter for the same reason any network is valuable: network effects. Digital markets have always been ruled by Metcalfe’s Law: the impact of any network is equal to the square of the number of nodes in the network. 

Not all “nodes” are equal, of course. Twitter is especially popular among journalists, politicians and certain influencers. Yet the site has only 39.6 million active daily U.S. users. That may make Twitter something like ten times larger than Parler, but it’s only one-seventh the size of Facebook—and only the world’s fifteenth-largest social network. To some in the “very online” set, Twitter may seem like everything, but 240 million Americans age 13+ don’t use Twitter every day. Quadrupling Twitter’s user base would make the site still only a little more than half as large as Facebook, but Metcalfe’s Law suggests that would make Twitter roughly sixteen times more impactful than it is today.
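
The arithmetic is simple enough to sketch (the user count comes from the text; the units of “impact” are arbitrary):

```python
# Metcalfe's Law as back-of-the-envelope arithmetic: if a network's impact
# scales with the square of its user count, quadrupling users multiplies
# impact by sixteen. (Daily U.S. user figure from the text; units arbitrary.)
def metcalfe_impact(users: float) -> float:
    return users ** 2

today = metcalfe_impact(39.6e6)
quadrupled = metcalfe_impact(4 * 39.6e6)

print(quadrupled / today)  # 16.0
```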

Of course, trying to maximize user growth is exactly what Twitter has been doing since 2006. It’s a much harder challenge than for Facebook or other sites premised on existing connections. Getting more people engaged on Twitter requires making them comfortable with content from people they don’t know offline. Twitter moderates harmful content primarily to cultivate a community where the timid can express themselves, where moms and grandpas feel comfortable, too. Very few Americans want to be anywhere near anything like the Charlottesville rally—whether offline or online.

User Engagement. Twitter’s critics allege the site highlights the most polarizing, sensationalist content because it drives engagement on the site. It’s certainly possible that a company less focused on its bottom line might change its algorithms to focus on more boring content. Whether that would make the site more or less useful as a town square is the kind of subjective value judgment that would be difficult to justify under the First Amendment if the government attempted to legislate it.

But maximizing Twitter’s “town squareness” means more than maximizing “time on site”—the gold standard for most sites. Musk will need to account for users’ willingness to actually engage in dialogue on the site. 

https://twitter.com/ARossP/status/1519062065490673670

Short of leaving Twitter altogether, overwhelmed and disgusted users may turn off notifications for “mentions” of them, or limit who can reply to their tweets. As Aaron Ross Powell notes, such a response “effectively turns Twitter from an open conversation to a set of private group chats the public can eavesdrop on.” It might be enough, if Musk truly doesn’t care about the economics, for Twitter to be a place where anything lawful goes and users who don’t like it can go elsewhere. But the realities of running a business are obviously different from those of traditional, government-owned public fora. If Musk wants to keep or grow Twitter’s user base, and maintain high engagement levels, there are a plethora of considerations he’ll need to account for.

Revenue. Twitter makes money by making users comfortable with using the site—and advertisers comfortable being associated with what users say. This is much like the traditional model of any newspaper. No reputable company would buy ads in a newspaper willing to publish everything lawful. These risks are much, much greater online. Newspapers carefully screen both writers before they’re hired and content before it’s published. Digital publishers generally can’t do likewise without ruining the user experience. Instead, users help a mixture of algorithms and human content moderators flag content potentially toxic to users and advertisers. 

Even without going as far as Musk says he wants to, alternative “free speech” platforms like Gab and Parler have failed to attract any mainstream advertisers. By taking Twitter private, Musk could relieve pressure to maximize quarterly earnings. He might be willing to lose money, but the lenders financing roughly half the deal definitely aren’t. The interest payments on their loans could exceed Twitter’s 2021 earnings before interest, taxes, depreciation, and amortization. How will Twitter support itself?

Protected Speech That Musk Already Wants To Moderate

As Musk’s analysts examine whether the purchase is really worth doing, the key question they’ll face is just what it would mean to cut back on content moderation. Ultimately, Musk will find that the First Amendment just doesn’t offer the roadmap he thinks it does. Indeed, he’s already implicitly conceded that by saying he wants to moderate certain kinds of content in ways the First Amendment wouldn’t allow. 

Spam. “If our twitter bid succeeds,” declared Musk in announcing his takeover plans, “we will defeat the spam bots or die trying!” The First Amendment, if he were using it as a guide for moderation, would largely thwart him.

Far from banning spam, as Musk proposes, the 2003 CAN-SPAM Act merely requires email senders to, most notably, include unsubscribe options, honor unsubscribe requests, and accurately label both subject and sender. Moreover, the law defines spam narrowly: “the commercial advertisement or promotion of a commercial product or service.” Why such a narrow approach? 

Even unsolicited commercial messages are protected by the First Amendment so long as they’re truthful. But because truthful commercial speech receives only “intermediate scrutiny,” it’s easier for the government to justify regulating it. Thus, courts have also upheld public universities’ authority to block commercial solicitations.

But, as courts have noted, “the more general meaning” of “spam” “does not (1) imply anything about the veracity of the information contained in the email, (2) require that the entity sending it be properly identified or authenticated, or (3) require that the email, even if true, be commercial in character.” Check any spam folder and you’ll find plenty of messages that don’t obviously qualify as commercial speech, which the Supreme Court has defined as speech which does “no more than propose a commercial transaction.” 

Some emails in your spam folder come from non-profits, political organizations, or other groups. Such non-commercial speech is fully protected by the First Amendment. Some messages you signed up for may inadvertently wind up in your spam filter; plaintiffs regularly sue when their emails get flagged as spam. When it’s private companies like ISPs and email providers making such judgments, the case is easy: the First Amendment broadly protects their exercise of editorial judgment. Challenges to public universities’ email filters have been brought by commercial spammers, so the courts have dodged deciding whether email servers constituted public fora. These courts have implied, however, that if such taxpayer-funded email servers were public fora, email filtering of non-commercial speech would have to be content- and viewpoint-neutral, which may be impossible.

Anonymity. After declaring his intention to “defeat the spam bots,” Musk added a second objective of his plan for Twitter: “And authenticate all real humans.” After an outpouring of concern, Musk qualified his position:

Whatever “balance” Musk has in mind, the First Amendment doesn’t tell him how to strike it. Authentication might seem like a content- and viewpoint-neutral way to fight tweet-spam, but it implicates a well-established First Amendment right to anonymous and pseudonymous speech.

Fake accounts plague most social media sites, but they’re a bigger problem for Twitter since, unlike Facebook, it’s not built around existing offline connections and Twitter doesn’t even try to require users to use their real names. A 2021 study estimated that “between 9% and 15% of active Twitter accounts are bots” controlled by software rather than individual humans. Bots can have a hugely disproportionate impact online. They’re more active than humans and can coordinate their behavior, as that study noted, to “manufacture fake grassroots political support, promote terrorist propaganda and recruitment, manipulate the stock market, and disseminate rumors and conspiracy theories.” Given Musk’s concerns about “cancel culture,” he should recognize online harassment, especially harassment targeting employers and intimate personal connections, as a way that lawful speech can be wielded against lawful speech.

When Musk talks about “authenticating” humans, it’s not clear what he means. Clearly, “authentication” means more than simply requiring captchas to make it harder for machines to create Twitter accounts. Those have been shown to be defeatable by spambots. Surely, he doesn’t mean making real names publicly visible, as on Facebook. After all, pseudonymous publications have always been a part of American political discourse. Presumably, Musk means Twitter would, instead of merely requiring an email address, somehow verify and log the real identity behind each account. This isn’t really a “middle ground”: pseudonyms alone won’t protect vulnerable users from governments, Twitter employees, or anyone else who might be able to access Twitter’s logs. However such logs are protected, the mere fact of collecting such information would necessarily chill speech by those concerned about being persecuted for their speech. Such authentication would clearly be unconstitutional if a government were to do it.

“Anonymity is a shield from the tyranny of the majority,” ruled the Supreme Court in McIntyre v. Ohio Elections Comm’n (1995). “It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society.” As one lower court put it, “the free exchange of ideas on the Internet is driven in large part by the ability of Internet users to communicate anonymously.” 

We know how these principles apply to the Internet because Congress has already tried to require websites to “authenticate” users. The Child Online Protection Act (COPA) of 1998 required websites to age-verify users before they could access material that could be “harmful to minors.” In practice, this meant providing a credit card, which supposedly proved the user was likely an adult. Courts blocked the law and, after a decade of litigation, the U.S. Court of Appeals for the Third Circuit finally struck it down in 2008. The court held that “many users who are not willing to access information non-anonymously will be deterred from accessing the desired information.” The Supreme Court let that decision stand. The United Kingdom now plans to implement its own version of COPA, but First Amendment scholars broadly agree: age verification and user authentication are constitutional non-starters in the United States.

What kind of “balance” might the First Amendment allow Twitter to strike? Clearly, requiring all users to identify themselves wouldn’t pass muster. But suppose Twitter required authentication only for those users who exhibit spambot-like behavior—say, coordinating tweets with other accounts that behave like spambots. This would be different from COPA, but would it be constitutional? Probably not. Courts have explicitly recognized a right to send non-commercial spam (unsolicited messages), for example: “were the Federalist Papers just being published today via e-mail,” warned the Virginia Supreme Court in striking down a Virginia anti-spam law, “that transmission by Publius would violate the statute.”

Incitement. In his TED interview, Musk readily agreed with Anderson that “crying fire in a movie theater” “would be a crime.” No metaphor has done more to sow confusion about the First Amendment. It comes from the Supreme Court’s 1919 Schenck decision, which upheld the conviction of the head of the U.S. Socialist Party for distributing pamphlets criticizing the military draft. Advocating obstructing military recruiting, held the Court, constituted a “clear and present danger.” Justice Oliver Wendell Holmes mentioned “falsely shouting fire in a theatre” as a rhetorical flourish to drive the point home.

But Holmes revised his position just months later when he dissented in a similar case, Abrams v. United States. “[T]he best test of truth,” he wrote, “is the power of the thought to get itself accepted in the competition of the market.” That concept guides First Amendment decisions to this day—not Schenck’s vivid metaphor. Musk wants the open marketplace of ideas Holmes lauded in Abrams—yet also, somehow, Schenck’s much lower standard.

In Brandenburg v. Ohio (1969), the Court finally overturned Schenck: the First Amendment does not “permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Thus, a Klansman’s openly racist speech and calls for a march on Washington were protected by the First Amendment. The Brandenburg standard has proven almost impossible to satisfy when speakers are separated from their listeners in both space and time. Even the Unabomber Manifesto wouldn’t qualify—which is why The New York Times and The Washington Post faced no legal liability when they agreed to publish the essay back in 1995 (to help law enforcement stop the serial mail-bomber). 

Demands that Twitter and other social media remove “harmful” speech—such as COVID misinformation—frequently invoke Schenck. Indeed, while many expect Musk will reinstate Trump on Twitter, his embrace of Schenck suggests the opposite: Trump could easily have been convicted of incitement under Schenck’s “clear and present danger” standard.

Self-Harm. Musk’s confusion over incitement may also extend to its close cousin: speech encouraging, or about, self-harm. Like incitement, “speech integral to criminal conduct” isn’t constitutionally protected, but, also like incitement, courts have defined that term so narrowly that the vast majority of content that Twitter currently moderates under its suicide and self-harm policy is protected by the First Amendment.

William Francis Melchert-Dinkel, a veteran nurse with a suicide fetish, claimed to have encouraged dozens of strangers to kill themselves and to have succeeded at least five times. Using fake profiles, Melchert-Dinkel entered into fake suicide pacts (“i wish [we both] could die now while we are quietly in our homes tonite:)”), invoked his medical experience to advise hanging over other methods (“in 7 years ive never seen a failed hanging that is why i chose that”), and asked to watch his victims hang themselves. He was convicted of violating Minnesota’s assisted suicide law in two cases, but the Minnesota Supreme Court voided the statute’s prohibitions on “advis[ing]” and “encourag[ing]” suicide. Only for providing “step-by-step instructions” on hanging could Melchert-Dinkel ultimately be convicted.

In another case, the Massachusetts Supreme Judicial Court upheld the manslaughter conviction of Michelle Carter; “she did not merely encourage the victim,” her boyfriend, also age 17, “but coerced him to get back into the truck, causing his death” from carbon monoxide poisoning. Like Melchert-Dinkel, Carter provided specific instructions on completing suicide: “knowing the victim was inside the truck and that the water pump was operating — … she could hear the sound of the pump and the victim’s coughing — [she] took no steps to save him.”

Such cases are the tiniest tip of a very large iceberg of self-harm content. With nearly one in six teens intentionally hurting themselves annually, researchers found 1.2 million Instagram posts in 2018 containing “one of five popular hashtags related to self-injury: #cutting, #selfharm, #selfharmmm, #hatemyself and #selfharmawareness.” More troubling, the rate of such posts nearly doubled across that year. Unlike suicide or assisted suicide, self-harm, even by teenagers, isn’t illegal, so even supplying direct instructions about how to do it would be constitutionally protected speech. With the possible exception of direct user-to-user instructions about suicide, the First Amendment would require a traditional public forum to allow all this speech. It wouldn’t even allow Twitter to restrict access to self-harm content to adults—for the same reasons COPA’s age-gating requirement for “harmful-to-minors” content was unconstitutional.

Trade-Offs in Moderating Other Forms of Constitutionally Protected Content

So it’s clear that Musk doesn’t literally mean Twitter users should be able to “speak freely within the bounds of the law.” He clearly wants to restrict some speech in ways that the government could not in a traditional public forum. His invocation of the First Amendment likely refers primarily to moderation of speech considered by some to be harmful—which the government has very limited authority to regulate. Such speech presents one of the most challenging content moderation issues: how a business should balance a desire for free discourse with the need to foster the environment that the most people will want to use for discourse. That has to matter to Musk, however much money he’s willing to lose on supporting a Twitter that alienates advertisers.

Hateful & Offensive Speech. Two leading “free speech” networks moderate, or even ban, hateful or otherwise offensive speech. “GETTR defends free speech,” the company said in January after banning former Blaze TV host Jon Miller, “but there is no room for racial slurs on our platform.” Likewise, Gab bans “doxing,” the exposure of someone’s private information with the intent to encourage others to harass them. These policies clearly aren’t consistent with the First Amendment: hate speech is fully protected by the First Amendment, and so is most speech that might colloquially be considered “harassment” or “bullying.”

In Texas v. Johnson (1989), the Supreme Court struck down a ban on flag burning: “if there is a bedrock principle underlying the First Amendment, it is simply that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.” In Matal v. Tam (2017), the Supreme Court reaffirmed this principle and struck down a prohibition on offensive trademark registrations: “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express the thought that we hate.” 

Most famously, in 1978, the American Nazi Party won the right to march down the streets of Skokie, Illinois, a majority-Jewish town where ten percent of the population had survived the Holocaust. The town had refused to issue a permit to march. Displaying the swastika, Skokie’s lawyers argued, amounted to “fighting words”—which the Supreme Court had ruled, in 1942, could be forbidden if they had a “direct tendency to cause acts of violence by the persons to whom, individually, the remark is addressed.” The Illinois Supreme Court disagreed: “The display of the swastika, as offensive to the principles of a free nation as the memories it recalls may be, is symbolic political speech intended to convey to the public the beliefs of those who display it”—not “fighting words.” Even the revulsion of “the survivors of the Nazi persecutions, tormented by their recollections … does not justify enjoining defendants’ speech.”

Protection of “freedom for the thought that we hate” in the literal town square is sacrosanct. The American Civil Liberties Union lawyers who defended the Nazis’ right to march in Skokie were Jews as passionately committed to the First Amendment as was Justice Holmes (post-Schenck). But they certainly wouldn’t have insisted the Nazis be invited to join in a Jewish community day parade. Indeed, the Court has since upheld the right of parade organizers to exclude messages they find abhorrent.

Does Musk really intend Twitter to host Nazis and white supremacists? Perhaps. There are, after all, principled reasons for not banning speech, even in a private forum, just because it is hateful. But there are unavoidable trade-offs. Musk will have to decide what balance will optimize user engagement and keep advertisers (and those financing his purchase) satisfied. It’s unlikely that those lines will be drawn entirely consistent with the First Amendment; at most, it can provide a very general guide.

Harassment & Threats. Often, users are banned by social media platforms for “threatening behavior” or “targeted abuse” (e.g., harassment, doxing). The first category may be easier to apply, but even then, a true public forum would be sharply limited in which threats it could restrict. “True threats,” explained the Court in Virginia v. Black (2003), “encompass those statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals.” But courts split on whether the First Amendment requires that a speaker have the subjective intent to threaten the target, or if it suffices that a reasonable recipient would have felt threatened. Maximal protection for free speech means a subjective requirement, lest the law punish protected speech merely because it might be interpreted as a threat. But in most cases, it would be difficult—if not impossible—to establish subjective intent without the kind of access to witnesses and testimony courts have. These are difficult enough issues even for courts; content moderators will likely find it impossible to adhere strictly, or perhaps even approximately, to First Amendment standards.

Targeted abuse and harassment policies present even thornier issues; what is (or should be) prohibited in this area remains among the most contentious aspects of content moderation. While social media sites vary in how they draw lines, all the major sites “[go] far beyond,” as Musk put it, what the First Amendment would permit a public forum to proscribe.

Mere offensiveness does not suffice to justify restricting speech as harassment; such content-based regulation is generally unconstitutional. Many courts have upheld harassment laws insofar as they target not speech but conduct, such as placing repeated telephone calls to a person in the middle of the night or physically stalking someone. Some scholars argue instead that the consistent principle across cases is that proscribable harassment involves an unwanted physical intrusion into a listener’s private space (whether their home or a physical radius around the person) for the purposes of unwanted one-on-one communication. Either way, neatly and consistently applying legal standards of harassment to content moderation would be no small lift.

Some lines are clear. Ranting about a group hatefully is not itself harassment, while sending repeated unwanted direct messages to an individual user might well be. But Twitter isn’t the telephone network. Line-drawing is more difficult when speech is merely about a person, or occurs in the context of a public, multi-party discussion. Is it harassment to be the “reply guy” who always has to have the last word on everything? What about tagging a person in a tweet about them, or even simply mentioning them by name? What if tweets about another user are filled with pornography or violent imagery? First Amendment standards protect similar real-world speech, but how many users want to be party to such conversations?

Again, Musk may well want to err on the side of more permissiveness when it comes to moderation of “targeted abuse” or “harassment.”  We all want words to keep their power to motivate; that remains their most important function. As the Supreme Court said in 1949: “free speech… may indeed best serve its high purpose when it induces a condition of unrest … or even stirs people to anger. Speech is often provocative and challenging. It may strike at prejudices and preconceptions and have profound unsettling effects as it presses for the acceptance of an idea.” 

But Musk’s goal is ultimately, in part, to attract users and keep them engaged. To do that, Twitter will have to moderate some content that the First Amendment would not allow the government to punish. Content moderators have long struggled with how to balance these competing interests. The only certainty is that this is, and will continue to be, an extremely difficult tightrope to walk—especially for Musk.

Obscenity & Pornography. Twitter already allows pornography involving consenting adults. Yet even this is more complicated than simply following the First Amendment. On the one hand, child sexual abuse material (CSAM), like obscenity, is categorically unprotected by the First Amendment. All social media sites ban CSAM (and all mainstream sites proactively filter for, and block, it). On the other hand, nonconsensual pornography involving adults isn’t obscene, and therefore is protected by the First Amendment. Some courts have nonetheless upheld state “revenge porn” laws, but those laws are actually much narrower than Twitter’s flat ban (“You may not post or share intimate photos or videos of someone that were produced or distributed without their consent.”).

Critical to the Vermont Supreme Court’s decision to uphold the state’s revenge porn law were two features that made the law “narrowly tailored.” First, it required intent to “harm, harass, intimidate, threaten, or coerce the person depicted.” Such an intent standard is a common limiting feature of speech restrictions upheld by courts. Yet none of Twitter’s policies turn on intent. Again, it would be impossible to meaningfully apply intent-based standards at the scale of the Internet and outside the established procedures of courtrooms. Intent is a complex inquiry unto itself; content moderators would find it nearly impossible to make these decisions with meaningful accuracy. Second, the Vermont law excluded “[d]isclosures of materials that constitute a matter of public concern,” and those “made in the public interest.” Twitter does have a public-interest exception to its policies, yet, Twitter notes:

At present, we limit exceptions to one critical type of public-interest content—Tweets from elected and government officials—given the significant public interest in knowing and being able to discuss their actions and statements. 

It’s unlikely that Twitter would actually allow public officials to post pornographic images of others without consent today, simply because they were public officials. But to “follow the First Amendment,” Twitter would have to go much further than this: it would have to allow anyone to post such images, in the name of the “public interest.” Is that really what Musk means?

Gratuitous Gore. Twitter bans depictions of “dismembered or mutilated humans; charred or burned human remains; exposed internal organs or bones; and animal torture or killing.” All of these are protected speech. Violence is not obscenity, the Supreme Court ruled in Brown v. Entertainment Merchants Association (2011), and neither is animal cruelty, ruled the Court in U.S. v. Stevens (2010). Thus, the Court struck down a California law barring the sale of “violent” video games to minors and requiring that they be labeled “18,” and a federal law criminalizing “crush videos” and other depictions of the torture and killing of animals.

The Illusion of Constitutionalizing Content Moderation

The problem isn’t just that the “bounds of the law” aren’t where Musk may think they are. For many kinds of speech, identifying those bounds and applying them to particular facts is a far more complicated task than any social media site is really capable of. 

It’s not as simple as whether “the First Amendment protects” certain kinds of speech. Only three things we’ve discussed fall outside the protection of the First Amendment altogether: CSAM, non-expressive conduct, and speech integral to criminal conduct. In other cases, speech may be protected in some circumstances, and unprotected in others.

Musk is far from the only person who thinks the First Amendment can provide clear, easy answers to content moderation questions. But invoking First Amendment concepts without doing the kind of careful analysis courts do in applying complex legal doctrines to facts means hiding the ball: it conceals subjective value judgments behind an illusion of faux-constitutional objectivity.

This doesn’t mean Twitter couldn’t improve how it makes content moderation decisions, or that it couldn’t come closer to doing something like what courts do in sussing out the “bounds of the law.” Musk would want to start by considering Facebook’s initial efforts to create a quasi-judicial review of the company’s most controversial, or precedent-setting, moderation decisions. In 2018, Facebook funded the creation of an independent Oversight Board, which appointed a diverse panel of stakeholders to assess complaints. The Board has issued 23 decisions in little more than a year, including one on Facebook’s suspension of Donald Trump for posts he made during the January 6 storming of the Capitol, expressing support for the rioters. 

Trump’s lawyers argued the Board should “defer to the legal principles of the nation state in which the leader is, or was governing.” The Board responded that its “decisions do not concern the human rights obligations of states or application of national laws, but focus on Facebook’s content policies, its values and its human rights responsibilities as a business.” The Oversight Board’s charter makes this point very clear. Twitter could, of course, tie its policies to the First Amendment and create its own oversight board, chartered with enforcing the company’s adherence to First Amendment principles. But by now, it should be clear how much more complicated that would be than it might seem. While constitutional protection of speech is clearly established in some areas, new law is constantly being created on the margins—by applying complex legal standards to a never-ending kaleidoscope of new fact patterns. The complexities of these cases keep many lawyers busy for years; it would be naïve to presume that an extra-judicial board will be able to meaningfully implement First Amendment standards.

At a minimum, any serious attempt at constitutionalizing content moderation would require hiring vastly more humans to process complaints, make decisions, and issue meaningful reports—even if Twitter did less content moderation overall. And Twitter’s oversight board would have to be composed of bona fide First Amendment experts. Even then, such a board’s decisions might later be undercut by actual court rulings involving similar facts. This doesn’t mean that attempting to hew to the First Amendment is a bad idea; in some areas, it might make sense, but it will be far more difficult than Musk imagines.

In Part II, we’ll ask what principles, if not the First Amendment, should guide content moderation, and what Musk could do to make Twitter more of a “de facto town square.”

Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.

Posted on Techdirt - 10 February 2022 @ 03:30pm

The Top Ten Mistakes Senators Made During Today's EARN IT Markup

Today, the Senate Judiciary Committee unanimously approved the EARN IT Act and sent that legislation to the Senate floor. As drafted, the bill will be a disaster. Only by monitoring what users communicate could tech services avoid vast new liability, and only by abandoning or compromising end-to-end encryption could they implement such monitoring. Thus, the bill poses a dire threat to the privacy, security and safety of law-abiding Internet users around the world, especially those whose lives depend on having messaging tools that governments cannot crack. Aiding such dissidents is precisely why it was the U.S. government that initially funded the development of the end-to-end encryption (E2EE) now found in Signal, WhatsApp and other such tools. Even worse, the bill will do the opposite of what it claims: instead of helping law enforcement crack down on child sexual abuse material (CSAM), the bill will actually help the most odious criminals walk free.

As with the July 2020 markup of the last Congress’s version of this bill, the vote was unanimous. This time, no amendments were adopted; indeed, none were even put up for a vote. We knew there wouldn’t be much time for discussion because Sen. Dick Durbin kicked off the discussion by noting that Sen. Lindsey Graham would have to leave soon for a floor vote. 

The Committee didn’t bother holding a hearing on the bill before rushing it to markup. The one and only hearing on the bill occurred just six days after its introduction back in March 2020. The Committee thereafter made extensive (but largely cosmetic) changes to the bill, leaving its Members more confused than ever about what the bill actually does. Today’s markup was a singular low-point in the history of what is supposed to be one of the most serious bodies in Congress. It showed that there is nothing remotely judicious about the Judiciary Committee; that most of its members have little understanding of the Internet and even less of how the, ahem, judiciary actually works; and, saddest of all, that they simply do not care.

Here are the top ten legal and technical mistakes the Committee made today.

Mistake #1: “Encryption Is Not Threatened by This Bill”

Strong encryption is essential to online life today. It protects our commerce and our communications from the prying eyes of criminals, hostile authoritarian regimes and other malicious actors.

Sen. Richard Blumenthal called encryption a “red herring,” relying on his work with Sen. Leahy’s office to implement language from Leahy’s 2020 amendment to the previous version of EARN IT (even as he admitted to a reporter that encryption was a target). That amendment aimed to preserve companies’ ability to offer secure encryption in their products by providing that a company could not be found in violation of the law because it used secure encryption, lacked the ability to decrypt communications, or declined to undermine the security of its encryption (for example, by building in a backdoor for use by law enforcement).

But while the 2022 EARN IT Act contains the same list of protected activities, the authors snuck in new language that undermines that very protection. This version of the bill says that those activities can’t be an independent basis of liability, but that courts can consider them as evidence in proving the civil and criminal claims permitted by the bill’s provisions. That’s a big deal. EARN IT opens the door to liability under an enormous number of state civil and criminal laws, some of which require (or could require, if state legislatures so choose) a showing that a company was only reckless in its actions—a far lower showing than federal law’s requirement that a defendant have acted “knowingly.” If a court can consider the use of encryption, or failure to create security flaws in that encryption, as evidence that a company was “reckless,” it is effectively the same as imposing liability for encryption itself. No sane company would take the chance of being found liable for transmitting CSAM; it will simply stop offering strong encryption instead.

Mistake #2: The Bill’s Sponsors Readily Conceded that EARN IT Would Coerce Monitoring for CSAM

EARN IT’s sponsors repeatedly complained that tech companies aren’t doing enough to monitor for CSAM—and that their goal was to force them to do more. As Sen. Blumenthal noted, freely licensed software (PhotoDNA) makes it easy to detect CSAM, and it’s simply outrageous that some sites aren’t even using it. He didn’t get specific but we will: both Parler and Gettr, the alternative social networks favored by the MAGA right, have refused to use PhotoDNA. When asked about it, Parler’s COO told The Washington Post: “I don’t look for that content, so why should I know it exists?” The Stanford Internet Observatory’s David Thiel responded:

We agree completely—morally. So why, as Berin asked when EARN IT was first introduced, doesn’t Congress just directly mandate the use of such easy filtering tools? The answer lies in understanding why Parler and Gettr can get away with this today. Back in 2008, Congress required tech companies that become aware of CSAM to report it immediately to NCMEC, the quasi-governmental clearinghouse that administers the database of CSAM hashes used by PhotoDNA to identify known CSAM. Instead of requiring companies to monitor for CSAM, Congress said exactly the opposite: nothing in 18 U.S.C. § 2258A “shall be construed to require a provider to monitor [for CSAM].”

Why? Was Congress soft on child predators back then? Obviously not. Just the opposite: they understood that requiring tech companies to conduct searches for CSAM would make them state actors subject to the Fourth Amendment’s warrant requirement—and they didn’t want to jeopardize criminal prosecutions. 

Conceding that the purpose of the EARN IT Act is to coerce searches for CSAM is a mistake, a colossal one, because it invites courts to rule that such searches aren’t voluntary, and are therefore state action subject to the Fourth Amendment.

Mistake #3: The Leahy Amendment Alone Won’t Protect Privacy & Security, or Avoid Triggering the Fourth Amendment

While Sen. Leahy’s 2020 amendment was a positive step towards protecting the privacy and security of online communications, and Lee’s proposal today to revive it is welcome, it was always an incomplete solution. While it protected companies against liability for offering encryption or failing to undermine the security of their encryption, it did not protect the refusal to conduct monitoring of user communications. A company offering E2EE products might still be coerced into compromising the security of its devices by scanning user communications “client-side” (i.e., on the device) prior to encrypting sent communications or after decrypting received communications. 
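To make the mechanics concrete, here is a minimal sketch of what coerced client-side scanning could look like. Everything in it is hypothetical: matches_known_hashes, encrypt, send, and report are invented stand-ins, and real proposals (such as Apple’s) are far more elaborate. The point is simply that the scan runs on the device, before encryption, so the encryption itself is never technically broken, yet whoever controls the scanner effectively sees inside anyway.

```python
def matches_known_hashes(image_bytes: bytes) -> bool:
    """Hypothetical on-device matcher against a list of known-image hashes."""
    return False  # placeholder; a real matcher would consult a hash database

def send_with_client_side_scan(image_bytes: bytes, encrypt, send, report) -> None:
    """Scan on the device *before* the normal E2EE pipeline runs.

    `encrypt`, `send`, and `report` stand in for an app's existing transport
    and reporting hooks. The encryption code is untouched, but the scan sees
    the plaintext the encryption was supposed to keep private.
    """
    if matches_known_hashes(image_bytes):
        report(image_bytes)
    send(encrypt(image_bytes))
```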

Apple recently proposed just such a client-side scanning technology, raising concerns from privacy advocates and civil society groups. For its part, Apple insisted that safeguards would limit the system to known CSAM and prevent the capability from being abused by foreign governments or rogue actors. But the capacity to conduct such surveillance presents an inherent risk of exploitation by malicious actors. Some companies may be able to successfully safeguard such surveillance architecture from misuse; resources and approaches will vary across companies, however, and it is a virtual certainty that not all of them will succeed. And if such scanning is done under coercion, there is a real risk that courts will rule it state action requiring a warrant under the Fourth Amendment.

Our letter to the Committee proposes an easy way to expand the Leahy amendment to ensure that companies won’t be held liable for not monitoring user content: borrow language directly from Section 2258A(f).

Mistake #4: EARN IT’s Sponsors Just Don’t Understand the Fourth Amendment Problem

Sen. Blumenthal insisted, repeatedly, that EARN IT contained no explicit requirement not to use encryption. The original version of the bill would, indeed, have allowed a commission to develop “best practices” that would be “required” as conditions of “earning” back the Section 230 immunity tech companies need to operate—hence the bill’s name. But dropping that concept didn’t really make the bill less coercive because the commission and its recommendations were always a sideshow. The bill has always coerced monitoring of user communications—and, to do that, the abandonment or bypassing of strong encryption—indirectly, through the threat of vast legal liability for not doing enough to stop the spread of CSAM. 

Blumenthal simply misunderstands how the courts assess whether a company is conducting unconstitutional warrantless searches as a “government actor.” “Even when a search is not required by law, … if a statute or regulation so strongly encourages a private party to conduct a search that the search is not ‘primarily the result of private initiative,’ then the Fourth Amendment applies.” U.S. v. Stevenson, 727 F.3d 826, 829 (8th Cir. 2013) (quoting Skinner v. Railway Labor Executives’ Ass’n, 489 U.S. 602, 615 (1989)). In that case, the court found that AOL was not a government actor because it “began using the filtering process for business reasons: to detect files that threaten the operation of AOL’s network, like malware and spam, as well as files containing what the affidavit describes as ‘reputational’ threats, like images depicting child pornography.” AOL insisted that it “operate[d] its file-scanning program independently of any government program designed to identify either sex-offenders or images of child pornography, and the government never asked AOL to scan Stevenson’s e-mail.” Id. By contrast, every time EARN IT’s supporters explain their bill, they make clear that they intend to force companies to search user communications in ways they’re not doing today.

Mistake #2 Again: EARN IT’s Sponsors Make Clear that Coercion Is the Point

In his opening remarks today, Sen. Graham didn’t hide the ball:

“Our goal is to tell the social media companies ‘get involved and stop this crap. And if you don’t take responsibility for what’s on your platform, then Section 230 will not be there for you.’ And it’s never going to end until we change the game.”

Sen. Chris Coons added that he is “hopeful that this will send a strong signal that technology companies … need to do more.” And so on and so forth.

If they had any idea what they were doing, if they understood the Fourth Amendment issue, these Senators would never admit that they’re using liability as a cudgel to force companies to take affirmative steps to combat CSAM. By making their intentions unmistakable, they’ve given the most vile criminals exactly what they need to challenge the admissibility of CSAM evidence resulting from companies “getting involved” and “doing more.” Though some companies, concerned with negative publicity, may tell courts that they conducted searches of user communications for “business reasons,” we know what defendants will argue: the companies’ “business reason” is avoiding the wide, loose liability that EARN IT would subject them to. EARN IT’s sponsors said so.

Mistake #5: EARN IT’s Sponsors Misunderstand How Liability Would Work

Except for Sen. Mike Lee, no one on the Committee seemed to understand what kind of liability rolling back Section 230 immunity, as EARN IT does, would create. Sen. Blumenthal repeatedly claimed that the bill requires actual knowledge. One of the bill’s amendments (the new Section 230(e)(6)(A)) would, indeed, require actual knowledge by enabling civil claims under 18 U.S.C. § 2255 “if the conduct underlying the claim constitutes a violation of section 2252 or section 2252A,” both of which contain knowledge requirements. This amendment is certainly an improvement over the original version of EARN IT, which would have explicitly allowed 2255 claims under a recklessness standard. 

But the two other changes to Section 230 clearly don’t require knowledge. As Sen. Lee pointed out today, a church could be sued, or even prosecuted, simply because someone posted CSAM on its bulletin board. Multiple existing state laws already create liability based on something less than actual knowledge of CSAM. As Lee noted, a state could pass a law creating strict liability for hosting CSAM. Allowing states to hold websites liable for recklessness (or even less) while claiming that the bill requires actual knowledge is simply dishonest. All these less-than-knowledge standards will have the same result: coercing sites into monitoring user communications, and into abandoning strong encryption as an obstacle to such monitoring. 

Blumenthal made it clear that this is precisely what he intends, saying: “Other states may wish to follow [those using the “recklessness” standard]. As Justice Brandeis said, states are the laboratories of democracy … and as a former state attorney general I welcome states using that flexibility. I would be loath to straightjacket them in their adoption of different standards.”

Mistake #6: “This Is a Criminal Statute, This Is Not Civil Liability”

So said Sen. Lindsey Graham, apparently forgetting what his own bill says. Sen. Dianne Feinstein added her own misunderstanding, saying that she “didn’t know that there was a blanket immunity in this area of the law.” But if either of those statements were true, the EARN IT Act wouldn’t really do much at all. Section 230 has always explicitly carved out federal criminal law from its immunities; companies can already be charged for knowing distribution of child sexual abuse material (CSAM) or child sexual exploitation (CSE) under federal criminal statutes. Indeed, Backpage and its founders were criminally prosecuted even without SESTA’s 2017 changes to Section 230. If the federal government needs assistance in enforcing those laws, it could adopt Sen. Mike Lee’s amendment to permit state criminal prosecutions when the conduct would constitute a violation of federal law. Better yet, the Attorney General could use an existing federal law (28 U.S.C. § 543) to deputize state, local, and tribal prosecutors as “special attorneys” empowered to prosecute violations of federal law. Why no AG has bothered to do so yet is unclear.

What is clear is that EARN IT isn’t just about criminal law. EARN IT expressly opens companies up to civil claims under certain federal statutes, and also under whatever state laws arguably relate to “the advertisement, promotion, presentation, distribution, or solicitation of child sexual abuse material” as defined by federal law. Those laws can and do vary, not only with respect to the substance of what is prohibited, but also the mental state required for liability. This expansive breadth of potential civil liability is part of what makes this bill so dangerous in the first place.

Mistake #7: “If They Can Censor Conservatives, They Can Stop CSAM!”

As at the 2020 markup, Sen. Lee seemed to understand most clearly how EARN IT would work, the Fourth Amendment problems it raises, and how to fix at least some of them. A former Supreme Court clerk, Lee has a sharp legal mind, but he still seems to misunderstand much about how the bill would play out in practice, and how content moderation works more generally.

Lee complained that, if Big Tech companies can be so aggressive in “censoring” speech they don’t like, surely they can do the same for CSAM. He’s mixing apples and oranges in two ways. First, CSAM is the digital equivalent of radioactive waste: if a platform gains knowledge of it, it must take it down immediately and report it to NCMEC, and faces stiff criminal penalties if it doesn’t. And while “free speech” platforms like Parler and Gettr refuse to proactively monitor for CSAM (as discussed below), every mainstream service goes out of its way to stamp out CSAM on unencrypted services. Like AOL in the Stevenson case, they do so for business and reputational reasons.

By contrast, no website even tries to block all “conservative” speech; rather, mainstream platforms must make difficult judgment calls about politically charged content, such as suspending Trump’s account only after he incited an insurrection in an attempted coup, or labeling misinformation claiming the 2020 election was stolen. Republicans are mad about where tech companies draw such lines.

Second, social media platforms can only moderate content that they can monitor. Signal can’t moderate user content, and that is precisely the point: end-to-end encryption means that no one other than the parties to a communication can see it. Unlike providers of ordinary communications, which may be protected by lesser forms of encryption, an E2EE provider isn’t standing in the middle of the communication and doesn’t have the keys to unlock the messages it passes back and forth. Yes, some users will abuse E2EE to share CSAM, but the alternative is to ban it for everyone. There simply isn’t a middle ground.
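For readers who want to see the mechanics, here is a minimal sketch using the PyNaCl library of why an E2EE provider has nothing it can read, and therefore nothing it can moderate. This is an illustration of the principle only, not how Signal or WhatsApp actually implement their protocols:

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; the provider never
# sees the private keys.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The provider merely relays opaque bytes; without a private key it cannot
# read (and therefore cannot moderate) what it carries.
relayed = bytes(ciphertext)

# Only Bob, holding his private key, can recover the message.
assert Box(bob_key, alice_key.public_key).decrypt(relayed) == b"meet at noon"
```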

There may indeed be more that some tech companies could do about content they can see—both public content like social media posts and private content like messages (protected by something less than E2EE). But their being aggressive about, say, misinformation about COVID or the 2020 election has nothing whatsoever to do with the cold, hard reality that they can’t moderate content protected by strong encryption.

It’s hard to tell whether Lee understands these distinctions. Maybe not. Maybe he’s just looking to wave the bloody shirt of “censorship” again. Maybe he’s saying the same thing everyone else is saying, essentially: “Ah, yes, but if only Facebook, Apple and Google didn’t use end-to-end encryption for their messaging services, then they could monitor those for CSAM just like they monitor and moderate other content!” Proposing to amend the bill to require actual knowledge under both state and federal law suggests he doesn’t want this result, but who knows?

Mistake #8: Assuming the Fourth Amendment Won’t Require Warrants If It Applies

Visibility to the provider relates to one important legal distinction not discussed at all today—but that may well explain why the bill’s sponsors don’t seem to care about Fourth Amendment concerns. It’s an argument Senate staffers have used to defend the bill since its introduction. Even if compulsion through vast legal liability did make tech companies government actors, the Fourth Amendment requires a warrant only for searches of material for which users have a reasonable expectation of privacy. Kyllo v. United States, 533 U.S. 27, 33 (2001); see Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring). Courts long held that users had no such expectations for digital messages like email held by third parties. 

But that began to change in 2010. If searches of emails trigger the Fourth Amendment—and U.S. v. Warshak, 631 F.3d 266 (6th Cir. 2010) said they do—searches of private messaging certainly would. The entire purpose of E2EE is to give users rock-solid expectations of privacy in their communications. More recently, the Supreme Court has said that, “given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection.” Carpenter v. United States, 138 S. Ct. 2206, 2217 (2018). These cases draw the line Sen. Lee is missing: no, of course users don’t have reasonable expectations of privacy in public social media posts—which is what he’s talking about when he points to “censorship” of conservative speech. EARN IT could avoid the Fourth Amendment by focusing on content providers can see, but it doesn’t, because it’s intended to force companies to be able to see all user communications.

Mistake #9: What They Didn’t Discuss: Anonymous Speech

The Committee didn’t discuss how EARN IT would affect speech protected by the First Amendment. No, of course CSAM isn’t protected speech, but the bill would affect lawful speech by law-abiding citizens—primarily by restricting anonymous speech. Critically, EARN IT doesn’t just create liability for trafficking in CSAM. The bill also creates liability for failing to stop communications that “solicit” or “promote” CSAM. Software like PhotoDNA can flag known CSAM (by matching perceptual hashes of images against NCMEC’s database), but identifying “solicitation” or “promotion” is infinitely more complicated. Every flirtatious conversation between two adult users could be “solicitation” of CSAM—or it might be two adults doing adult things. (Adults sext each other—a lot. Get over it!) But “on the Internet, nobody knows you’re a dog”—and there’s no sure way to distinguish between adults and children.
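The asymmetry is easy to see in a short sketch. The hash list and the exact-match SHA-256 below are stand-ins invented for illustration; real tools like PhotoDNA use perceptual hashes that survive resizing and re-encoding, and the real NCMEC list is not public:

```python
import hashlib

# Stand-in for the NCMEC hash list, populated here with a dummy entry.
KNOWN_HASHES = {hashlib.sha256(b"placeholder for a known image").hexdigest()}

def is_known_image(image_bytes: bytes) -> bool:
    """Flagging known imagery is a mechanical set-membership test."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

# There is no comparable lookup table for "solicitation" or "promotion":
# deciding whether a private conversation crosses that line requires
# interpreting context, intent, and the ages of the people involved.
```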

The federal government tried to draw that line between adults and minors directly in the Communications Decency Act (CDA) of 1996 (nearly all of which, except Section 230, was struck down) and the Child Online Protection Act (COPA) of 1998. Both laws were struck down as infringing the First Amendment right to access lawful content anonymously. EARN IT accomplishes much the same thing indirectly, the same way it attacks encryption: basing liability on anything less than knowledge means you can be sued for not actively monitoring, or for not age-verifying users, especially when the risks are particularly high (such as when you “should have known” you were dealing with minor users).

Indeed, EARN IT is even more constitutionally suspect. At least COPA focused on content deemed “harmful to minors.” Instead of requiring age-gating only for sites offering porn and other sex-related content (a category broad enough to sweep in, e.g., LGBTQ teen health resources), EARN IT would affect all users of private communications services, regardless of the nature of the content they access or exchange. Again, the point of E2EE is that the service provider has no way of knowing whether messages are innocent chatter or CSAM.

EARN IT could raise other novel First Amendment problems. Companies could be held liable not only for failing to age-verify all users (a clear First Amendment violation), but also for failing to bar minors from using E2EE services so that their communications can be monitored, for failing to use client-side monitoring on minors’ devices, and even for failing to segregate adults from minors so they can’t communicate with each other.

Without the Lee Amendment, EARN IT leaves states free to base liability explicitly on a company’s failure to age-verify users or to limit what minors can do.

Mistake #10: Claiming the Bill Is “Narrowly Crafted”

If you’ve read this far, Sen. Blumenthal’s stubborn insistence that this bill is a “narrowly targeted approach” should make you laugh—or sigh. If he truly believes that, either he hasn’t adequately thought about what this bill really does or he’s so confident in his own genius that he can simply ignore the chorus of protest from civil liberties groups, privacy advocates, human rights activists, minority groups, and civil society—all of whom are saying that this bill is bad policy.

If he doesn’t truly believe what he’s saying, well… that’s another problem entirely.

Bonus Mistake!: A Postscript About the Real CSAM Problem

Lee never mentioned that the only significant social media services that don’t take basic measures to identify and block CSAM are Parler, Gettr and other fringe sites celebrated by Republicans as “neutral public fora” for “free speech.” Has any Congressional Republican sent letters to these sites asking why they refuse to use PhotoDNA? 

Instead, Lee did join Rep. Ken Buck in March 2021 to interrogate Apple about its decision to take down the Parler app. Answer: Parler hadn’t bothered to set up any meaningful content moderation system. Only after Parler agreed to start doing some moderation of what appeared in its Apple app (but not its website) did Apple reinstate the app.

Posted on Techdirt - 3 November 2020 @ 12:00pm

What The Election Means For Tech

If Trump Wins…

For Republicans, bashing “Big Tech” has become as central to the Culture War as bashing the “Big Three Networks” once was. Demanding “neutrality” from social media companies has become what “net neutrality” has been for Democrats: the issue that sucks up all the oxygen in the room — except far more politically useful.

ISPs aren’t in the content moderation business, but social media would be unusable without it. (Just try using 8Kun or Gab!) Democrats have always struggled to identify real-world examples of net neutrality violations, but Republicans find “anti-conservative bias” everywhere, every day. Content moderation at the scale of billions of posts is wildly imperfect, so anyone can find examples of decisions that seem unfair. But Republicans won’t settle for mere “neutrality.” They want to end Section 230’s legal protections for moderating hate speech, misinformation, fake accounts used to game algorithms, and most foreign election interference. All of these tend to benefit Republicans, so moderating them seems to prove the claim that “Big Tech” is out to get conservatives.

This won’t just be empty rhetoric anymore. Making every tech issue about “bias” will make most tech legislation impossible, but Trump won’t really need new legislation. He’ll finally weaponize the two independent agencies that regulate tech: the Federal Communications Commission and the Federal Trade Commission. Their current chairmen are traditional Republicans and serious lawyers uninterested in playing political games. But in August, Trump abruptly withdrew the renomination of Republican stalwart Mike O’Rielly after he obliquely criticized Trump’s Executive Order demanding political “neutrality” of social media. Trump quickly nominated the junior administration staffer behind the White House’s crackdown. No one should doubt that the next FCC and FTC chairmen will be Trump loyalists unencumbered by legal or constitutional scruples — and eager to turn the screws on Trump’s “enemies.” Each agency will become ever more a political battleground in which “tech” issues serve as a proxy war for deeper cultural conflict.

If Biden Wins…

Trump called “Sleepy Joe” a tool of the “radical, socialist left.” Biden insisted his primary victory was a mandate for centrist pragmatism. Perhaps nowhere will Biden’s leadership be tested more than in tech policy.

Congress hasn’t passed substantial tech legislation since 1996 — and even that overhaul of the Communications Act (of 1934!) mostly reflected pre-Internet assumptions and fears. Congress used to make regular course-corrections through biennial reauthorization of federal agencies — but stopped in 1998, the year Congress became pure political spectacle. The FCC and FTC have since been left to improvise. The FCC’s long been a “junior varsity Congress” — same political baggage, no electoral accountability. The more serious FTC is trending that way. Each change of the White House means increasingly large shifts in tech policy.

These problems are as thorny as our broken judicial nomination process — and equally unlikely to be corrected through our broken legislative process. If Biden wants to be remembered for resolving them, he’ll need to do for tech what he’s proposed for the courts: convene an expert bipartisan commission with a clear mandate to develop once-in-a-century legislation, and then get ‘er done.

Biden’s nominations for FCC and FTC Chairs will reveal whether he’s genuinely interested in leading on tech — or content, like Trump and Obama, to exploit tech issues to excite his base. Strong Chairs in Biden’s mold could build Congressional consensus for significant, but viable, and therefore moderate, legislation. But if he picks bomb-throwers over problem-solvers, we’ll have four more years of the same digital culture wars — and creating a stable digital-era regulatory framework may have to wait several more presidencies.

Section 230

If Trump Wins…

Republican fulmination about “anti-conservative bias” will continue to escalate. Don’t expect Republicans to pass any legislation. But they’ve always been more interested in stoking resentment among their base — and using threats of legal action to coerce large tech companies to change their content moderation practices in ways that help Republicans.

The FCC will proceed with a rulemaking to sharply limit Section 230’s protections. The only question is whether Ajit Pai issues a more restrained proposal on transparency mandates before he leaves the FCC. If not, Brendan Carr (or whoever Trump might appoint to replace Pai) could propose most or all of what NTIA has asked for. This dynamic will make it difficult for bipartisan legislation to pass amending 230, but something like the EARN IT Act and other amendments targeted at unlawful content might pass.

If Biden Wins…

Many Republicans will blame “Big Tech” for their losses, and claim that “election interference” (by Big Tech) delegitimized the new administration. They’ll do everything they can to deter content moderation beyond narrow categories of porn, dirty words, illegal content, terrorism promotion, self-harm, and harassment (narrowly defined). Most Democrats want exactly the opposite: to coerce tech companies into moderating misinformation as a condition of maintaining their 230 protections. There simply is no common ground here.

So unless Democrats win enough Senate seats to abolish the filibuster, the debate over content moderation won’t be resolved anytime soon. Instead, Democrats will focus on liability for third-party content that isn’t moderated — which is what nearly all 230 cases are actually about. The EARN IT Act already has bipartisan support, as does making 230 protection contingent on removing unlawful content, and requiring websites to prove that their practices are “reasonable.” Each is deeply problematic, but practical details of real-world implementation don’t seem to matter much.

Biden has said he wants to “revoke” Section 230 “immediately,” but there’s little reason to expect repeal to happen. Instead, expect him to focus on “hold[ing] social media companies accountable for knowingly platforming falsehoods,” as a Biden spokesman put it after Trump’s Executive Order in May.

Here, more than in any other area, an expert commission is the only way out of this debate. The issue is simply too complicated, both legally and technically, for Congress to handle.

Net Neutrality

If Trump Wins…

Status quo: The FCC will maintain its hands-off approach to broadband regulation and net neutrality legislation will remain stalled in Congress. At most, a Democratic House and Senate might pass legislation purporting to revive the 2015 Open Internet Order, but Trump would veto it — and it’s far from clear that’s even a valid way to legislate. Instead, expect activists to focus on pushing for state-level broadband legislation. The courts are unlikely to allow that so long as the FCC retains broad preemption. But for some activists, the point has always been to keep the fight going forever, not to actually win in court.

If Biden Wins…

Even a centrist FCC Chair would face overwhelming activist pressure to revive the FCC’s 2015 Open Internet Order. But will they want to be remembered merely for playing yet another round of Title II ping-pong — or for finally convincing Congress to resolve this issue? There’s been a bipartisan consensus on the core of net neutrality since Republican Chairman Michael Powell gave his “Four Freedoms” speech in 2004. Democratic Chairman Genachowski pushed hard for legislation. He resorted to issuing the 2010 Open Internet Order only after Republicans pulled out of legislative talks, calculating that they’d have more leverage after the midterms.

Resolving this issue could be the key to broader telecom reforms that Congress has been unable to tackle since passing the 1996 Telecom Act — a law based on markedly pre-digital assumptions about the future. Democrats should be careful not to overplay their hand: the D.C. Circuit decision upholding the FCC’s 2015 Order made clear that the FCC’s rules only applied to companies that held themselves out as offering “unedited” services anyway, meaning that ISPs could opt out of Title II if they really wanted to.

Tech & Antitrust

If Trump Wins…

Expect more antitrust lawsuits like the Google suit. But if the Google suit is the strongest case this Administration has, they’re unlikely to win any significant remedies in court. And even if those suits do succeed, they’re unlikely to significantly address Republicans’ real concerns about “bias.” So don’t expect Republicans’ current “litigate but don’t legislate” approach to last long. Trump is famous for turning on a dime, and Congressional Republicans will face enormous pressure, especially if Democrats take the Senate, to “strengthen” the antitrust laws. Ken Buck’s minority report indicates where populist Republicans might find common ground with anti-corporate populists on the left.

If Biden Wins…

There’s enormous political pressure from all quarters to “do something” about antitrust. But don’t assume that legislation will be anywhere near as radical as what Congressional Democrats have proposed. Even Rep. David Cicilline’s much-hyped proposal to turn antitrust law on its head is careful to note that it represents only the views of his staff — not the Committee or its members.

It’s one thing for Democrats to talk about flipping the burden of proof in merger cases, but giving the government such leverage would have, for example, made it easy for Trump to force AT&T to spin off CNN — or to make editorial changes as implicit conditions of the Time Warner deal. Democrats pushing such ideas simply haven’t thought through the implications of what they’re proposing. Do they really want to make it easier for Republicans to use the antitrust laws as political weapons against the media, both new and old? A more considered, serious approach from the administration would focus on increased funding, more aggressive enforcement, and carefully targeted statutory changes.

Federal Privacy Legislation

If Trump Wins…

Status quo: Absent a court decision striking down state privacy laws on dormant Commerce Clause grounds — hard cases to win, which usually take years — Republicans will continue to insist on national privacy legislation to prevent every state from layering its own set of data rules on top of California’s. But Democrats have little political incentive to negotiate for any legislation that would displace California’s approach, which they claim as a win despite its glaring amateurishness and many practical pitfalls.

If Biden Wins…

If Democrats also take the Senate, they’ll have no excuse for not finally passing the comprehensive baseline privacy legislation they’ve talked about for years. Preemption should be an easier “give” for Democrats if they have more leverage in writing the legislation and are assured of handling at least the crucial first 3-4 years of enforcing the new law. Passing a federal law, even if it overlaps significantly with California’s, would allow the Administration to take credit for addressing the top complaint about “Big Tech”: not bigness per se, but a perceived lack of control over data collection.

Treatment of Chinese Tech Companies

If Trump Wins…

Status quo: The White House will raise legitimate concerns about Chinese tech companies giving the Chinese government access to private user data and influence over content moderation decisions. They’ll hype “deals” like TikTok’s partnership with Oracle, but Chinese entities will retain control. The only real winners will be American companies favored by the White House. It’ll be cronyist mercantilism veiled in talk of privacy and free speech. Republicans will increasingly find themselves in a quandary: the greatest beneficiaries of their push to hamstring American “Big Tech” companies will be Chinese companies that have achieved the scale necessary to expand into the U.S. market, as TikTok has done.

If Biden Wins…

Republicans will hammer the Biden Administration for any perceived weakness on China — especially when it comes to tech. Expect the White House to try to depoliticize CFIUS and treat the review process as more of a law enforcement exercise than policymaking driven by the White House. If Democrats are smart, they’ll try to insulate themselves from inevitable Republican attacks by drawing clearer statutory lines about foreign ownership of tech companies serving the U.S. market. The real test will come the first time CFIUS declines to take action against a Chinese company: will the White House intervene under political pressure?

And If the Election is Contested…?

If there’s no clear, quick election result, the stage will be set for the “mother of all battles” over online speech. If Trump and his supporters claim victory and insist that ballots that “changed the result on election night” must be fraudulent, “Big Tech” companies will apply warning labels to such content — and block paid ads making the same claims. Republicans will go absolutely ballistic. They’ll throw every legal theory they can against the wall. Don’t expect any of it to stick: website operators have a clear First Amendment right to reject, or put disclaimers around, third party content — just as newspapers do with letters to the editor. But that won’t stop Republicans from filing multiple lawsuits and complaints with federal regulators, including the Federal Election Commission. Expect the Trump administration to get creative in finding ways to “stick it” to tech companies during the interregnum.

As ugly and politicized as tech policy is today, if tech policy becomes wrapped up in a “Florida recount but worse” fight, we’ll quickly come to look back at today’s tech policy battles as mild by comparison.

Posted on Techdirt - 30 September 2020 @ 12:07pm

Why Do Republican Senators Seem To Want To Turn Every Website Into A Trash Heap Of Racism & Abuse?

Imagine if you could be sued for blocking other users on Twitter, or limiting who could see your Facebook posts. Or if every website were full of racial slurs, conspiracy theories, and fake accounts. Parental control tools could no longer prevent your kids from seeing such heinous content. If that sounds like the Internet you’ve always wanted, then you’ll love Republicans’ new “Online Freedom and Viewpoint Diversity Act” and “Online Content Policy Modernization Act!”

In 1996, Congress agreed, almost unanimously, that users, websites, and filtering tool developers shouldn’t face such legal risks and that it was imperative “to remove disincentives for the development and utilization of blocking and filtering technologies.” That’s why Congress enacted Section 230 of the Communications Decency Act. But a few weeks ago, after yet another Trump tweet raging about “biased Big Tech,” three Republican Senators rushed to introduce legislation that would turn the law on its head. Sen. Lindsey Graham followed suit with his own bill that would do essentially the same thing. Trump’s Department of Justice has proposed to gut Section 230. Never mind that Section 230 was authored by a Republican congressman who still defends the law.

Today, Section 230 broadly protects users, websites, and developers of filtering tools (built into operating systems, search engines, or services like YouTube — or that you can install yourself) when they exercise their First Amendment rights to decide what content or users to block or “restrict access” to. This new bill would sharply curtail such content moderation. To avoid liability, a defendant would have to prove the content was “obscene, lewd, lascivious, filthy, excessively violent, harassing, promoting self-harm, promoting terrorism, or unlawful.” That covers only a fraction of the Internet’s awfulness. Even the vilest statements could not be removed or filtered unless tied to the harassment of specific users or the clear glorification of violence. The bill doesn’t cover spam, fake accounts, clear hate speech, or clear misinformation. That last exclusion is intentional: it was Twitter’s timid moves in May to put warning labels on Trump’s tweets about mail-in voting that quickly led the White House to issue an executive order calling for legislation to “reform” Section 230.

Republicans aim to stop content moderation for “political” reasons. But their bill would invite litigation over even truly neutral restrictions. Nextdoor.com limits discussion of national political issues to special “groups,” so that the site can focus on hyper-local issues. But the bill would no longer protect such segmentation. If a medical school wanted to keep its students focused on studying science rather than arguing about politics, enforcing that rule wouldn’t be protected either.

Republicans complain about “Big Tech,” but their bill would expose everyone to lawsuits. Trump himself has invoked Section 230 to avoid liability for retweeting allegedly defamatory material. FoxNews.com reserves the right to block “offensive” comments on its site, as Breitbart.com does for “inappropriate” content. Even Parler, the conservative “free speech” alternative to Facebook, reserves broad discretion to remove any content that it considers “disruptive” or that creates “risk” (not just legal risk) for Parler.

Section 230 protects not just the providers but users engaged in content moderation — such as those who manage Facebook pages and groups. Reddit relies on users to moderate its 130,000 “subreddit” communities, while the English Wikipedia depends on 1,131 volunteer administrators to resolve conflicts. Would you volunteer if you knew you could be sued by disgruntled users?

Republican proposals would open the door for creative plaintiffs’ lawyers to sue on behalf of anyone who feels aggrieved at being “censored.” Yet they won’t do what Republicans want most: allow the FTC, Republican state attorneys general, and MAGA activists to sue “Big Tech” for “deceiving” consumers by not delivering political “neutrality” as (supposedly) promised. The reason consumer protection agencies have never brought such suits, and courts have tossed out private lawsuits, isn’t Section 230. Back in 2004, left-wing activists petitioned the FTC to sanction Fox News for not delivering on its “Fair and Balanced” slogan. The Republican FTC Chairman dismissed the petition pithily: “There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.” Offline or online, the courts simply won’t adjudicate questions of media bias because they’re inherently subjective.

Section 230’s protections are vital to the Internet, where both users and providers make editorial decisions about content created by third parties at a scale and speed that are simply unfathomable in the world of traditional publishing. This new bill attempts to use Section 230’s indispensability to coerce the surrender of First Amendment rights. That violates the “unconstitutional conditions” doctrine. In 1969, the Supreme Court upheld imposing special “Fairness Doctrine” conditions on broadcast licenses only because it denied broadcasters full First Amendment protection. But the Court has repeatedly said that new media providers enjoy the same free speech rights as traditional publishers — and has struck down fairness mandates on newspapers as unconstitutional.

Republicans fought the Fairness Doctrine for decades. Their 2016 platform demanded “free-market approaches to free speech unregulated by government.” Yet now they want an even more arbitrary Fairness Doctrine for the Internet. They should remember what President Reagan said when he ended the original Fairness Doctrine in 1987: “the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.”

If, despite a lack of any solid evidence, conservatives persist in believing that social media are biased against them, they should vote against it with their clicks and dollars. Switch to Parler, if you like. Just don’t be surprised when you find content like this on the site:

By comparison, #Section230 has 262 “parleys” (posts) — roughly 20% as many as #JEWS. And this is just the tip of a very large iceberg that includes “parleys” like this (note the gruesomely pro-Holocaust account name):

Parler has chosen not to remove such content — but Section 230 would protect the site if it did. Not so if Republicans got their way.

Let that sink in. When Republicans complain about “hate speech” being used as an excuse for censoring conservatives, this is among the content they’re saying should stay up. Because… “bias.”

Ironically, Parler has engaged in selective moderation of hate speech to make the site seem just respectable enough to attract Republican politicians like Sens. Ted Cruz and Rand Paul, and Rep. Devin Nunes. The site clearly blocks any variant of the n-word in hashtags — variants that are wildly popular on Gab, which Parler has rapidly eclipsed as the leading “free speech” network. Gab offers a clear picture of what social media would look like if Republicans succeeded in narrowing Section 230’s protections. This is what an “uncensored” Internet looks like:

If anything, it’s difficult to appreciate how widespread such content is on both Parler and Gab because, unlike Facebook and Twitter, they only allow users to search hashtags (and names of users and groups), not the contents of posts. But one thing’s clear: while Parler blocks the n-word in hashtags, it definitely doesn’t block it in posts.

Is this really what Sens. Wicker, Blackburn and Hawley want the Internet to look like? Do they really believe Section 230 shouldn’t protect websites when they remove such heinous content? Or do they believe that removing such content would still be covered because it falls into the category of “harassing” content already explicitly protected by Section 230?

The NTIA’s petition to have the FCC rewrite Section 230 defines “harassing” content as having the “subjective intent to abuse, threaten, or harass any specific person.” You don’t have to be a lawyer to see how narrow that definition is. If a neo-Nazi posts something like one of the above hashtags as a reply to a black or Jewish user, yes, that might qualify as “harassing,” but simply ranting about both in his own posts would not be directed at any specific person — so websites wouldn’t be protected for removing it. Republican lawmakers might claim they take a broader view of what should qualify as “harassing,” but it’s hard to see why any court would agree. In any event, what Members or their staff say they intend is irrelevant; what matters is the plain text of the statute. If they want to make their intention clear, they need to pick other words and put them in the statute.

More importantly, hate speech is just one category of noxious content that websites could be sued for removing, hiding or labeling if Republicans have their way. The same goes for conspiracy theories, misinformation about COVID, vaccines, and voting, etc. For example:

Could moderating anti-vaccination misinformation be covered by the term “promoting self-harm?” Again, that’s a huge legal stretch — especially because the “harm” at issue here is primarily not to the “self” but to the children of parents duped by anti-vax content, and to those in society who get infected because vaccination rates fall below levels needed to achieve herd immunity. Even if a court decided that the term might cover some anti-vax content, websites would have to fight it out in court, and courts might rule differently in different cases.

If you want those things for yourself and your children, go to Gab or Parler. Just, please, stop trying to turn the rest of social media into those sites. And don’t complain when those sites fail to attract advertising. What respectable brand in America would want to advertise its products next to such content?

President Reagan’s answer would have been clear: private companies should be free to make their own decisions, especially when the alternative is a true cesspool of everything that is worst about humanity. Sadly, today’s Republicans don’t seem to care about anything beyond making political hay out of repeating the same baseless claims that they’re being persecuted.

Hashtag: #Snowflakes.

Posted on Techdirt - 24 July 2020 @ 01:39pm

The First Amendment Bars Regulating Political Neutrality, Even Via Section 230

At the end of May, President Trump issued an Executive Order demanding action against social media sites for “censoring” conservatives. His Department of Justice made a more specific proposal in mid-June. Clearly coordinating with the White House, Sen. Josh Hawley introduced a bill that same morning, making clear that his “Limiting Section 230 Immunity to Good Samaritans Act” is essentially the administration’s bill — as called for in the May Executive Order. The administration is expected to make its next move next week: having NTIA (an executive agency controlled by Trump loyalists and advised by a former law professor intent on cracking down on tech companies) ask the FCC to make rules reinterpreting Section 230 to do essentially the same thing as the Hawley bill. These two approaches, both stemming from the Executive Order, are unconstitutional for essentially the same reasons: they would put a gun to the head of the largest social media websites, forcing them to give up editorial control over their services if they want to stay in business.

The First Amendment would not allow Congress to directly require websites to be politically “neutral” or “fair”: the Supreme Court has recognized that the First Amendment protects the editorial discretion of websites no less than newspapers. Both have the same right to decide what content they want to carry; whether that content is created by third parties is immaterial. Hawley’s bill attempts to lawyer over the constitutional problem, using an intentionally convoluted process to conceal the bill’s coercive nature and to present himself as a champion of “free speech,” while actually proposing to empower the government to censor online content as never before.

Instead of directly meddling with how websites moderate content, Hawley’s bill relies on two legal sleights of hand. The first involves Section 230 of the Communications Decency Act of 1996. That law made today’s Internet possible — not only social media but all websites and services that host user content — by protecting them from most civil liability (and state criminal prosecution) for content created by third parties. Given the scale of user-generated content — with every comment, post, photo and video potentially resulting in a lawsuit — websites simply could not function if Section 230 did not immunize them not just from ultimate liability but from the litigation grindstone itself. Hawley knows that all sites that host user content depend on Section 230, so he’s carefully crafted a bill that turns that dependence against them — to do something the First Amendment clearly forbids: to force them to cede editorial control over their services. (Here’s a redline showing how Hawley’s bill would amend Section 230.)

Second, Hawley claims that his bill “protects consumers” by holding companies to their promises. In reality, it defines “good faith” so broadly that “edge providers” would face a constant threat of being sued under consumer protection and contract laws for how they exercise their editorial discretion over user content. Given the fines involved ($5,000/user plus attorneys’ fees), a single court decision could bankrupt even the largest tech company.

No one should have any illusion about what Hawley’s bill really does: use state power to advance a political agenda. The bill’s complicated structure merely masks the elaborate ways it violates the First Amendment. Conditioning 230 immunity on opening yourself up to legal liability under consumer protection law is a Rube-Goldberg-esque legal contraption intended to do what the First Amendment clearly forbids: forcing websites to host user-generated content they find objectionable.

How the Hawley Bill Works

Section 230(c)(1) says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” These have been called The Twenty-Six Words That Created the Internet. When websites and services are sued for third party content they host, Section 230 allows them to cheaply get lawsuits against them thrown out with a motion to dismiss. Consequently, lawsuits are far rarer than they would be in a world without 230. Section 230(c)(1) ensures that those who create content are the ones to be sued. Courts resolve nearly all 230 cases under this provision.

Republicans have insisted angrily that all of Section 230 was intended to depend on a showing of good faith, including political neutrality; however, the plain text of the statute is clear. Only Subsection 230(c)(2)(A) requires such a showing — and the statute’s operative language doesn’t mention neutrality. As Justice Neil Gorsuch recently declared, “When the express terms of a statute give us one answer and extratextual considerations suggest another, it’s no contest. Only the written word is the law, and all persons are entitled to its benefit.” Bostock v. Clayton County, 590 U.S. ___ (2020). By proposing to amend Section 230(c)(1) to require both good faith and neutrality, Trump’s DOJ and Hawley both concede that the President’s Executive Order and other Republican clamoring for immediate legal action are simply wrong about the current state of the law.

The real aim of Hawley’s bill is to force the largest social media services to change how they treat content that serves the “MAGA” political agenda — e.g., not labeling Trump’s tweets, allowing far-right provocateurs to engage in bannable conduct, treating Diamond and Silk or Gateway Pundit as the journalistic equivalents of The New York Times. The bill is almost perfectly tailored to do just that while avoiding damage to smaller, alternative social networks favored by conservative activists for their “anything goes” approach to content moderation.

Hawley’s bill applies only to “edge providers”: websites or services with 30+ million annual unique users, or more than 300 million unique global users, in the past year, and more than $1.5 billion in global revenue. To maintain 230(c)(1) protections, they would have to attest to “good faith” — essentially, political neutrality — in their content moderation practices. Thus, an edge provider has to choose between two litigation risks: If it “voluntarily” exposes itself to suit for the “fairness” of its content moderation, it cedes editorial control to judges and regulators. If it surrenders Section 230 protections, it risks being sued for anything its users say — which may simply make it impossible for them to operate.

Trump’s Executive Order asks the Federal Communications Commission to collapse Section 230’s three distinct immunities into a single immunity dependent on “good faith” — and then define that term broadly to include neutrality and potentially much more. The Hawley bill does roughly the same thing by requiring large “edge providers” to promise “good faith.” Both would change the dynamics of litigation completely: A plaintiff with a facially plausible complaint would (1) prevail on a motion to dismiss, (2) get court-ordered discovery of internal documents and depositions of employees to assess “good faith” (however that term is expanded), and (3) force the company to litigate all the way through a motion for summary judgment. Whether or not the plaintiff ultimately wins, this pre-trial phase of litigation is where the defendant will incur the vast majority of their legal costs — and where plaintiffs force settlements. Multiply those costs of litigation, and settlement, times the millions or billions of pieces of content posted to social media sites every day and you get “death by ten thousand duck-bites.” Fair v. Roommates, 521 F.3d 1157, 1174 (9th Cir. 2008). That’s why Judge Alex Kozinski (a longtime conservative champion once short-listed for the Supreme Court) declared: “section 230 must be interpreted to protect websites not merely from ultimate liability, but from having to fight costly and protracted legal battles.” Id.

Having to prove good faith to resolve litigation would kill most social media websites, which exist to host content by others. Ironically, it’s possible that the best established social media sites with the biggest legal departments might cope; they might even be grateful that Hawley’s bill had made it impossible for new competitors to get off the ground. At the same time, if (c)(1) is no longer an immunity from suit but merely a defense raised only after great expense, websites across the Internet would simply turn off their comments sections.

Today, Section 230 doesn’t define “good faith.” Courts assessing eligibility for the 230(c)(2)(A) immunity have defined the term narrowly. See, e.g., BFS Fin. v. My Triggers Co., No. 09CV-14836 (Franklin Cnty. Ct. Com. Pl. Aug. 31, 2011) (allowing antitrust claims); Smith v. Trusted Universal Standards in Elec. Transactions, 2011 WL 900096, at *25–26 (D.N.J. Mar. 15, 2011). Hawley’s bill would add a multi-factor definition of “good faith” in a new Subsection 230(c)(3). These factors would give plaintiffs ample room to claim that an edge provider had been politically biased against them. Inevitably, courts would have to analyze the nature of third-party content, comparing content that had been removed with content that had not, in order to judge overall patterns.

To maintain 230 protections, an edge provider must also agree to pay up to $5,000 in damages to each user if it is found to have breached its (compelled) promises of “neutrality.” Three hundred million users times $5,000 is $1.5 trillion — more than the entire market cap of Google. The bill also adds attorneys’ fees, threatening to create a cottage industry of litigation against edge providers. The mere threat of such massive liability would fundamentally change how websites operate — precisely Hawley’s goal.
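To put that arithmetic in concrete terms, here is a minimal back-of-the-envelope sketch (Python, purely for illustration; the user count and per-user damages figure are the ones described above, and the assumption that every user brings a successful claim is an upper bound, not a prediction):

    # Rough upper bound on exposure under the bill's $5,000-per-user damages provision.
    # Hypothetical illustration only: assumes every user of a 300-million-user
    # edge provider brought a successful claim.
    users = 300_000_000          # the bill's global user threshold, as described above
    damages_per_user = 5_000     # statutory damages per user, in dollars

    total_exposure = users * damages_per_user
    print(f"Theoretical maximum exposure: ${total_exposure:,}")
    # -> Theoretical maximum exposure: $1,500,000,000,000 (i.e., $1.5 trillion)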

Perhaps most important is what the bill doesn’t say: unlike Trump’s Order, Hawley’s bill doesn’t directly call on the FTC or state AGs to sue websites for bias. But make no mistake: his bill would weaponize federal and state consumer protection laws to allow politicians to coerce social media into favoring their side of the culture wars. The FTC might hesitate to bring such suits because of all the constitutional problems discussed below, but multiple Republican attorneys general have already made political hay out of grandstanding against “liberal San Francisco tech giants.” They would surely use Hawley’s bill to harass edge providers, raise money for their campaigns, and run for governor — or the Senate.

A New Fairness Doctrine — with Even Greater First Amendment Problems

The original Fairness Doctrine required broadcasters (1) to “adequately cover issues of public importance” and (2) to ensure that “the various positions taken by responsible groups” were aired, thus mandating the availability of airtime to those seeking to voice an alternative opinion. President Reagan’s FCC abolished these requirements in 1987. When Reagan vetoed Democratic legislation to restore them, he noted that “the FCC found that the doctrine in fact inhibits broadcasters from presenting controversial issues of public importance, and thus defeats its own purpose.”

The Republican Party has steadfastly opposed the Fairness Doctrine for decades. The 2016 Republican platform (re-adopted verbatim for 2020) states: “We likewise call for an end to the so-called Fairness Doctrine, and support free-market approaches to free speech unregulated by government.” Yet now, Hawley and Trump propose a version of the Fairness Doctrine for the Internet that would be more vague, intrusive, and arbitrary than the original.

In Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), the Supreme Court struck down a 1913 state law imposing a version of the Fairness Doctrine on newspapers that required them to grant a “right of reply” to candidates for public office criticized in their pages. The Court acknowledged that there had been a technological “revolution” since the enactment of the First Amendment. The arguments made then about newspapers, as summarized by the Court, are essentially the same arguments conservatives make about digital media:

The result of these vast changes has been to place in a few hands the power to inform the American people and shape public opinion…. The abuses of bias and manipulative reportage are, likewise, said to be the result of the vast accumulations of unreviewable power in the modern media empires. The First Amendment interest of the public in being informed is said to be in peril because the ‘marketplace of ideas’ is today a monopoly controlled by the owners of the market.

Id. at 250. And yet, the Court struck down the law as unconstitutional because:

a compulsion to publish that which “‘reason’ tells them should not be published” is unconstitutional. A responsible press is an undoubtedly desirable goal, but press responsibility is not mandated by the Constitution and like many other virtues it cannot be legislated.

Id. at 256. “Government-enforced right of access inescapably ‘dampens the vigor and limits the variety of public debate.’” Id. at 257. Critically, the Court rejected the intrusion into editorial discretion “[e]ven if a newspaper would face no additional costs to comply,” because:

A newspaper is more than a passive receptacle or conduit for news, comment, and advertising. The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment.

418 U.S. at 258. The Trump/Hawley Fairness Doctrine would impose the very same intrusion upon editorial judgments of edge providers. In addition, determining whether a website has operated “fairly” would be “void for vagueness since no editor could know exactly what words would call the statute into operation.” Id. at 247.

The Supreme Court upheld the Fairness Doctrine for broadcasters in Red Lion Broadcasting Co. v. FCC, 395 U.S. 367 (1969), but only because the Court denied broadcasters full First Amendment protection: “Although broadcasting is clearly a medium affected by a First Amendment interest, differences in the characteristics of new media justify differences in the First Amendment standards.” The same arguments have been made about the Internet, and the Supreme Court explicitly rejected them.

When the Court struck down Congress’ first attempt to regulate the Internet, the Communications Decency Act (everything except Section 230), it held: “our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.” Reno v. American Civil Liberties Union, 521 U.S. 844, 870 (1997). The Court has since repeatedly reaffirmed this holding. Striking down a state law restricting the sale of violent video games to minors, Justice Scalia declared: “the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary when a new and different medium for communication appears.” Brown v. Entertainment Merchants Assn., 564 U.S. 786, 790 (2011). In short, Red Lion represented an exception, and even that exception may not survive much longer.

Social Media Aren’t Public Fora, So the First Amendment Protects Them

The President’s Executive Order attempts to sidestep the Supreme Court’s consistent protection of digital speech by claiming that social media are effectively “public fora” and thus that the First Amendment limits, rather than protects, their editorial discretion — as if they were extensions of the government: “It is the policy of the United States that large online platforms, such as Twitter and Facebook, as the critical means of promoting the free flow of speech and ideas today, should not restrict protected speech.” The Order also cites the Supreme Court’s decision that shopping malls were public fora under California’s constitution in Pruneyard Shopping Center v. Robins, 447 U.S. 74, 85-89 (1980).

But Justice Kavanaugh, leading the five conservatives, explicitly rejected such arguments last year: “merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.” Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921, 1930 (2019). Pruneyard simply doesn’t apply to social media.

Trump’s Order cites the Supreme Court’s recent decision in Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017) (social media “can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard”), but omits the critical legal detail: it involved a state law restricting the Internet use of convicted sex offenders. Thus Packingham changed nothing: the First Amendment still fully protects, rather than limits, the editorial discretion of website operators under Miami Herald and Reno.

Hawley’s Bill Imposes an Unconstitutional Condition

Hawley’s bill turns on one underlying legal claim more than any other: that Section 230 is a special privilege granted only to large websites, and that withholding it does not violate the First Amendment. The factual claim is false: the law applies equally to all websites, protecting newspapers, NationalReview.com, FoxNews.com, and every local broadcaster from liability for user comments posted on their websites in exactly the same way it protects social media websites for user content. The legal claim is also wrong.

The Supreme Court has clearly barred the government from forcing the surrender of First Amendment rights in order to qualify for a benefit or legal status. In Agency for Int’l Dev. v. All. for Open Soc’y Int’l, Inc., 570 U.S. 205 (2013), the Court said that the government couldn’t condition the receipt of AIDS-related funding on the recipients’ adoption of a policy opposing prostitution (a form of compelled speech). Much earlier, in Speiser v. Randall, 357 U.S. 513, 518 (1958), the Court made it clear that denying a tax exemption to claimants who engage in certain forms of speech effectively penalizes them for that speech — essentially fining them for exercising their First Amendment rights.

Using Section 230 to coerce social media companies into surrendering their First Amendment rights is no different. Consider how clearly the same kind of coercion would violate the First Amendment in other contexts. Pending legislation would immunize businesses that re-open during the pandemic from liability to those who might be infected by COVID-19 on their premises. Suppose the bill included a provision requiring such businesses to be politically neutral in any signage displayed in their stores — such that, if a business put up, or allowed, a Black Lives Matter sign, it would have to allow a “right of reply” in the form of a sign from “the other side.” The constitutional problem would be obvious and in no way ameliorated by the “voluntary” nature of the immunity program.

Social Media Companies Can’t Be Forced to Risk Being Associated with Content They Find Objectionable

The case against unconstitutional conditions and public forum status is even clearer for websites than it would be for retailers or shopping malls, for two reasons. First, social media companies are in the speech business, unlike businesses whose storefronts might incidentally post their own speech or host the speech of others. Reno makes clear that websites enjoy the same First Amendment right as newspapers, and “[t]he choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment.” Miami Herald, 418 U.S. at 258.

Second, Pruneyard emphasized that shopping malls could “expressly disavow any connection with the message by simply posting signs in the area where the speakers or handbillers stand.” But users will naturally assume that speech carried by a social network reflects its decision to carry it — just as Twitter and Facebook have been attacked for not removing President Trump’s tweets or banning him from their services.

Disclaimers may actually be less effective online. Consider the three labels Twitter has applied to President Trump’s tweets (the first two of which provoked the issuance of his Executive Order).

The first example not only fails to clearly “disavow any connection with the message,” it is also ambiguous: it could be interpreted to mean there really is some problem with mail-in ballots.

Similarly, Twitter applied a “(!) Manipulated Media” label to Trump’s tweet of a video purporting to show CNN’s anti-Trump bias. Twitter’s label is once again ambiguous: since Trump’s video claims that CNN had manipulated the original footage, the “manipulated media” claim could be interpreted to refer to either Trump’s video or CNN’s. Although the label links to an “event” page explaining the controversy, the warning only works if users actually click through. It’s far from clear to many users that the label is actually a link that will take them to a page with more information.

Finally, when Trump tweeted, in reference to Black Lives Matter protests, “when the looting starts, the shooting starts,” Twitter did not merely add a label below the tweet. Instead, it hid the tweet behind a disclaimer. Clicking on “view” allows the user to view the original tweet:

And yet Twitter has still been lambasted for not taking the tweet down completely, a decision interpreted by some as an acceptance of the validity of such an extreme position.

Further, disclaimers risk creating increased liability; indeed, they may trigger lawsuits, or other retaliation, from scorned politicians. For example, labeling (and hiding) Trump’s tweets provoked the issuance of the Executive Order. In the end, the only truly effective way for Twitter to disavow Trump’s comments would be to ban him from its platform — precisely what the Hawley bill aims to deter.

In this sense, the Trump/Hawley version of the Fairness Doctrine is hugely more intrusive than the right of reply in the original Fairness Doctrine; it puts edge providers in the doubly unconstitutional position of (a) hosting content they do not want to host and (b) being afraid even to label it as content they find objectionable.

Why the Hawley Bill’s Good Faith Requirement Violates the First Amendment

To maintain 230 immunity, edge providers would be required to promise to moderate content in “good faith” — which the Hawley bill defines very loosely as “honest belief and purpose…fair dealing standards, and…[no] fraudulent intent” — in other words, political neutrality (and more). The bill adds this to Section 230’s list of exceptions: “Nothing in this section shall be construed to impair or limit any claim for breach of contract, promissory estoppel, or breach of a duty of good faith.” Thus, an edge provider’s compelled “promises” could be enforced by the Federal Trade Commission, state AGs, or private plaintiffs under various federal and state consumer protection laws and common law contract theories. These enforcement mechanisms raise slightly different legal issues, but they all violate the First Amendment in essentially the same way: state action interfering with edge providers’ exercise of editorial discretion.

Consumer Protection Law Can’t Police “Fairness” Claims

Republicans used to oppose weaponizing consumer protection laws against media companies. In 2004, MoveOn.org and Common Cause asked the FTC to proscribe Fox News’ use of the slogan “Fair and Balanced” as a deceptive trade practice. Republican Chairman Tim Muris responded pithily: “I am not aware of any instance in which the [FTC] has investigated the slogan of a news organization. There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.”

Similarly, the Hawley bill would necessarily embroil the FTC, state AGs, and judges in “evaluating the content … at issue.” Media companies aren’t exempt from consumer protection or antitrust laws, but the First Amendment makes suing them for how they exercise their editorial discretion extremely difficult, if not impossible — which is why the FTC has never attempted to police marketing claims about editorial practices the way it polices marketing claims generally.

As Chairman Muris noted, general statements about “fairness” or “neutrality” simply are not verifiable. This is why the Ninth Circuit recently dismissed Prager University’s deceptive marketing claims against YouTube. Despite having over 2.52 million subscribers and more than a billion views, this right-wing producer of “5-minute videos on things ranging from history and economics to science and happiness” sued YouTube for “unlawfully censoring its educational videos and discriminating against its right to freedom of speech.” Specifically, Dennis Prager alleged that roughly a sixth of PragerU’s videos had been flagged for YouTube’s Restricted Mode, an opt-in feature that allows parents, schools, and libraries to restrict access to potentially sensitive content (and which is turned on by fewer than 1.5% of YouTube users). The Ninth Circuit ruled:

YouTube’s braggadocio about its commitment to free speech constitutes opinions that are not subject to the Lanham Act. Lofty but vague statements like "everyone deserves to have a voice, and that the world is a better place when we listen, share and build community through our stories" or that YouTube believes that "people should be able to speak freely, share opinions, foster open dialogue, and that creative freedom leads to new voices, formats and possibilities" are classic, non-actionable opinions or puffery. See Newcal Indus., Inc. v. Ikon Office Sol., 513 F.3d 1038, 1053 (9th Cir. 2008). Similarly, YouTube’s statements that the platform will "help [one] grow," "discover what works best," and "giv[e] [one] tools, insights and best practices" for using YouTube’s products are impervious to being "quantifiable," and thus are non-actionable "puffery." Id. The district court correctly dismissed the Lanham Act claim.

Prager Univ. v. Google LLC, 951 F.3d 991, 1000 (9th Cir. 2020). Websites can’t be sued today for making statements that may sound like offers of neutrality — contrary to Republican claims that they should be, and to Trump’s call for such lawsuits in his Executive Order. The Hawley bill implicitly concedes this point.

But simply forcing edge providers to be more specific in their claims about neutrality will not overcome the ultimate constitutional problem. Puffery includes “claims [which] are either vague or highly subjective.” Sterling Drug, Inc. v. FTC, 741 F.2d 1146, 1150 (9th Cir. 1984) (emphasis added). It would be difficult to imagine a more subjective marketing claim than one about “good faith,” “neutrality” or “fairness.” Ultimately, the reason consumer protection law does not attempt to police marketing claims about neutrality is not their lack of specificity but their subjectivity.

In theory, the FTC might be able to base a deception case on certain very clear, objective claims about editorial practices; that category of deception, however, would be narrow — the use of human moderators to evaluate particular pieces of content or to decide which topics are “trending,” or the application of community standards to elected officials, for example. These deception cases would do little to address the complaints of conservatives, and even such narrow complaints might be unconstitutional.

Consumer Protection Law Can’t Police Non-Commercial Speech

The FTC can police marketing claims for being misleading to the extent they “propose a commercial transaction.” Central Hudson Gas & Elec. Corp. v. Public Service Comm’n of New York, 447 U.S. 557, 561 (1980); Virginia State Bd. of Pharmacy v. Virginia Citizens Consumer Council, Inc., 425 U.S. 748, 762 (1976). Community standards documents do much more than that: they are essentially statements of values, comparable to Christian retailer Hobby Lobby’s statement that the company is committed to “[h]onoring the Lord in all we do by operating the company in a manner consistent with Biblical principles.”

Such statements are non-commercial speech, which is fully protected by the First Amendment under strict scrutiny even when it is misleading. United States v. Alvarez, 567 U.S. 709 (2012). To overcome strict scrutiny, the government must show that the bill is (1) necessary to address a compelling government interest, (2) narrowly tailored to that interest, and (3) the least restrictive means available to address it. Reed v. Town of Gilbert, 576 U.S. 155, 163, 171 (2015). In Miami Herald, the Court noted that Florida’s interest in “ensuring free and fair elections” was a “concededly important interest,” but had to yield to the “unexceptionable, but nonetheless timeless, sentiment that liberty of the press is in peril as soon as the government tries to compel what is to go into a newspaper.” 418 U.S. at 260. The bill also fails on the second and third prongs of strict scrutiny.

If the Hawley bill passes, the Trump Administration will undoubtedly argue that edge providers’ community standards are ads for their services. But when speech has commercial aspects that are “inextricably intertwined” with other fully protected speech, that speech is generally fully protected. Riley v. Nat’l Fed’n of the Blind of N.C., Inc., 487 U.S. 781, 783 (1988). For example, corporate statements endorsing Black Lives Matter receive First Amendment protection even when embedded in marketing claims.

Courts are generally reluctant to label content as commercial speech because that denies the speech full First Amendment protection. Although community standards and terms of service may “refer[] to a specific product,” they in no way resemble traditional advertising — two of the factors courts assess in drawing the line between commercial and noncommercial speech. Bolger v. Youngs Drug Prods. Corp., 463 U.S. 60, 66-67 (1983). The third factor, the profit motive — which Hawley harps on in his public statements — is not dispositive: “If a newspaper’s profit motive were determinative, all aspects of its operations—from the selection of news stories to the choice of editorial position—would be subject to regulation if it could be established that they were conducted with a view toward increased sales.” Pittsburgh Press Co. v. Pittsburgh Comm’n on Human Relations, 413 U.S. 376, 385 (1973) (emphasis added).

Pittsburgh Press makes clear that statements about the way publishers exercise their editorial discretion are fundamentally different from statements about the health benefits of drug products, for example.

Even if a court decided to treat community standards as commercial speech, the government would still face an uphill battle. “The party seeking to uphold a restriction on commercial speech carries the burden of justifying it,” Bolger, 463 U.S. at 71, n. 20, and “must demonstrate that the harms it recites are real, and that its restriction will in fact alleviate them to a material degree.” Edenfield v. Fane, 507 U.S. 763, 771 (1993). Because the government’s interest in regulating commercial speech lies in its misleading or false nature, it would have to show that statements about a website’s editorial practices are misleading. General claims about “fairness,” however, are simply not verifiable.

Why the Government Can’t Compel Disclosures about Editorial Policies

Compelling edge providers to change what they say about their community standards violates the First Amendment even apart from enforcement of such claims. As a condition for maintaining 230 protection, the Hawley bill requires edge providers to (1) “describe any policies … relating to restricting access to or availability of [user-generated] material” and (2) “promise that the edge provider shall … design and operate the provided service in good faith.” The first requirement seems hands-off: it does not directly dictate what an edge provider’s terms of service must say. But this is simply a trick of clever drafting: this requirement does not need to be specific, because the second requirement (“good faith”) will, in practice, govern both. The two inquiries will collapse into one, allowing complaints about both the fairness of content moderation practices as compared to community standards, and the adequacy of those standards.

As a result, companies would (1) make their community standards as opaque or unspecific as possible and (2) minimize transparency about content moderation generally (e.g., avoiding public statements or reporting on content removals). But relying on “good faith” does not solve the compelled speech First Amendment problem.

Suppose that, instead of asking the FTC to police Fox News’ “Fair and Balanced” slogan in 2004, Congressional Democrats had proposed a bill like Hawley’s: just replace “community standards” with “editorial standards” and apply the bill to cable programming networks over a certain size. It would be obvious that the government cannot compel traditional media companies to “describe any policies … relating to [selection] of [programming] material.”

By contrast, the government may (and does) compel food manufacturers to disclose ingredient lists and nutritional information. The First Amendment permits such mandates because they apply to statements of objective fact, not the disclosure of opinions. This is why the seemingly simple age-based ratings systems for video games and movies have evolved as purely private undertakings. Behind each label is an editorial judgment, an opinion, about how to apply rating criteria. The government can compel neither the rating system overall, nor specific disclosures about the contents of specific films, nor disclosure of the rating methodology. By the same token, it cannot compel websites to disclose their editorial methodologies, whether implemented by humans or algorithms. Brown, 131 S. Ct. at 2740.

The Hawley Bill Is Designed to Chill the Exercise of Editorial Discretion

The Hawley bill proposes four criteria for assessing a website’s “good faith.” The first two concern “selective enforcement,” whether by humans or algorithms. But what purports to be a regulation only of marketing claims would actually, inevitably embroil regulators and/or judges in evaluating the editorial discretion of edge providers — conduct that would clearly qualify for the full protection of the First Amendment as non-commercial speech under Miami Herald. Twitter’s alleged political bias in applying its community standards is no more actionable under consumer protection law than would be Fox News’ political bias in its editorial policies.

The third criterion — “the intentional failure to honor a public or private promise made by, or on behalf of, the provider” — appears to preserve consumer protection claims, but its aim is significantly broader. In Barnes v. Yahoo!, Inc., 565 F.3d 560 (9th Cir. 2009), the court allowed the plaintiff’s suit against Yahoo! to proceed. Barnes sued the company for failing to stop her ex-boyfriend from posting revenge porn. The court ruled that the company had essentially waived its Section 230 immunity when its Director of Communications promised the plaintiff she would “personally walk the statements over to the division responsible for stopping unauthorized profiles and they would take care of it.”

This promissory estoppel theory was limited to the particular facts of that case: a clear promise made directly to a specific user. The Hawley bill’s “public or private promise” language could be read to allow plaintiffs to set aside Section 230 immunity and sue edge providers for far more general statements about content moderation practices that would never qualify for promissory estoppel. By holding companies to every past statement, the Hawley bill aims to stop companies from changing their content moderation policies over time as new challenges emerge — a critical dimension of any company’s editorial discretion.

The fourth criterion — “any other intentional action taken by the provider without an honest belief and purpose, without observing fair dealing standards, or with fraudulent intent” — seems tailor-made for a law school exam on the “void for vagueness” standard. In particular, it is considerably more expansive than the narrow standard the Supreme Court set forth in Central Hudson for regulating commercial speech: “there can be no constitutional objection to the suppression of commercial messages that do not accurately inform the public about lawful activity.” In other words, the Court allows the regulation of commercial speech only because of its effects, not its intent. Applying a subjective, rather than an objective, standard would make litigation significantly easier. Thus, this criterion would not be constitutional even if it were applied solely to commercial speech. But as we have already seen with the Fox News example, there would be no way to apply this standard “without evaluating the content … at issue,” as FTC Chairman Muris put it.

The Bill Unconstitutionally Targets Specific Websites

The bill applies to “edge providers,” defined as providers of a website, mobile application, or web application with more than $1.5 billion in global revenue and more than 30 million U.S. users — or more than 300 million global users — who have accessed the service by any means in the past year. This tailors the bill to apply to just a handful of services: Google (Alphabet), Apple, Facebook (including Instagram and WhatsApp), and Amazon (the so-called “GAFA”), as well as Twitter, eBay, Microsoft, and TikTok (because the revenue threshold is global). Reddit, Flickr, and Etsy would meet the user thresholds but not the revenue threshold. Wikipedia wouldn’t be covered because it’s a non-profit.
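For readers who want the coverage test spelled out, here is a rough sketch of the thresholds as described above (a hypothetical illustration in Python; the function name and structure are mine, not the bill’s, and the non-profit carve-out simply reflects the Wikipedia point above):

    def is_covered_edge_provider(global_revenue_usd: float,
                                 us_users_past_year: int,
                                 global_users_past_year: int,
                                 is_nonprofit: bool = False) -> bool:
        """Rough coverage test under the Hawley bill, as described above.

        A service is covered only if it clears the global revenue threshold AND
        at least one of the two user thresholds; non-profits are excluded.
        """
        if is_nonprofit:
            return False
        meets_revenue = global_revenue_usd > 1_500_000_000
        meets_users = (us_users_past_year > 30_000_000
                       or global_users_past_year > 300_000_000)
        return meets_revenue and meets_users

    # A Reddit-sized site: enormous user base, but below the revenue threshold.
    print(is_covered_edge_provider(5e8, 150_000_000, 400_000_000))     # False
    # A "GAFA"-scale platform clears both prongs.
    print(is_covered_edge_provider(100e9, 200_000_000, 2_000_000_000)) # True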

What may at first seem like a sensible way to focus the effect of the bill actually creates a host of problems. First, it’s possible that, despite posing an existential threat to “Big Tech” companies, Hawley’s bill could actually protect them from competition. By penalizing smaller market entrants for getting too big, Hawley’s bill creates an incentive for small players to get bought out — before crossing Hawley’s size threshold — by their “Big Tech” counterparts, companies better equipped to handle the legal risks Hawley’s bill would create.

The bill’s scope raises distinct constitutional problems. First, singling out a small group of websites provides further reason for applying stricter scrutiny. “Minnesota’s ink and paper tax violates the First Amendment not only because it singles out the press, but also because it targets a small group of newspapers…. And when the exemption selects such a narrowly defined group to bear the full burden of the tax, the tax begins to resemble more a penalty for a few of the largest newspapers than an attempt to favor struggling smaller enterprises.” Minneapolis Star, 460 U.S. at 591-92. Applying taxes only to large newspapers “poses a particular danger of abuse by the State.” Arkansas Writers’ Project, Inc. v. Ragland, 481 U.S. 221 (1987).

Hawley’s bill poses a “danger of abuse” by focusing on only the largest social networks — all of the ones conservatives complain about being biased against them — while excluding sites with a laissez-faire approach to content moderation, where extremist right-wing content has been allowed to flourish, such as Reddit. The relatively high revenue threshold excludes Reddit as well as other popular social media sites like Yelp (business reviews), IMDb (movie reviews), Fandom (a hosting platform), and Pinterest. The user threshold also excludes smaller social networks that have become gathering places for the Alt Right, like Gab (1.8 million monthly users) and Minds (1.25 million users total).

The bill might apply to websites for traditional media, but even this is difficult to predict. The websites of the largest newspapers and cable channels all meet the user threshold, but won’t meet the revenue threshold if separate corporate digital divisions are treated as the “edge providers” covered by the bill. In theory, it might be possible to “pierce the corporate veil” to argue that the parent companies’ revenue should be counted, but this is not what the bill says — which further suggests the bill is tailored to social media sites. In any event, including some large traditional media websites in its scope wouldn’t come anywhere near making the bill broad enough to avoid the concerns of Minneapolis Star or Arkansas Writers’ Project.

Second, the bill applies only to a particular subset of Internet media — websites, apps, and services that host user content, not services like Netflix or non-Internet media. On its own, this all but ensures that the bill would be subject to strict scrutiny — which it would surely fail. See Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622 (1994) (“Regulations that discriminate among media … often present serious First Amendment concerns.”); Minneapolis Star & Tribune Co. v. Minnesota Comm’r of Revenue, 460 U.S. 575, 583 (1983) (striking down a tax applied only to newspapers).

Arguably, a bill that applied equally to all “interactive computer service providers” would be less problematic because it would not single out a “small group” of sites for what amounts to punishment. Abandoning user count or revenue thresholds would avoid the problem of retaliatory targeting, but additional First Amendment problems would remain.

Hawley’s Bill Would Backfire Against Conservatives

It’s impossible to anticipate, ex ante, the net effect of the law upon the decision-making of each social media service — i.e., whether they will do more or less moderation, and whether conservatives would actually benefit overall. The chief purpose of Section 230 was to avoid the “Moderator’s Dilemma” created by Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995). The court held Prodigy liable as a publisher precisely because it actively moderated content to create a “family-friendly” service. If edge providers fear that removing certain content may increase their legal risks, they will moderate less. On the other hand, they may calculate that more moderation will allow them to claim a more consistent approach.

That the same law could produce diametrically opposite results is not at all unusual in First Amendment jurisprudence. This is precisely the constitutional problem with vague laws: they are both unpredictable and highly subject to manipulation by those charged with enforcement.

Empowering the government to determine political neutrality cuts both ways. Discouraging edge providers from moderating incendiary or abusive speech from the right will have the same kinds of effects on the left. Democrats will just as easily claim “bias” when speech they like is removed. Consequently, social media sites will hesitate to take down content from Antifa or radical anti-police activists for fear that a Democratic FTC or state attorney general will sue them.

More generally, if Republicans start suing edge providers for failing to deliver on the claim of neutrality required by the new Hawley bill, you could count on Democrats — when they have the chance — to start suing social media operators for not living up to other provisions in their community standards. Consider Twitter’s Community Standards:

Twitter has made an editorial decision not to remove tweets posted by President Trump that seem to violate all of these prongs (minus the one about child sexual exploitation). The First Amendment clearly protects their right to make that decision, but if the government could hold a company to such statements about its editorial practices, as Hawley claims, without violating the First Amendment, why couldn’t a Democratic FTC make the same argument about Twitter not living up to its promise to enforce its community standards? Indeed, Facebook has been heavily criticized by groups on the left for failing to do more to take down racist content that may even incite users to violence.

For better or worse, the First Amendment prevents the government from forcing Facebook, Twitter or any other social media sites to change how they favor, disfavor, or remove user content. But if Hawley’s bill were somehow to pass now, it could just as easily be used by a Biden administration to pressure social media sites to take down right-leaning content in the years it would take for the complex legal questions outlined here to work their way through the courts.

The “Problem” for Republicans Isn’t 230, but the First Amendment

In the end, Republicans’ complaints aren’t really about Section 230, but about the First Amendment. Yes, Section 230 protects websites from liability for user content — “death by ten thousand duck-bites.” Roommates, 521 F.3d at 1174. While the Hawley bill and Trump’s Executive Order both make edge providers liable for what users say, this is only a means to an end; their real focus is not on the decision made by edge providers to host potentially unlawful content, but on their decision not to host content they deem objectionable. That decision is one the First Amendment protects as fully for websites as it does for newspapers or Fox News.

Trump, Hawley and other Republicans would do well to remember what President Reagan said when he vetoed legislation to restore the Fairness Doctrine back in 1987:

We must not ignore the obvious intent of the First Amendment, which is to promote vigorous public debate and a diversity of viewpoints in the public forum as a whole, not in any particular medium, let alone in any particular journalistic outlet. History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.

Republicans should ask themselves: “WWRD—What Would Reagan Do?” The answer should, by now, be clear: “Congress shall make no law…”

Posted on Techdirt - 31 January 2020 @ 12:05pm

Lindsey Graham's Sneak Attack On Section 230 And Encryption: A Backdoor To A Backdoor?

Both Republicans and Democrats have been talking about amending Section 230, the law that made today’s Internet possible. Most politicians are foggy on the details, complaining generally about “Big Tech” being biased against them (Republicans), “not doing enough” about harmful content (Democrats, usually), or just being too powerful (populists on both sides). Some have promised legislation to amend Section 230, while others hope to revoke it entirely. And more bills will doubtless follow.

Rather than getting mired in the specifics of how tinkering with Section 230 could backfire, Sen. Lindsey Graham is circulating a draft bill called the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019” — the “EARN IT Act of 2019,” leaked by Bloomberg yesterday. Democratic Sen. Richard Blumenthal has apparently been involved in drafting it.

At first blush, the bill may seem uncontroversial: it would create a presidential commission of experts to “develop recommended best practices for providers of interactive computer services regarding the prevention of online child exploitation conduct.” Who could argue with that? Indeed, given how little lawmakers understand online content moderation, getting analysis and recommendations from real experts about Section 230 is probably the only way out of the increasingly intractable, empty debate over the law.

But what Graham’s bill would actually do is give the Attorney General a blank check to bypass Congress in cracking down on Internet services in ways that may have little to do with child sexual abuse material (CSAM). Specifically, the bill would:

  1. Amend Criminal Law & Section 230: Section 230 has never shielded operators of websites and Internet services from federal criminal prosecution for CSAM. But the Graham bill would create broad new legal risks by lowering the (actual) knowledge requirement from “knowingly” to “recklessly” (which would include an after-the-fact assessment of what the company “should have known”) and amending Section 230 to authorize both criminal prosecution and civil suits under state law. For the first time, operators could be sued by plaintiffs’ lawyers in class-action suits for “reckless” decisions in designing or operating their sites/services.

  2. Condition Section 230 Immunity: The commission’s (a) recommended “best practices” would quickly become (b) conditions for invoking Section 230 immunity against greatly expanded liability for CSAM — immunity so vital to the operation of many online services that (c) the conditions would be tantamount to legal mandates.

As drafted, Graham’s bill entails a shocking abandonment of the most basic principles of how administrative agencies make rules — based on the fiction that the “best practices” wouldn’t be effectively mandatory — by allowing the AG to bypass Congress on other controversial issues like mandatory age verification or even encryption. As I told Bloomberg: “The absolute worst-case scenario could easily become reality: DOJ could effectively ban end-to-end encryption.” Signal, Telegram, and WhatsApp could all no longer exist in their current form. All would be required to build in backdoors for law enforcement because all could be accused of “recklessly” designing their products to make it impossible for the operators or law enforcement to stop CSAM sharing. The same could happen for age verification mechanisms. It’s the worst kind of indirect regulation. And because of the crazy way it’s done, it could be hard to challenge in court.

The rhetorical premise of the “EARN IT” Act — that Section 230 was a special favor that tech companies must continually “earn” — is false. Republicans have repeatedly made this claim in arguing that only “neutral” platforms “deserve” Section 230’s protections, and Democrats likewise argue that website operators should lose Section 230’s protections if they don’t “do more” to combat disinformation or other forms of problematic speech by users.

Congress has never conditioned Section 230 in the way Graham’s bill would do. Section 230, far from being a special favor or subsidy to tech companies, was crafted because, without its protections, website operators would have been discouraged from taking active measures to moderate user content — or from hosting user-generated content altogether — a bind often referred to as the “moderator’s dilemma.”

Here’s how Graham’s monstrous, Rube-Goldberg-esque legal contraption would work in practice. To understand which services will be affected and why they’d feel compelled to do whatever DOJ commands to retain their Section 230 immunity, we’ll unpack the changes to criminal law first.

Step #1: Expanding Legal Liability

Graham’s bill would amend existing law in a variety of ways, mostly paralleling SESTA-FOSTA: while the 2018 law expanded federal sex trafficking and prostitution law (18 U.S.C. §§ 1591, 2421A), the Graham bill focuses on “child exploitation” imagery (CSAM). (Note: To help prosecutors prosecute sex trafficking without the need for any amendment to Section 230, TechFreedom supported toughening 18 U.S.C. §§ 1591 and 2421A to cover trafficking of minors when FOSTA was a stand-alone bill — but opposed marrying FOSTA with SESTA, the Senate bill, which unwisely amended Section 230.) Specifically, the Graham bill would:

  1. Create a new civil remedy under 18 U.S.C. § 2255 that extends to suits brought against an “interactive computer service” for reckless § 2252 violations;

  2. Amend Section 230(e) to exclude immunity for state criminal prosecution for crimes coextensive with § 2252; and

  3. Amend Section 230(e) to exclude immunity for civil causes of action against an “interactive computer service” pursuant to other state laws if the underlying claim constitutes a violation of § 2252 (or by operation of § 2255(a)(1)). Most notably, this would open the door to states to authorize class-action lawsuits brought by entrepreneurial trial lawyers — which may even be a greater threat than criminal prosecution since the burden of proof would be lower (even though, in principle, a civil plaintiff would have to establish that a violation of criminal law had occurred under Section 2252).

The Graham bill goes further than SESTA-FOSTA in two key respects:

  1. It would lower the mens rea (knowledge) requirement from “knowingly” to “recklessly,” making it considerably easier to prosecute or sue operators; and

  2. It would allow state criminal prosecution and civil suits for hosting child exploitation imagery that violates § 2252.

In a ploy to make their bill seem less draconian, SESTA-FOSTA’s sponsors loudly proclaimed that they preserved “core” parts of Section 230’s immunity. Graham will no doubt do the same thing. Both bills leave untouched Section 230(c)(2)(A)’s immunity for “good faith” content removal decisions. But this protection is essentially useless against prosecutions for either sex trafficking or CSAM. In either case, the relevant immunity would be Section 230(c)(1), which ensures that ICS operators are not held responsible as “publishers” for user content. The overwhelming majority of cases turn on that provision — and that is the provision that Graham’s bill conditions on compliance with the AG’s “best practices.”

Step #2: How a “Recommendation” Becomes a Condition to 230

The bill seems to provide an important procedural safeguard by requiring consensus — at least 10 of the 15 commissioners — for each recommended “best practice.” But the chairman (the FTC chairman or his proxy) could issue his own “alternative best practices” with no minimum level of support. The criteria for membership ensure that he’d be able to command at least a majority of the commission, with the FTC, DOJ and Department of Homeland Security each getting one seat, law enforcement getting two, prosecutors getting two more — that’s seven just for government actors — plus two more for those with “experience in providing victims services for victims of child exploitation” — which makes nine reliable votes for “getting tough.” The remaining six Commissioners would include two technical experts (who could turn out to be just as hawkish) plus two commissioners with “experience in child safety” at a big company and two more from small companies. So the “alternative” recommendations would almost certainly command a majority anyway.

More importantly, it doesn’t really matter what the Commissioners recommend: the Attorney General (AG) could issue a radically different set of “best practices” — without public comment. He need only explain why he modified the Commission’s recommendations.

What the AG ultimately issues would not just be recommendations. No, Graham’s bill would empower the AG to enact requirements for enjoying Section 230’s protections against a range of new civil lawsuits and from criminal prosecutions related to “child exploitation” or “child abuse” — two terms that the bill never defines.

Step #3: How Conditioning 230 Eligibility Amounts to a Mandate

Most websites and services, especially the smallest ones, but even the largest ones, simply couldn’t exist if their operators could be held civilly liable for what their users do and say — or if they could be prosecuted under an endless array of state laws. But it’s important to stress at the outset that Section 230 immunity isn’t anywhere near as “absolute” or “sweeping” as its critics claim. Despite the panic over online sex trafficking that finally led Congress, in 2018, to pass SESTA-FOSTA, Section 230 never hindered federal criminal prosecutions. In fact, the CEO of Backpage.com — the company at the center of the controversy over Section 230 — pled guilty to facilitating prostitution (and money laundering) the day after SESTA-FOSTA became law in April 2018. Prosecutors didn’t need a new law, as we stressed at the time.

Just as SESTA-FOSTA created considerable new legal liability for websites for sex trafficking, Graham’s bill does so for CSAM (discussed below) — which makes Section 230 an even more critical legal shield and, in turn, makes companies more willing to follow whatever requirements might be attached to that legal shield.

How Broad Could the Bill’s Effects Be?

Understanding the bill’s real-world effects depends on three separate questions:

  1. What counts as “child exploitation” and “child abuse?”

  2. Which companies would really need Section 230 protection against new, expanded liability for CSAM?

  3. What could be the scope of the AG’s conditions on Section 230 immunity? Must they be related to conduct covered by Section 230?

What Do We Mean by “Child Exploitation” and “Child Abuse?”

The bill’s title focuses on “child exploitation” but the bill also repeatedly refers to “child abuse” — without defining either term. The former comes from the title of 18 U.S.C. § 2252, which governs material whose “visual depiction involves the use of a minor engaging in sexually explicit conduct” (CSAM). The bill directly invokes that bedrock law, so one might assume that’s what Graham had in mind. There is a federal child abuse law, but it’s never mentioned in the bill.

This lack of clarity becomes a significant problem because, as discussed below, the bill is so broadly drafted that the AG could mandate just about anything as a condition of Section 230 immunity.

Which Websites & Services Are We Talking About?

Today, every website and Internet service operator faces some legal risk for CSAM. At greatest risk are those services that allow users to communicate with each other in private messaging or groups, or to share images or videos, because this is how CSAM is most likely to be exchanged. Those who traffic in CSAM are known to be highly creative in finding unexpected places to interact online — just as terrorist groups may use chat rooms in video games to hold staff meetings.

It’s hard to anticipate all the services that might be affected by the Graham bill, but it’s safe to bet that any messaging, photo-sharing, video-hosting or file-sharing tool would consider the bill a real threat. At greatest risk would be services that cannot see what their users do because they offer end-to-end encryption (E2EE). They risk being accused of making a “reckless” design decision if it turns out that their users share CSAM with each other.

What De Facto Requirements Are We Talking About?

Again, Graham’s bill claims a narrow scope: “The purpose of the Commission is to develop recommended best practices for providers of interactive computer services regarding the prevention of online child exploitation conduct.”

The former term (ICS) is the term Section 230 uses to refer to covered operators: a service, system or software that “provides or enables computer access by multiple users to a computer server.” You might think the Graham bill’s use of this term means the bill couldn’t be used to force Apple to change how it puts E2EE on iPhones — because the iPhone, unlike iMessage, is not an ICS. You might also think that the bill couldn’t be used to regulate things that seem unrelated to CSAM — like requiring “fairness” or “neutrality” in content moderation practices, as Sen. Hawley has proposed and Graham has mentioned repeatedly.

But the bill won’t actually stop the AG from using this bill to do either. The reason is the same in both cases: this is not how legislation normally works. In a normal bill, Congress might authorize the Federal Communications Commission to do something — say, require accessibility features for disabled users of communications services. The FCC could then issue regulations that would have to be reasonably related to that purpose and within its jurisdiction over “communications.” As we know from the 2005 American Library decision, the FCC can’t regulate after the process of “communications” has ended — and thus had no authority to require television manufacturers to build “broadcast flag” technology into their devices to ensure that, once a device received a broadcast signal, it could not make copies of the broadcast unless authorized by the copyright holder.

But that’s not how Graham’s bill would work. A company that only makes devices, or installs firmware or an operating system on them, may not feel compelled to follow the AG’s “best practices” because it does not operate an ICS and, as such, could not claim Section 230 immunity (and is highly unlikely to be sued for what its users do anyway). But Apple and Google, in addition to doing these things, also operate multiple ICSes. Nothing in the Graham bill would stop the AG from saying that Apple would lose its Section 230 immunity for iMessage, iCloud or any other ICS if it does not build in a backdoor on iPhones for law enforcement. Apple would likely comply. And even if Apple resists, smaller companies with fewer legal resources would likely cave under pressure.

In fact, the Graham bill specifically includes, among ten “matters addressed,” the “retention of evidence and attribution or user identification data relating to child exploitation or child sexual abuse, including such retention by subcontractors” — plus two other prongs relating to identifying such material. While these may appear to be limited to CSAM, the government has long argued that E2EE makes it impossible for operators either to identify or retain CSAM — and thus that law enforcement must have a backdoor and/or that operators must be able to see everything their users do (the opposite of E2EE).

Most of the “matters addressed” pertain to child exploitation (at least in theory) but one other stands out: “employing age limits and age verification systems.” Congress tried to mandate minimum age limits and age verification systems for adult materials back in the Child Online Protection Act (COPA) of 1998. Fortunately, that law was blocked in court in a protracted legal battle because adults have a right to access sensitive content without being subjected to age verification — which generally requires submitting a credit card, and thus necessarily entails identifying oneself. (The court also recognized publishers’ rights to reach privacy-sensitive users.)

Rep. Bart Stupak’s (D-MI) ‘‘Online Age Verification and Child Safety Act’’ of 2009 attempted to revive age verification mandates, but died amidst a howl of protest from civil libertarians. But, like banning E2EE, this is precisely the kind of thing the AG might try to mandate under Graham’s bill. And, critically, the government would argue that the bill does not present the same constitutional questions because it is not a mandate, but rather merely a condition of special immunity bestowed upon operators as a kind of subsidy. Courts should protect us from “unconstitutional conditions,” but given the state of the law and the difficulty of getting the right parties to sue, don’t count on it.

These “matters addressed” need not be the only things the Commission recommends. The bill merely says that “[t]he matters addressed by the recommended best practices developed and submitted by the Commission … shall include [the ten things outlined in the bill].” The Commission could “recommend” more — and the AG could create whatever conditions on Section 230 immunity he felt he could get away with, politically. His sense of shame, even more than the courts or Congress, would determine how far the law could stretch.

It wouldn’t be hard to imagine this AG (or AGs of, sadly, either party) using the bill to reshape moderation practices more generally. Republicans increasingly argue that social media are “public fora” to which people like Alex Jones or pseudo-journalistic outlets like Gateway Pundit have First Amendment rights of access. Under the same crazy pseudo-logic, the AG might argue that, the more involved the government becomes in content moderation through whatever conditions he imposes on Section 230 immunity, the more essential it is that website operators “respect the free speech rights” of users. Ultimately, the Commission would operate as a censorship board, with murky but enormous powers — and the AG would be the ultimate censor.

If this sounds like a crazy way to make law, it is! It’s free-form lawmaking — not “we tell you what you must do” (and you can raise constitutional objections in court) but rather “we’re not gonna tell you what to do, but if you don’t want to be sued or prosecuted under vague new sex trafficking laws, you’d better do what we tell you.” Once the Commission or the AG strays from “best practice” recommendations strictly related to CSAM, the floodgates are open to politically motivated back-door rulemaking that leaves platforms with no input and virtually no avenue for appeal. And even if the best practices are related to CSAM, the way the Commission makes what amounts to law would still be unprecedented, secretive, arbitrary, and difficult to challenge in court.

Other Key Aspects of How the Bill Would Work

The bill would operate as follows:

  • The bill allows 90 days for Commissioners to be appointed, 60 days for the Commission's first meeting, and 18 months for the Commission to make its first set of recommendations — 25 months in total. The leaked draft leaves blank the window in which the AG must issue his "best practices."

  • Those de facto requirements would not take legal effect until publication in the Federal Register — which usually takes a month but sometimes drags out indefinitely.

  • Operators would have 1 year to submit a written certification of their compliance.

  • If, say, the next administration drags its feet and the AG never issues “best practices,” the bill’s amendments to Section 230 and criminal law go into effect four years after enactment — creating sweeping new liability for CSAM and removing Section 230’s protections.

  • The Commission and AG will go through the whole farce again at least every two years.

The bill also grants DOJ broad subpoena power to determine whether operators are, in fact, living up to their certification of compliance with the AG’s “best practices.” Expect this power to be used aggressively to turn tech companies inside out.

Conclusion

In the end, one must ask: what problem is the Graham bill trying to solve? Section 230 has never prevented federal criminal prosecution of those who traffic in CSAM — as more than 36,000 individuals were between 2004 and 2017. Website operators themselves already face enormous legal liability for CSAM — and can be prosecuted by the Department of Justice for failing to cooperate with law enforcement, just as Backpage executives were prosecuted under federal sex trafficking laws before SESTA-FOSTA (and pleaded guilty).

The Graham bill seems to be designed for one overarching purpose: to make services that offer end-to-end encryption effectively illegal, and ensure that law enforcement (and the intelligence agencies) has a backdoor into every major communications platform.

That would be outrageous enough if it were done through a direct mandate, but doing it in the roundabout way Graham's bill proposes is effectively a backdoor to a backdoor. Unfortunately, none of that means the bill won't suddenly move quickly through Congress, just as SESTA did. Be ready: the "Cryptowars" may finally turn very, very hot.

Posted on Techdirt - 26 September 2013 @ 08:05pm

The Second Century Of The Federal Trade Commission

You may not know much about the most important agency in Washington when it comes to regulating new technologies. Founded 99 years ago today, the Federal Trade Commission has become, for better or worse, the Federal Technology Commission.

The FTC oversees nearly every company in America. It polices competition by enforcing the antitrust laws. It tries to protect consumers by punishing deception and practices it deems “unfair.” It’s the general enforcer of corporate promises. It’s the de facto regulator of the media, from traditional advertising to Internet search and social networks. It handles novel problems of privacy, data security, online child protection, and patent claims, among others. Even Net neutrality may soon wind up in the FTC’s jurisdiction if the Federal Communications Commission’s rules are struck down in court.

But how should the FTC regulate technology? What’s the right mix of the certainty businesses need and the flexibility technological progress demands?

There are essentially three models: regulatory, discretionary and evolutionary.

The epitome of the traditional regulatory model is the FTC's chief rival: the FCC. The 1996 Telecom Act runs nearly 47,000 words — 65 times longer than the Sherman Act of 1890, the primary antitrust law enforced by the FTC. The FCC writes tech-specific rules before the technology has even developed. Virginia Postrel described the mentality best in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

The less technocratic alternative is the evolutionary model: build flexible law that evolves alongside technology. Learn from, and adapt to, the ever-changing technological and business environments.

On antitrust, that’s essentially what the FTC (along with the Department of Justice) does today. Judicial decisions are firmly grounded in economics, and this feeds back into the agencies’ enforcement actions. Antitrust law has become nearly synonymous with antitrust economics: both courts and agencies weigh the perils of both under- and over-enforcement in the face of unavoidable uncertainty about the future.

But much of what the FTC does falls into the discretionary model, unmoored from both sound economics and judicial oversight. The discretionary and evolutionary models share a similar legal basis and so are often confused, but they’re profoundly different: The discretionary model harms technological progress and undermines the rule of law, while the evolutionary model promotes both.

For the most part, the FTC enforces laws on a case-by-case basis. Those laws are general, short and vague. The FTC Act is about one-nineteenth the length of the 1996 Telecom Act and hasn't really changed since 1934. Its key provision would fit in a Tweet: "Unfair methods of competition… and unfair or deceptive acts or practices… are hereby declared unlawful." The antitrust laws are similarly general, prohibiting "restraint of trade" and conduct "the effect of which is to substantially lessen competition."

The FTC has never explained what its "unfair methods of competition" authority covers that antitrust doesn't. Commissioner Joshua Wright recently proposed limiting principles, but FTC Chairwoman Edith Ramirez appears reluctant to relinquish any discretion. Contrary to popular belief, the FTC does have general rulemaking power but has simply decided it's too hard to use. Instead, the Commission writes formal rules only in narrow areas where Congress has given it streamlined rulemaking power, such as credit reporting and online child protection.

Congress imposed procedural safeguards on the FTC's general rulemaking authority in the mid-1970s after the FTC ran amok with its unfairness authority. But the agency kept trying to regulate everything from children's advertising to funeral homes. The Washington Post dubbed the FTC the "national nanny." An outraged, overwhelmingly Democratic Congress briefly shuttered the agency. The FTC survived only because, in 1980, it offered limiting principles and embraced the evolutionary model:

The present understanding of the unfairness standard is the result of an evolutionary process. The statute was deliberately framed in general terms… [and the] task of identifying unfair trade practices was therefore assigned to the Commission, subject to judicial review, in the expectation that the underlying criteria would evolve and develop over time.

But it hasn’t quite worked out that way. Consumer protection law, unlike antitrust law, has increasingly been shaped primarily by the FTC’s discretion, not evolution through judicial review or dialogue with economic scholarship. In the last decade, the FTC has begun using its unfairness authority to address cutting-edge issues like data security. The FTC has even begun pushing the legal boundaries of its authority over deception by extending it beyond traditional advertising claims to online FAQs and the like.

At the heart of the discretionary model is the FTC's ability to operate without any real constraints. The Commission hasn't developed a predictable set of legal doctrines because developing doctrine is what courts do — and the FTC has managed to strong-arm dozens of companies into settling out of court. What the FTC calls its "common law of consent decrees" is really just a series of unadjudicated assertions. That approach is just as top-down and technocratic as the FCC's regulatory model, but with little due process and none of the constraints of detailed authorizing legislation, as Commissioner Wright recently warned. That's the kind of "flexibility" prosecutors love but that businesses hate, especially those trying to innovate.

How did this happen? For much the same reason companies routinely settle questionable patent claims: settling is cheaper than litigating. Normal plaintiffs have to make a threshold legal showing before courts will compel discovery. But there’s no check on the FTC’s investigative process, which can cost a company millions — until it agrees to settle. And once it does… the FTC takes a Roman approach to deterrence. As an EPA official put it on candid camera last year: “They’d go into a little Turkish town somewhere, they’d find the first five guys they saw, and they would crucify them.” Bad PR can be far more damaging to a company than litigation.

The FTC might be right in any particular case, but overall, what evolves isn’t “law.” It’s merely a list of assertions as to what the Commission thinks companies should and shouldn’t do — crucifixes along the road to wherever the FTC wants technology to go.

We’ve asked the courts to curtail the FTC’s discretion, but modern administrative law and civil procedure make legal challenges difficult. And unfortunately current and recent FTC leadership has shown little interest in limiting the agency’s discretion. Last year, for example, the current and former chairman voted to revoke the agency’s 2003 Policy Statement on disgorgement of wrongful gains in competition cases. And despite unusually broad academic agreement on its basic principles, Commissioner Wright’s proposed Policy Statement on Unfair Methods of Competition has stalled.

So it seems only Congress can force reform. Small legislative reforms could help, like setting minimum pleading requirements or subjecting FTC settlements to judicial review. But a grand rewrite of the FTC Act is politically unlikely and probably unwise. Instead, Congress should insist, as it did in 1980, that the FTC return to an evolutionary model — this time, for real. With enough pressure, the agency itself just might evolve.

What would a more evolutionary-minded FTC look like? Most importantly, it would litigate more and cajole less. Where the courts don’t develop law, the FTC would try to fill the void. Internal administrative adjudication won’t help, since the FTC always — literally always — wins. (That’s what happens when the same entity is both prosecutor and judge.) Nor will it help to produce more of the kinds of reports the FTC has issued in recent years. Instead of asserting what companies should do, the FTC needs to offer more guidance on what it thinks its legal authority means. And the Commission can’t just ignore or revoke those limiting principles when they become inconvenient.

More economic analysis would certainly help, as it has in antitrust cases. While the Commission is free to disregard staff recommendations, it tends not to. So a more significant and better-defined role for economics, and thus the agency's Bureau of Economics, could provide some degree of internal constraint. That's a second-best to the external constraint the courts are supposed to provide. But it could at least raise the cost of undertaking enforcement actions simply because three Commissioners — or a few staff lawyers — think they're helping consumers by crucifying a particular company.

One easy place to start would be holding a comprehensive workshop on data security and then issuing guidelines. The FTC has settled nearly fifty data security cases but has provided scant guidance, even though data breaches and the identity thefts they cause are far and away the top subject of consumer complaints. The goal wouldn’t be to prescribe what, specifically, companies should do but how they should understand their evolving legal duty. For example, at what point does an industry practice become sufficiently widespread to constitute “reasonable” data security?

More ambitiously, the FTC could use its unique power to enforce voluntary commitments to kickstart new paradigms of regulation. That could include codes of conduct developed by industry or multistakeholder groups as well as novel, data-driven alternative models of self-regulation. For example, Uber, Lyft and other app-based personal transportation services could create a self-regulatory program based on actual, real-time data about safety and customer satisfaction. The FTC could enforce such a model — if Congress finally makes common carriers subject to the FTC Act. The same could work for online education, AirBnB and countless other disruptive alternatives to traditional industries and the regulators they’ve captured.

Finally, the FTC could do more of what it does best: competition advocacy — like trying to remove anticompetitive local government obstacles to broadband deployment. The FTC has earned praise for defending Uber from the regulatory barriers taxicab commissions use to protect incumbents. That's the kind of thing a Federal Technology Commission ought to do: stand up for new technology, instead of trying to make "it turn out according to plan."

Berin Szoka (@BerinSzoka) is President of TechFreedom. Geoffrey A. Manne (@GeoffManne) is Executive Director of the International Center for Law & Economics and Lecturer in Law at Lewis & Clark Law School.

Posted on Techdirt - 17 April 2013 @ 09:55am

CISPA Renders Online Privacy Agreements Meaningless, But Sponsor Sees No Reason To Fix That

CISPA's sponsors insist the law is 100% voluntary—it doesn't compel companies to do anything. But as we've been warning for a year and warned again yesterday, the bill's blanket immunity provision doesn't merely clear a "legislative thicket" of laws restricting information-sharing about cyber threats. It also bars companies from making enforceable promises to their users about how they might share users' information with the government or other companies in the name of protecting cybersecurity. Yesterday, the House Rules Committee refused to allow a floor vote on a bipartisan amendment, sponsored by Rep. Justin Amash, that would fix this problem.

At that Committee meeting (1:01:45), the bill's chief sponsor, Chairman Mike Rogers, responding to questions from Rep. Jared Polis, emphatically repeated his earlier assertions that CISPA wouldn't breach private contracts:

Polis: Why wouldn’t it work to leave it up, getting back to the contract part, and I think again there may be a series of amendments to do this, if a company feels, if it’s voluntary for companies, why not allow them the discretion to enter into agreements with their customers that would allow them to share the information? …

Rogers: I think those companies should make those choices on their own. They develop their own contracts. I think they should develop their own contracts. They should enforce their own contracts in the way they do now in civil law. I don’t know why we want to get in that business.

And yet… CISPA will go to the House floor as written, providing an absolute immunity from “any provision of law,” including private contracts and terms of service.

Only in Congress can you swear up and down that your bill doesn’t do X, then refuse to amend it so that it really doesn’t do X—and then lecture those who note the disconnect, like Polis, with patronizing comments like “once you understand the mechanics of the bill…” (1:02:50).

It brings to mind what Galileo said after he was forced to sign a confession recanting belief in Copernicus’s heretical idea that the Earth revolves around the sun: “And yet, it moves.”

And yet… for all Rogers’ bluster, CISPA moots private contracts—and House Republican leadership won’t fix the problem, even when five of their GOP colleagues offer a simple, elegant fix.

This is the same stubborn refusal to accept criticism and absorb new information that brought us SOPA, PIPA and a host of other ill-conceived attempts to regulate the Internet. It’s the very opposite of what should be the cardinal virtue of Internet policy: humility. Tinkering with the always-changing Internet is hard work. But it’s even harder when you stuff your fingers in your ears and chant “Lalalala, I can’t hear you.”

The good news is that, as with SOPA, this fight transcended partisan lines, uniting a Democrat like Jared Polis (an openly gay progressive from Boulder) with a strict constitutionalist like Justin Amash (the "Ron Paul Republican" from Grand Rapids, Michigan)—and four more traditional Republicans. This is precisely the realignment predicted 15 years ago by Virginia Postrel in The Future and Its Enemies. On one side are those profoundly uncomfortable with change, desperate to control and plan the future, and so insecure about their own understanding of technology that they inevitably perceive criticism as a personal attack. On the other are those far more humble and more willing to let the future play out in all its messy unpredictability. The first camp is always pushing for the one, right piece of legislation that will avert a crisis. The second camp admits they don't know the one, best way to deal with a problem like encouraging sharing of cyberthreat information while protecting user privacy, so they reject static rules that can only be changed by Congress. They want simple rules for a complex world. At a minimum, they want the approach law professor Richard Epstein describes in his book Simple Rules for a Complex World (the perfect slogan for this camp): "the most ubiquitous legal safety hatch adds three words to the formal statement of any rule: unless otherwise agreed."

It’s not a battle between Left and Right, or conservatives and progressives. It’s a battle between attitudes towards the future: the stasis mentality of Congressmen like Mike Rogers and Lamar Smith (of SOPA infamy) and the dynamism of Justin Amash and Jared Polis, and SOPA foes like Republicans Darrell Issa and Jason Chaffetz and Democrats Ron Wyden and Zoe Lofgren.

The dynamists may have lost this battle. But, like Galileo, we'll eventually win the war. The only questions are: How many more poorly crafted, one-size-fits-all laws will the stasists put on the books in the meantime? How long will it take to clear the real "legislative thicket": all the complex laws that attempt to provide a single answer for a complex and unknowable future? And when will it finally become unacceptable for Congressmen like Mike Rogers to ram through legislation that doesn't even do what they claim?

Berin Szoka (@BerinSzoka) is President of TechFreedom (@TechFreedom), a dynamist tech policy think tank.
