Ever since he first started making moves to purchase Twitter, Elon Musk has framed his interest as being about “rigorously adhering to” principles of free speech. As we’ve noted, you have to be ridiculously gullible to believe that’s true, given Elon’s long history of suppressing speech, but a new book about Elon’s purchase suggests that, from the very start, a major motivation for the purchase was to silence accounts he disliked.
According to an excerpt of a new book about the purchase by reporter Kurt Wagner (highlighted by the SF Chronicle), Elon had reached out to then-Twitter CEO Parag Agrawal to ask him to remove student Jack Sweeney’s ElonJet account (which publicly tracks the location of Elon’s private plane). It was only when Agrawal refused that Elon started buying up shares in the site.
The excerpt slips that point into a discussion about how Jack Dorsey arranged what turned out to be a disastrous meeting between Agrawal and Musk early in the process:
The day after, Dorsey sent Musk a private message in hopes of setting up a call with Parag Agrawal, whom Dorsey had hand-picked as his own replacement as CEO a few months earlier. “I want to make sure Parag is doing everything possible to build towards your goals until close,” Dorsey wrote to Musk. “He is really great at getting things done when tasked with specific direction.”
Dorsey drew up an agenda that included problems Twitter was working on, short-term action items and long-term priorities. He sent it to Musk for review, along with a Google Meet link. “Getting this nailed will increase velocity,” Dorsey wrote. He was clearly hoping his new pick for owner would like his old pick for CEO.
This was probably wishful thinking. Musk was already peeved with Agrawal, with whom he’d had a terse text exchange weeks earlier after Agrawal chastised Musk for some of his tweets. Musk had also unsuccessfully petitioned Agrawal to remove a Twitter account that was tracking his private plane; the billionaire started buying Twitter shares shortly after Agrawal denied his request.
In other words, for all his posturing about the need to purchase the site to support free speech, it appears that at least one major catalyzing moment was Twitter’s refusal to shut down an account Elon hated.
As we’ve pointed out again and again, historically, Twitter was pretty committed to setting rules and trying to enforce them with its moderation policies, and refusing to take down accounts unless they violated the rules. Sometimes this created somewhat ridiculous scenarios, but at least there were principles behind it. Nowadays, the principles seem to revolve entirely around Elon’s whims.
The case study of Sweeney’s ElonJet account seems to perfectly encapsulate all that. It was widely known that Elon had offered Sweeney $5k to take the account down. Sweeney had counter-offered $50k. That was in the fall of 2021. Given the timing of this latest report, it appears that Elon’s next move was to try to pressure Agrawal to take down the account. Agrawal rightly refused, because it did not violate the rules.
It was at that point he started to buy up shares, and to present himself (originally) as an activist investor. Eventually that shifted into his plan to buy the entire site outright, which he claimed was to support free speech, even though now it appears he was focused on removing ElonJet.
At one point, Elon had claimed that he would keep the ElonJet account up.
But, also, as we now know, three weeks after that tweet, he had his brand new trust & safety boss, Ella Irwin, tell the trust & safety team to filter ElonJet heavily using the company’s “Visibility Filter” (VF) tool, which many people claim is “shadowbanning.”
Less than two weeks later, he banned the account outright, claiming (ridiculously) that the account was “doxxing” him and publishing “assassination coordinates.”
At this point it should have been abundantly clear that Musk was never interested in free speech on Twitter (now ExTwitter), but it’s fascinating to learn that one of the motivating factors in buying the site originally — even as he pretended it was about free speech — was really to silence a teenager’s account.
I know that Elon claims he’s decided he might actually live up to what he promised to do in the binding contract he signed to buy Twitter, but I still wanted to discuss some of the text messages that became public last week as part of the case, showing exchanges between Musk and various famous people about his plans for Twitter.
As a side note, I saw some in the media calling it a “leak,” but there was no leak. I’m actually a bit surprised they were released as such, as materials handed over in discovery usually remain secret, and only come out when bits and pieces are used as actual evidence in the case. I originally thought these texts were released as part of Twitter’s attempt to highlight how Musk had been holding back messages he should have handed over in discovery, but, as the excellent Chancery Daily notes, it was actually team Musk that asked the court to make these public, somewhat oddly challenging their own redactions (read that thread for some speculation as to why).
And, look, let’s face it: for most of us, if all of our texts were made public, stripped of the context that communications between two people with some kind of personal relationship and history carry… I’d bet a lot of them would be kinda cringey too. So, I get that aspect of it, and think that the many, many articles talking about the “cringiest” texts from the pages and pages of texts are a bit unfair. Let he who is without cringey texts cast the first stone, etc. etc.
But, there is something more serious here, highlighted quite well by Charlie Warzel over at the Atlantic, noting that these texts go a long way towards “shattering the myth” that a lot of the leaders in Silicon Valley are particularly insightful (or “geniuses”) when, really, it seems like they lucked into some level of success and now think it’s because of their own intelligence.
The texts are juicy, but not because they are lurid, particularly offensive, or offer up some scandalous Muskian master plan—quite the opposite. What is so illuminating about the Musk messages is just how unimpressive, unimaginative, and sycophantic the powerful men in Musk’s contacts appear to be. Whoever said there are no bad ideas in brainstorming never had access to Elon Musk’s phone.
The sycophantic stuff is, perhaps, not that surprising either. I imagine that most billionaires have to put up with a lot of that kinda thing from all sorts of people eager to be in their good graces.
But, there is a real point here that highlights how chaotic the decision-making is and just how little these “geniuses” actually bother to understand things or think through what they’re saying.
First, there’s the ease with which some folks just throw money at Elon. For example, there’s famed VC Marc Andreessen saying a fund he managed was “in for $250M with no additional work required.”
(As an aside: the reason this message shows up here in the filings is because Andreessen took a screenshot of the Signal chat — with disappearing messages enabled (the red boxes) — and emailed it to Musk’s right-hand man, Jared Birchall. This email was handed over during discovery, but Twitter highlighted it because Musk had told the court that he did not use Signal to discuss the merger, and this screenshot… shows otherwise. Also, this message came out a few days earlier, and not as part of this latest dump, but still fits here…).
And then, of course, there’s Oracle founder Larry Ellison, who had no problem tossing over however many billions Elon seemed to want to support his friend’s whimsy. You can see the actual texts in the filing on page 104, but I’ll borrow another idea from The Chancery Daily, and recreate them in text messaging form, to make them feel a bit more realistic.
If you can’t see the images, here’s the exchange in text:
Elon: Any interest in participating in the Twitter deal?
Larry: Yes… of course 👍
Elon: Cool.
Elon: Roughly what dollar size? Not holding you to anything, but the deal is oversubscribed, so I have to reduce or kick out some participants.
Larry: A billion…or whatever you recommend
Elon: Whatever works for you. I’d recommend maybe $2B or more. This has very high potential and I’d rather have you than anyone else.
Larry: I agree that it has huge potential… and it would be lots of fun.
Elon: Absolutely 🙂
Also of note, in the seconds between Elon’s “Cool” and “Roughly what dollar size?” texts, he also texted Jared Birchall to tell him that Ellison is in.
I’ve gone through way, way, way more rigorous processes to get a $500 grant. Next time, I should just ask for $500 million, I guess. Of course, when I noted something like that on Twitter, a shocking number of Silicon Valley execs and VCs told me, in some form or another, that this is more or less how business gets done for the super successful and super wealthy. That is perhaps not surprising, but it is kind of infuriating for the tons of people (entrepreneurs, civil society, journalists, think tanks, etc.) who could put such money to good use, but can’t even get anything.
But, even more annoying (and much more revealing) is that all these Silicon Valley “geniuses” send Musk their ideas, and the ideas are silly, half-baked, or simplistic ones that lots of people have already thought through and explained why they won’t work. (And I’ll note that it’s often those same civil society folks, desperate for donations, who have put in the hard work on these issues, only to watch these “geniuses” toss around a bunch of foolish ideas that anyone at these underfunded organizations could instantly explain are bad.)
While lots of articles have focused on some of the sillier suggestions, I wanted to call out Mathias Dopfner’s. In the last month, we’ve had two separate stories on Dopfner, the newish billionaire CEO of Axel Springer (which owns a bunch of media orgs, including Politico and Insider), suggesting the guy has oddly simplistic views on how things work. After lying about sending a text in support of what Dopfner falsely believed were positive results of the Trump presidency, he made it clear that he thinks what the world needs is more useless he said/she said journalism. He also called for an outright ban on TikTok.
Dopfner comes across as more desperate than many of the others (and most of them do seem pretty desperate). After the initial investment became public, Dopfner texted Elon:
Why don’t you buy Twitter? We run it for you. And establish a true platform for free speech. Would be a real contribution to democracy.
I’m still trying to puzzle out who is the “we” in this sentence. Also, it’s hilarious to think that the guy who was praising Trump and pushing for nonsense journalism knows anything about being “a real contribution to democracy.” Throughout the process he reaches out to Musk again asking if he can “join that project” and saying he “was serious with my suggestion.”
Anyway, he also had so, so many ideas for Musk. He wrote them all out in a giant text message:
Status Quo: it is the de facto public town square, but is a problem that it does not adhere to free speech principles. => so the core product is pretty good, but (i) it does not serve democracy, and (ii) the current business model is a dead end as reflected by flat share price. # Goal: Make Twitter the global backbone of free speech, an open market place of ideas that truly complies with the spirit of the first amendment and shift the business model to a combination of ad-supported and paid to support quality. #Game Plan: 1.) „Solve Free Speech” 1a) Step 1: Make it censorship-FREE by radically reducing Terms of Service (now hundreds of pages) to the following: Twitter users agree to (1) Use our service to send spam or scam users, (2) Promote violence, (3) Post illegal pornography 🙃 1b) Step 2: Make Twitter censorship-RESISTANT • Ensure censorship resistance by implementing measures that warrant that Twitter can’t be censored long term, regardless of which government and management. •How? Keep pushing projects at Twitter that have been working on developing a decentralized social network protocol (e.g., BlueSky). It’s not easy, but the backend must run on decentralized infrastructure, APIs should become open (back to the roots! Twitter started and became big with open APIs). •Twitter would be one of many clients to post and consume content. •Then create a marketplace of algorithms, e.g., if you’re a snowflake and don’t want content that offends you pick another algorithm. 2.) „Solve Share Price” Current state of the business: • Twitter’s ad revenues grow steadily and for the time being, are sufficient to fund operations. • MAUs are flat, no structural growth • Share price is flat, no confidence in existing business model and/or
That’s all one giant lump o’ text. And it looks like it even goes on longer, but got cut off because even the texting app was like “dude, chill.” And, there are some interesting ideas in there. Obviously, I’m a big fan of Bluesky (which was, in part, based on my paper), and we’ve had a couple of posts on why Musk should support Bluesky. But… Bluesky is not a part of Twitter, even if its initial funding came from Twitter.
But what’s really silly is the whole terms of service stuff. Leaving aside the fact that Dopfner seems to have left out a “not” (it should be “agree not to” rather than “agree to”), his simplified terms of service are laughable to anyone who has ever worked in trust and safety, or has any experience crafting a set of terms for a website. That list is… not workable.
And that’s the part that’s frustrating about this. There are hundreds of trust and safety experts who could walk someone like Dopfner through the different trade-offs and challenges here. Hell, the same day that these texts came out, there was a whole conference of trust and safety professionals talking about creating better site policies. But, here we have a billionaire tossing off a simplistic (and confused) idea to another billionaire in a stream-of-consciousness text that shows he hasn’t put in any of the work.
And, yes, of course, he’s a billionaire, so he figures he can give big picture concepts and let the little people figure out the details. But this is an area where the details really, really matter, and it’s clear that Dopfner hasn’t done the homework (neither has Musk).
There are also messages between Jack Dorsey and Musk, and while some people have been making fun of them, they actually seem to be some of the more reasonable and level-headed texts in the batch. It’s pretty clear that Dorsey is trying to explain to Musk why Bluesky is important and should be the future of Twitter (something I very much agree with, though that’s still a long way off). Jack talks about open-source protocols. But… this is right around the same time that Musk is pushing for “open source the algorithm,” which is… not the same thing.
To be honest, during that period of time, I kept waiting to see if Musk would say anything about Bluesky or the protocol concept, and he never seemed to mention it at all, even with Jack pushing the idea, and Dopfner (in his own confused way) pushing it as well.
The other funny thing in these text messages is how random people pop up suggesting new executives who can be put in place at Twitter. There’s a person whose name is redacted who pushed for “a Blake Masters type” to lead Twitter enforcement (Blake Masters is the Peter Thiel protégé who is running for the Senate in Arizona and seems to have all the charisma of a stone toad). Then there’s the (partially disgraced) venture capitalist Steve Jurvetson suggesting (the even more disgraced) Emil Michael as a potential senior exec for the company.
Warzel spoke to another social media exec who calls out how silly all this is:
“I’m on 20 threads with people,” the former social-media executive told me. “And it’s literally like, Damn, they were just throwing shit at the wall. The ideas people were writing in, in terms of who would be CEO—it’s some real fantasy-baseball bullshit.” Despite all the self-mythologizing and talk of building, the men in these text messages appear mercurial, disorganized, and incapable of solving the kind of societal problems they think they can.
And that’s the bit that stands out to me the most about all of this. It becomes clear that almost all of these messages involve dudes who got extremely lucky in the past, and now think that they can solve the world’s problems, and they toss money and ideas around at each other as if they’re doing something important.
It was a truly striking contrast: at the same time those messages came out, I was at that same conference of trust and safety professionals, many of them not making nearly enough money for the work that they do, but who were actually working through the hard problems of how to make a social media site actually function for democracy, recognizing the many hard challenges and impossible trade-offs, and recognizing that there are no easy answers. There’s no multi-point plan that “fixes” social media, and any plan to make better social media requires more than a text message.
They’re not having billionaires throw their billions around to help them make the very real improvements that can be made. They’re struggling every day to make these websites actually work. And the lucky dudes are mostly tossing around extremely simplistic ideas whose many problems half the people at the conference last week could lay out off the top of their heads.
It just made me realize how many more people in Silicon Valley really should have imposter syndrome, but clearly do not.
In Part I, we explained why the First Amendment doesn’t get Musk to where he seemingly wants to be: If Twitter were truly, legally the “town square” (i.e., public forum) he wants it to be, it couldn’t do certain things Musk wants (cracking down on spam, authenticating users, banning things equivalent to “shouting fire in a crowded theatre,” etc.). Twitter also couldn’t do the things it clearly needs to do to continue to attract the critical mass of users that make the site worth buying, let alone attract those—eight times as many Americans—who don’t use Twitter every day.
So what, exactly, should Twitter do to become a more meaningful “de facto town square,” as Musk puts it?
What Objectives Should Guide Content Moderation?
Even existing alternative social media networks claim to offer the kind of neutrality that Musk contemplates—but have failed to deliver. In June 2020, John Matze, Parler’s founder and then its CEO, proudly declared the site to be “a community town square, an open town square, with no censorship,” adding, “if you can say it on the street of New York, you can say it on Parler.” Yet that same day, Matze also bragged of “banning trolls” from the left.
Likewise, GETTR’s CEO has bragged about tracking, catching, and deleting “left-of-center” content, with little clarity about what that might mean. Musk promises to avoid such hypocrisy.
Let’s take Musk at his word. The more interesting thing about GETTR, Parler and other alternative apps that claim to be “town squares” is just how much discretion they allow themselves to moderate content—and how much content moderation they do.
Even in mid-2020, Parler reserved the right to “remove any content and terminate your access to the Services at any time and for any reason or no reason,” adding only a vague aspiration: “although Parler endeavors to allow all free speech that is lawful and does not infringe the legal rights of others.” Today, Parler forbids any user to “harass, abuse, insult, harm, defame, slander, disparage, intimidate, or discriminate based on gender, sexual orientation, religion, ethnicity, race, age, national origin, or disability.” Despite claiming that it “defends free speech,” GETTR bans racial slurs such as those by Miller as well as white nationalist codewords.
Why do these supposed free-speech-absolutist sites remove perfectly lawful content? Would you spend more or less time on a site that turned a blind eye to racial slurs? By the same token, would you spend more or less time on Twitter if the site stopped removing content denying the Holocaust, advocating new genocides, promoting violence, showing animals being tortured, encouraging teenagers to cut or even kill themselves, and so on? Would you want to be part of such a community? Would any reputable advertiser want to be associated with it? That platforms ostensibly starting with the same goal as Musk have reserved broad discretion to make these content moderation decisions underscores the difficulty in drawing these lines and balancing competing interests.
Musk may not care about alienating advertisers, but all social media platforms moderate some lawful content because it alienates potential users. Musk implicitly acknowledges this imperative on user engagement, at least when it comes to the other half of content moderation: deciding which content to recommend to users algorithmically—an essential feature of any social media site. (Few Twitter users activate the option to view their feeds in reverse-chronological order.) When TED’s Chris Anderson asked him about a tweet many people have flagged as “obnoxious,” Musk hedged: “obviously in a case where there’s perhaps a lot of controversy, that you would not want to necessarily promote that tweet.” Why? Because, presumably, it could alienate users. What is “obvious” is that the First Amendment would not allow the government to disfavor content merely because it is “controversial” or “obnoxious.”
Today, Twitter lets you block and mute other users. Some claim user empowerment should be enough to address users’ concerns—or that user empowerment just needs to work better. A former Twitter employee tells the Washington Post that Twitter has considered an “algorithm marketplace” in which users can choose different ways to view their feeds. Such algorithms could indeed make user-controlled filtering easier and more scalable.
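To make the “algorithm marketplace” concept a bit more concrete, here is a minimal sketch of how such a thing might be wired up. To be clear, this is purely illustrative: every name and structure below is hypothetical, and nothing here reflects Twitter’s actual architecture.

```python
# Purely hypothetical sketch of an "algorithm marketplace": each
# algorithm is just a function that reorders or filters the same pool
# of tweets, so switching algorithms changes what you see, not what exists.
from typing import Callable, Dict, List

Tweet = dict  # e.g., {"author": ..., "text": ..., "likes": ..., "timestamp": ...}
FeedAlgorithm = Callable[[List[Tweet]], List[Tweet]]

def reverse_chronological(tweets: List[Tweet]) -> List[Tweet]:
    # The classic feed: newest first.
    return sorted(tweets, key=lambda t: t["timestamp"], reverse=True)

def engagement_ranked(tweets: List[Tweet]) -> List[Tweet]:
    # A simple stand-in for a recommendation algorithm: most-liked first.
    return sorted(tweets, key=lambda t: t["likes"], reverse=True)

def family_friendly(tweets: List[Tweet]) -> List[Tweet]:
    # A third-party filter could layer on top of another algorithm.
    return [t for t in engagement_ranked(tweets) if not t.get("nsfw")]

# The "marketplace": users (or third parties) pick a ranking function.
MARKETPLACE: Dict[str, FeedAlgorithm] = {
    "chronological": reverse_chronological,
    "engagement": engagement_ranked,
    "family_friendly": family_friendly,
}

def build_feed(tweets: List[Tweet], choice: str) -> List[Tweet]:
    return MARKETPLACE[choice](tweets)
```

The key property (and, as discussed next, the key limitation) is that every algorithm draws from the same underlying pool of tweets: a different choice changes presentation, not availability.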
But such controls offer only “out of sight, out of mind” comfort. That won’t be enough if a harasser hounds your employer, colleagues, family, or friends—or organizes others, or creates new accounts, to harass you. Even sophisticated filtering won’t change the reality of what content is available on Twitter.
And herein lies the critical point: advertisers don’t want their content to be associated with repugnant content even if their ads don’t appear next to that content. Likewise, most users care what kind of content a site allows even if they don’t see it. Remember, by default, everything said on Twitter is public—unlike the phone network. Few, if any, would associate the phone company with what’s said in private telephone communications. But every Tweet that isn’t posted to the rare private account can be seen by anyone. Reporters embed tweets in news stories. Broadcasters include screenshots in the evening news. If Twitter allows odious content, most Twitter users will see some of that one way or another—and they’ll hold Twitter responsible for deciding to allow it.
If you want to find such lawful but awful content, you can find it online somewhere. But is that enough? Should you be able to find it on Twitter, too? These are undoubtedly difficult questions on which many disagree; but they are unavoidable.
What, Exactly, Is the Virtual Town Square?
The idea of a virtual town square isn’t new, but what, precisely, that means has always been fuzzy, and lofty talk in a recent Supreme Court ruling greatly exacerbated that confusion.
“Through the use of chat rooms,” proclaimed the Supreme Court in Reno v. ACLU (1997), “any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Court wasn’t saying that digital media were public fora without First Amendment rights. Rather, it said the opposite: digital publishers have the same First Amendment rights as traditional publishers. Thus, the Court struck down Congress’s first attempt to regulate online “indecency” to protect children, rejecting analogies to broadcasting, which rested on government licensing of a “‘scarce’ expressive commodity.” Unlike broadcasting, the Internet empowers anyone to speak; it just doesn’t guarantee them an audience.
In Packingham v. North Carolina (2017), citing Reno’s “town crier” language, the Court waxed even more lyrical: “By prohibiting sex offenders from using [social media], North Carolina with one broad stroke bars access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge.” This rhetorical flourish launched a thousand conservative op-eds—all claiming that social media were legally public fora like town squares.
Of course, Packingham doesn’t address that question; it merely said governments can’t deny Internet access to those who have completed their sentences. Manhattan Community Access Corp. v. Halleck (2019) essentially answers the question, albeit in the slightly different context of public access cable channels: “merely hosting speech by others” doesn’t “transform private entities into” public fora.
The question facing Musk now is harder: what part, exactly, of the Internet should be treated as if it were a public forum—where anyone can say anything “within the bounds of the law”? The easiest way to understand the debate is the Open Systems Interconnection (OSI) model, which has guided the understanding of the Internet since the 1970s: ISPs provide basic connectivity at the lower layers (1-3), while applications like Twitter sit at the top, at layer 7.
Long before “net neutrality” was a policy buzzword, it described the longstanding operational state of the Internet: Internet service (broadband) providers won’t block, throttle or discriminate against lawful Internet content. The sky didn’t fall when the Republican FCC repealed net neutrality rules in 2018. Indeed, nothing really changed: You can still send or receive lawful content exactly as before. ISPs promise to deliver connectivity to all lawful content. The Federal Trade Commission enforces those promises, as do state attorneys general. And, in upholding the FCC’s 2015 net neutrality rules over then-Judge Brett Kavanaugh’s arguments that they violated the First Amendment, the D.C. Circuit noted that the rules applied only to providers that “sell retail customers the ability to go anywhere (lawful) on the Internet.” The rules simply didn’t apply to “an ISP making sufficiently clear to potential customers that it provides a filtered service involving the ISP’s exercise of ‘editorial intervention.’”
In essence, Musk is talking about applying something like net neutrality principles, developed to govern the uncurated service ISPs offer at layers 1-3, to Twitter, which operates at layer 7—but with a major difference: Twitter can monitor all content, which ISPs can’t do. This means embroiling Twitter in trying to decide what content is lawful in a far, far deeper way than any ISP has ever attempted.
Implementing Twitter’s existing plans to offer users an “algorithm marketplace” would essentially mean creating a new layer of user control on top of Twitter. But Twitter has also been working on a different idea: creating a layer below Twitter, interconnecting all the Internet’s “soapboxes” into one, giant virtual town square while still preserving Twitter as a community within that square that most people feel comfortable participating in.
“Bluesky”: Decentralization While Preserving Twitter’s Brand
Jack Dorsey, former Twitter CEO, has been talking about “decentralizing” social media for over three years—leading some reporters to conclude that Dorsey and Musk “share similar views … promoting more free speech online.” In fact, their visions for Twitter seem to be very different: unlike Musk, Dorsey saw Twitter as a community that, like any community, requires curation.
In late 2019, Dorsey announced that Twitter would fund Bluesky, an independent project intended “to develop an open and decentralized standard for social media.” Bluesky “isn’t going to happen overnight,” Dorsey warned in 2019. “It will take many years to develop a sound, scalable, and usable decentralized standard for social media.” The project’s latest update detailed the many significant challenges facing the effort, but also significant progress.
Twitter has a strong financial incentive to shake up social media: Bluesky would “allow us to access and contribute to a much larger corpus of public conversation.” That’s lofty talk for an obvious business imperative. Recall Metcalfe’s Law: a network’s value is proportional to the square of the number of its users. Twitter (330 million active users worldwide) is a fraction as large as its “Big Tech” rivals: Facebook (2.4 billion), Instagram (1 billion), YouTube (1.9 billion) and TikTok. So it’s not surprising that Twitter’s market cap is a much smaller fraction of theirs—just 1/16 that of Facebook. Adopting Bluesky should dramatically increase the value of Twitter and smaller companies like Reddit (330 million users) and LinkedIn (560 million users), because Bluesky would allow users of each participating site to interact easily with content posted on other participating sites. Each site would be more an application or a “client” than a “platform”—just as Gmail and Outlook both use the same email protocols.
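As a back-of-the-envelope illustration of that math, using only the user counts cited above (Metcalfe’s Law is a crude heuristic, not a valuation model):

```python
# Rough Metcalfe's Law comparison using the user counts cited above.
# Purely illustrative; real network value is far messier than n**2.
users = {
    "Twitter": 330_000_000,
    "Facebook": 2_400_000_000,
    "Instagram": 1_000_000_000,
    "YouTube": 1_900_000_000,
}

twitter_sq = users["Twitter"] ** 2
for name, n in users.items():
    print(f"{name}: ~{n ** 2 / twitter_sq:.0f}x Twitter's 'Metcalfe value'")

# Facebook works out to roughly 53x Twitter on this crude measure,
# even steeper than the ~16x market-cap gap noted above.
```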
Dorsey also framed Bluesky as a way to address concerns about content moderation. Days after the January 6 insurrection, Dorsey defended Trump’s suspension from Twitter yet noted concerns about content moderation.

Dorsey acknowledged the need for more “transparency in our moderation operations,” but pointed to Bluesky as a more fundamental, structural solution.
Adopting Bluesky won’t change how each company does its own content moderation, but it would make those decisions much less consequential. Twitter could moderate content on Twitter, but not on the “public conversation layer.” No central authority could control that, just as with email protocols and Bitcoin. Twitter and other participating social networks would no longer be “platforms” for speech so much as applications (or “clients”) for viewing the public conversation layer, the universal “corpus” of social content.
Four years ago, Twitter banned Alex Jones for repeatedly violating rules against harassment. The conspiracy theorist par excellence moved to Gab, an alternative social network launched in 2017 that claims 15 million monthly visitors (an unverified number). On Gab, Jones now has only a quarter as many followers as he once had on Twitter. And because the site is much smaller overall, he gets much less engagement and attention than he once did. Metcalfe’s Law means fewer people talk about him.
Bluesky won’t get Alex Jones or his posts back on Twitter or other mainstream social media sites, but it might ensure that his content is available on the public conversation layer, where users of any app that doesn’t block him can see it. Thus, Jones could use his Gab account to seamlessly reach audiences on Parler, GETTR, Truth Social, or any other site using Bluesky that doesn’t ban him. Each of these sites, in turn, would have a strong incentive to adopt Bluesky because the protocol would make them more viable competitors to mainstream social media. Bluesky would turn Metcalfe’s Law to their advantage: no longer separate, tiny town squares, these sites would be ways of experiencing the same town square—only with a different set of filters.
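Here is a tiny sketch of that idea, with entirely invented names (Bluesky’s actual design for identity, federation, and moderation labels is far more involved): each client reads the same “public conversation layer” and applies its own moderation as a filter.

```python
# Hypothetical sketch of "same town square, different filters": apps
# share one conversation layer and differ only in what they filter out.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Post:
    author: str
    text: str

# The shared "public conversation layer" that no single company controls.
PUBLIC_CONVERSATION: List[Post] = [
    Post("jack", "protocols, not platforms"),
    Post("alex_jones", "conspiracy content"),
]

def render_feed(posts: List[Post], blocked: Set[str]) -> List[Post]:
    # Moderation happens client-side; nothing is deleted from the layer.
    return [p for p in posts if p.author not in blocked]

curated_app = render_feed(PUBLIC_CONVERSATION, blocked={"alex_jones"})  # Twitter-like
anything_goes_app = render_feed(PUBLIC_CONVERSATION, blocked=set())     # Gab-like
```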
But Metcalfe’s Law cuts both ways: even if Twitter and other social media sites implemented Bluesky, so long as Twitter continues to moderate the likes of Alex Jones, the portion of the Bluesky-enabled “town square” that Jones can reach will be limited. Twitter would remain a curated community, a filter (or set of filters) for experiencing the “public conversation layer.” When first announcing Bluesky, Dorsey said the effort would be good for Twitter not only for allowing the company “to access and contribute to a much larger corpus of public conversation” but also because Twitter could “focus our efforts on building open recommendation algorithms which promote healthy conversation.” With user-generated content becoming more interchangeable across services—essentially a commodity—Twitter and other social media sites would compete on user experience.
Given this divergence in visions, it shouldn’t be surprising that Musk has never mentioned Bluesky. If he merely wanted to make Bluesky happen faster, he could pour money into the effort—an independent, open source project—without buying Twitter. He could help implement proposals to run the effort as a decentralized autonomous organization (DAO) to ensure its long-term independence from any effort to moderate content. Instead, Musk is focused on cutting back Twitter’s moderation of content—except where he wants more moderation.
What Does Political Neutrality Really Mean?
Much of the popular debate over content moderation revolves around the perception that moderation practices are biased against certain political identities, beliefs, or viewpoints. Jack Dorsey responded to such concerns in a 2018 congressional hearing, telling lawmakers: “We don’t consider political viewpoints—period. Impartiality is our guiding principle.” Dorsey was invoking the First Amendment, which bars the government from discriminating based on content, speaker, or viewpoint. Musk has said something that sounds similar, but isn’t quite the same.
The First Amendment doesn’t require neutrality as to outcomes. If user behavior varies across the political spectrum, neutral enforcement of any neutral rule will produce what might look like politically “biased” results.
Take, for example, a study routinely invoked by conservatives that purportedly shows Twitter’s political bias in the 2016 election. Richard Hanania, a political scientist at Columbia University, concluded that Twitter suspended Trump supporters more often than Clinton supporters at a ratio of 22:1. Hanania postulated that this meant Trump supporters would have to be at least four times as likely to violate neutrally applied rules to rule out Twitter’s political bias—and dismissed such a possibility as implausible. But Hanania’s study was based on a tiny sample of only reported (i.e., newsworthy) suspensions—just a small percentage of overall content moderation. And when one bothers to actually look at Hanania’s data—something none of the many conservatives who have since invoked his study seem to have done—one finds exactly those you’d expect to be several times more likely to violate neutrally-applied rules: the American Nazi Party, leading white supremacists including David Duke, Richard Spencer, Jared Taylor, Alex Jones, Charlottesville “Unite the Right” organizer James Allsup, and various Proud Boys.
Was Twitter non-neutral because it didn’t ban an equal number of “far left” and “far right” users? Or because the “right” was incensed by endless reporting in leading outlets like The Wall Street Journal of a study purporting to show that “conservatives” were being disproportionately “censored”?
There’s no way to assess Musk’s outcome-based conception of neutrality without knowing a lot more about objectionable content on the site. We don’t know how many accounts were reported, for what reasons, and what happened to those complaints. There is no clear denominator that allows for meaningful measurements—leaving only self-serving speculation about how content moderation is or is not biased. This is one problem Musk can do something about.
Greater Transparency Would Help, But…
After telling Anderson “I’m not saying that I have all the answers here,” Musk fell back on something simpler than line-drawing in content moderation: increased transparency. If Twitter makes “any changes to people’s tweets, if they’re emphasized or de-emphasized, that action should be made apparent so anyone can see that action’s been taken, so there’s no behind the scenes manipulation, either algorithmically or manually.” Such tweet-by-tweet reporting sounds appealing in principle, but it’s hard to know what it will mean in practice. What kind of transparency will users actually find useful? After all, all tweets are “emphasized or de-emphasized” to some degree; that is simply what Twitter’s recommendation algorithm does.
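For a sense of what tweet-by-tweet reporting could even look like, here is one purely hypothetical shape for such a disclosure record (every field name below is invented; Twitter publishes no such format):

```python
# Invented example of a per-tweet moderation transparency record.
# None of these fields correspond to any real Twitter API.
moderation_log_entry = {
    "tweet_id": "1234567890",
    "action": "de-emphasized",      # or "emphasized", "labeled", "removed"
    "mechanism": "algorithmic",     # or "manual"
    "reason": "classifier flagged as likely spam",
    "timestamp": "2022-04-20T12:00:00Z",
}
```

Even this trivial record surfaces the problem noted above: if the recommendation algorithm “emphasizes or de-emphasizes” every tweet as a matter of course, a faithful log would record an action on essentially every tweet, which is not obviously useful to anyone.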
Greater transparency, implemented well, could indeed increase trust in Twitter’s impartiality. But ultimately, only large-scale statistical analysis can resolve claims of systemic bias. Twitter could certainly help to facilitate such research by providing data—and perhaps funding—to bona fide researchers.
More problematic is Musk’s suggestion that Twitter’s content moderation algorithm should be “open source” so anyone could see it. There is an obvious reason why such algorithms aren’t open source: revealing precisely how a site decides what content to recommend would make it easy to manipulate the algorithm. This is especially true for those most determined to abuse the site: the spambots on whom Musk has declared war. Making Twitter’s content moderation less opaque will have to be done carefully, lest it foster the abuses that Musk recognizes as making Twitter a less valuable place for conversation.
Public Officials Shouldn’t Be Able to Block Users
Making Twitter more like a public forum is, in short, vastly more complicated than Musk suggests. But there is one easy thing Twitter could do to, quite literally, enforce the First Amendment. Courts have repeatedly found that government officials can violate the First Amendment by blocking commenters on their official accounts. After then-President Trump blocked several users from replying to his tweets, the users sued. The Second Circuit held that Trump violated the First Amendment by blocking users because Trump’s Twitter account was, with respect to what he could do, a public forum. The Supreme Court vacated the Second Circuit’s decision—Trump left office, so the case was moot—but Justice Thomas indicated that some aspects of government officials’ accounts seem like constitutionally protected spaces. Unless a user’s conduct constitutes harassment, government accounts likely can’t block them without violating the First Amendment. Whatever courts ultimately decide, Twitter could easily implement this principle.
Conclusion
Like Musk, we definitely “don’t have all the answers here.” In introducing what we know as the “marketplace of ideas” to First Amendment doctrine, Justice Holmes’s famous dissent in Abrams v. United States (1919) said this of the First Amendment: “It is an experiment, as all life is an experiment.” The same could be said of the Internet, Twitter, and content moderation.
The First Amendment may help guide Musk’s experimentation with content moderation, but it simply isn’t the precise roadmap he imagines—at least, not for making Twitter the “town square” everyone wants to go participate in actively. Bluesky offers the best of both worlds: a much more meaningful town square where anyone can say anything, but also a community that continues to thrive.
Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.
Jack Dorsey has left Twitter, which he co-founded and ran for more than a decade. Many on the American political right frequently accused Dorsey and other prominent social media CEOs of censoring conservative content. Yet Dorsey doesn’t easily fit within partisan molds. Although Twitter is often lumped together with Facebook and YouTube, its founder’s approach to free speech and interest in decentralized initiatives such as BlueSky make Dorsey one of the more interesting online speech leaders of recent years. If you want to know what the future of social media might be, keep an eye on Dorsey.
Twitter has much in common with other prominent “Big Tech” social media firms such as Facebook and Google-owned YouTube. Like these firms, Twitter is centralized, with one set of rules and policies. Twitter is nonetheless different from other social media sites in important ways. Although often discussed in the context of “Big Tech” debates, Twitter is much smaller than Facebook and YouTube. Only about a fifth of Americans use Twitter and most are not active on the platform, with 10 percent of users being responsible for 80 percent of tweets. Despite its relatively small size, Twitter is often discussed by lawmakers because of its outsized influence among cultural and political elites.
Republican lawmakers’ focus on Twitter arose out of concerns over its content moderation policies. Over the last few years it has become common for members of Congress to decry the content moderation decisions of “Big Tech” companies. Twitter is often lumped together with Facebook and YouTube in such conversations, which is a shame given Dorsey’s views on free speech.
Dorsey has been more supportive of free speech than many on the American political right might think. Did Twitter, under Dorsey’s leadership, adhere to a policy of allowing all legal speech? Of course not. Did Twitter sometimes inconsistently apply its policies? Yes.
But no social media site could allow all legal speech. The wide range of awful but lawful speech aside, spam and other intrusive legal speech would ruin the online experience. Any social media site with millions or billions of users will experience false positives and false negatives while implementing a content moderation policy.
It became clear in the last few years that Dorsey is open to new ideas that may eventually be considered mainstream. We are still in the early years of the Internet and social media, and users are used to centralized platforms such as Facebook, Twitter, and YouTube. But, increasingly, there are decentralized alternatives, and a few years ago Dorsey announced the decentralized social media project BlueSky, with the goal of eventually moving Twitter over to such a system.
Dorsey has not been shy about his passion for decentralization, citing the cryptocurrency bitcoin as a particular influence, “largely because of the model it demonstrates: a foundational internet technology that is not controlled or influenced by any single individual or entity. This is what the internet wants to be, and over time, more of it will be.”
I predict that in the coming years decentralized social media will gradually become more popular than current centralized platforms. As I wrote earlier this year:
“Americans across the political spectrum may look to decentralized social media and cryptocurrencies if their political allies continue to criticize household name firms. Those involved in protest movements as varied as Black Lives Matter and #StopTheSteal are especially likely to embrace such alternatives given their experiences with surveillance.
But Americans fed up with what they perceive to be politically-motivated content moderation and Big Tech’s irresponsible approach to harassment and misinformation may also join an exit from popular platforms and use decentralized alternatives. If they do, members of Congress upset over the spread of specific political content, COVID-19 misinformation, and election conspiracy theories will have to reach beyond Big Tech and grapple with decentralized systems where there is no CEO to subpoena or financial institution to investigate.”
Such platforms can embrace a Twitter-like aesthetic. Mastodon, a decentralized and open source social media service, looks very similar to Twitter, allowing users to send “toots.” Gab, a right wing social media network, which also mimics Twitter, became a Mastodon fork in 2019 after adopting Mastodon software. As policy fights over “Big Tech” and online speech continue, we should not be surprised if more people across the political spectrum adopt decentralized social media.
Dorsey clearly believes in a future where decentralized social media replaces centralized online speech platforms. If he is vindicated in that prediction, it is likely that Dorsey’s legacy will be bound more to his work in decentralization than to his career at Twitter.
Matthew Feeney is the director of Cato’s Project on Emerging Technologies, where he works on issues concerning the intersection of new technologies and civil liberties.
Yes, it can always get dumber. The news broke last night that Donald Trump was planning to sue the CEOs of Facebook and Twitter for his “deplatforming.” This morning we found out that they were going to be class action lawsuits on behalf of Trump and other users who were removed, and now that they’re announced we find out that he’s actually suing Facebook & Mark Zuckerberg, Twitter & Jack Dorsey, and YouTube & Sundar Pichai. I expected the lawsuits to be performative nonsense, but these are… well… these are more performative and more nonsensical than even I expected.
These lawsuits are so dumb, and so bad, that there seems to be a decent likelihood Trump himself will be on the hook for the companies’ legal bills before this is all over.
The underlying claims in all three lawsuits are the same. Count one is that these companies removing Trump and others from their platforms violates the 1st Amendment. I mean, I know we’ve heard crackpots push this theory (without any success), but this is the former President of the United States arguing that private companies violated HIS 1st Amendment rights by conspiring with the government HE LED AT THE TIME to deplatform him. I cannot stress how absolutely laughably stupid this is. The 1st Amendment, as anyone who has taken a civics class should know, restricts the government from suppressing speech. It does not prevent private companies from doing so.
The arguments here are so convoluted. To avoid the fact that he ran the government at the time, he tries to blame the Biden transition team in the Facebook and Twitter lawsuits (in the YouTube one he tries to blame the Biden White House).
Pursuant to Section 230, Defendants are encouraged and immunized by Congress to censor constitutionally protected speech on the Internet, including by and among its approximately three (3) billion Users that are citizens of the United States.

Using its authority under Section 230 together and in concert with other social media companies, the Defendants regulate the content of speech over a vast swath of the Internet.

Defendants are vulnerable to and react to coercive pressure from the federal government to regulate specific speech.

In censoring the specific speech at issue in this lawsuit and deplatforming Plaintiff, Defendants were acting in concert with federal officials, including officials at the CDC and the Biden transition team.

As such, Defendants’ censorship activities amount to state action.

Defendants’ censoring the Plaintiff’s Facebook account, as well as those Putative Class Members, violates the First Amendment to the United States Constitution because it eliminates the Plaintiffs and Class Member’s participation in a public forum and the right to communicate to others their content and point of view.

Defendants’ censoring of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes viewpoint and content-based restrictions on the Plaintiffs’ and Putative Class Members’ access to information, views, and content otherwise available to the general public.

Defendants’ censoring of the Plaintiff and Putative Class Members violates the First Amendment because it imposes a prior restraint on free speech and has a chilling effect on social media Users and non-Users alike.

Defendants’ blocking of the Individual and Class Plaintiffs from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on the Plaintiff and Putative Class Members’ ability to petition the government for redress of grievances.

Defendants’ censorship of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on their ability to speak and the public’s right to hear and respond.

Defendants’ blocking the Plaintiff and Putative Class Members from their Facebook accounts violates their First Amendment rights to free speech.

Defendants’ censoring of Plaintiff by banning Plaintiff from his Facebook account while exercising his free speech as President of the United States was an egregious violation of the First Amendment.
So, let’s just get this out of the way. I have expressed significant concerns about lawmakers and other government officials who have tried to pressure social media companies to remove content. I think they should not be doing so, and if they do so with implied threats to retaliate for the editorial choices of these companies, that is potentially a violation of the 1st Amendment. But that’s because it’s done by a government official.
It does not mean the private companies magically become state actors. It does not mean that the private companies can’t kick you off for whatever reason they want. Even if there were some sort of 1st Amendment violation here, it would be on the part of the government officials trying to intimidate the platforms into acting — and none of the examples in any of the lawsuits seem likely to reach even that level (and, again, the lawsuits are against the wrong parties anyway).
The second claim, believe it or not, is perhaps even dumber than the first. It asks for declaratory judgment that Section 230 itself is unconstitutional.
In censoring (flagging, shadow banning, etc.) Plaintiff and the Class, Defendants relied upon and acted pursuant to Section 230 of the Communications Decency Act.

Defendants would not have deplatformed Plaintiff or similarly situated Putative Class Members but for the immunity purportedly offered by Section 230.
Let’s just cut in here to point out that this point is just absolutely, 100% wrong, and that error completely destroys this entire claim. Section 230 does provide immunity from lawsuits, but that does not mean that without it no one would ever do any moderation at all. Most companies would still do content moderation, as that is still protected under the 1st Amendment itself. To claim that without 230 Trump would still be on these platforms is laughable. If anything, the opposite is the case. Without 230’s liability protections, if others sued the websites for Trump’s threats, attacks, potentially defamatory statements and so on, these companies likely would have pulled the trigger faster on removing Trump, because anything he (and others) said would represent a potential legal liability for the platforms.
Back to the LOLsuit.
Section 230(c)(2) purports to immunize social media companies from liability for action taken by them to block, restrict, or refuse to carry “objectionable” speech even if that speech is “constitutionally protected.” 47 U.S.C. § 230(c)(2).

In addition, Section 230(c)(1) also has been interpreted as furnishing an additional immunity to social media companies for action taken by them to block, restrict, or refuse to carry constitutionally protected speech.

Section 230(c)(1) and 230(c)(2) were deliberately enacted by Congress to induce, encourage, and promote social media companies to accomplish an objective—the censorship of supposedly “objectionable” but constitutionally protected speech on the Internet—that Congress could not constitutionally accomplish itself.

“Congress cannot lawfully induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.” Norwood v. Harrison, 413 U.S. 455, 465 (1973).

Section 230(c)(2) is therefore unconstitutional on its face, and Section 230(c)(1) is likewise unconstitutional insofar as it has been interpreted to immunize social media companies for action they take to censor constitutionally protected speech.
And those are the only two claims in the various lawsuits. That these private companies making an editorial decision to ban Donald Trump (in response to worries about him encouraging violence) violates the 1st Amendment (it does not), and that Section 230 is unconstitutional because it somehow involves Congress encouraging companies to remove constitutionally protected speech. This is also wrong, because all of the cases related to this argument involve laws that actually pressure companies to act in this way. Section 230 involves no such pressure (indeed, many of the complaints from some in government are that 230 is a “free pass” for companies to do nothing at all if they so choose).
There is a ton of other garbage — mostly performative throat-clearing — in the lawsuits, but none of that really matters beyond the two laughably dumb claims. I did want to call out a few really, really stupid points though. In the Twitter lawsuit, Trump’s lawyers misleadingly cite the Knight 1st Amendment Institute’s suit against Trump for blocking users on Twitter:
In Biden v. Knight, 141 S. Ct. 1220 (2021), the Supreme Court discussed the Second Circuit’s decision in Knight First Amendment Inst. at Columbia Univ. v. Trump, No. 18-1691, holding that Plaintiff’s threads on Twitter from his personal account were, in fact, official presidential statements made in a “public forum.”

Likewise, President Trump would discuss government activity on Twitter in his official capacity as President of the United States with any User who chose to follow him, except for seven (7) Plaintiffs in the Knight case, supra, and with the public at large.
So, uh, “the Supreme Court” did not discuss it. Only Justice Clarence Thomas did, and it was a weird, meandering, unbriefed set of musings that were unrelated to the case at hand. It’s a stretch to argue that “the Supreme Court” did that. Second, part of President Trump’s argument in the Knight case was that his Twitter account was not being used in his “official capacity,” but was rather his personal account that just sometimes tweeted official information. Literally. This was President Trump appealing to the Supreme Court in that case:
The government’s response is that the President is not acting in his official capacity when he blocks users….
To then turn around in another case and claim that it was official action is just galaxy brain nonsense.
Another crazy point: in all three lawsuits, Donald Trump argues that government officials threatening the removal of Section 230 in response to social media companies’ content moderation policies itself proves that the decisions by those companies make them state actors. Here’s the version from the YouTube complaint (just insert the other two companies where it says YouTube to see what it is in the others):
Below are just some examples of Democrat legislators threatening new regulations, antitrust breakup, and removal of Section 230 immunity for Defendants and other social media platforms if YouTube did not censor views and content with which these Members of Congress disagreed, including the views and content of Plaintiff and the Putative Class Members.
But, uh, Donald Trump spent much of his last year in office doing exactly the same thing. He literally demanded the removal of Section 230. He signed an executive order to try to remove Section 230 immunity from companies, then demanded Congress repeal all of Section 230 before he would fund the military. On the antitrust breakup front, Trump demanded that Bill Barr file antitrust claims against Google prior to the election as part of his campaign against “big tech.”
It’s just absolutely hilarious that he’s now claiming that members of Congress doing the very same thing he did, but to a lesser degree and with less power, magically turns these platforms into state actors.
There was a lot of speculation as to what lawyers Trump would have found to file such a lawsuit, and (surprisingly) it’s not any of the usual suspects. There is the one local lawyer in Florida (required to file such a suit there), two lawyers with AOL email addresses, and then a whole bunch of lawyers from Ivey, Barnum, & O’Mara, a (I kid you not) “personal injury and real estate” law firm in Connecticut. If these lawyers have any capacity for shame, they should be embarrassed to file something this bad. But considering that the bio for the lead lawyer on the case hypes up his many, many media appearances, and even has a gallery of photos of him appearing on TV shows, you get the feeling that perhaps these lawyers know it’s all performative and will get them more media coverage. That coverage should be mocking them for filing an obviously vexatious and frivolous lawsuit.
The lawsuit is filed in Florida, which has an anti-SLAPP law (not a great one, but not a horrible one either). It does seem possible that these companies might file anti-SLAPP claims in response, meaning that Trump could potentially be on the hook for the legal fees of all three companies. Of course, if the whole thing is a performative attempt at playing the victim, it’s not clear that would matter.
As I’m sure most people are aware, last week, the House Energy & Commerce Committee held yet another hearing on “big tech” and its content moderation practices. This one was ostensibly on “disinformation,” and had Facebook’s Mark Zuckerberg, Google’s Sundar Pichai, and Twitter’s Jack Dorsey as the panelists. It went on for five and a half hours, which appears to be the norm for these things. Last week, I wrote about both Zuckerberg’s and Pichai’s released opening remarks, in which both focused on various efforts they had made to combat disinfo. Of course, the big difference between the two was that Zuckerberg then suggested 230 should be reformed, while Pichai said it was worth defending.
If you actually want to watch all five and a half hours of this nonsense, you can do so here:
As per usual — and as was totally expected — you got a lot more of the same. You had very angry-looking Representatives practically screaming about awful stuff online. You had Democrats complaining about the platforms failing to take down info they disliked, while equally angry Republicans complained about the platforms taking down content they liked (often this was the same, or related, content). Amusingly, often just after saying that websites took down content they shouldn’t have (bias!), the very same Representatives would whine, “but how dare you not take down this other content.” It was the usual mess of “why don’t you moderate exactly the way I want you to moderate,” which is always a silly, pointless exercise. There was also a lot of “think of the children!” moral panic.
However, Jack Dorsey’s testimony was somewhat different from Zuckerberg’s and Pichai’s. While it also discussed how Twitter has dealt with disinformation, it went significantly further in describing real, fundamental changes that Twitter is exploring, changes that go way beyond the way most people think about this debate. Rather than focusing on the power that Twitter has to decide how, what, and whom to moderate, Dorsey’s testimony talked about various ways in which Twitter is seeking to give more control to end users and empower them, rather than leaving Twitter as the final arbiter. He talked about “algorithmic choice,” so that rather than having Twitter control everything, different users could opt in to different algorithmic options, and different providers could create their own. And he mentioned the Bluesky project, and potentially moving Twitter to a protocol-based system, rather than one that Twitter fully controls.
Twitter is also funding Bluesky, an independent team of open source architects, engineers, and
designers, to develop open and decentralized standards for social media. This team has already
created an initial review of the ecosystem around protocols for social media to aid this effort.
Bluesky will eventually allow Twitter and other companies to contribute to and access open
recommendation algorithms that promote healthy conversation and ultimately provide
individuals greater choice. These standards will support innovation, making it easier for startups
to address issues like abuse and hate speech at a lower cost. Since these standards will be open
and transparent, our hope is that they will contribute to greater trust on the part of the individuals
who use our service. This effort is emergent, complex, and unprecedented, and therefore it will
take time. However, we are excited by its potential and will continue to provide the necessary
exploratory resources to push this project forward.
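To make the “algorithmic choice” idea a bit more concrete, here is a minimal sketch in Python. Everything in it is hypothetical and of my own invention (it reflects nothing about Twitter’s or Bluesky’s actual code); it simply shows the shape of the idea: ranking becomes a pluggable function that the user selects, rather than a fixed behavior of the platform.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str
    likes: int
    age_hours: float

# A "ranking algorithm" is just a function from a list of posts to a
# reordered list. In a protocol-based system, third parties could
# publish their own and users could subscribe to them.
RankingAlgorithm = Callable[[list[Post]], list[Post]]

def reverse_chronological(posts: list[Post]) -> list[Post]:
    # The classic timeline: newest first, no engagement weighting.
    return sorted(posts, key=lambda p: p.age_hours)

def engagement_weighted(posts: list[Post]) -> list[Post]:
    # A toy "popular" feed: likes, discounted by age.
    return sorted(posts, key=lambda p: p.likes / (1.0 + p.age_hours), reverse=True)

# The menu of algorithms a user can opt in to. (Hypothetical names.)
ALGORITHMS: dict[str, RankingAlgorithm] = {
    "reverse_chronological": reverse_chronological,
    "engagement_weighted": engagement_weighted,
}

def build_feed(posts: list[Post], user_choice: str) -> list[Post]:
    # The service applies whichever algorithm the user selected,
    # rather than imposing a single house ranking on everyone.
    return ALGORITHMS[user_choice](posts)
```

The interesting part is the indirection: once ranking is a swappable function rather than a fixed platform behavior, “which algorithm?” becomes a per-user setting instead of a single company-wide editorial decision.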
All of this shows that Dorsey and Twitter are thinking about actual ways to deal with many of the complaints that our elected officials insist are the fault of social media — including the fact that no two politicians seem to agree on what the “proper” level of moderation is. By moving to something like protocols and algorithmic choice, you could allow different individuals, groups, organizations, and others to set their own standards and rules.
And, yes, I’m somewhat biased here, because I have suggested this approach (as have many others). That doesn’t mean I’m convinced it will absolutely work, but I do think it’s worth experimenting with.
And what I had hoped was that, if Congress were actually interested in solving the perceived problems they declared throughout the hearing, they would explore these initiatives and ask Jack to explain how they might impact questions around disinformation or harm or “censorship” or “think of the children.” Because there are lots of interesting discussions to be had over whether or not this approach will help deal with many of those issues.
But as far as I can tell not one single elected official ever asked Jack about any of this. Not one. Now, I will admit that I missed some of the hearing to take a few meetings, but I asked around and others I know who watched the entire thing through could not recall it coming up beyond Jack mentioning it a few times during the hearing.
What I did hear a lot of, however, was members of the House insisting, angrily (always angrily), that none of the CEOs presenting were willing to “offer solutions” and that’s why “Congress must and will act!”
All it did was drive home the key point that this was not a serious hearing in which Congress hoped to learn something. This was yet another grandstanding dog and pony show, in which Congressional members got to generate the clips and headlines they can post on the very same social media platforms they insist are destroying America. But when they demanded to hear “solutions” to the supposed problems they raised, and when one of the CEOs on the panel put forth some ideas on better ways to approach this… every single one of those elected officials ignored it. Entirely. Over five and a half hours, not one asked him to explain what he meant, or to explore how it might help.
This is not Congress trying to fix the “problems” of social media. This is Congress wanting to grandstand on social media while pretending to do real work.
The CEOs of Facebook, Google, and Twitter will once again testify before Congress this Thursday, this time on disinformation. Here’s what I hope they will say:
Thank you Mister Chairman and Madam Ranking Member.
While no honest CEO would ever say that he or she enjoys testifying before Congress, I recognize that hearings like this play an important role — in holding us accountable, illuminating our blind spots, and increasing public understanding of our work.
Some policymakers accuse us of asserting too much editorial control and removing too much content. Others say that we don’t remove enough incendiary content. Our platforms see millions of user-generated posts every day — on a global scale — but questions at these hearings often focus on how one of our thousands of employees handled a single individual post.

As a company we could surely do a better job of explaining — privately and publicly — our calls in controversial cases. Because it’s sometimes difficult to explain in time-limited hearing answers the reasons behind individual content decisions, we will soon launch a new public website that will explain in detail our decisions on cases in which there is considerable public interest. Today, I’ll focus my remarks on how we view content moderation generally.
Not “neutral”

In past hearings, I and my CEO counterparts have adopted an approach of highlighting our companies’ economic and social impact, answering questions deferentially, and promising to answer detailed follow-up questions in writing. While this approach maximizes comity, I’ve come to believe that it can sometimes leave a false impression of how we operate.

So today I’d like to take a new approach: leveling with you.

In particular, in the past I have told you that our service is “neutral.” My intent was to convey that we don’t pick political sides, or allow commercial influence over our editorial content.

But I’ve come to believe that characterizing our service as “neutral” was a mistake. We are not a purely neutral speech platform, and virtually no user-generated-content service is.
Our philosophy
In general, we start with a Western, small-d democratic approach of allowing a broad range of human expression and views. From there, our products reflect our subjective — but scientifically informed — judgments about what information and speech our users will find most relevant, most delightful, most topical, or of the highest quality.
We aspire for our services to be utilized by billions of people around the globe, and we don’t ever relish limiting anyone’s speech. And while we generally reflect an American free speech norm, we recognize that norm is not shared by much of the world — so we must abide by more restrictive speech laws in many countries where we operate.
Even within the United States, however, we choose to forbid certain types of speech which are legal, but which we have chosen to keep off our service: incitements to violence, hate speech, Holocaust denial, and adult pornography, just to name a few.
We make these decisions based not on the law, but on what kind of service we want to be for our users.
While some people claim to want “neutral” online speech platforms, we have seen that services with little or no content moderation whatsoever — such as Gab and Parler — become dominated by trolling, obscenities, and conspiracy theories. Most consumers reject this chaotic, noisy mess.

In contrast, we believe that millions of people use our service because they value our approach of airing a variety of views, but avoiding an “anything goes” cesspool.

We realize that some people won’t like our rules, and will go elsewhere. I’m glad that consumers have choices like Gab and Parler, and that the open Internet makes them possible. But we want our service to be something different: a pleasant experience for the widest possible audience.
Complicated info landscape means tough calls
When we first started our service decades ago, content moderation was a much less fractious topic. Today, we face a more complicated speech and information landscape including foreign propaganda, bots, disinformation, misinformation, conspiracy theories, deepfakes, distrust of institutions, and a fractured media landscape. It challenges all of us who are in the information business.
All user-generated content services are grappling with new challenges to our default of allowing most speech. For example, we have recently chosen to take a more aggressive posture toward election- and vaccine-related disinformation because those of us who run our company ultimately don’t feel comfortable with our platform being an instrument to undermine democracy or public health.

As much as we aim to create consistent rules and policies, many of the most difficult content questions we face are ones we’ve never seen before, or involve elected officials — so the questions often end up on my desk as CEO.

Despite the popularity of our services, I recognize that I’m not a democratically elected policymaker. I’m the leader of a private enterprise. None of us company leaders takes pleasure in making speech decisions that inevitably upset some portion of our user base — or world leaders. We may make the wrong call.

But our desire to make our platform a positive experience for millions of people sometimes demands that we make difficult decisions to limit or block certain types of controversial (but legal) content. The First Amendment prevents the government from making those extra-legal speech decisions for us. So it’s appropriate that I make these tough calls, because each decision reflects and shapes what kind of service we want to be for our users.
Long-term experience over short-term traffic
Some of our critics assert that we are driven solely by “engagement metrics” or “monetizing outrage” like heated political speech.

While we use our editorial judgment to deliver what we hope are joyful experiences to our users, it would be foolish for us to be ruled by weekly engagement metrics. If platforms like ours prioritized quick-hit, sugar-high content that polarizes our users, it might drive short-term usage but it would destroy people’s long-term trust and desire to return to our service. People would give up on our service if it’s not making them happy.
We believe that most consumers want user-generated-content services like ours to maintain some degree of editorial control. But we also believe that as you move further down the Internet “stack” — from applications like ours toward app stores, then cloud hosting, then DNS providers, and finally ISPs — most people support a norm of progressively less content moderation at each layer.
In other words, our users may not want to see controversial speech on our service — but they don’t necessarily support disappearing it from the Internet altogether.

I fully understand that not everyone will agree with our content policies, and that some people feel disrespected by our decisions. I empathize with those that feel overlooked or discriminated against, and I am glad that the open Internet allows people to seek out alternatives to our service. But that doesn’t mean that the US government can or should deny our company’s freedom to moderate our own services.
First Amendment and CDA 230
Some have suggested that social media sites are the “new public square” and that services should be forbidden by the government to block anyone’s speech. But such a rule would violate our company’s own First Amendment rights of editorial judgment within our services. Our legal freedom to prioritize certain content is no different than that of the New York Times or Breitbart.

Others allege that Section 230’s liability protections are conditioned on our service following a false standard of political “neutrality.” But Section 230 doesn’t require this, and in fact it incentivizes platforms like ours to moderate inappropriate content.
Section 230 is primarily a legal routing mechanism for defamation claims — making the speaker responsible, not the platform. Holding speakers directly accountable for their own defamatory speech ultimately helps encourage their own personal responsibility for a healthier Internet.
For example, if car rental companies always paid for their renters’ red light tickets instead of making the renter pay, all renters would keep running red lights. Direct consequences improve behavior.
If Section 230 were revoked, our defamation liability exposure would likely require us to be much more conservative about who we allowed to post, and what types of content we allowed, on our services. This would likely inhibit a much broader range of potentially “controversial” speech, but more importantly it would impose disproportionate legal and compliance burdens on much smaller platforms.
Operating responsibly — and humbly
We’re aware of the privileged position our service occupies. We aim to use our influence for good, and to act responsibly in the best interests of society and our users. But we screw up sometimes, we have blind spots, and our services, like all tools, get misused by a very small slice of our users. Our service is run by human beings, and we ask for grace as we remedy our mistakes.

We value the public’s feedback on our content policies, especially from those whose life experiences differ from those of our employees. We listen. Some people call this “working the refs,” but if done respectfully I think it can be healthy, constructive, and enlightening.
By the same token, we have a responsibility to our millions of users to make our service the kind of positive experience they want to return to again and again. That means utilizing our own constitutional freedom to make editorial judgments. I respect that some will disagree with our judgments, just as I hope you will respect our goal of creating a service that millions of people enjoy.
Thank you for the opportunity to appear here today.
Adam Kovacevich is a former public policy executive for Google and Lime, former Democratic congressional and campaign aide, and a longtime tech policy strategist based in Washington, DC.
Last Friday, Twitter made the decision to permanently ban Donald Trump from its platform, which I wrote about at the time, explaining that it’s not an easy decision, but neither is it an unreasonable one. On Wednesday, Jack Dorsey put out an interesting Twitter thread in which he discusses some of the difficulty in making such a decision. This is good to see. So much of the content moderation debate is told in black and white terms, in which many people act as if one answer is “obvious” and any other is crazy. And part of the reason for that is that many of these decisions are made behind closed doors, and no one outside gets to see the debates, or how much the people within the company explore the trade-offs and nuances inherent in one of these decisions.
Jack doesn’t go into that much detail, but enough to explain that the company felt that, given the wider context of everything that happened last week, it absolutely made sense to put in place the ban now, even as the company’s general stance and philosophy has always pushed back on such an approach. In short, context matters:
I do not celebrate or feel pride in our having to ban @realDonaldTrump from Twitter, or how we got here. After a clear warning we’d take this action, we made a decision with the best information we had based on threats to physical safety both on and off Twitter. Was this correct?

I believe this was the right decision for Twitter. We faced an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety. Offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all.
That said, having to ban an account has real and significant ramifications. While there are clear and obvious exceptions, I feel a ban is a failure of ours ultimately to promote healthy conversation. And a time for us to reflect on our operations and the environment around us. Having to take these actions fragment the public conversation. They divide us. They limit the potential for clarification, redemption, and learning. And sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.
The check and accountability on this power has always been the fact that a service like Twitter is one small part of the larger public conversation happening across the internet. If folks do not agree with our rules and enforcement, they can simply go to another internet service. This concept was challenged last week when a number of foundational internet tool providers also decided not to host what they found dangerous. I do not believe this was coordinated. More likely: companies came to their own conclusions or were emboldened by the actions of others. This moment in time might call for this dynamic, but over the long term it will be destructive to the noble purpose and ideals of the open internet. A company making a business decision to moderate itself is different from a government removing access, yet can feel much the same.
Yes, we all need to look critically at inconsistencies of our policy and enforcement. Yes, we need to look at how our service might incentivize distraction and harm. Yes, we need more transparency in our moderation operations. All this can’t erode a free and open global internet.
I fear that many will miss the important nuances that Jack is explaining here, but there are a few overlapping important points. The context and the situation dictated that this was the right move for Twitter — and I think there’s clear support for that argument. However, it does raise some questions about how the open internet itself functions. If anything, this tweet thread reminds me of when Cloudflare removed the Daily Stormer from its service, and the company’s CEO, Matthew Prince, highlighted that, while the move was justified for a wide variety of reasons, he felt uncomfortable that he had that kind of power.
At the time, Prince called for a wider discussion on these kinds of issues — and unfortunately those discussions didn’t really happen. And so, we’re back in a spot where we need to have them again.
The second part of Jack’s thread highlights how Twitter is actually working to remove that power from its own hands. As he announced at the end of 2019, he is exploring a protocol-based approach that would make the Twitter system an open protocol standard, with Twitter itself just one implementation. This was based, in part, on my paper on this topic. Here’s what Jack is saying now:
The reason I have so much passion for #Bitcoin is largely because of the model it demonstrates: a foundational internet technology that is not controlled or influenced by any single individual or entity. This is what the internet wants to be, and over time, more of it will be. We are trying to do our part by funding an initiative around an open decentralized standard for social media. Our goal is to be a client of that standard for the public conversation layer of the internet. We call it @bluesky.
This will take time to build. We are in the process of interviewing and hiring folks, looking at both starting a standard from scratch or contributing to something that already exists. No matter the ultimate direction, we will do this work completely through public transparency. I believe the internet and global public conversation is our best and most relevant method of achieving this. I also recognize it does not feel that way today. Everything we learn in this moment will better our effort, and push us to be what we are: one humanity working together.
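To illustrate what Jack means by Twitter becoming “a client of that standard,” here is a minimal, entirely hypothetical sketch in Python. None of it reflects the actual Bluesky work (which, as noted below, was still at the proposal stage); it just shows the architectural shift: the protocol defines the shared data format and behaviors, and any number of independent services, Twitter included, can implement it.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# The open standard: a shared post format plus a minimal interface that
# any service can implement. The protocol, not any one company, owns this.
@dataclass
class StandardPost:
    author_id: str
    text: str
    timestamp: float

class SocialProtocolClient(ABC):
    @abstractmethod
    def publish(self, post: StandardPost) -> None: ...

    @abstractmethod
    def fetch_timeline(self, author_id: str) -> list[StandardPost]: ...

# One implementation among many. A "Twitter" built on the standard is
# just another client; users could switch implementations without losing
# access to the shared conversation layer.
class ExampleService(SocialProtocolClient):
    def __init__(self) -> None:
        self._posts: list[StandardPost] = []

    def publish(self, post: StandardPost) -> None:
        self._posts.append(post)

    def fetch_timeline(self, author_id: str) -> list[StandardPost]:
        return [p for p in self._posts if p.author_id == author_id]
```

The moderation implication follows directly: if the public conversation lives at the protocol layer, one client banning an account no longer removes that account from the entire conversation, which is precisely the kind of unilateral power Jack says he is uncomfortable holding.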
There had been some concern recently that, since nothing was said about the Bluesky project in 2020, Twitter had abandoned it. That is not at all true. There have been discussions (disclaimer: I’ve been involved in some of those discussions) about how best to approach it and who would work on it. In the fall, a variety of different proposals were submitted for Twitter to review and choose a direction to head in. I’ve seen the proposals — and a few have been mentioned publicly. I’ve been waiting for Twitter to release all of the proposals publicly to talk about them, which I hope will happen soon.
Still, it’s interesting to see how the latest debates may lead to finally having this larger discussion about how the internet works, and how it should be managed. While I’m sure Jack will be getting some criticism (because that’s the nature of the internet), I appreciate that his approach to this, like Matthew’s at Cloudflare, is to recognize his own discomfort with his own power, and to explore better ways of going about things. I wish I could say the same for all internet CEOs.
The timing on this is quite incredible. On Monday, Georgia’s (Republican) Secretary of State, Brad Raffensperger, spoke out, saying that Senator Lindsey Graham had called him and implied that Raffensperger should look to throw out ballots that were legally cast in the state. On Tuesday morning, in trying to defend his efforts to undermine the election, Graham tried to shake off his calls with Raffensperger as no big deal, saying that he also spoke to Arizona and Nevada election officials. This does not make things better. Indeed, it actually seems to make things worse (and that’s even after Arizona’s Secretary of State, Katie Hobbs, claimed that Graham’s claims were “false” and that she never spoke to him).
All of this certainly seems like cause for concern about election interference and tampering. Indeed, it’s the kind of thing a good government would at least investigate. And, in a stroke of good timing, the Senate Judiciary Committee was all set up on Wednesday to host a hearing about the 2020 Election and “suppression.” Except… this hearing was organized and chaired by the very same Senator Lindsey Graham, and was yet another dog and pony show of internet CEOs having to defend specific content moderation choices.
Now a sane person who loosely follows the news might be saying “wait, didn’t we just do that last month?” And you’d be right. Just a few weeks ago, there was an almost identical hearing. Both hearings had Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey (the earlier hearing also had Google’s Sundar Pichai). Both hearings featured a bunch of grandstanding and often clueless Senators demanding to know specific answers to why the websites did or did not moderate specific pieces of content.
But this time it was the Senate Judiciary Committee, as compared to the Senate Commerce Committee last time. There were a few overlapping guests — including Senators Ted Cruz, Mike Lee, and Marsha Blackburn. This one also included Senator Josh Hawley, who grandstands with the best of them over this issue. Cruz and Lee basically did a warmed-over, half-baked rehash of their performances from a few weeks ago. Hawley’s performance was particularly stupid. He claimed to have heard from a “whistleblower” inside Facebook and posted two grainy screenshots of internal Facebook tools. One was its “Tasks” tool, which is a general company-wide task manager, which Hawley used to imply that Facebook, Twitter, and Google are somehow colluding to figure out which users, hashtags, and content they’re going to suppress.
This is not how any of this works. Hawley demanded that Zuckerberg turn over every mention of Google or Twitter in their Tasks tool, and Zuck quite reasonably pointed out that he couldn’t commit to that without knowing what sort of sensitive information might be involved. This is basically the equivalent of Hawley asking for every email that mentions Twitter or Google. It’s an insane and intrusive request, though he threatened to subpoena the company if Zuckerberg wouldn’t comply. Hawley then demanded to know if any Facebook employees ever communicate with Twitter or Google.
Zuckerberg, again, quite reasonably, pointed out that he’s sure that people who work in trust and safety at some point or another know of people in similar roles at other companies, and he’s sure at some point or another some of them communicate with each other, but that’s quite different from plotting over what content to block, as Hawley kept insisting. Hawley then trotted out another screenshot of some other internal tool, one that Zuckerberg said he didn’t recognize and thus couldn’t answer any questions about — which Hawley again treated as damning evasiveness from the CEO. What it actually suggested is that this is not a very important tool, and that Hawley was clearly overstating what it’s used for.
Oh, and Hawley, ridiculously, insisted on calling the trust and safety teams at these companies “censorship teams,” and implying that they deliberately try to silence ideological content (they do not). Of course, what’s truly crazy is that many of the half-dozen or so different Section 230 reform bills that Hawley has introduced in the Senate would actually require more content takedowns than we have today. But you can’t be a demagoguing populist without demagoguing while the cameras are on, and Hawley played his part.
If you’d like to read my play-by-play response to the entire hearing as it happened, I have a very long Twitter thread:
Here I am, awake in the early morning to watch @LindseyGrahamSC hold a hearing on "suppression and the 2020 election" a day after it was revealed that *he* was demanding Georgia's Secretary of State throw out *legal* votes. The hearing's not about Graham, though, but "big tech." pic.twitter.com/OgLfUvXVrO
Or if you’re truly a glutton for punishment, you can watch the entire 4 hours and 43 minutes of the hearing but I do not recommend that for your own sanity:
Like other hearings involving the internet, this one was big on rhetoric and senatorial ignorance, with no clear urgent need for such a hearing right now. For the most part, you had Republican senators mad about choices to moderate certain content (or to make decisions too quickly that later turned out to be mistaken), while Democratic senators were mad about choices not to moderate other content (or to make decisions too slowly). In other words, they see this debate as a sort of tug o’ war, with the companies as the rope, and their main hope is to influence content moderation to work the way they want it to, as if they could wave a magic wand and allow only the content that they and their supporters want.
We should see this entire thing as an affront to the 1st Amendment. Demanding changes (in either direction) to the content moderation practices of private websites is a massive 1st Amendment issue. Imagine if Democratic senators called in Fox News execs to complain about their story choices, or if Republicans did the same with the NY Times. 1st Amendment and free press advocates would reasonably be up in arms over this gross abuse of power over something the Constitution deliberately and clearly says Congress has no authority over.
So why do we let them do this to social media companies?
Much of the hearing was little more than moral panic claim after moral panic claim, highlighting the nature and problems of society itself — and then pinning the blame on social media. It’s the same moral panic we’ve seen play out over and over again for centuries. Some new medium comes about, and people use it. Some people use it for things that upset other people, and rather than look at the underlying causes, it’s easier to blame the messenger.
It’s as disappointing as it is predictable.
And chances are it’s only going to continue. Towards the end of the hearing, Senator Thom Tillis (and Senator Chris Coons) suggested that both Zuckerberg and Dorsey should commit to returning again next month. And Senator Richard Blumenthal, in closing out the hearing, even said with a laugh that he fully expected there to be many, many more hearings with these execs.
It’s all for show. The senators want to be seen to be doing something, and picking on these platforms is a welcome distraction from actual problems in society — including a president who refuses to concede an election he lost, the quarter of a million (and climbing) people dead from a botched COVID response (and the lack of any real effort to deal with COVID as it sweeps across the country again), and so many other things. Rather than confronting the actual problems facing society, Senator Lindsey Graham and his colleagues have decided that it’s best to play “look! squirrel!” and insist that the biggest problem of today is that Twitter and Facebook want to fact-check the president when he spews nonsense and dangerous conspiracy theories.
While much of yesterday’s Senate Commerce Committee hearing was focused on the pointless grievances and grandstanding of sitting Senators, there was a bit of actual news made by Mark Zuckerberg and Jack Dorsey. As we discussed earlier this week, Zuckerberg agreed for the first time that he was in support of Section 230 reform, though he declined in his opening remarks to specify the nature of the reforms he supported. And while the original draft of Jack Dorsey’s opening testimony suggested full support of 230, in the given remarks he also suggested that Twitter would support changes to Section 230 focused on getting companies to be more transparent. Later in the hearing, during one of the extraordinarily rare moments when a Senator actually asked the CEOs how they would change 230, Zuckerberg also focused on transparency reports, before immediately noting that Facebook already issued transparency reports.
In other words, it appears that the “compromise” the internet companies are looking to throw to a greedy Congress regarding Section 230 reform is “transparency.” I’ve heard from a variety of policymakers over the last few months who also seem focused on this transparency issue as a “narrow” way to reform 230 without mucking up everything else, so it seems like mandating content moderation transparency may become “a thing.”
Mandating transparency, however, would be a dangerous move that would stifle both innovation and competition.
Cathy Gellis has covered this in detail in the past, and I addressed it in my comments to the FCC about Section 230. But it seems like we should be a little clearer:
Transparency is important. Mandated transparency is dangerous.
We’ve been celebrating lots of internet companies and their transparency reports going back to Google’s decision nearly a decade ago to start releasing such reports. Over time, every large internet company (and many medium ones) has joined the bandwagon. Indeed, after significant public pressure, even the notoriously secretive giant telcos started issuing transparency reports as well (though they often did so in a secretive manner that actually hid important details).
So, at the very least, it certainly looks like public pressure, good business practices, and pressure from peers in the industry have already pushed the companies into releasing such reports. On top of that, many of the internet companies seem to try to outdo each other in being more transparent than their peers on these reports — which again is a good thing. The transparency reports are coming and we should celebrate that.
At the very least, though, this suggests that Congress doesn’t need to mandate this, as it’s already happening.
But, you might say, then why should we worry about mandates for transparency reports? Many, many reasons. First off, while transparency reports are valuable, in some cases, we’ve seen governments and government officials using them as tools to celebrate censorship. Governments are not using them to better understand the challenges of content moderation, but rather as tools to see where more censorship should be targeted. That’s a problem.
Furthermore, creating a “baseline” for transparency reports creates two very large issues that could damage competition and innovation. First, it creates a clear compliance cost, which can be quite burdensome for new and smaller websites. Facebook, Google and Twitter can devote people to creating transparency reports. Smaller sites cannot. And while you could, in theory, craft a mandate that has some size thresholds, historically that leads to gaming and other tricks.
Perhaps more importantly, though, a mandate with baseline transparency thresholds locks in certain “rules” for content moderation and creates real harm to innovative and different ideas. While most people seem to think of content moderation along the lines of how Facebook, YouTube, and Twitter handle it — with large (often outsourced) content moderation teams and giant sets of policies — there are many, many other models out there as well. Reddit is a decently large company. Yet it handles content moderation by pushing it out to volunteer moderators who run each subreddit and get to make their own content moderation rules. Would each subreddit have to release its own report? Would Reddit itself have to track how each individual subreddit is moderated and include all of that in its report?
Or how about Wikipedia? That’s one of the largest sites on the internet, and all of its content moderation practices are already incredibly transparent, since every single edit shows in each page’s history — often including a note about the reasoning. And, again, rather than being done by staff, every Wikipedia edit is done by volunteers. But should Wikipedia have to file a “standardized” report as well about how and why each of those moderation decisions were made?
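To make the mismatch concrete, here is a purely hypothetical sketch of what a “standardized” report record might look like. I am inventing this schema for illustration (no actual bill specifies one); the comments note where the Reddit and Wikipedia models simply don’t map onto it.

```python
from dataclasses import dataclass
from typing import Optional

# A hypothetical "standardized" transparency-report record. It quietly
# bakes in the centralized Facebook/YouTube/Twitter model: one company-wide
# policy list, one enforcement action, one employee or system deciding.
@dataclass
class ModerationRecord:
    policy_violated: str           # assumes a single, central policy vocabulary
    action_taken: str              # e.g. "removed", "labeled", "downranked"
    decided_by: str                # assumes a company employee or company system
    appeal_outcome: Optional[str]  # assumes a formal, company-run appeals process

# Reddit breaks every field: each subreddit writes its own rules (there is
# no shared "policy_violated" vocabulary), and decisions are made by
# volunteer moderators, not the company. Compliance would mean centrally
# surveilling and reclassifying thousands of communities' independent calls.

# Wikipedia makes the record redundant: every edit and revert is already
# public in each page's history, with reasoning, made by volunteers. A
# mandated schema would add compliance cost without adding transparency.
```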
And those are just two examples of large sites with different models. The more you look, the more alternative moderation models you can find — and many of them would not fit neatly into any “standards” for a transparency report. Instead, what you’d get is a hamfisted setup that more or less forces all different sites into a single (Facebook/YouTube/Twitter) style of content moderation and transparency. And that’s very bad for innovation in the space.
Indeed, as someone who is quite hopeful for a future where the content moderation layer is entirely separated from the corporate layer of various social media sites, I worry that mandated transparency rules would make that much, much more difficult to implement. Many of the proposals I’ve seen to build more distributed/decentralized protocol-based solutions for social media would not (and often could not) be fit into a “standardized” model of content moderation.
And thus, creating rules that mandate such transparency reporting for companies based on the manner in which those three large companies currently release transparency reports would only serve to push others into that same model, creating significant compliance costs for those smaller entities, while greatly limiting their ability to experiment with new and different styles of moderation.