However, Cat Zakrzewski, over at the Washington Post, has highlighted yet another reason why this particular "investigation" into disinformation online is so disingenuous: a bunch of the Republicans on the panel exploring how these sites deal with mis- and disinformation are guilty of spreading disinformation themselves online.
A Washington Post analysis found that seven Republican members of the House Energy and Commerce Committee who are scheduled to grill the chief executives of Facebook, Google and Twitter about election misinformation on Thursday sent tweets that advanced baseless narratives of election fraud, or otherwise supported former president Donald Trump’s efforts to challenge the results of the presidential election. They were among 15 of the 26 Republican members of the committee who voted to overturn President Biden’s election victory.
Three Republican members of the committee, Reps. Markwayne Mullin (Okla.), Billy Long (Mo.) and Earl L. “Buddy” Carter (Ga.), tweeted or retweeted posts with the phrase “Stop the Steal” in the chaotic aftermath of the 2020 presidential election. Stop the Steal was an online movement that researchers studying disinformation say led to the violence that overtook the U.S. Capitol on Jan. 6.
Cool cool.
Actually, this highlights one of the many reasons why we should be concerned about all of these efforts to force these companies into a particular path for dealing with disinformation online. Because once we head down the regulatory route, we're going to reach a point at which the government is, in some form, determining what is okay and what is not okay online. And do we really want elected officials, who themselves were spreading disinformation and even voted to overturn the results of the last presidential election, to be determining what is acceptable and what is not for social media companies to host?
As the article itself notes, rather than have a serious conversation about disinformation online and what to do about it, this is just going to be yet another culture war. Republicans are going to push demands to have these websites stop removing their own efforts at disinformation, and Democrats are going to push the websites to be more aggressive in removing information (often without concern for the consequences of such demands -- which often lead to the over-suppression of speech).
One thing I think we can be sure of is that Rep. Frank Pallone, who is heading the committee for today's hearing, is being laughably naïve if he actually believes this:
Rep. Frank Pallone Jr. (N.J.), the Democrat who chairs the committee, said any member of Congress using social media to spread falsehoods about election fraud was “wrong,” but he remained optimistic that he could find bipartisan momentum with Republicans who don’t agree with that rhetoric.
“There’s many that came out and said after Jan. 6 that they regretted what happened and they don’t want to be part of it at all,” Pallone said in an interview. “You have to hope that there’s enough members on both sides of the aisle that see the need for some kind of legislative reform here because they don’t want social media to allow extremism and disinformation to spread in the real world and encourage that.”
Uh huh. The problem is that those who spread disinformation online don't think of it as disinformation. And they see any attempt to cut back on their ability to spread it as (wrongly) "censorship." Just the fact that the two sides can't even agree on what is, and what is not, disinformation should give pause to anyone seeking "some kind of legislative reform" here. While the Democrats may be in power now, that may not last very long, and they should recognize that if it's the Republicans who get to define what is and what is not "disinformation," it may look very, very different from what the Democrats think.
Now, as background for this, many people reading this likely know that I spent over two years engaged in a legal fight with Shiva after he sued us over a series of articles we had written highlighting how his claim to have invented email is not supported by the evidence. The case was eventually settled with no money changing hands and with all of our stories remaining up. And we have since presented even more evidence that Shiva Ayyadurai did not invent email. You might think that this would make me immediately disagree with him in any legal fight, but as I did in writing my original pieces about him and as I do now, I'm looking at the actual details, not whether or not I like or agree with any particular individual.
Over the last few years, Shiva has really embraced a Trumpian position in trying to build up a political base. He's been very supportive of the President, and in recent months has been an outspoken critic of both vaccines and Dr. Anthony Fauci. He's built up quite a large social media following and regularly espouses ideas that I consider to be silly, misleading, or unsupported by any evidence -- which seems somewhat par for the course, given his historical assertions. He's run for the Senate in Massachusetts twice now. In 2018 he first sought the Republican nomination to run against Elizabeth Warren, and then later switched to running as an Independent. After losing that race, he almost immediately declared that he would run again in 2020. He ran in the Republican primary, which he lost to Kevin O'Connor, 158,590 votes to 104,782.
Perhaps not surprisingly, he was not happy about this result, and started making a bunch of wild, unsupported allegations of election fraud:
He then spent weeks trying to drum up a write-in campaign, while repeatedly using social media to allege election fraud. In an effort to show this, he filed some public records requests with Massachusetts, including (among other things) asking for scanned images of every ballot. In response, he was told that there were no responsive records because the machines that scan the ballots do not make images. In fact, the certification process flat out prohibits the machines from capturing ballot images. Furthermore, many of the machines don't even have the ability to capture images, even if Massachusetts law allowed it.
This was the email that was sent to him in response to the request:
Good Morning‐
I am writing to acknowledge receipt of your request for records. Please note, that this Office does not maintain voter tabulation software, firmware or hardware. While this office certifies voting equipment, as required by law, we do not purchase or lease equipment. Once equipment is approved by this Office, cities and towns can purchase or lease such equipment. Accordingly, this Office has no records responsive to your request.
Further, to the extent you request the same information from local election officials, please note that the approval of digital scan equipment in Massachusetts specifically prohibits the capturing of ballot images.
Shiva responded by asking what Massachusetts law prohibited digital scanners from taking images of ballots, to which Tassinari responded with an attachment showing the certification documents of various ballot scanning machines used in Massachusetts, and also stating directly:
Please note that while the ballot images are not stored, the actual ballots voted on at any federal election are secured and stored for 22 months in accordance with federal law. However, under state law, those ballots must remain sealed until such time as they can be destroyed.
In response to this, Shiva escalated his initial unsubstantiated claims of election fraud by falsely claiming that Massachusetts was "DESTROYING BALLOTS." In fact, he claimed that Massachusetts "Destroys Over 1 MILLION Ballots in US SENATE PRIMARY RACE" because it did not make images of every ballot.
Again, to be clear, Shiva's claims here are bullshit.
So here's where things get complex. The election official that Shiva spoke to felt, correctly, that these tweets were highly misleading. The federal government -- namely the FBI and CISA -- has been putting out alerts, including to election officials around the country, to be on the lookout for false information on social media "intended to cast doubt on [the] legitimacy of US elections," and has suggested that, if such information is spotted, one of the things officials might consider doing is the following:
If appropriate, make use of in-platform tools offered by social media companies for reporting suspicious posts that appear to be spreading false or inconsistent information about voter information or voting systems.
The woman who responded to Shiva, Michelle Tassinari, told the office's Communications Director, Debra O'Malley, who runs the Massachusetts Elections Division's Twitter account, that she should report Shiva's tweet to Twitter as false information about the election. O'Malley did so, received a notification that Twitter would investigate, and heard nothing further.
Approximately a day later, Twitter informed Shiva that he needed to remove those tweets. Once again, this appeared to anger Shiva. The fact checking site Lead Stories did a fact check of the whole thing, agreeing that Shiva's tweets were false, and also spoke to O'Malley who told the reporter that her office had notified Twitter.
And, thus, Shiva sued the Secretary of the Commonwealth, William Galvin, asserting a violation of the 1st Amendment in having Twitter take down Shiva's protected speech (he also asserts it violated other aspects of the 1st Amendment, including the right to a free press, to petition the government, and to peaceably assemble). He asked the court to award him the tidy sum of $1.2 billion. And, just to be clear: while I think there's an interesting 1st Amendment issue here, there's no fucking way it's a $1.2 billion question.
Procedurally, things went a little weird: just a week after filing, Shiva's lawyer, Daniel Casieri, asked the court to withdraw as Shiva's counsel, saying that he had to withdraw but the reasons for doing so involved information that was protected by attorney-client privilege. Shiva then immediately asked the court to allow him to continue pro se (representing himself), which the court allowed.
There was then some more procedural weirdness that charitably could be explained as confusion in the hand-off of the case from Casieri to Shiva. Casieri had held a call with the government's lawyer, Adam Hornstine, in which Hornstine had raised questions about the legal authorities supporting Shiva's motion for a temporary restraining order, highlighting both jurisdictional and substantive concerns. Hornstine also noted that Galvin had not been served, and asked Casieri to email him a request to waive service, which they would consider. Casieri also promised not to file a new motion for the TRO until they had talked again. Then, before any of that happened, Casieri resigned, and Shiva took over and filed a new memorandum in support of the motion. Hornstine highlighted the procedural problems in an affidavit to the court.
While all that was happening, Galvin (represented by Hornstine) opposed the TRO motion on multiple grounds, saying it was barred jurisdictionally (because you can't sue public officials for doing their official jobs), and that the government has its own right to free speech to say something is false.
On Friday, there was a hearing on Shiva's motion for a temporary restraining order (TRO) to stop Galvin's office from reporting more of his tweets. The judge was apparently quite skeptical of Shiva's arguments (and self-representation) and pointed out (correctly) that Twitter removing content by itself is not a 1st Amendment violation since it's a private company. Shiva's argument in response (which I'll discuss below) is that a government official putting pressure on Twitter to take down his speech is the 1st Amendment violation. He also claimed (correctly!) that false speech is generally protected (which is interesting to hear him say as he has regularly claimed otherwise in the past, including in reference to the lawsuit he filed against me and a period of time in which he frequently deployed the phrase "truthful free speech"). Perhaps he's learned something.
Both Tassinari and O'Malley appeared in court to testify, with O'Malley admitting under oath that she hoped that Twitter would delete Shiva's tweet, though she did not know if it would. The judge then pointed out that the Secretary should have just responded publicly with more speech denying Ayyadurai's claims. Eventually, Galvin and his office agreed that they would not report any more tweets to Twitter until after Election Day. The end result is that Galvin is free to respond to Shiva's nonsense tweets through tweets or other means, but will not use the reporting function to seek to take down those tweets, thus making the TRO request moot.
So, that's a lot of background without much analysis. Obviously, I think that Shiva's tweets are utter nonsense. Massachusetts didn't destroy any ballots and him presenting it that way is highly misleading. But... I actually think he has a legitimate 1st Amendment concern here, as highlighted by two separate lawsuits we've discussed before. The issue, as Shiva raised, is that when a public official makes moves to pressure a private company into silencing speech, that should be a 1st Amendment concern. I've highlighted this issue before and it's a non-partisan concern, as both Republicans and Democrats have a long history of doing this, especially in the age of the internet. We raised concerns over a dozen years ago when Senator Joe Lieberman demanded YouTube take down "terrorist" videos, and we've continued to report on that problem right up until a few days ago when we noted that Senators on both sides of the aisle seemed to be demanding that Facebook, Twitter, and YouTube remove certain content they dislike.
We should always be concerned when an elected official is making statements about how private companies must remove some constitutionally protected content -- even if the content is misleading. The 1st Amendment bars Congress from regulating speech for a reason.
I frequently point to Judge Posner's ruling in the Backpage v. Dart case from a few years back, in which Sheriff Thomas Dart tried to pressure credit card companies to stop serving Backpage. When Backpage claimed this violated its 1st Amendment rights, Dart argued in response that he was merely exercising his own 1st Amendment rights in sending a letter to the credit card companies. Posner pointed out that when you become a government official the rules change.
“The fact that a public-official defendant lacks direct regulatory or decisionmaking authority over a plaintiff, or a third party that is publishing or otherwise disseminating the plaintiff’s message, is not necessarily dispositive .... What matters is the distinction between attempts to convince and attempts to coerce. A public-official defendant who threatens to employ coercive state power to stifle protected speech violates a plaintiff’s First Amendment rights, regardless of whether the threatened punishment comes in the form of the use (or, misuse) of the defendant’s direct regulatory or decisionmaking authority over the plaintiff, or in some less-direct form.”
Of course, the central issue in that case was whether or not Dart was threatening to take action if the credit card companies did not comply with his request:
As a citizen or father, or in any other private capacity, Sheriff Dart can denounce Backpage to his heart’s content. He is in good company; many people are disturbed or revolted by the kind of sex ads found on Backpage’s website. And even in his official capacity the sheriff can express his distaste for Backpage and its look-alikes; that is, he can exercise what is called “[freedom of] government speech.”... A government entity, including therefore the Cook County Sheriff’s Office, is entitled to say what it wants to say—but only within limits. It is not permitted to employ threats to squelch the free speech of private citizens. “[A] government’s ability to express itself is [not] without restriction. … [T]he Free Speech Clause itself may constrain the government’s speech.”
So, then, the question is whether or not O'Malley reporting the tweet to Twitter was a "threat to squelch the free speech of private citizens." And I think there's a pretty strong argument that it was. You can argue back that using the report feature that is built into Twitter is not actually doing anything other than providing speech, but even O'Malley admitted she hoped it would lead to the tweet being removed, and there's not much reason to use the report feature except because you hope that Twitter will then remove the speech. And since the report was apparently coming from the same person who managed Massachusetts' election division's Twitter account, you would expect that Twitter would take it extra seriously.
This may feel strange to some -- that election officials who are (righteously) fighting off disinformation around elections should not be entitled to use the very tools that Twitter provides to everyone to report such disinformation -- but there is a logic to it. And to highlight that, I'll point to another case we've mentioned frequently: Knight First Amendment Institute at Columbia v. Donald Trump, in which it was established that government officials using Twitter in their official capacity cannot use the block feature, because it violates the 1st Amendment. Based on that case, there's a very strong argument to be made on the same conceptual basis that such officials also cannot use the report feature to report election disinformation if it is constitutionally protected speech.
Admittedly, that does create an awkward situation for election officials, who often are on the frontlines battling election mis- and disinformation. If they are privately alerting social media companies to this information, there's a decently strong argument that it violates the 1st Amendment in the same manner that Trump (or any elected official) using the block button violates the rights of private citizens. That feels... uncomfortable for a wide variety of reasons. It obviously feels like election officials should be able to call out and alert social media companies when they see disinformation regarding the election. And the social media companies remain free to make their own final decisions.
But, if you flip the story around a bit, you might see why it would be a good thing that officials should not be able to make use of these mechanisms. It is not hard to think of a scenario under which certain election officials might simply try to use claims of mis- and disinformation against candidates they dislike, or with whom they disagree, rather than in situations in which there is actual disinformation. In such cases, we should not want the weight of an "election official" making such claims to social media companies. As the judge suggested here, though did not need to rule on, officials can (and probably should) say that some information is false, and can say so publicly where it can be reviewed, discussed, and debated. Social media companies can then decide how they want to deal with that information (and certainly other citizens can report the tweets and point to the public refutations). And, if those social media companies decide to delete that content, that is their own right, under the 1st Amendment.
But, as much as I believe he is spewing election disinformation nonsense and misleading his followers, Shiva may have a very legitimate 1st Amendment right to do so without government officials directly seeking to suppress his speech through the use of a reporting tool.
Social media platforms have content moderation policies in place to counter both COVID-19 disinformation and election disinformation. However, platforms seem to be taking a more proactive approach to combating COVID-19 disinformation by building tools, spending significant resources, and most importantly, changing their content moderation policies to reflect the evolving nature of inaccurate information about the virus.
To be clear, COVID-19 disinformation is still rapidly spreading online. However, the platforms’ actions on the pandemic demonstrate they can develop specific policies to address and remove this harmful content. Platforms’ efforts to mitigate election disinformation, on the other hand, are falling short, due to the significant gaps that remain in their content moderation policies. Platforms should seriously examine how their COVID-19 disinformation policies can apply to reducing the spread of election disinformation and online voter suppression.
Disinformation on social media can spread in a variety of ways, including (1) failure to prioritize authoritative sources of information and third-party fact-checking; (2) algorithmic amplification and targeting; and (3) platform monetization. Social media platforms have revised their content moderation policies on COVID-19 to address many of the ways disinformation can spread about the pandemic.
For example, Facebook, Twitter, and YouTube all direct their users to authoritative sources of COVID-19 information. In addition, Facebook works with fact-checking organizations to review and rate pandemic-related content; YouTube utilizes fact-checking information panels; and Twitter is beginning to add fact-checked warning labels. Twitter has also taken the further step of expanding its definition of what it considers harmful content in order to capture and remove more inaccurate content related to the pandemic. To reduce the harms of algorithmic amplification, Facebook uses automated tools to downrank COVID-19 disinformation. Additionally, Facebook places restrictions on its advertising policy to prevent the sale of fraudulent medical equipment, and the platform prohibits ads that use exploitative tactics to create a panic over the pandemic -- two methods for stopping the monetization of pandemic-related disinformation.
These content moderation policies have resulted in social media platforms taking down significant amounts of COVID-19 disinformation including recent posts from President Trump. Again, disinformation about the pandemic persists on social media. But these actions show the willingness of platforms to take action and reduce the spread of this content.
In comparison, social media platforms have not been as proactive in enforcing or developing new policies to respond to the spread of election disinformation. Platforms’ civic integrity policies are primarily limited to prohibiting inaccurate information about the processes of voting (e.g., misrepresentations about the dates and times people can vote). But even these limited policies are not being consistently enforced.
For example, Twitter placed a warning label on one of Trump’s inaccurate tweets about mail-in voting procedures but has taken no action on other, similar tweets from the president. Further, social media platforms’ current policies may not be broad enough to take into account emerging voter suppression narratives about voter fraud and election rigging. Indeed, Trump has pushed inaccurate content about mail-in voting across social media platforms, falsely claiming it will lead to voter fraud and election rigging. With many states expanding their mail-in voting procedures due to the pandemic, Trump’s continued inaccurate attacks on this method of voting threaten to confuse and discourage eligible voters from casting their ballot.
Platform content moderation policies also contain significant holes that bad actors continue to exploit to proliferate online voter suppression. For example, Facebook refuses to fact-check political ads even if they contain demonstrably false information that discourages people from voting. President Trump’s campaign has taken advantage of this by flooding the platform with hundreds of ads that spread disproven claims about voter fraud. Political ads with election disinformation can be algorithmically amplified or micro-targeted to specific communities to suppress their vote.
Social media platforms including Facebook and Twitter have recently announced new policies they will be rolling out to fight online voter suppression. As outlined above, there are some lessons platforms can learn from their efforts in combating COVID-19 disinformation.
First, social media platforms should prioritize directing their users to authoritative sources of information when it comes to the election. Authoritative sources of information include state and local election officials. Second, platforms must consistently enforce and expand their content moderation policies as appropriate to remove election disinformation. Like their COVID-19 disinformation policies, platforms should build better tools and expand definitions of harmful content when it comes to online voter suppression. Finally, platforms must address the structural problems that allow bad actors to engage in online voter suppression tactics including algorithmic amplification and targeted advertisements.
COVID-19 – as dangerous and terrifying an experience as it has been – has at least proven that when platforms want to step up their efforts to stop the spread of disinformation, they can. If we want authentic civic engagement and a healthy democracy that enables everyone’s voices to be heard, then we need digital platforms to ramp up their fight against online voter suppression, too. Our voices – and the voices of those in marginalized communities -- depend on it.
Just as combating COVID-19 disinformation is important to our public health, reducing the spread of election disinformation is critical to authentic civic engagement and a healthy democracy. As part of our efforts to stop the spread of online voter suppression, Common Cause will continue to monitor social media platforms for election disinformation and encourages readers to report any inaccurate content to our tip line. At the end of the day, platforms themselves must step up their fight against new online voter suppression efforts.
Yosef Getachew serves as the Media & Democracy Program Director for Common Cause. Prior to joining Common Cause, Yosef served as a Policy Fellow at Public Knowledge, where he worked on a variety of technology and communications issues. His work has focused on broadband privacy, broadband access and affordability, and other consumer issues.