What do Fentanyl and TikTok have in common? Well, the real answer is absolutely nothing. Nothing at all. But, if you want to push a nonsense moral panic, apparently, you compare the two.
While it’s unclear exactly where Congress currently stands on the push to ban TikTok in the US (or, at the very least, force ByteDance to divest its ownership stake in the company), it appears that some are still pushing for it. Strongly opinionated venture capitalist Vinod Khosla took to the pages of the Financial Times (while insisting he has no financial dog in this fight) to claim that the US must ban TikTok because it’s “programmable fentanyl.”
Few appreciate that TikTok is not available in China. Instead, Chinese consumers use Douyin, the sister app that features educational and patriotic videos, and is limited to 40 minutes per day of total usage. Spinach for Chinese kids, fentanyl — another chief export of China’s — for ours. Worse still, TikTok is a programmable fentanyl whose effects are under the control of the CCP.
First of all, it’s only true that “few appreciate” that point if you haven’t been paying any attention at all. The point that inside China people use Douyin rather than TikTok is mentioned in basically every discussion of the app. There are tons of articles in the media about it. It’s been mentioned in congressional hearings. So, if people don’t know about it, that means they haven’t been paying attention and their opinion is already not well informed.
But, more importantly, it’s a meaningless point. There are lots of apps that aren’t available in China because China is an authoritarian country that deliberately censors much of the internet that its citizens can access. While there are all sorts of accusations (including above) that the Chinese Communist Party puts its fingers on the moderation scales of TikTok, one indication that TikTok is a lot more free and open than Douyin is the very fact that China doesn’t allow TikTok inside the country.
I’d already pointed out that if China is using TikTok to influence American opinions, it’s doing a terrible job of it, as American opinions towards China are at record lows.
And, just as an experiment, I just went on TikTok and searched for “uyghur.” I found tons of videos about China’s attempted genocide of the Uyghur people, many with hundreds of thousands of views. Even the autocomplete search suggestion shows “uyghur genocide” as the second option after I type Uyghur. I can also find lots of videos about Tiananmen Square. If China is really trying to suppress speech on TikTok, it’s not doing a great job.
But, even more to the point, this whole idea is based on the false belief that people are simply sheep that are easily brainwashed by an algorithm and the content they see. And… that’s not true. Human beings are not puppets. Yes, content can have some level of influence on the margins, but there’s little to no evidence supporting the idea that the internet, as a whole, is a vast brainwashing machine.
Of course, the internet and tech industries have a strong incentive to tell you that the internet is uniquely powerful in brainwashing you, because that makes it seem like it’s super worthwhile to buy ads or use those tools yourself to brainwash others. But, most of that is nonsense.
In addition, the claim of “programmable fentanyl” is even dumber. It’s yet another attempt to pretend that speech is somehow the equivalent of something you actually put into your body. As we’ve discussed before, speech online is not lead paint or cigarettes or chocolate or fentanyl.
It’s speech.
And sometimes there’s speech we disagree with. And sometimes that speech we disagree with is persuasive. But in a free society, we deal with that. We respond to it. We explain why it’s wrong and we seek to persuade in the other direction.
We do not take the Chinese approach and shut down the speech. But that is exactly what people pushing a TikTok ban are doing. They’re so convinced (or they so want to convince us) of the power of online speech that they are giving way more power to speech than it actually has.
They’re treating it as if it’s some sort of mind-altering drug, rather than recognizing that it’s just another form of communication. And, in doing so, they’re actually giving more power to the Chinese government by suggesting that its speech is so powerful that it must be banned.
A free society has dealt with bad and misleading speech in the past. It is possible. Speech is not all powerful. It is not “brainwashing,” it is not like a drug. Sometimes it’s persuasive when we’d prefer it not be, but that doesn’t mean we need to ban it. Just counter it.
Over the last few days, we’ve had a few posts about the latest attempt to ban TikTok in the US (and to people who say it’s only a divestiture bill: there is a ban in the language of the bill if ByteDance won’t divest).
Yesterday, unsurprisingly, the House voted overwhelmingly, 352 to 65, to pass that bill. The 15 Republicans and 50 Democrats who voted no make up an odd mix. You have some extreme Trump supporters, who probably are voting no because the boss man said so, and then a random assortment of Democrats, including a bunch from California. I thought Rep. Sara Jacobs from the San Diego area put out a particularly good statement on why this bill is so stupid:
As a member of both the House Armed Services and House Foreign Affairs Committees, I am keenly aware of the threat that PRC information operations can pose, especially as they relate to our elections. However, after reviewing the intelligence, I do not believe that this bill is the answer to those threats. Banning TikTok won’t protect Americans from targeted misinformation or misuse of their personal data, which American data brokers routinely sell and share. This is a blunt instrument for serious concerns, and if enacted, would mark a huge expansion of government power to ban apps in the future. Instead, we need comprehensive data privacy legislation, alongside thoughtful guardrails for social media platforms – whether those platforms are funded by companies in the PRC, Russia, Saudi Arabia, or the United States.
Taking this unprecedented step also undermines our reputation around the world. We can’t credibly hold other countries to one set of democratic values while giving ourselves a free pass to restrict freedom of speech. The United States has rightly criticized others for censorship and banning specific social media platforms in the past. Doing so ourselves now would tarnish our credibility when it matters most and trample on the civil liberties of 150 million Americans – a vast majority of whom are young Americans – who use TikTok for their livelihoods, news, communication, and entertainment. Ultimately, all Americans should have the freedom to decide for themselves how and where to express themselves and what information they want to consume.
I think the second paragraph here is the key one. People keep saying “but they do the same to us.” That’s no excuse. We shouldn’t take a page from the Chinese censorship playbook and basically give them the moral high ground, combined with the ability to point to this move as justification for the shenanigans they’ve pulled in banning US companies from China.
Don’t let the authoritarians set the agenda. We should be better than that.
But also, her first paragraph is important as well. To date, no one has shown any actual evidence of TikTok being dangerous. Instead, all that people will tell me is that there was some sort of classified briefing about it. From Rep. Jacobs’ statement we see that she was able to see that classified intel, and did not find it convincing at all.
I even find myself in rare agreement with Rep. Thomas Massie, who once blocked me on Twitter. He did so in response to me calling out his First Amendment violations in blocking people on Twitter (he eventually removed the block after the Knight First Amendment Institute sent him a letter on my behalf). Rep. Massie may have a somewhat conditional take on the First Amendment, but he correctly pointed out just how dangerous this bill would be:
The President will be given the power to ban WEB SITES, not just Apps. The person breaking the new law is deemed to be the U.S. (or offshore) INTERNET HOSTING SERVICE or App Store, not the “foreign adversary.”
Massie also pointed (as we did earlier this week) to the clearly lobbied-for (hi, Yelp lobbyists!) “exclusion” for review websites as proof that people know this law covers websites.
I stand by the point we’ve been making for multiple years now: banning TikTok is a stupid, performative, unconstitutional, authoritarian move that doesn’t do even the slightest bit to stop China from (1) getting data on Americans or (2) using propaganda to try to influence people (which are the two issues most frequently used to justify a ban).
Banning TikTok, rather than passing comprehensive federal privacy legislation, is nothing but xenophobic theater. China can (and does) already buy a ton of data on Americans because we refuse to pass any regulation regarding data brokers who make this data available (contrary to popular opinion, Facebook and Google don’t actually sell your data, but data brokers who collect it from lots of other sources do).
Meanwhile, there’s little to no evidence that China is “manipulating” sentiment with TikTok, and there’s even less evidence that it would be effective if they were trying to do so. Public sentiment in the US regarding China is reaching record lows, with the vast majority of Americans reasonably concerned about China’s role in the world. So if China is using TikTok to propagandize to Americans, it’s doing a shitty job of it.
The US has dealt with foreign propaganda for ages. And we don’t ban it. Part of free speech is that you have to deal with the fact that nonsense propaganda and disinformation exists. There are ways to deal with it and respond to it that don’t involve banning speech. It’s astounding to me how quickly people give up their principles out of a weird, xenophobic fear that somehow China has magic pixie dust hidden within TikTok to turn Americans’ brains to mush.
The Supreme Court has reviewed this kind of thing before and said that, no, the US cannot ban foreign propaganda just because it’s scared of what that propaganda says. In that case, Lamont v. Postmaster General, the government sought to restrict the delivery of “communist political propaganda” from outside the country. The court struck down the restriction on First Amendment grounds, stating that it was “a limitation on the unfettered exercise of the [recipient’s] First Amendment rights.”
As the court noted in that case, the setup of the law was “at war with the ‘uninhibited, robust, and wide-open’ debate and discussion that are contemplated by the First Amendment.”
In the US, we’re supposed to believe in freedom of speech, even if that freedom of speech comes in the form of “foreign communist propaganda.” If we survived that same foreign communist propaganda for decades in other forms, it seems like we can survive it coming from an app designed to highlight short videos of dance moves.
Again, we can pass data protection laws if we’re afraid of how the data is going to be used, because China doesn’t need TikTok to get that data. And we can counter Chinese propaganda. But part of doing so has to be not hiding it and acting like it’s so powerful that Americans are powerless against it. You counter it by showing how freedom can resist such efforts at manipulation.
I have no idea if the Senate will actually take up this bill, though there’s good reason to believe they will. However, such a ban would be a huge mistake, reflect poorly on American values, and show how quickly we’re willing to ignore the First Amendment on some misguided fear of a successful app from a foreign country.
While it seemed like our national policy hysteria over TikTok had waned slightly in 2024, it bubbled up once again last week upon rumors that the White House is supporting a “welcome and important” new bill that would effectively ban TikTok from operating in the United States.
The bipartisan bill (full text), sponsored by Reps. Mike Gallagher and Raja Krishnamoorthi, moved forward last week in spite of TikTok’s ham-fisted attempt to overload Congress with phone calls from users. It prevents all ByteDance-controlled applications from enjoying app store availability or web hosting services in the U.S. unless TikTok “severs ties to entities like ByteDance that are subject to the control of a foreign adversary.” Basically, the bill wants ByteDance to divest TikTok, preferably to an American company.
You’ll recall the Trump administration’s big “solution” for TikTok was basically cronyism: to force the company to sell itself to Walmart and Oracle. That is: companies controlled by Trump’s cronies, with their own track records of bad behavior and privacy violations. You’ll also recall that Facebook has been very busy sowing congressional angst for years about TikTok for purely anti-competitive reasons.
The bill applies to any company owned by ByteDance, whether or not anybody has actually proven any sort of meaningful connection to Chinese intelligence (we’re working off of vibes here, man). There’s also some murky language in the legislation that curiously excludes companies that deal in reviews, a nice treat for whatever company successfully lobbied for that exemption:
EXCLUSION: The term ‘‘covered company’’ does not include an entity that operates a website, desktop application, mobile application, or augmented or immersive technology application whose primary purpose is to allow users to post product reviews, business reviews, or travel information and reviews.
To be very clear: TikTok certainly isn’t without surveillance, national security, and notable privacy concerns. And the authoritarian Chinese government is, without question, an oppressive genocidal shitshow.
But banning TikTok, while refusing to pass a privacy law or regulate data brokers (which traffic in significantly greater volumes of sensitive data at much greater collective scale), winds up mostly being a performative endeavor driven more by anti-competitive intent (and a desire to control the flow and scope of modern news, information and propaganda) than any desire for serious reform.
A lot of the congressional opposition (especially on the GOP side) to TikTok comes largely from the belief that white-owned and controlled American companies are owed, by divine right, access to the massive ad revenues Chinese-owned TikTok enjoys. For Luddites and policy nitwits like Tommy Tuberville, I strongly doubt the thinking extends much further than that.
I also think Republicans very much don’t like the idea of a company that could potentially traffic in propaganda that isn’t theirs. They’ve worked very hard for several years to scare feckless U.S. tech giants away from policing race-baiting political propaganda online (a cornerstone of modern GOP power), and their inability to control TikTok presents an obvious concern for entirely self-serving reasons.
But even lawmakers who sincerely believe that banning TikTok makes meaningful inroads on national security or consumer privacy generally don’t seem to understand the size and scope of the problem we’re dealing with.
You could ban TikTok with a patriotic flourish from the heavens immediately, but if we fail to regulate data brokers, pass a privacy law, or combat corruption, Chinese (or Russian, or Iranian) intelligence can simply turn around and buy that same data (and detailed profiles of American consumers) from an unlimited parade of different data brokers, telecoms, app makers, marketing companies, or services.
And they can do that because the U.S. has proven to be, time and time again, too corrupt to do the right thing or hold giant corporations (domestic or otherwise) accountable for privacy abuses. The result has been the creation of an historically massive, planet-wide, data monetization and surveillance machine that fails — over and over and over again — to meaningfully protect public safety and consumer privacy.
Congress has repeatedly made it very clear that making money is significantly more important than consumer welfare and public safety, as the scandal over sensitive abortion clinic location data makes clear. The U.S. government is also disincentivized to act, because it’s found exploitation of this privacy-optional nightmare to be a super handy way to avoid having to get warrants for domestic surveillance.
But it’s not enough. Congress needs to pass an internet-era privacy law with teeth that applies to all companies that operate in the U.S., foreign or domestic. It needs to adequately staff and fund the FTC so it can actually address the problem at the scale it’s operating at. And it needs to close the privacy loopholes that let government surveillance efforts exploit the dysfunction.
But Congress won’t do that because Congress is comically, blisteringly corrupt. We’ve defanged our regulators for decades under the pretense that it would foster an innovative, free market renaissance that never happened. When discussing our failure to meaningfully protect U.S. consumer (and industry) privacy, this corruption just isn’t mentioned, as if it’s simply somehow not relevant to the problem at hand.
Countries that care about national security make fleeting efforts to combat corruption, and don’t support NYC real estate conmen with fourth grade reading levels for the most powerful office in the land.
Countries that care about consumer privacy pass privacy laws, regulate data brokers, and generally hold corporations (and executives) meaningfully accountable for failing to secure consumer data. T-Mobile has been hacked eight times in five years due to comically lax security and privacy standards, and I’ve yet to see Congress lift so much as an eyebrow.
The myopic hyperventilation about TikTok (and TikTok only!) is mostly a distraction. A distraction from the GOP’s ongoing quest to turn the internet into a propaganda dumpster fire. A distraction from our failures on consumer protection. A distraction from Congressional corruption. A distraction from the fact that we’ve lobotomized our regulators in exchange for Utopian promises never actually delivered.
Banning an app that may not even be popular five years from now — but doing absolutely nothing about the corruptive rot that enabled its privacy abuses — is a hollow performance that simply doesn’t strike at the heart of the actual problem.
So, for all of the nonsense about what level of coercive power governments have over social media companies, it’s bizarre how little attention has been paid to the fact that TikTok is apparently proposing to give the US government control over its content moderation setup, and the US government is looking at it seriously.
As you likely know, there’s been an ongoing moral panic about TikTok in particular. The exceptionally popular social media app (that became popular long after we were assured that Facebook had such a monopoly on social media no new social media app could possibly gain traction) happens to be owned by a Chinese company, ByteDance, which has resulted in a series of concerns about the privacy risks of using the app. Some of those concerns are absolutely legitimate. But many of them are nonsense.
And, for basically all of the legitimate concerns the proper response would be to pass a comprehensive federal data privacy law. But no one seems to have the appetite for that. You get more headlines and silly people on social media cheering you on by claiming you want to ban TikTok (this is a bipartisan moral panic).
Instead of recognizing all of this and doing the right thing after Trump’s failed attempt at banning TikTok, the Biden administration has… simply kept on trying to ban TikTok or force ByteDance to divest. That’s another repeat of a bad Trump idea, which ended not in divestiture, but in Trump getting his buddy Larry Ellison’s company, Oracle, a hosting deal for TikTok. And, of course, TikTok and Oracle now insist that Oracle is reviewing TikTok’s algorithms and content moderation practices.
But, moral panics are about panic, not facts. So, the Biden administration did the same damn thing Trump did three years earlier, demanding that TikTok be fully separated from ByteDance or face a ban in the US. Apparently negotiations fell apart in the spring, hopefully because TikTok folks know full well that the government can’t just ban TikTok.
However, the Washington Post says that they’re back to negotiating (now that the Biden administration is mostly convinced a ban would be unconstitutional), and the focus is on a TikTok-proffered plan to… wait for it… outsource content moderation questions to the US government. This plan was first revealed in Forbes by one of the best reporters on this beat, Emily Baker-White (whom TikTok surveilled to try to find out where she got her stories from…). And it’s insane:
The draft agreement, as it was being negotiated at the time, would give government agencies like the DOJ or the DOD the authority to:
Examine TikTok’s U.S. facilities, records, equipment and servers with minimal or no notice,
Block changes to the app’s U.S. terms of service, moderation policies and privacy policy,
Veto the hiring of any executive involved in leading TikTok’s U.S. Data Security org,
Order TikTok and ByteDance to pay for and subject themselves to various audits, assessments and other reports on the security of TikTok’s U.S. functions, and,
In some circumstances, require ByteDance to temporarily stop TikTok from functioning in the United States.
The draft agreement would make TikTok’s U.S. operations subject to extensive supervision by an array of independent investigative bodies, including a third-party monitor, a third-party auditor, a cybersecurity auditor and a source code inspector. It would also force TikTok U.S. to exclude ByteDance leaders from certain security-related decision making, and instead rely on an executive security committee that would operate in secrecy from ByteDance. Members of this committee would be responsible first for protecting the national security of the United States, as defined by the Executive Branch, and only then for making the company money.
For all the (mostly misleading) talk of the US government having too much say in content moderation decisions, this move would literally put US government officials effectively in control of content moderation decisions for TikTok. Apparently the thinking is “welp, it’s better than the Chinese government.” But… that doesn’t mean it’s good. Or constitutional.
“If this agreement would give the U.S. government the power to dictate what content TikTok can or cannot carry, or how it makes those decisions, that would raise serious concerns about the government’s ability to censor or distort what people are saying or watching on TikTok,” Patrick Toomey, deputy director of the ACLU’s National Security Project, told Forbes.
A subsidiary called TikTok U.S. Data Security, which would handle all of the app’s critical functions in the United States, including user data, engineering, security and content moderation, would be run by the CFIUS-approved board that would report solely to the federal government, not ByteDance.
CFIUS monitoring agencies, including the departments of Justice, Treasury and Defense, would have the right to access TikTok facilities at any time and overrule its policies or contracting decisions. CFIUS would also set the rules for all new company hires, including that they must be U.S. citizens, must consent to additional background checks and could be denied the job at any time.
All of the company’s internal changes to its source code and content-moderation playbook would be reported to the agencies on a routine basis, the proposal states, and the agencies could demand ByteDance “promptly alter” its source code to “ensure compliance” at any time. Source code sets the rules for a computer’s operation.
Honestly, what this reads as is the moral panic over China and TikTok so eating the brains of US officials that rather than saying “hey, we should have privacy laws that block this,” they thought instead “hey, that would be cool if we could just do all the things we accuse China of doing, but where we pull the strings.”
Now, yes, it’s true that an individual or private company can voluntarily choose to give up its constitutionally protected rights, but there is no indication that any of this is even remotely close to voluntary. If the 5th Circuit found that simply explaining what is misinformation about COVID was too coercive for social media companies to make moderation decisions over, then how is “take this deal or we’ll ban your app from the entire country” not similarly coercive?
Furthermore, it’s not just the rights of TikTok to consider here, but the millions of users on the platform, who have not agreed to give up their own 1st Amendment rights.
Indeed, I would think there’s a very, very high probability that if this deal were to be put in place, it would backfire spectacularly, because anyone who was moderated on TikTok and didn’t like it would actually have a totally legitimate 1st Amendment complaint that it was driven by the US government, and that TikTok was a state actor (because it totally would be under those conditions).
In other words, if the administration and TikTok actually consummated such a deal, the actual end result would be that TikTok would effectively no longer be able to do much content moderation at all, because it would only be able to take down content that was not 1st Amendment protected.
So, look, if we’re going to talk about US government influence over content moderation choices, why aren’t we talking much more about this?
The two big EU attempts to overly regulate the internet are starting to go into effect. The Digital Services Act (DSA), along with all its associated problems, is about six months ahead of the Digital Markets Act (DMA) and all of its associated problems. Six months ago, the EU designated 17 sites as “Very Large Online Platforms” under the DSA (though a few of those sites are protesting the designation, including Zalando, which is the only company on the list mainly targeting EU users).
The DMA’s equivalent is being designated as a “gatekeeper,” and that’s now happened, with exactly the six companies you probably would have guessed: Alphabet (Google), Amazon, Apple, ByteDance (TikTok), Meta (Facebook, Instagram) and Microsoft. The DMA gatekeeper designation process is… somewhat arbitrary. It’s basically any platform the EU Commission thinks is “important” for “core services.” What could go wrong?
That said, it came out just before the release of the gatekeeper list that Apple is fighting to keep iMessage off the messaging list (which the EU, in true EU fashion, calls “N-IICS” for “Number Independent Interpersonal Communications Services”). And also that Microsoft is trying to keep Bing off the search list, Edge off the browser list, and its ads platform off the ads list. In both cases, the companies are suggesting that their offerings are not nearly as large and “gatekeepery” as the others. Also in both cases (or all four cases, if you count each service as separate), the EU has instead “launched an investigation” before making the final designation.
I wouldn’t be surprised to see the EU end up using the investigation to say all four of those are, in fact, gatekeepers, which would create an interesting scenario in which Apple is told it needs to open up iMessage, rather than locking it to the Apple ecosystem. Wouldn’t that be something?
The EU also announced that it’s launching a separate investigation into Apple’s iPadOS to see if it should also be included in the Operating System category (I honestly thought that the iPad just used iOS as well… which shows how little I follow the ins and outs of the Apple ecosystem).
The EU also says it spared three products that “met the thresholds,” but where the companies convinced the EU that they weren’t really gatekeepery: Gmail, Outlook.com, and Samsung’s browser.
Notably… unlike with the DSA, the EU Commission didn’t even bother to put a token EU company on the list, because why bother? These laws have always been about controlling foreign internet companies.
Again, all of this feels both somewhat arbitrary, and somewhat theater. Everyone knew what services the DMA was targeting, so this isn’t much of a surprise.
Now these offerings have until March of next year to comply with the requirements of the DMA’s rules for gatekeepers. It will be interesting to see how that will go. Some elements will be quite interesting, such as the requirements for interoperability, and enabling access to data to business users (though it’s not at all clear how that won’t lead to another Cambridge Analytica kind of scenario). There are also prohibitions on favoring their own offerings, blocking users from making use of 3rd party interoperable tools, and blocking users from uninstalling pre-installed software.
There are plenty of very interesting ideas, and I’m all in favor of more interoperability and less lock-in. So I’m intrigued (and maybe even a little bit hopeful?) about how that might play out.
However, this is a massive experiment in how the internet will work going forward, and I have zero faith that the EU technocrats who put all this together have a good grasp of the consequences. That means my excitement about better interop and less lock-in is greatly tempered by the reality that, in practice, there are many reasons why the DMA seems likely to just be one giant clusterfuck of problems.
China and India are widely expected to be two of the most powerful global players in the decades to come. In some ways, they are alike. As Techdirt has reported, both have dismal records when it comes to Internet freedom, online censorship and privacy. But they differ in terms of their impact on the IT sector outside their home countries. China has produced a worldwide success story in TikTok, alongside well-known Internet giants such as Alibaba, Baidu and Tencent. India, by contrast, is chiefly famous in the computing world for its vast digital biometric identity system, Aadhaar. That may be about to change, thanks to another Indian creation, the Unified Payments Interface (UPI).
As its rather boring name suggests, UPI is a way of allowing all the different payment systems and companies that make up India’s financial sector to interoperate seamlessly. In practice, this means that Indians can send money to more or less anyone, or any company, in India, with a few clicks on a UPI mobile phone app without worrying about the details. An article from 2017 on Medium provides an excellent detailed history of the project up to that time. A post on the Rest of the World brings the story up to date:
UPI, introduced in 2016, has surpassed the use of credit and debit cards in India. Nearly 260 million Indians use UPI — in January 2023, it recorded about 8 billion transactions worth nearly $200 billion. The transactions can be facilitated using mobile numbers or QR codes, ranging from a few cents to 100,000 rupees ($1,221) a day.
At the heart of UPI lies Aadhaar:
Users without debit cards can use a UPI address — similar to an email address — to transfer money from their Aadhaar-linked bank accounts in real time. Over the past decade, the government has used Aadhaar as a building block for a host of digital services, such as payments, e-signatures, and health apps; these interlinked sets of digital platforms are called India Stack.
UPI is clearly a big success in India, not least for providing poorer sectors of society with advanced financial services via their mobile phones. But the real story may be the one developing outside India:
That makes sense, because India is one of the largest remittance recipients in the world, receiving around $100 billion in 2022. But there’s another key aspect:
India’s federal bank has been pushing for the internationalization of UPI since 2020. One of the reasons for this aggressive global expansion is to mitigate geopolitical risk. In February 2022, the U.S. and its Western allies blocked Russian banks’ access to Swift, an international payments system used by thousands of financial institutions, hurting Russia severely. It spooked other countries about secondary sanctions — especially India, which continues to purchase crude oil from Russia.
A global roll-out of UPI would obviously be great news for Russia, offering a way to circumvent the ban on using Swift that was imposed following its invasion of Ukraine. It would also bolster India’s geopolitical power, since it controls the underlying UPI technology, and it would place Indian companies at the heart of this emerging international payments system. UPI may have a dull name and low visibility currently. But behind the scenes the implications of its wider adoption outside India could be dramatic, and just as influential as China’s more obvious approach to bolstering its soft power in the online world.
Last week, we wrote about the positively ridiculous lawsuit filed by the Seattle Public School district against basically all of social media, claiming social media was “a public nuisance.” As we noted, the school district appeared to be wasting taxpayer money that could have gone to educating their kids on a lawsuit that screamed out to the public that the school district had totally failed in teaching its children how to be good digital citizens, how to use the internet properly, and how to be prepared for living life in the age of the internet.
And, now it appears that the Mesa, Arizona school district has decided to do the same thing. Using the same lawyers. The law firm of Keller Rohrback appears to be trying to carve out this corner of the market as its own: having public school districts waste a shitload of time and resources to publicly proclaim that they can’t prepare the children they’re in charge of educating for the modern internet world.
The Mesa complaint is, not surprisingly, similar to the Seattle complaint. It’s suing the same companies (really: Meta, Google, Snap, TikTok). Like the Seattle complaint, it argues that social media is a “public nuisance.” Like the Seattle complaint, it says that Section 230 doesn’t protect the companies (it’s wrong). Like the Seattle complaint, it points to a few cherry-picked studies claiming that social media is bad for kids, and ignores more comprehensive studies that argue the opposite. Like the Seattle complaint, it goes a long way toward proving that Mesa public schools apparently are staffed by administrators and teachers who suck at educating children, and find themselves powerless against… entertainment.
In short, it’s pathetic.
The one main “difference” between the Seattle complaint and the Mesa one is that in Mesa they’ve added a “negligence” claim, saying that social media companies “owe” the school district “a duty not to expose Plaintiff to an unreasonable risk of harm….”
This is all laughably stupid, and not at all how the law works. I mean, it’s possible that the lawyers at Keller Rohrback figure that if they file enough of these lawsuits, eventually they’ll find a judge who lets the moral panic of “social media is bad for kids” overwhelm the actual legal issues, but it’s difficult to see it standing up to any legitimate judicial scrutiny.
Of course, now that we have these two lawsuits, it means it’s almost certain that they’re shopping for similar lawsuits. One hopes that other school districts will reject this nonsense. The whole point of these lawsuits is almost certainly to try to shake down the social media companies to get them to settle, but that seems unlikely.
Either way, if you’re a parent of a student in the Mesa public schools, you should be asking why your school’s administrators seem to be publicly admitting that they can’t teach your children how to deal with the modern internet world.
I just wrote about Utah’s ridiculously silly plans to sue every social media company for being dangerous to children, in which I pointed out that the actual research doesn’t support the underlying argument at all. But I forgot that a few weeks ago, Seattle’s public school district actually filed just such a lawsuit, suing basically every large social media company for being a “public nuisance.” The 91-page complaint is bad. Seattle taxpayers should be furious that their taxes, which are supposed to be paying for educating their children, are, instead, going to lawyers to file a lawsuit so ridiculous that it’s entirely possible the lawyers get sanctioned.
The lawsuit was filed against a variety of entities and subsidiaries, but basically boils down to suing Meta (over Facebook, Instagram), Google (over YouTube), Snapchat, and TikTok. Most of the actual lawsuit reads like any one of the many, many moral panic articles you read about how “social media is bad for you,” with extremely cherry-picked facts that are not actually supported by the data. Indeed, one might argue that the complaint itself, filed by Seattle Public Schools lawyer Gregory Narver and the local Seattle law firm of Keller Rohrback, is chock full of the very sort of misinformation that they so quickly wish to blame the social media companies for spreading.
First: as we’ve detailed, the actual evidence that social media is harming children basically… does not exist. Over and over again, studies show a near total lack of evidence. Indeed, as recent studies have shown, the vast majority of children get value from social media. There are plenty of moral panicky pieces from adults freaked out about what “the kids these days” are doing, but little evidence to support any of it. Indeed, the parents often seem to be driven into a moral panic fury by… misinformation they (the adults) encountered on social media.
The school’s lawsuit reads like one giant aggregation of basically all of these moral panic stories. First, it notes that the kids these days, they use social media a lot. Which, well, duh. But, honestly, when you look at the details it suggests they’re mostly using them for entertainment, meaning that it hearkens back to previous moral panics about every new form of entertainment from books, to TV, to movies, etc. And, even then, none of this even looks that bad? The complaint argues that this chart is “alarming,” but if you asked kids about how much TV they watched a couple decades ago, I’m guessing it would be similar to what is currently noted about YouTube and TikTok (and note that others like Facebook/Instagram don’t seem to get that much use at all according to this chart, but are still being sued):
There’s a whole section claiming to show that “research has confirmed the harmful effects” of social media on youth, but that’s false. It’s literally misinformation. It cherry-picks a few studies, nearly all of which are by a single researcher, and ignores the piles upon piles of research suggesting otherwise. Hell, even the graphic above that it uses to show the “alarming” addiction to social media is from Pew Research Center… the organization that just released a massive study about how social media has made life better for teens. Somehow, the Seattle Public Schools forgot to include that one. I wonder why?
Honestly, the best way to think about this lawsuit is that it is the Seattle Public School system publicly admitting that they’re terrible educators. While it’s clear that there are some kids who end up having problems exacerbated by social media, one of the best ways to deal with that is through good education. Teaching kids how to use social media properly, how to be a good digital citizen, how to have better media literacy for things they find on social media… these are all the kinds of things that a good school district builds into its curriculum.
This lawsuit is effectively the Seattle Public School system publicly stating “we’re terrible at our job, we have not prepared your kids for the real world, and therefore, we need to sue the media apps and services they use, because we failed in our job.” It’s not a good look. And, again, if I were a Seattle taxpayer — and especially if I were a Seattle taxpayer with kids in the Seattle public school district — I would be furious.
The complaint repeatedly points out that the various social media platforms have been marketed to kids, which, um, yes? That doesn’t make it against the law. While the lawsuit mentions COPPA, the law designed to protect kids, it’s not making a COPPA claim (which it can’t make anyway). Instead, it’s just a bunch of blind conjectures, leading to a laughably weak “public nuisance” claim.
Pursuant to RCW 7.48.010, an actionable nuisance is defined as, inter alia, “whatever is injurious to health or indecent or offensive to the senses, or an obstruction to the free use of property, so as to essentially interfere with the comfortable enjoyment of the life and property.”

Specifically, a “[n]uisance consists in unlawfully doing an act, or omitting to perform a duty, which act or omission either annoys, injures or endangers the comfort, repose, health or safety of others, offends decency . . . or in any way renders other persons insecure in life, or in the use of property.”

Under Washington law, conduct that substantially and/or unreasonably interferes with the Plaintiff’s use of its property is a nuisance even if it would otherwise be lawful.

Pursuant to RCW 7.48.130, “[a] public nuisance is one which affects equally the rights of an entire community or neighborhood, although the extent of the damage may be unequal.”
Defendants have created a mental health crisis in Seattle Public Schools, injuring the public health and safety in Plaintiff’s community and interfering with the operations, use, and enjoyment of the property of Seattle Public Schools.

Employees and patrons, including students, of Seattle Public Schools have a right to be free from conduct that endangers their health and safety. Yet Defendants have engaged in conduct which endangers or injures the health and safety of the employees and students of Seattle Public Schools by designing, marketing, and operating their respective social media platforms for use by students in Seattle Public Schools and in a manner that substantially interferes with the functions and operations of Seattle Public Schools and impacts the public health, safety, and welfare of the Seattle Public Schools community.
This reads just as any similar moral panic complaint would have read against older technologies. Imagine schools in the 1950s suing television or schools in the 1920s suing radios. Or schools in the 19th century suing book publishers for early pulp novels.
For what it’s worth, the school district also tries (and, frankly, fails) to take on Section 230 head on, claiming that it is “no shield.”
Plaintiff anticipates that Defendants will raise section 230 of the Communications Decency Act, 47 U.S.C. § 230(c)(1), as a shield for their conduct. But section 230 is no shield for Defendants’ own acts in designing, marketing, and operating social media platforms that are harmful to youth.

….

Section 230 does not shield Defendants’ conduct because, among other considerations: (1) Defendants are liable for their own affirmative conduct in recommending and promoting harmful content to youth; (2) Defendants are liable for their own actions designing and marketing their social media platforms in a way that causes harm; (3) Defendants are liable for the content they create that causes harm; and (4) Defendants are liable for distributing, delivering, and/or transmitting material that they know or have reason to know is harmful, unlawful, and/or tortious.
Except that, as we and many others explained in our briefs in the Supreme Court’s Gonzalez case, that’s all nonsense. Each of those theories is still an attempt to hold companies liable for the speech of their users. None of the actual complaints are about actions by the companies; rather, they’re about how the district doesn’t like the fact that the expression of these sites’ users is (the school district misleadingly claims) harmful to the kids in their schools.
First, Plaintiff is not alleging Defendants are liable for what third-parties have said on Defendants’ platforms but, rather, for Defendants’ own conduct. As described above, Defendants affirmatively recommend and promote harmful content to youth, such as pro-anorexia and eating disorder content. Recommendation and promotion of damaging material is not a traditional editorial function and seeking to hold Defendants liable for these actions is not seeking to hold them liable as a publisher or speaker of third party-content.
Yes, but recommending and promoting content is 1st Amendment protected speech. They can’t be sued for that. And, it’s not really the “recommendation” that they’re claiming is harmful, but the speech that is being recommended, which (again) is protected by Section 230.
Second, Plaintiff’s claims arise from Defendants’ status as designers and marketers of dangerous social media platforms that have injured the health, comfort, and repose of its community. The nature of Defendants’ platforms centers around Defendants’ use of algorithms and other design features that encourage users to spend the maximum amount of time on their platforms—not on particular third party content.
One could just as reasonably argue that the harm actually arises from the Seattle Public School system’s apparently total inability to properly prepare the children in their care for modern communications and entertainment systems. This entire lawsuit seems like the school district foisting the blame for their own failings on a convenient scapegoat.
There’s a lot more nonsense in the lawsuit, but hopefully the court quickly recognizes how ridiculous this is and tosses it out. Of course, if the Supreme Court screws up everything with a bad ruling in the Gonzalez case, well, then this lawsuit should give everyone pretty clear warning of what’s to come: a whole slew of utterly vexatious, frivolous lawsuits against internet websites for any perceived “harm.”
The only real takeaways from this lawsuit should be (1) Seattle parents should be furious, (2) the Seattle Public School system seems to be admitting it’s terrible at preparing children for the real world, and (3) Section 230 remains hugely important in protecting websites against these kinds of frivolous SLAPP suits.
Back in June we wrote about a blockbuster article in Buzzfeed by Emily Baker-White detailing how ByteDance engineers in China were still accessing data on US TikTok users. That was notable, given that ByteDance had signed a big deal with Oracle, while former President Trump held a proverbial gun to its head, to wall off its US user data and keep it separate. It’s also still not entirely clear what Oracle is really doing with regards to TikTok, as each announcement seems less and less informative.
Either way, in October, we again wrote about another story by Baker-White, now at Forbes, talking about how ByteDance appeared to use TikTok data to try to spy on certain US citizens, though the details were vague. As we said at the time, this seemed like the sort of thing that should spur people to pass a comprehensive federal privacy law, not that that’s happened. Instead, we’ve just been getting more and more performative nonsense focused exclusively on TikTok, rather than on the underlying problem.
Now, Baker-White has the third piece in this trilogy that ties them all together. Apparently one of the US citizens ByteDance was trying to spy on… was Baker-White herself, and it was because of the original Buzzfeed article, as the company sought to track down how the initial info was leaked. It’s quite a story and you should read the whole thing, though here’s just a snippet.
According to materials reviewed by Forbes, ByteDance tracked multiple Forbes journalists as part of this covert surveillance campaign, which was designed to unearth the source of leaks inside the company following a drumbeat of stories exposing the company’s ongoing links to China. As a result of the investigation into the surveillance tactics, ByteDance fired Chris Lepitak, its chief internal auditor who led the team responsible for them. The China-based executive Song Ye, who Lepitak reported to and who reports directly to ByteDance CEO Rubo Liang, resigned.
“I was deeply disappointed when I was notified of the situation… and I’m sure you feel the same,” Liang wrote in an internal email shared with Forbes. “The public trust that we have spent huge efforts building is going to be significantly undermined by the misconduct of a few individuals. … I believe this situation will serve as a lesson to us all.”
That is to say, it’s unfortunate, but true, that tech companies have a bit of a history of attacking critical journalists, and abusing their own access to data to do so. It’s very, very bad, and it should not be allowed, but (once again), it’s not unique to TikTok, nor will any solution focused solely on TikTok do anything to “solve” this issue.
It is certainly yet another frightening example, though, and it remains ridiculous that this is how any company responds to a little critical press coverage. Tech execs need to realize that the press covers them critically. It’s how things work.
Emily Baker-White has quite the story over at Forbes, revealing how ByteDance, the Chinese company that owns TikTok, apparently planned to have its “Internal Audit and Risk Control” department spy on the location of some American citizens:
The team primarily conducts investigations into potential misconduct by current and former ByteDance employees. But in at least two cases, the Internal Audit team also planned to collect TikTok data about the location of a U.S. citizen who had never had an employment relationship with the company, the materials show. It is unclear from the materials whether data about these Americans was actually collected; however, the plan was for a Beijing-based ByteDance team to obtain location data from U.S. users’ devices.
[….]
But the material reviewed by Forbes indicates that ByteDance’s Internal Audit team was planning to use this location information to surveil individual American citizens, not to target ads or any of these other purposes. Forbes is not disclosing the nature and purpose of the planned surveillance referenced in the materials in order to protect sources. TikTok and ByteDance did not answer questions about whether Internal Audit has specifically targeted any members of the U.S. government, activists, public figures or journalists.
Given the near non-stop moral panics about TikTok from the past few years, I am absolutely sure that this will be used (yet again) to argue that TikTok is somehow uniquely problematic, when the reality (yet again) is that what it’s doing is really no different than what a ton of American internet companies already do and have done in the past. Baker-White, who is one of the best reporters on this beat, makes that clear in her reporting:
ByteDance is not the first tech giant to have considered using an app to monitor specific U.S. users. In 2017, the New York Times reported that Uber had identified various local politicians and regulators and served them a separate, misleading version of the Uber app to avoid regulatory penalties. At the time, Uber acknowledged that it had run the program, called “greyball,” but said it was used to deny ride requests to “opponents who collude with officials on secret ‘stings’ meant to entrap drivers,” among other groups.
[….]
Both Uber and Facebook also reportedly tracked the location of journalists reporting on their apps. A 2015 investigation by the Electronic Privacy Information Center found that Uber had monitored the location of journalists covering the company. Uber did not specifically respond to this claim. The 2021 book An Ugly Truth alleges that Facebook did the same thing, in an effort to identify the journalists’ sources. Facebook did not respond directly to the assertions in the book, but a spokesperson told the San Jose Mercury News in 2018 that, like other companies, Facebook “routinely use[s] business records in workplace investigations.”
So, rather than making this a big thing about “oh no TikTok/China bad,” this should be a recognition that Congress should stop bickering about stupid stuff (including pushing silly performative legislation) and come up with an actual federal privacy law that gives the public greater ability to protect their own privacy from all sorts of companies.
But, of course, that would take competence, and probably wouldn’t be useful for grandstanding or headlines… so it’ll never happen.
Of course, there are questions about what this means regarding TikTok’s widely discussed plans to separate US user data from ByteDance’s prying eyes. I thought Oracle was supposed to protect us from all this, right? Right?