Mike Masnick’s Techdirt Profile


About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of the Copia Institute and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick




Posted on Techdirt - 15 December 2020 @ 9:33am

FTC Misses Opportunity To Understand Social Media; Instead Goes For Weird Fishing Expedition Against Odd Grouping Of Companies

from the this-could-have-been-helpful dept

On Monday, the FTC announced that it was issuing what are known as 6(b) orders to nine social media and video streaming companies, demanding a fairly massive amount of information regarding their data collection and usage policies, as well as their advertising practices. To me, this is a huge missed opportunity. If the FTC is truly trying to gain a better understanding of data collection, privacy, and advertising practices, perhaps to better inform Congress on how to, say, pass truly comprehensive (and useful?!?) privacy legislation, then there are ways to do that. But this... is not that. This looks like a weird fishing expedition for a ton of unrelated information, from an odd selection of nine companies, many of which are in very different businesses from the others. It leaves me quite perplexed.

First, let's look at the odd selection of companies. The letters are going to:

  • Amazon (apparently including Twitch)
  • Bytedance (TikTok)
  • Discord
  • Facebook
  • Reddit
  • Snap
  • Twitter
  • WhatsApp (owned by Facebook)
  • YouTube
Okay, so they've definitely focused on many of the big players, but they've also left out a ton. Where's LinkedIn? Or GitHub? Or WeChat? Or Pinterest? Or Quora? They list Facebook and WhatsApp... but not Instagram? Where's Zoom? Now, it's true that sometimes the FTC will randomly sample a bunch of companies in a particular industry to get a look at certain practices -- but for that to make sense, you want to sample from a set of similarly situated companies. This is... not that.

For the smaller companies on the list, such as Reddit and Discord, the FTC demanding they file a ton of paperwork in a very short time frame is going to mean a tremendous waste of time.

The second concern is the broad nature of the requests. The "sample order" is massive. There are 53 separate requests, many with multiple sub-parts. They're not just asking for specific information, but rather going on what appears to be an incredibly broad fishing expedition for information about a wide variety of practices at all of these companies -- including broad demands for future strategies and plans. For example, beyond just information on the number of users, it demands all documents relating to "business strategies or plans," "research and development efforts," "strategies or plans to reduce costs, improve products or services..." It also seems to be demanding all "presentations to management committees, executive committees, and boards of directors."

That feels like a fishing expedition, rather than an attempt to actually understand data collection and usage practices.

There are categories of information included here that I think it would be useful for the FTC to understand. But there's just so much information requested that it seems likely to bury the useful information.

The one FTC Commissioner who dissented from this effort, Noah Joshua Phillips, raises important questions in his dissent:

Effective 6(b) orders look carefully at business practices in which companies engage in a manner designed to elicit information, understand it, and then present it to the public in a way that is usable and can form a basis for sound public policy.

The first step is to select a group of recipients that will permit such examination, usually a group of firms engaged in conduct that can be compared. But the logic behind the choice of recipients here is not clear at all. The 6(b) orders target nine entities: Facebook, WhatsApp, Snap, Twitter, YouTube, ByteDance, Twitch, Reddit, and Discord. These are different companies, some of which have strikingly different business models. And the orders omit other companies engaged in business practices similar to recipients, for example, Apple, Gab, GroupMe, LinkedIn, Parler, Rumble, and Tumblr, not to mention other firms the data practices of which have drawn significant government concern, like WeChat. The only plausible benefit to drawing the lines the Commission has is targeting a number of high profile companies and, by limiting the number to nine, avoiding the review process required under the Paperwork Reduction Act, which is not triggered if fewer than ten entities are subject to requests.

Phillips calls out the same broad demands I raised above regarding business plans, R&D and presentations, noting:

Such a request would be suited to an antitrust investigation. But as part of an inquiry ostensibly aimed at consumer privacy practices, it amounts to invasive government overreach. And that is just one of the order’s 50-plus specifications.

And, finally, he highlights how this effort is just demanding way too much information to be of use for a comprehensive policy recommendation:

The biggest problem is that today’s 6(b) orders simply cover too many topics to make them likely to result in the production of comparable, usable information—yet another feature proper oversight and public comment could have flagged. Rather than a carefully calibrated set of specifications designed to elicit information that the agency could digest and analyze as a basis for informing itself, Congress, stakeholders, and the public, these 6(b) orders instead are sprawling and scattershot. Their over 50 specifications, most with numerous and detailed subparts, address topics including, but not limited to: advertising (reach, revenue, costs, and number and type); consumer data (collection, use, storage, disclosure, and deletion); as noted above, all strategic, financial, and research plans; algorithms and data analytics; user engagement and content moderation; demographic information; relationships with other services; and children and teens (policies, practices, and procedures).

Recipients of 6(b) orders typically negotiate to limit their productions, to tailor them in light of their specific business models and business practices. Perhaps the Commission will push back on attempts to do so, devoting additional lawyers to litigating the orders and having a federal judge oversee them, rather than OIRA. Or negotiation may reduce the burdens. But if that happens, each recipient will be responding to a different set of negotiated specifications. That certain of the companies in question have very different business models makes this even more likely. The end result of that is, say, the agency learning a lot about one recipient’s advertising practices, but not as much about its algorithms. For another recipient, the agency might receive information about privacy practices but very little about its plans to expand. Each of the nine recipients will produce differing, if any, amounts of information to each of the 50-plus specifications.

I actually think it would be a good thing for the FTC to better understand how these companies work and their practices. I think it could be useful for them to gain such an understanding, and then make recommendations on a comprehensive federal privacy law. But I don't see how this fishing expedition does any of that. Instead, it just asks for basically everything and the kitchen sink from a somewhat random selection of companies, some of whom will have difficulty producing all of this information.


Posted on Techdirt - 14 December 2020 @ 10:44am

District Court Rejects CDT's Challenge Of Trump's Ridiculous Executive Order On Section 230

from the no-standing dept

Back in May, you may recall, Donald Trump issued his silly executive order on Section 230 in response to Twitter adding a couple fact checks to blatant conspiracy theory nonsense that Trump was posting. A week later, the Center for Democracy and Technology (CDT) sued over the executive order, arguing that it was unconstitutional, and clearly retaliatory against Twitter.

When CDT filed the lawsuit, I noted that the big question would be whether or not CDT could show standing to challenge the order, as it would be hard to prove that the order impacted CDT directly. CDT argued that because the executive order would divert its attention and resources away from other, more important, fights regarding free speech online and government surveillance, it injured the organization.

On Friday, a judge agreed with my initial gut reaction and said that CDT failed to show standing. Basically, since the order only directed the government to do a bunch of stupid things, it didn't really impact CDT.

But Order 13,925 is most notable at this point for what it does not do. It imposes no obligation on CDT (or any other private party), but it merely directs government officials to take preliminary steps towards possible lawmaking. CDT’s claimed injury is not concrete or imminent and is thus insufficient to establish Article III standing. Even if CDT managed to clear the standing hurdle, it faces redressability and ripeness problems too. The Court will therefore dismiss this case for lack of jurisdiction.

The claim that this silly waste of time diverted resources from more serious issues doesn't impress the court:

If an organization alleges “only impairment of its advocacy,” that “will not suffice” to show standing. Turlock, 786 F.3d at 24; see also Food & Water Watch, 808 F.3d at 919 (“Our precedent makes clear that an organization’s use of resources for litigation, investigation in anticipation of litigation, or advocacy is not sufficient to give rise to an Article III injury.”). “This is true whether the advocacy takes place through litigation or administrative proceedings.” Turlock, 786 F.3d at 24. More, “an organization does not suffer an injury in fact where it expends resources to educate its members and others unless doing so subjects the organization to operational costs beyond those normally expended.” Food & Water Watch, 808 F.3d at 920 (cleaned up).

Somewhat ridiculously, though, the judge, Trevor McFadden (appointed by Donald Trump), throws in an incredibly silly line, claiming that CDT should be applauding Donald Trump's executive order, which he suggests (laughably) is about protecting free speech online.

CDT has not met its burden to show an injury to its interests. To begin, there does not appear to be a “direct conflict” between Order 13,925 and CDT’s stated mission. The Order expresses “the policy of the United States to foster clear ground rules promoting free and open debate on the internet.” ... CDT asserts a similar mission—to “advocat[e] in favor of First Amendment protection for speech on the Internet.” ... One would think that CDT would applaud the President’s desire to prevent online censorship. But no matter. The Court will take CDT at its word and assume that Order 13,925 directly conflicts with its interests. ... It still has not established an Article III injury.

That seems quite silly. Trump's exec order may have claimed to be promoting free and open debate on the internet, but the whole point of it was to stifle speech online, and that's what CDT was pointing out. Still, the standing point is a big one, and CDT can't jump over that hurdle:

CDT has not alleged that Order 13,925 has “perceptibly impaired” its “ability to provide services.” Turlock, 786 F.3d at 24 (cleaned up). It claims that because of the Order it will have to “devote substantial resources to”: “participating in the planned FCC rulemaking proceeding,” “monitoring federal agencies’ reports,” “tracking any FTC action,” “participating in any proceedings that the Commission institutes,” and “engaging with federal and state policymakers.”...

This is plainly deficient. Circuit precedent is “clear that an organization’s use of resources for . . . advocacy is not sufficient to give rise to an Article III injury,” Food & Water Watch, 808 F.3d at 919, “whether the advocacy takes place through litigation or administrative proceedings,” Turlock, 786 F.3d at 24. CDT’s alleged injury—resources spent monitoring federal agencies, participating in their proceedings, and working with lawmakers—is one to its advocacy work, which is not a cognizable injury. ... In other words, CDT has shown that it is engaging in business as usual, not that Order 13,925 “causes an inhibition of [its] daily operations.” ...

All in all this is disappointing, but not unexpected. In the meantime, the executive order has already created its own mess in the form of the NTIA petition to the FCC to reinterpret Section 230, which the FCC, led by total hypocrite Ajit Pai, has agreed to move forward with.

CDT may not have had standing to challenge the bogus order, but the order has still created a huge mess for the open internet. It was the kind of mess that principled people could have stopped much earlier, but they all went along with it, either because they're too clueless to understand Section 230 or they're too afraid of Donald Trump pointing his angry temper tantrums in their direction. One hopes that the issue will die with the new administration, but with recent moves like appointing the author of the NTIA petition to the FCC, and some other rumors -- combined with Biden's top tech advisor pushing to ditch 230 entirely -- the trail of destruction this executive order is causing isn't likely to end any time soon.


Posted on Techdirt - 14 December 2020 @ 9:31am

USA Today Publishes Yet Another Bogus OpEd Against 230, Completely Misrepresents The Law

from the aren't-you-tired-of-this? dept

Another day, another op-ed that totally misrepresents Section 230. This one comes from USA Today, and is written by faux-conservative Rachel Bovard, who is doing this on purpose. Sometimes we see op-eds where it's clear the author is unfamiliar with how Section 230 works. Other times they are deliberately misrepresenting it. Bovard is in the latter category. She works for an organization, with dark money funding, that pretends to be for "transparency" about the tech industry -- which is hilarious since that organization's own funding is kept secret. The only known funding for that organization comes from Oracle, a company that has made it clear it wants to do away with Section 230 (despite the fact that it wants people to use its cloud services). Bovard has had many, many experts in Section 230 explain to her why she's misrepresenting the law. And she has never once changed her arguments, nor admitted to being wrong. She just keeps repeating the bullshit.

I get it. That's her job. Everybody's gotta make a buck, and apparently this is the best she can do. However, why is USA Today sullying its own reputation by allowing her to misrepresent the law on its pages? Let's go through some of the misrepresentations.

Though Section 230 protects more than just Google, Facebook, and Twitter, the giant tech platforms have benefited substantially from the privilege — so much that Section 230 can be characterized as a giant government subsidy to the world’s biggest companies.

At least she admits that it protects more than just "big tech" (in the past she has pretended otherwise), but it's still wrong to say that only those companies have benefited, or that it "can be characterized as a giant government subsidy to the world’s biggest companies." Nothing could be further from the truth, on multiple levels. The big companies could deal with the legal liability of not having 230. This is why Facebook has been fine with undermining it over the past few years. The senior management team there long ago made the calculation that they'd come out of it fine. Their competitors would be harmed.

It's everyone else -- including you and me -- who cannot handle a world without 230. Indeed, empirical research has shown that 230 increases competition in internet companies by encouraging investment to go up against the internet giants.

But, more importantly, Section 230 has never been about "protecting" the big companies -- it has always been about enabling you and me to speak. Without Section 230 few websites would willingly host just anyone's speech. Section 230 opened up the possibility for more people to be able to communicate and speak online. And it protects all of us as well. People like Rachel like to forget that the key part of 230 says:

No provider or user of an interactive computer service shall be treated as the publisher or speaker...

That's what protects you and me when we retweet someone. Or when we forward an email. To say that it disproportionately benefits large companies is simply wrong.

But it's even worse to say that it's a "subsidy." How is it possibly a subsidy, other than if you declare it a subsidy not to have to waste time fighting off mistargeted, vexatious lawsuits? You know, the kind of vexatious lawsuits that "conservatives" like Rachel Bovard used to pretend they were against, but are now for when they're against companies she dislikes. The entire setup of Section 230 is simple: lawsuits should be targeted at those who actually violate the law. It's a tool for avoiding frivolous lawsuits. That's not a "subsidy."

But the article gets worse:

It wasn’t always viewed this way. The law was enacted nearly 25 years ago as something akin to an exchange: Internet platforms would receive a liability shield so they could voluntarily screen out harmful content accessible to children, and in return they would provide a forum for “true diversity of political discourse” and “myriad avenues for intellectual activity.”

This is also a misrepresentation. A big one too. This is one that a lot of people focus in on, and if you're unaware of the history here, I can see how you might get this wrong. Rachel knows the history, so she's using this line to blatantly misrepresent reality. There was no exchange. It wasn't "you get this protection if you create a true diversity of political discourse." Indeed, the authors of the law have directly debunked this point. So I'm not sure why Bovard would repeat it other than that she assumes the readers of her op-ed can be easily misled.

As the authors of 230, Ron Wyden and Chris Cox, have explained, the purpose of properly applying liability to those actually violating the law was not to create platforms that themselves enabled a wide diversity of voices, but to enable every platform to moderate as it sees fit, so that each platform could take its own favored approach. The "diversity" would be in the different kinds of platforms that were enabled by it. Here's what Cox & Wyden said just recently in their FCC filing:

In our view as the law’s authors, this requires that government allow a thousand flowers to bloom—not that a single website has to represent every conceivable point of view. The reason that Section 230 does not require political neutrality, and was never intended to do so, is that it would enforce homogeneity: every website would have the same “neutral” point of view. This is the opposite of true diversity.

For example, 230 has enabled things like Parler to take a different approach to content moderation than Twitter. That's the diversity they were looking for -- not that Twitter (or Parler) have to host all voices. You'd think that Rachel would know this, seeing as she herself is active on Parler. Parler, for what it's worth, supports Section 230, knowing that it couldn't exist without it.

But, having established the false premise that 230 was a trade-off for "diverse" platforms, Bovard then argues that the big websites engage in discrimination (which they do not):

Critically, in protecting these companies from costly damages in lawsuits, Section 230 has also fueled the growth of the Big Tech platforms which now engage in viewpoint discrimination at an unprecedented scale and scope; international mega-corporations determining what news, information and perspectives Americans are allowed to read, hear and access.

Again, there remains no evidence to support this contention, but even if it was true, so what? Parler's own CEO gleefully shared with a reporter how he was sitting around banning "leftist" trolls who showed up on Parler. This is what Section 230 allows. It allows a website to moderate how they see fit, so that it can ban trolls of any political persuasion. Sites like that need to ban trolls because otherwise the experience sucks. But under Rachel's own (false) argument for how 230 is supposed to work, Parler wouldn't be allowed to ban leftist trolls either.

And, similarly, Facebook would no longer be able to give Trump fans more leeway to post disinformation. Or is that not the "viewpoint discrimination" Bovard is actually concerned about?

A handful of Big Tech companies are now controlling the flow of most information in a free society, and they are doing so aided and abetted by government policy. That these are merely private companies exercising their First Amendment rights is a reductive framing which ignores that they do so in a manner that is privileged — they are immune to liabilities to which other First Amendment actors like newspapers are subject — and also that these content moderation decisions occur at an extraordinary and unparalleled scale.

Nearly everything here is wrong as well. They are not "controlling the flow of information" any more than Fox News or OANN is. Is Rachel saying that Fox News and OANN are "controlling the flow of information"? People go to them for information. And people have lots of choices for where they get their information. None of that has anything to do with Section 230.

More importantly, they are not immune to liabilities that others like newspapers are subject to. First off, most newspaper content is consumed online these days, meaning that they have the exact same liability protections under 230 as any other website does. Second, Google, Twitter, and Facebook (just like newspapers) still retain liability for content they themselves create. Again, this is the same across the board. It's misleading to pretend that there's a real difference.

When Google decides to suppress or amplify content, it does so for 90% of the global marketplace. Twitter’s choices to cut off circulation of certain content — as they did when they banned circulation of a story critical of the Biden family, a month before the November election — very much shapes the national news narrative. Facebook, by its own admission, has the power to swing elections — which is troubling, as some of the platform’s “fact checkers” are partially bankrolled by a Chinese company.

Almost all of this is presented misleadingly. Google search may have 90% market share, but what news does she think it's "suppressing"? And is general search a proxy for news? No, it's not. The Facebook claim about "the power to swing elections" takes some marketing puffery by its ad sales team out of context, as does the scare-quote claim about fact checkers (Facebook partners with a whole bunch of different fact checking organizations, including one for which Rachel Bovard is a frequent author).

So it would be just as honest to say that Facebook's ability to fact check news is "troubling, as some of the platform's fact checkers are partially run by an organization willing to post blatant misinformation from serial fabulist Rachel Bovard." Can't have one without the other.

And, finally, let's get to the Twitter/Hunter Biden/NY Post story. We've discussed that here in the past, mainly to note that it seemed like a dumb decision, though there were existing policy reasons why it happened. But the most important thing is that it did not "suppress" that story. It did the opposite. The article got way more attention because of the hamfisted moderation issues.

Bovard is selling USA Today readers a load of misinformation. And she knows it.

That policy makers have a role here is obvious. While private companies have the right to set the rules for their own platforms and online communities, they do not have a right to do it with the privilege of Section 230 protections. And the more these companies engage in behavior that ranges away from the original goal of ensuring a “true diversity of political discourse” and toward gatekeeping independent thought in America, the more they prove themselves undeserving of special government treatment.

Again, this completely gets the point of 230 backwards. It was designed to allow platforms to discriminate as they see fit in order to create very differentiated communities.

The question at hand distills to this: Are we to allow the lords of Silicon Valley to determine the terms of free speech, free thought, and free behavior in America? Or will we, a fiercely independent people, speak through our representative self-government to strip them of a congressional privilege they no longer deserve? Trump has opened the door. It is up to Congress to walk through it.

This is just completely misleading. Right before this, she supports Donald Trump's and various Republican elected officials' proposals for gutting Section 230, which would set up the government as internet speech police, determining how social media websites can and cannot moderate their content. That's a true threat to free speech online. Letting private companies moderate as they see fit is not a threat to free speech. At all.

Stripping websites of Section 230 will not give Rachel the world she seeks. It will lock in those large companies, who will retain their 1st Amendment right to not associate with any speech they don't want to associate with. What it will do is shut down many other spaces online -- spaces like Parler, where Bovard and friends like to gather to lie to each other. Removing Section 230 is what would actually do significant harm to the diversity of speech online, by forcing it only onto platforms that can handle the liability.

Rachel Bovard has a job to do: blatantly misrepresent Section 230 to attack Google. I get it. USA Today doesn't need to help.


Posted on Techdirt - 10 December 2020 @ 3:35pm

Tillis Releases Details Of His Felony Streaming Bill; A Weird Gift To Hollywood At The Expense Of Taxpayers

from the and-why-through-omnibus dept

Earlier today, we wrote about reports detailing the latest attempt to push through a bill to make streaming copyright-covered works online a possible felony, this time being pushed by Senator Thom Tillis, who wanted to attach it to the federal spending omnibus bill. As we noted, Tillis was pushing back on some of the criticism, saying that the bill is very narrowly tailored and wouldn't be used to criminalize random people. Of course, the response to that is twofold: (1) if this is the case, why haven't you released the text and (2) why are you shoving it onto a must-pass funding bill without any of the normal debate and discussion?

This afternoon Tillis dealt with the first part of this by finally releasing the text of the bill. And he's somewhat correct in noting that the bill is narrowly tailored. That doesn't make it good or necessary. The key bit is this:

PROHIBITED ACT.—It shall be unlawful to willfully, and for purposes of commercial advantage or private financial gain, offer or provide to the public a digital transmission service that

‘‘(1) is primarily designed or provided for the purpose of publicly performing works protected under title 17 by means of a digital transmission without the authority of the copyright owner or the law;

‘‘(2) has no commercially significant purpose or use other than to publicly perform works protected under title 17 by means of a digital transmission without the authority of the copyright owner or the law; or

‘‘(3) is intentionally marketed by or at the direction of that person to promote its use in publicly performing works protected under title 17 by means of a digital transmission without the authority of the copyright owner or the law.

So, the argument is that the "narrow" tailoring here is such that it only applies to websites, not users, if that site is "primarily" engaged in streaming unlicensed copyright-covered works, has no significant purpose other than that, and intentionally markets itself as such.

And, to Tillis' credit, this is much more narrowly tailored than previous such bills. It still doesn't explain why the text is only just being released now or (more importantly) why this has to be added to a must-pass Christmas Tree government funding bill.

To some extent, the thinking behind this bill is that it's focused on a very specific set of circumstances. There have been websites out there that stream content they host, and those already faced felony charges for the hosting -- but this seems to extend that to sites that stream content hosted elsewhere. Of course, there is a much bigger question of why this is a criminal issue in the first place. It is yet another example of Hollywood trying to take what should be a civil issue -- where the movie studios and record labels have every right and ability to sue these companies in court -- and turn it into an issue that the US taxpayer now has to deal with. It's basically a giant subsidy to Hollywood, taking a private dispute and putting it on the public dime.

As Public Knowledge says in its response to the bill's release, "we do not see the need for further criminal penalties for copyright infringement." Indeed.

The end result is that this bill is not as horrific as past felony streaming bills, and is, in fact, narrowly tailored. However, that does not change the fact that moving copyright issues away from civil disputes handled by copyright holders and over to the federal government is something that we should not support. Indeed, it should be seen as somewhat odd that a Trump-supporting Republican, who claims to be for keeping government out of business, is directly subsidizing Hollywood by having the federal government and US taxpayers take over the studios' own civil legal disputes by turning them into criminal issues.

And, more importantly, none of this explains why the bill should be released at the last moment, and then dumped into the must-pass federal spending bill. It's a bad idea. If Tillis really thinks this bill is good and necessary, he should have to defend it as such through the regular process bills go through.


Posted on Free Speech - 10 December 2020 @ 1:48pm

As A Parting Shot, Tulsi Gabbard Teams Up With Paul Gosar To Introduce Yet Another Unconstitutional Attack On Section 230

from the tis-the-season dept

Back in October, Reps. Tulsi Gabbard (who is leaving Congress in a few weeks) and Paul Gosar (who has had six of his own siblings tell voters that their brother should not be in Congress) teamed up to introduce an incredibly stupid anti-Section 230 bill, which would take 230's liability protections away from any site that does basic data tracking or has an algorithmically generated feed.

Apparently that wasn't enough, because they've now teamed up to introduce a second anti-230 bill that is (would you believe it?) even more ridiculous. They're calling it the "Break Up Big Tech Act."

The Breakup Big Tech Act of 2020 would take away legal immunity from interactive computer service providers that engage in certain manipulative activities, including social media companies who act as publishers by moderating and censoring content. Specifically, the Breakup Big Tech Act of 2020 would remove legal immunity for providers that engage in the following activities:

  • Selling and displaying personalized as well as contextual advertising without user’s consent
  • Collecting data for commercial purposes other than the direct sale of the interactive computer service, i.e. turning the user into a commodity or otherwise monetizing the transmission of content
  • Acting as a marketplace in the digital space by facilitating the placement of items into the stream of commerce
  • Employing digital products and designs intended to engage and addict users to the service
  • Acting as a publisher by using algorithms to moderate or censor content without opt-in from users

Now there are certainly plenty of legitimate questions to be had about that list of activities, and whether or not they should be allowed, or how and if they should be regulated. Those should all be subject to some level of debate and discussion. But to say "just wipe out 230's liability protections" for any company that does any of those things... is legitimately crazy.

This bill is going nowhere, because this Congress is basically over, so I won't go point by point on how stupid a bill this is, but I'll just note that the last point punishes a company for making editorial choices, and the 1st Amendment would probably like to explain to Gabbard and Gosar just how incredibly unconstitutional that would be.

Gabbard still really seems to be smarting from the fact that her dumb lawsuit against Google was easily dismissed. But here's the key: the lawsuit was dismissed on 1st Amendment grounds, not because of 230. And changing 230 doesn't change the 1st Amendment (which she swore to protect and uphold, but apparently doesn't care about).


Posted on Techdirt - 10 December 2020 @ 9:37am

Not This Again: Senator Tillis Tries To Slide Dangerous Felony Streaming Bill Into Must Pass Government Funding Bill

from the guys,-we've-done-this-before... dept

We've documented that Senator Thom Tillis is working on a massive copyright reform bill for which he's asked stakeholders for input (we provided some). He's expected to unveil that bill next week (which seems like a suspiciously short turnaround from asking for ideas to actually releasing a bill). Yet, apparently, he decided that he couldn't even wait for that process to play out before pushing forward the latest incarnation of the infamous felony streaming bill, which he wants to attach to the must-pass government spending omnibus bill (similar to how others are trying to add the CASE Act to that bill).

If you don't recall, felony streaming has been a goal of the recording industry for the past decade. Back in 2011, Senator Amy Klobuchar pushed the bill, and even Justin Bieber spoke out against it, noting that he built his entire fanbase by streaming his own covers of songs on YouTube. At the time, Bieber said that rather than locking up people for streaming copyright covered content online, we should lock up Senator Klobuchar for trying to pass such a bill.

Even if you don't trust Justin Bieber's legal analysis of the bill, it might help to read the analysis done by Harvard law professor Jonathan Zittrain, who highlighted just how dangerous a felony streaming law would be -- likely turning millions of individuals into potential felons, should law enforcement suddenly decide to turn on them. The whole idea of making streaming copyright covered works a felony is ridiculous. As it stands now, it can be a misdemeanor, and even that is crazy. Copyright should be a civil issue, not a criminal one. The standards to make it criminal are insanely low -- such that tons of people could face criminal liability for doing things that seem perfectly normal. The threat to free speech (which is the key thing we raised in our comments on Tillis' larger copyright reform) should not be ignored:

“A felony streaming bill would likely be a chill on expression,” said Katharine Trendacosta, associate director of policy and activism with the Electronic Frontier Foundation. “We already see that it’s hard enough in just civil copyright and the DMCA for people to feel comfortable asserting their rights. The chance of a felony would impact both expression and innovation.”

Of course, as the American Prospect article notes, it's not at all difficult to understand why Tillis is trying to shove such a dangerous, anti-free speech bill through an omnibus spending bill, rather than having to debate and defend it through normal process. Because this:

Tillis, the chairman of the Intellectual Property Subcommittee, was recently re-elected for another six-year term by a margin of less than 2% over his Democratic opponent. In the final stretch of his campaign, Tillis received a surge of campaign contributions from PACs affiliated with entertainment companies and trade groups that lobby Congress for aggressive copyright enforcement against internet users, including prison time for unauthorized streaming.

You don't say. How odd. Or, rather, how totally expected, and totally corrupt.

In the third and fourth quarters of 2020, Tillis’ campaign and leadership PAC received donations from PACs affiliated with the Motion Picture Association, Sony Pictures, ASCAP, Universal Music Group, Comcast & NBCUniversal, The Internet and Television Association, Salem Media Group, Warner Music, and others in the entertainment and cable industry that seek to suppress the unauthorized sharing of content. Many other entertainment industry PACs gave Tillis contributions earlier in the 2019-20 cycle, totaling well over $100,000, according to Federal Election Commission records. Executives of Fox Corporation, Sony Entertainment, Charter Communications, and CBS also made large donations to Tillis in the third quarter of this year.

After the Prospect article linked above and quoted here began to get attention, Tillis took to Twitter to push back on it, claiming it's inaccurate, and that his (still unpublished) proposal is "narrowly tailored" such that the DOJ can only use it to "prosecute commercial criminal organizations." Which... is the same argument that was made a decade ago with Klobuchar's similar bill. But it ignores that the standards for what makes a "commercial criminal organization" regarding copyright are insanely low. Under current law, it means that you gain some sort of "commercial advantage or financial gain" from reproducing or distributing (or, in this case, streaming) at least 10 works "with a retail value of more than $2,500."

Assuming this definition is then applied to streaming as well, all it really means is that if a streamer uses 10 copyrighted works within a 180-day period, and gains some sort of financial benefit from doing so, they can now be considered a "commercial criminal organization." That's... a ton of Twitch and YouTube streamers.

And, either way, if the bill is really nothing to be concerned about, why hasn't Tillis released the text and why is he pushing it into this must pass bill?


Posted on Techdirt - 9 December 2020 @ 1:52pm

Open Season: FTC & 48 Attorneys General File Separate Antitrust Lawsuits Against Facebook

from the well-this-will-be-interesting dept

Everyone knew that this was coming eventually, but on Wednesday two separate antitrust lawsuits were filed against Facebook. First, the FTC filed a complaint, followed by 48 Attorneys General, representing 46 states, the District of Columbia and Guam (Guam!), similarly arguing that Facebook's acquisitions of Instagram and Whatsapp were an antitrust violation. I will say, upfront, that both cases appear to have a lot more meat to them than the DOJ's astoundingly weak case against Google. And yet... I'm still somewhat surprised at some of the claims made in both lawsuits that seem somewhat disconnected from reality.

As a quick summary, though, I'd say that both lawsuits make somewhat weak claims regarding acquisitions (mainly, but not limited to, Instagram and WhatsApp). Both lawsuits do make much stronger claims about Facebook abusing its API to try to limit competition (though they do so without the context of the privacy questions that may have driven Facebook to close off more access to its API). I think the API questions are the most interesting to explore, and the ones where Facebook may face the most trouble in court.

Let's go through each one separately. The key to the FTC's case is twofold: (1) that Facebook buys up whatever upstart competitors it can find that represent a competitive threat (e.g., Instagram & Whatsapp) and (2) that it puts in place "restrictive policies" that hinder the upstart competitors it cannot acquire. I'm not sure that either claim is fully supported by the facts, but perhaps there's more evidence to support them. Of course, a key aspect in any antitrust case is proving (1) that there's a market in which the company is a monopoly and (2) that the company leverages that monopoly in a manner that is abusive to competition. The FTC case argues that the "market" here is "personal social networking," which seems like a fairly narrowly defined market:

Facebook holds monopoly power in the market for personal social networking services (“personal social networking” or “personal social networking services”) in the United States, which it enjoys primarily through its control of the largest and most profitable social network in the world, known internally at Facebook as “Facebook Blue,” and to much of the world simply as “Facebook.”

In the United States, Facebook Blue has more than [REDACTED] daily users and more than [REDACTED] monthly users. No other social network of comparable scale exists in the United States.

Of course no one denies that Facebook is the largest, but does that automatically make it a monopolist? In the space of "personal social networking," you could easily argue that there are a number of significantly sized competitors, including Twitter, Snap, YouTube, and TikTok (which, notably, is a relatively new entrant that was able to build up a large audience, despite the presence of Facebook). The complaint says that buying Instagram and WhatsApp were attempts to retain this monopoly position:

Since toppling early rival Myspace and achieving monopoly power, Facebook has turned to playing defense through anticompetitive means. After identifying two significant competitive threats to its dominant position—Instagram and WhatsApp—Facebook moved to squelch those threats by buying the companies, reflecting CEO Mark Zuckerberg’s view, expressed in a 2008 email, that “it is better to buy than compete.” To further entrench its position, Facebook has also imposed anticompetitive conditions that restricted access to its valuable platform—conditions that Facebook personnel recognized as “anti user[,]” “hypocritical” in light of Facebook’s purported mission of enabling sharing, and a signal that “we’re scared that we can’t compete on our own merits.”

I can see Instagram fitting into that model, but am a little more perplexed by the inclusion of WhatsApp, which is more about messaging than a traditional "social network." It really feels like they're shoehorning WhatsApp into this bucket to make the argument seem stronger. The case even admits that WhatsApp is about messaging, not social networking, but then argues that Facebook feared that WhatsApp could eventually pivot to social media. Which, sure. But... it still seems like a stretch to argue that this automatically makes it anticompetitive to buy.

If anything, this lawsuit suggests that Facebook's management has both read and internalized Clayton Christensen's "The Innovator's Dilemma" and recognized that competition could come from orthogonal directions.

As for Instagram, the company was quite small -- famously only 13 employees -- when Facebook acquired it. It is true that it was building up some level of success, but it seems to suggest some level of revisionist thinking to say that it was somehow going to take down Facebook, and that acquiring it somehow changed Facebook's competitive landscape in any meaningful way.

The other issue is that both the Instagram and WhatsApp acquisitions were reviewed at the time -- and approved by the FTC and the DOJ. If anything, this all feels a bit like revisionist history to go back many years later and say "well, these were obviously anti-competitive" when they certainly didn't appear to be at the time of acquisition. And, yes, the lawsuits have quotes from people inside Facebook noting that Instagram and WhatsApp could potentially represent competitive threats, but merely buying up some potential competitors doesn't automatically mean that it's anti-competitive behavior.

The FTC's case gets a lot stronger when it talks about certain practices the company put in place to discourage and stymie other competitors:

In order to communicate with Facebook (i.e., send data to Facebook Blue, or retrieve data from Facebook Blue) third-party apps must use Facebook APIs. For many years— and continuously until a recent suspension under the glare of international antitrust and regulatory scrutiny—Facebook has made key APIs available to third-party apps only on the condition that they refrain from providing the same core functions that Facebook offers, including through Facebook Blue and Facebook Messenger, and from connecting with or promoting other social networks.

As someone who believes strongly that, for competitive reasons, Facebook should open up its APIs, this argument seems more compelling to me. There's also a much stronger case to be made here that selectively opening up its API based on competitive rationale is more likely to have resulted in consumer harm.

By suppressing, neutralizing, and deterring the emergence and growth of personal social networking rivals, Facebook also suppresses meaningful competition for the sale of advertising. Personal social networking providers typically monetize through the sale of advertising; thus, more competition in personal social networking is also likely to mean more competition in the provision of advertising. By monopolizing personal social networking, Facebook thereby also deprives advertisers of the benefits of competition, such as lower advertising prices and increased choice, quality, and innovation related to advertising.

It feels like that argument would need a lot more evidence. While I agree that the API arguments are the strongest, the entire complaint ignores the fact that there are competitors, and that some of those competitors have sprung up in recent years, despite what it claims are recent barriers to entry. Snap and TikTok both became prominent well after it is argued that Facebook was hellbent on suppressing competition. And both are doing decently well. The complaint doesn't even mention TikTok, and only mentions Snap to say that Facebook failed in its attempt to purchase the company (which seems to cut against the argument that Facebook was just buying up any competitors). There are some other mentions of "Snap" but, amusingly, that's only because it was the internal code-name for Facebook's failed attempt to compete with Instagram, prior to purchasing Instagram.

Based on all of that, I think the claims about buying up Instagram and WhatsApp are pretty weak -- but there is an interesting area to explore regarding whether or not Facebook used its API access to suppress competition.

Specifically, between 2011 and 2018, Facebook made Facebook Platform, including certain commercially significant APIs, available to developers only on the condition that their apps neither competed with Facebook (including, at relevant times, by “replicating core functionality” of Facebook Blue or Facebook Messenger), nor promoted competitors. Facebook punished apps that violated these conditions by cutting off their use of commercially significant APIs, hindering their ability to develop into stronger competitive threats to Facebook Blue.

And:

On July 27, 2011, Facebook introduced a new policy that “Apps on Facebook may not integrate, link to, promote, distribute, or redirect to any app on any other competing social platform.”

This policy was intended to harm the prospects for—and deter the emergence of— competition, including personal social networking competitors. Indeed, the immediate impetus for the policy was Google’s launch of the Google+ personal social network. In a July 27, 2011, email, a Facebook manager explained: “[W]e debated this one a lot. In the absence of knowing what and how google was going to launch, it was hard to get very specific, so we tended towards something broad with the option to tighten up as approach and magnitude of the threat became clear.” Later that same day, another Facebook employee protested the anticompetitive move to colleagues: “I think its [sic] both anti user and sends a message to the world (and probably more importantly to our employees) that we’re scared that we can’t compete on our own merits.”

To me, this is the strongest argument in the entire filing, though it doesn't seem to be the one most people are focusing on.

One other thing highlighted in the complaint -- which we've discussed here before -- is how Facebook bought an analytics company called Onavo, which it passed off as a VPN service, but which it actually used as a tool to snoop on what apps people were using, in order to quickly identify up-and-coming competitive threats:

By acquiring Onavo, Facebook obtained control of data that it used to track the growth and popularity of other apps, with an eye towards identifying competitive threats for acquisition or for targeting under its anticompetitive platform policies. As a December 2013 internal slide deck noted: “With our acquisition of Onavo, we now have insight into the most popular apps. We should use that to also help us make strategic acquisitions.” Facebook also used Onavo data to generate internal “Early Bird” reports for Facebook executives, which focused on “apps that are gaining prominence in the mobile eco-system in a rate or manner which makes them stand out.” Facebook shut down Onavo in 2019 following public scrutiny; however, it continues to track and evaluate potential competitive threats using other data.

Of course, looking for useful data to identify competitors isn't, by itself, anti-competitive. It's kind of expected. I do think there's a potential "false advertising" claim that could be made about Facebook claiming Onavo was a VPN or was somehow designed to protect user privacy -- but that's different from an antitrust claim.

Moving on to the AGs' case. It replicates many of the same arguments, though in many more words. Once again, it focuses on Instagram and WhatsApp, but not in very convincing ways. The issue of Facebook's API policies is more compelling:

As part of its strategy to thwart competitive threats, Facebook pursued an open first–closed later approach in which it first opened its platform to developers so that Facebook’s user base would grow and users would engage more deeply on Facebook by using third-party services. This strategy significantly boosted engagement on Facebook, enhanced the data it collected, and made the company’s advertising business even more profitable. Later, however, when some of those third-party services appeared to present competitive threats to Facebook’s monopoly, Facebook changed its practices and policies to close the application programming interfaces (“APIs”) on which those services relied, and it took additional actions to degrade and suppress the quality of their interconnections with Facebook.

Again, I am in agreement that this is the behavior by Facebook that is most concerning -- but it also leaves out some important context. Namely, that Facebook was pressured to close down access to its APIs because many people (including some of the AGs who are part of this lawsuit) were making noises about the privacy concerns related to Facebook's more open API access (as noted in the Cambridge Analytica scandal). One of the more frustrating things to me about all of this is how few people are willing to grapple with the tradeoffs here. If you complain about privacy issues, that pushes Facebook to close off access to its APIs, which gives it more anticompetitive power. But if you demand more open APIs, then you have more privacy questions.

The AGs' case, though, does make the (compelling) argument that Facebook used its more open API setup to build growth, and then slowly cut back on it to block competitors:

One critical reason for Facebook’s accelerated growth trajectory was a series of initiatives that opened Facebook up to mutually beneficial partnerships with third parties.

In 2007, Facebook launched Facebook Platform—an innovative tool that set it apart from other firms. Facebook Platform had a set of open APIs—mechanisms for sharing data between independent services—that enabled developers to build applications that interoperated with the Facebook social networking site. Developers scrambled to create applications on the Facebook Platform, enhancing Facebook Blue’s functionality, driving more users to the Facebook site, and increasing the engagement of existing users. Facebook Platform created a symbiotic relationship between Facebook and developers that yielded significant value for both.

In 2008, Facebook introduced Facebook Connect—a tool that facilitated still greater interconnection with Facebook Blue. Through Facebook Connect, users could sign in to third-party websites using their Facebook credentials. By 2011, Facebook Connect had become one of the most popular ways to sign in to services across the internet as users took advantage of the efficiency afforded to them. As was the case with the Facebook Platform, third-party sites and Facebook itself found the relationship fostered by Facebook Connect to be a mutually valuable one. Facebook provided third parties with information about users and their friends and drove traffic to third-party sites by making it easier for users to sign in. In return, Facebook captured valuable data about users’ off-Facebook activity to enhance its social graph and ability to target advertising.

In April 2010, Facebook invited even more interaction with third-party websites and apps. It launched the Open Graph API, enabling those sites to add plug-ins, such as the Facebook “Like” button that allowed Facebook Blue users to become “fans” of the third-party site. The sites were highly motivated to install the Like button and encourage its use, as a “Like” would be shared on the user’s news feed and profile, thereby promoting the site to the user’s friends and family. One week after the introduction of Open Graph, 50,000 websites had installed Open Graph plug-ins. Those sites realized the immediate benefits of a massive new distribution channel, and Facebook’s growth increased accordingly.

But then when upstarts began using the openness to build up more competitive power, Facebook cut back on the access:

After the threat from Google+ had passed, and after years of promoting open access to Facebook Platform, Facebook increasingly turned to Platform as a tool to monitor, leverage, and harm (via rescinding API access) apps that Facebook viewed as actual or potential competitive threats.

In 2013, Facebook amended its Platform policy (described above) to forbid applications that “replicat[e] [Facebook’s] core functionality,” with no explanation as to what Facebook considered its core functionality, or how such policies would apply when Facebook expanded its functionality to a new area.

Facebook began to selectively enforce its policies to cut off API access to companies Facebook worried might one day threaten its monopoly. Facebook itself described its Platform as a “critical piece of infrastructure” for new apps being developed: this is particularly true for social apps which rely heavily on network effects. Facebook knew that an abrupt termination of established access to Facebook APIs could be devastating to an app—especially one still relatively new in the market. An app that suddenly lost access to Facebook’s APIs was hurt not only because its users would no longer be able to bring their friend list to the new app, but also because a sudden loss of functionality, which creates broken or buggy features, suggests to users that an app is unstable. Facebook’s actions therefore disincentivized developers from creating new features that might compete with Facebook: adding new social features to an existing app might come at the significant cost of access to Facebook’s APIs.

In 2014, Facebook announced significant changes to its Platform APIs with “Graph API 2.0” (also referred to internally as Platform 3.0). In connection with Graph API 2.0, Facebook required prior review of all requests to access many Platform APIs, including the Find Friends API, resulting in those APIs being cut off for third-party apps that previously had enjoyed such access on Platform. Under the new approach, developers could not access Facebook’s APIs unless they submitted an application and received approval, which Facebook refused to grant to apps it classified as competitors or potential competitors. That allowed Facebook to proactively and categorically ensure that no app that might constitute a competitive threat would get API access in the first place, sparing Facebook the need to withdraw access after-the-fact.

One Facebook employee described in January 2014 the jarring impact of enticing developers to interconnect and then suddenly revoking their access:

We sold developers a bill of goods around [Open Graph] 2 years ago and have been telling them ever since that one of the best things they could do is to a/b test and optimize the content and creative. Now that we have successes . . . in 2013, we’re talking about taking it away . . . . Even if we were to give them more traffic on home page in some other way, it still nullifies all of their work to integrate [Open Graph] for the last 2 years.

As a further part of its scheme to maintain its monopoly position, Facebook has used its control over Facebook Platform to degrade the functionality and distribution of potential rivals’ content when it perceived those firms as threats to Facebook’s monopoly power. This degradation suppressed the flow of user traffic to the rivals’ services, reducing overall output, and harming users in the process.

Again, this entire narrative leaves out some of the privacy questions raised by the open platform access -- but I do think that these are the strongest arguments in the complaint.

There are some mostly-redacted stories included in the complaint, arguing that certain companies, whose names are redacted, were directly harmed by Facebook's API policies. It would be interesting to see who these companies are, because I've seen some somewhat scammy companies make the argument that Facebook's API policies were designed to harm them, but then the details suggest that the reality was more that Facebook was trying to stop them from being scammy. The details matter here.

There are some examples with named companies, including Twitter's Vine, Path, and Circle. Some of those stories seem somewhat compelling, though I'm not sure they move the needle enough.

In the end, I think these complaints are stronger than the DOJ's complaint against Google. That's not to say they are particularly strong. I do think the most compelling arguments are entirely about Facebook's use of the API, though the company will have responses it can make about why it made those decisions (with a focus on protecting user privacy). Whether those defenses hold up will likely be a key part of this case. The arguments about Instagram and WhatsApp will get most of the public attention, and they're the arguments people most frequently make about Facebook's position, but they strike me as fairly weak. Both acquisitions were reviewed and approved at the time, and the presence of other competitors in the marketplace seems to undercut the arguments there.

The key question that I think the courts will look at is whether or not Facebook pulling back on API access trips an antitrust wire. Antitrust experts may have better suggestions for cases to look at, but the Aspen Skiing case appears to be roughly analogous -- in which the question was whether or not a dominant firm terminating an agreement to deal with a competitor was an antitrust violation. However, subsequent cases have shown that the Supreme Court no longer seems to support that notion. Verizon v. Trinko likely cut back the Aspen Skiing rules significantly -- and the courts would need to signal a pretty big shift to bring those rules back around for Facebook.

As for remedies, if Facebook were forced to spin off Instagram and WhatsApp, I honestly don't think it would have that big of an impact on Facebook itself. I'm not against that being the final decision, but I don't see how that would solve any of the issues at play. Either way, Facebook is going to be very busy in court for the next decade or so....

Read More | 11 Comments | Leave a Comment..

Posted on Techdirt - 9 December 2020 @ 9:32am

Biden's Top Tech Advisor Trots Out Dangerous Ideas For 'Reforming' Section 230

from the this-is-a-problem dept

It is now broadly recognized that Joe Biden doesn't like Section 230 and has repeatedly shown he doesn't understand what it does. Multiple people keep insisting to me, however, that once he becomes president, his actual tech policy experts will understand the law better, and move Biden away from his nonsensical claim that he wishes to "repeal" the law.

In a move that is not very encouraging, Biden's top tech policy advisor, Bruce Reed, and Common Sense Media's Jim Steyer have published a bizarre and misleading "but think of the children!" attack on Section 230 that misunderstands the law, misunderstands how it impacts kids, and suggests incredibly dangerous changes to Section 230. If these are the kinds of policy recommendations we should expect over the next four years, the need to defend Section 230 is going to remain pretty much the same as it's been over the last few years.

Let's break down the piece and its myriad problems.

Mark Zuckerberg makes no apology for being one of the least-responsible chief executives of our time. Yet at the risk of defending the indefensible, as Zuckerberg is wont to do, we must concede that given the way federal courts have interpreted telecommunications law, some of Facebook's highest crimes are now considered legal.

Uh, wait. No. There's a very sketchy sleight-of-word right here in the opening, claiming that "Facebook's highest crimes are now considered legal." That is wrong. Any law that Facebook violates, it is still held liable for. The point of Section 230 is that Facebook (and any website) should not be held liable for any laws that its users violate. Reed and Steyer seek to elide this very important distinction in a pure "blame the messenger" way.

It may not have been against the law to livestream the massacre of 51 people at mosques in Christchurch, New Zealand or the suicide of a 12-year-old girl in the state of Georgia. Courts have cleared the company of any legal responsibility for violent attacks spawned by Facebook accounts tied to Hamas. It's not illegal for Facebook posts to foment attacks on refugees in Europe or try to end democracy as we know it in America.

This is more of the same. The Hamas claim is particularly bogus. The lawsuit in that case involved plaintiffs who were harmed by Hamas... and who decided that the right legal remedy was to sue Facebook because some Hamas members used Facebook. There was no attempt to even show that the injuries the plaintiffs suffered had anything to do with Hamas using Facebook. The cases were tossed because Section 230 did exactly the right thing: noting that the legal liability should be on the parties actually responsible. We don't blame AT&T when a terrorist makes a phone call. We don't blame Ford because a terrorist drives a Ford car. We shouldn't blame Facebook just because a terrorist uses Facebook.

This is fairly basic stuff, and it is shameful for Reed and Steyer to misrepresent things in such a way that is designed to obfuscate the actual details of the legal issues at play, while purely pulling at heartstrings. But the heartstring-pulling was just beginning, because this whole piece shifts into the typical "but think of the children!" pandering quite quickly.

Since Section 230 of the 1996 Communications Decency Act was passed, it has been a get-out-of-jail-free card for companies like Facebook and executives like Zuckerberg. That 26-word provision hurts our kids and is doing possibly irreparable damage to our democracy. Unless we change it, the internet will become an even more dangerous place for young people, while Facebook and other tech platforms will reap ever-greater profits from the blanket immunity that their industry enjoys.

Of course, it hasn't been a get-out-of-jail-free card for any of those companies. The law has never barred federal criminal prosecutions, as federal crimes are exempt from the statute. Almost every Section 230 case has been about civil disputes. It's also shameful that Reed and Steyer seem to mix up the differences between civil and criminal law.

Also, I'd contest the argument that it's Section 230 that has made the internet a dangerous place for kids or democracy. Section 230 has enabled many, many forums and spaces for young people to congregate and communicate -- many of which have been incredibly important. It's where many LGBTQ+ kids have found like-minded people and discovered they're not alone. It's where kids who are interested in niche areas or specific communities have found others with similar interests. All of that is possible because of Section 230.

Yes, there is bullying online, and that's a problem, but Section 230 has also enabled tremendous variation and competition in how different websites respond to that, with many creating quite clever ideas in how to deal with the downsides of purely open communication. Changing Section 230 will likely remove that freedom of experimentation.

It wasn't supposed to be this way. According to former California Rep. Chris Cox, who wrote Section 230 with Oregon's Sen. Ron Wyden, "The original purpose of this law was to help clean up the internet, not to facilitate people doing bad things on the internet." In the 1990s, after a New York court ruled that the online service provider Prodigy could be held liable in the same way as a newspaper publisher because it had established standards for allowable content, Cox and Wyden wrote Section 230 to protect "Good Samaritan" companies like Prodigy that tried to do the right thing by removing content that violated their guidelines.

But through subsequent court rulings, the provision has turned into a bulletproof shield for social media platforms that do little or nothing to enforce established standards.

This is just flat-out wrong, and it's embarrassing that Reed and Steyer are repeating this out-and-out myth. You will find no sites out there, least of all Facebook (the main bogeyman named in this article), "that do little or nothing to enforce established standards." Facebook employs tens of thousands of content moderators, and has a truly elaborate system for reviewing and modifying its ever-changing standards, which it tries to enforce.

We can agree that the companies may fail to catch everything, but that's not because they're not trying. It's because it's impossible. That was the very basis of 230: recognizing that an open platform is literally impossible to fully police, and 230 would enable sites to try different systems for policing it. What Reed and Steyer are really saying is that they don't like how Facebook has chosen to police its platform. Which is a reasonable argument to make, but it's not because of 230. It seems to be because Steyer and Reed are ignorant of what Facebook has actually done.

Facebook and other platforms have saved countless billions thanks to this free pass. But kids and society are paying the price. Silicon Valley has succeeded in turning the internet into an online Wild West — nasty, brutal, and lawless — where the innocent are most at risk.

Bullshit. Again, Facebook employs tens of thousands of moderators and actually takes a fairly heavy hand in its moderation practices. To say that this is a "Wild West" is to express near total ignorance about how content moderation actually works at Facebook. Facebook spends more on moderation than Twitter makes in revenue. To say that it's "saving billions" thanks to this "free pass" is to basically say that you don't know what you're talking about.

The smartphone and the internet are revolutionary inventions, but in the absence of rules and responsibilities, they threaten the greatest invention of the modern world: a protected childhood.

This is "but think of the children" moral panicking. Yes, we should be concerned about how children use social media, but Facebook, like most other sites doesn't allow users to have accounts if they're under 13-years old, and the problem being discussed is not about 230, but rather about teaching children how to be more discerning digital citizens when they're online. And this is important, because it's a skill they'll need to learn. Trying to shield them from absolutely everything -- rather than giving them the skills to navigate it -- is a dangerous approach that will leave kids unprepared for life on the internet.

But Reed and Steyer are all in on the "think of the children" moral panic... so much so that they (and I only wish I were joking) compare children using social media... to child labor and child trafficking:

Since the 19th century, economic and technological progress enabled societies to ban child labor and child trafficking, eliminate deadly and debilitating childhood diseases, guarantee universal education and better safeguard young children from exposure to violence and other damaging behaviors. Technology has tremendous potential to continue that progress. But through shrewd use of the irresponsibility cloak of Section 230, some in Big Tech have turned the social media revolution into a decidedly mixed blessing.

Oh come on. Those things are not the same. This entire piece is a masterclass in extrapolating a few worst-case scenarios and insisting that they're happening much more frequently than they really are. Eventually the piece gets to its suggestion on "what to do about it." And the answer is... destroy Section 230 in a way that won't actually help.

But treating platforms as publishers doesn't undermine the First Amendment. On the contrary, publishers have flourished under the First Amendment. They have centuries of experience in moderating content, and the free press was doing just fine until Facebook came along.

That... completely misses the point. Publishers handle things because they review every bit of content that goes out in their publication. The reason 230 treats sites that host 3rd party content differently than publishers who are publishing their own content is because the two things are not the same. And if websites had to review every bit of user content, like publishers do, then... we'd have many fewer spaces online where people can communicate. It would stifle speech online massively.

The tech industry's right to do whatever it wants without consequence is its soft underbelly, not its secret sauce.

But it's NOT a "right to do whatever it wants without consequence." Not even remotely. The sites themselves cannot break the law. The sites have very, very strong motivations to moderate -- including pressure from their own users (because if they don't do the right thing, their users will go elsewhere), the press, and (especially) from advertisers. We've seen just in the past few months that advertisers pulling their ads from Facebook has been an effective tool in getting Facebook to rethink its policies.

The idea that because 230 is there, Facebook and other sites do nothing is a myth. It's a myth that Reed and Steyer are exploiting to make you think that you have to "save the children." It's bullshit and they should be ashamed to peddle myths. But they lean hard into these myths:

Instead of acknowledging Facebook's role in the 2016 election debacle, he slow-walked and covered it up. Instead of putting up real guardrails against hate speech, violence, and conspiracy videos, he has hired low-wage content moderators by the thousands as human crash dummies to monitor the flow. Without that all-purpose Section 230 shield, Facebook and other platforms would have to take responsibility for the havoc they unleash and learn to fix things, not just break them.

This is... not an accurate portrayal of anything. It's true that Zuckerberg was initially reluctant to believe that Facebook had a role in the 2016 election (and there are still legitimate questions as to how much of an impact Facebook actually had, or whether it was just a convenient scapegoat for a poorly-run Hillary Clinton campaign). But by 2017, Facebook had found religion and completely revamped its moderation processes regarding election content. Yes, it did hire thousands of content moderators. But it's bizarre that Reed and Steyer finally admit this way down in the article, after paragraphs upon paragraphs insisting that Facebook does no moderation, doesn't care, and doesn't need to do anything.

But more to the point, if they don't want Facebook to hire all those content moderators, but do want Facebook to stop all the bad stuff online... how the hell do they think Facebook can do that? Their answer amounts to "wave a magic wand": take away Facebook's 230 protections, as if that will magically solve things. It won't.

It would mean far more content being taken down, including content from marginalized voices. It would mean Facebook would likely have to hire many more of those content moderators to review much more content. And, most importantly, it would mean that no competitor could ever be built to compete with Facebook, because Facebook would be the only company that could afford to take on such compliance costs.

And, the article gets worse. Reed and Steyer point to FOSTA as an example of how to reform 230. Really.

So the simplest way to address unlimited liability is to start limiting it. In 2018, Congress took a small step in that direction by passing the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act. Those laws amended Section 230 to take away safe harbor protection from providers that knowingly facilitated sex trafficking.

Right, and what was the result? It certainly didn't do what the people promoting it expected. Craigslist shut down its personals section, clearing the field for Facebook to launch its own dating service. In other words, it gave more power to Facebook.

More importantly, it has been used to harm sex workers, putting many lives at risk, and to shut down places where adults could discuss sex, all while making it harder for police to find sex traffickers. The end result has actually been an increase, rather than a decrease, in ads for sex online.

In other words, citing FOSTA as a "good example" of how to amend Section 230 suggests whoever is citing it doesn't know what they're talking about.

Congress could continue to chip away by denying platform immunity for other specific wrongs like revenge porn. Better yet, it could make platform responsibility a prerequisite for any limits on liability. Boston University law professor Danielle Citron and Brookings Institution scholar Benjamin Wittes have proposed conditioning immunity on whether a platform has taken reasonable efforts to moderate content.

We've debunked this silly, silly proposal before. There are almost no sites that don't do moderation. They all have "taken reasonable efforts" to moderate, except for perhaps the most extreme. Yet this whole article was about Facebook and YouTube -- both of which could easily show that they've "taken reasonable efforts" to moderate content online.

So, if this is their suggestion... it would literally do nothing to help the "problems" they insisted were there for YouTube and Facebook. And, instead, what would happen is smaller sites would never get a chance to exist, because Facebook and YouTube would set the "standard" for how you deal with content moderation -- just like how the EU has now set YouTube's expensive ContentID as "the standard" for any site dealing with copyright-covered content.

So this proposal does nothing to change Facebook or YouTube's policies, but locks them in as the dominant players. How is that a good idea?

But Reed and Steyer suggest maybe going further:

Washington would be better off throwing out Section 230 and starting over. The Wild West wasn't tamed by hiring a sheriff and gathering a posse. The internet won't be either. It will take a sweeping change in ethics and culture, enforced by providers and regulators. Instead of defaulting to shield those who most profit, the United States should shield those most vulnerable to harm, starting with kids. The "polluter pays" principle that we use to mitigate environmental damage can help achieve the same in the online environment. Simply put, platforms should be held accountable for any content that generates revenue. If they sell ads that run alongside harmful content, they should be considered complicit in the harm. Likewise, if their algorithms promote harmful content, they should be held accountable for helping redress the harm. In the long run, the only real way to moderate content is to moderate the business model.

Um. That would kill the open internet. Completely. Dead. And it's a stupid fucking suggestion. The "pollution" they are discussing here is 1st Amendment protected speech. This is why thinking of it as analogous to pollution is so dangerous. They are advocating for government rules that will stifle free speech. Massively. And, again, the few companies that can do something are the biggest ones already. It would destroy smaller sites. And it would destroy the ability for you or me to talk online.

There's more in the article, but it's all bad. That this is coming from Biden's top tech advisor is downright scary. It is as destructive as it is ignorant.

99 Comments | Leave a Comment..

Posted on Techdirt - 8 December 2020 @ 3:01pm

Trump Makes It Official: He's Going To Pull Military Funding, Because Congress Won't Kill The Open Internet

from the really-now dept

There were some questions as to whether or not Trump would actually go through with his threat to veto the National Defense Authorization Act, which has been passed and signed into law every year for the past six decades, but it appears he will. The Office of Management and Budget (OMB) has officially notified Congress that Trump is vetoing the NDAA... because Congress refuses to kill off the open internet.

The letter it sent to Congress is... just completely disconnected from reality.

The Administration recognizes the importance of the National Defense Authorization Act (NDAA) to our national security. Unfortunately, this conference report fails to include critical national security measures, includes provisions that fail to respect our veterans and our military's history, and contradicts efforts by this Administration to put America first in our national security and foreign policy actions. Therefore, the Administration strongly opposes passage of the conference report to Accompany H.R. 6395.

There are three key complaints he raises in the letter: (1) the NDAA doesn't completely repeal Section 230 of the Communications Act (which has nothing to do with the military); (2) it allows for the renaming of bases that were named after the Confederacy; and (3) it limits his ability to scream "national emergency" and use those claims as a reason to steal money from the military to build his stupid wall (as he's been doing).

The 230 bit is particularly stupid:

Despite bipartisan calls for addressing Section 230 of the Communications Decency Act, this bill fails to make any meaningful changes to that provision.

Um, yes, because it's got literally nothing to do with the military or the purpose of the NDAA. There is no reason to include anything related to 230 in the NDAA and multiple elected officials have explained that to Trump. But he wants to throw one of his temper tantrums instead.

Section 230 facilitates the spread of disinformation online and is a serious threat to our national security and election integrity. It should be repealed.

So he's finally expressed some rationalization for how 230 impacts national security, but he's wrong. The 1st Amendment is why disinformation can spread online and taking away 230 won't change that. And, I should note that one of the biggest vectors of disinformation that is spread online is... the President himself. Especially over the last month. And, I'd argue that the President has also been the biggest threat to election integrity.

It's Section 230 that has enabled many experts to speak out and show how the nonsense and disinformation that Trump and his cronies are spewing is inaccurate.

As for the claim about renaming bases named after Confederate Army officials, it's difficult to see how that is failing "to respect our veterans and our military's history." Remember, the Confederacy fought against the US military. You'd think it would be more respectful to our veterans not to have them serve from bases named after an army that fought against us. But Trump's gotta Trump.

Republicans in Congress now have a choice. They've been hinting that they'll override a Trump veto, and now is the time for them to stand up and make it clear that's exactly what they'll do.

68 Comments | Leave a Comment..

Posted on Techdirt - 8 December 2020 @ 6:26am

ACLU Tells Congress: Do Not Add Copyright Trolling Bill To Government Funding Bill

from the this-is-not-a-bill-to-sneak-through dept

Last week we wrote about an effort in Congress, which appeared to be succeeding, to try to sneak through a controversial (and likely unconstitutional) copyright reform bill by adding it to a must-pass government funding bill. The ACLU has now stepped in to explain why this is a bad idea and should not move forward:

The American Civil Liberties Union, on behalf of its more than three million members, supporters, and activists, writes to you today regarding recently reported efforts to include S. 1273/H.R. 2426, the CASE Act in upcoming legislation to fund the government. The CASE Act is a controversial provision that would significantly alter the enforcement of copyright law and would have the unintended consequence of undermining free expression online. Because we recognize that it is essential to fund the federal government, particularly during the ongoing public health crisis, we ask that you decline to include the CASE Act in the upcoming funding bill and instead allow that provision to proceed through regular order where Members will have an opportunity to address the significant concerns raised by the bill before it passes into law.

Many supporters of the bill insist that those of us opposing it are against the idea of helping copyright owners, but nothing is further from the truth. What we oppose is the method set up in this bill, which will enable much more copyright trolling in a manner likely to stifle free speech. As the ACLU notes:

As we have said before, we do not oppose the CASE Act’s central idea of creating a small claims process to allow copyright owners to assert infringement and be awarded damages for the harm caused. There is evidence that strongly suggests a need for such a system, as many copyright holders have argued. However, because the CASE Act could affect every person that communicates online, we believe that changes are needed to ensure adequate safeguards for due process and the protection of the freedom of speech. In particular, the bill should be amended to provide for access to meaningful judicial review, a reduction in the damages available for small claims violations, and additional safeguards to ensure the process is procedurally fair for both parties. In order for those essential changes to be made, we ask that you decline to include this bill in any must-pass government funding bill, and instead allow the CASE Act to be considered through the regular order process where Members will have the opportunity to address these concerns.

What's been most upsetting and annoying throughout the process of the debate on this bill is the utter unwillingness of the bill's supporters to engage with people who have pointed out these fundamental problems with the bill. If they actually did engage and fix the problems in the bill, it would likely gain much more support across the board. Instead, they continue to try to shove it through in this form, using sneaky processes like adding it to the government funding bill.

Read More | 26 Comments | Leave a Comment..

Posted on Free Speech - 7 December 2020 @ 3:25pm

Florida State Police Raid Home Of COVID Whistleblower, Point Guns At Her & Her Family, Seize All Her Computer Equipment

from the this-is-fucked-up dept

This is insane. Earlier this year, we wrote about Rebekah Jones, the data scientist working for Florida, who put together that state's COVID-19 database (which had received widespread praise), and who was fired by the state for refusing to fake the data to make it look like Florida was handling the pandemic better than it actually was. Governor Ron DeSantis had made it clear he wanted data showing good results in order to justify reopening the state.

As Jones herself explained after being fired:

I was asked by DOH leadership to manually change numbers. This was a week before the reopening plan officially kicked off into phase one. I was asked to do the analysis and present the findings about which counties met the criteria for reopening. The criteria followed more or less the White House panel's recommendations, but our epidemiology team also contributed to that as well. As soon as I presented the results, they were essentially the opposite of what they had anticipated. The whole day while we're having this kind of back and forth changing this, not showing that, the plan was being printed and stapled right in front of me. So it was very clear at that point that the science behind the supposedly science-driven plan didn't matter because the plan was already made.

Since then, Jones has been running Florida COVID Action, which is a dashboard of Florida COVID information, like the one she used to run for the state.

And apparently Florida's Governor Ron DeSantis couldn't allow that to stand. This afternoon Rebekah posted a short Twitter thread, with video, showing Florida state police raiding her home. As she notes, when they asked her who else was in the home, she told them that her husband and children were upstairs, and they pulled out their guns:

This is horrifying on so many levels. Why was her home raided? Why did they pull out guns? Why did they do it after she told them that it was her children upstairs? Why did they seize all of her electronics equipment? Why are they doing any of this?

Jones has been doing everything to better inform the public of what's happening in the middle of a pandemic, and this is the thanks she gets? Having her home raided by the police and having guns drawn on her children?

This is not supposed to happen. This should not happen. It is horrifying and I hope that Jones is able to retain powerful legal help to fight back against this clear violation of her civil liberties, and a clear authoritarian overreach by Governor DeSantis.

Update: Since the original story broke, Florida state police claim that the search warrant was in response to someone breaching an emergency alert system and sending a group text saying: "It's time to speak up before another 17,000 people are dead. You know this is wrong. You don't have to be a part of this. Be a hero. Speak out before it's too late." The warrant claims that the breach was tied to an IP address at Jones' house. Jones has vehemently denied she had anything to do with this:

"I'm not a hacker," Jones said. She added that the language in the message that authorities said was sent was "not the way I talk," and contained errors she would not make.

"The number of deaths that the person used wasn't even right," Jones said. "They were actually under by about 430 deaths. I would never round down 430 deaths."

Later in the evening, the full search warrant was published, and it raises serious questions... not about Jones, as much as about what the fuck Florida's Dept. of Health is doing with its communications systems. The service that Jones is accused of using involves a password shared among a ton of people:

On November 10, 2020, at approximately 1420 hours and 1442 hours, an unidentified subject gained access to a multi-user account group "StateESF8 Planning" and sent a group text stating the following: "It's time to speak up before another 17,000 people are dead. You know this is wrong. You don't have to be part of this. Be a hero. Speak out before it's too late - From StateESF8 Planning". FDOH estimates that approximately 1,750 messages were delivered before the software vendor was able to stop the message from being transmitted.

FDOH has several groups within ReadyOp's application platform, one of which is StateESF8.Planning. ESF8 is Florida's Emergency Support Function for Public Health and Medical, with which they coordinate the state's health and medical resources, capabilities, and capacities. They also provide the means for a public health response, triage, treatment, and transportation. The group StateESF8.Planning is utilized by multiple users, some of which are not employees of FDOH but are employees of other government agencies. Once they are no longer associated with ESF8, they are no longer authorized to access the multi-user group.

All users assigned to the StateESF8.Planning group share the same username and password. SA Pratts requested and received a copy of the technical logs containing the Internet Protocol (IP) address for users accessing the ReadyOp web-based platform for the multi-user group StateESF8.Planning.

As security pro Jake Williams notes, it is bizarre beyond belief that (1) an important system relies on a single shared username and password and (2) that login info is not changed after someone is fired.
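To make Williams' point concrete, here is a minimal, hypothetical sketch of why a shared credential makes attribution so weak. The log format, usernames, and IP addresses below are all made up for illustration (this is not ReadyOp's actual logging): with one group login, the authentication record can only ever tie an action to the shared account plus an IP address, never to a person.

```python
# Hypothetical log entries -- illustrative only, not ReadyOp's real format.
shared_log = [
    {"user": "stateesf8.planning", "ip": "203.0.113.7", "action": "send_alert"},
    {"user": "stateesf8.planning", "ip": "198.51.100.4", "action": "send_alert"},
]

per_user_log = [
    {"user": "jdoe@example.gov", "ip": "203.0.113.7", "action": "send_alert"},
]

def who_sent(log, action):
    """Return the set of identities the log can attribute the action to."""
    return {entry["user"] for entry in log if entry["action"] == action}

# With a shared account, every sender resolves to the same group identity, so
# the only remaining clue is the IP address -- which points at a network
# location (a household, an office, a VPN exit), not an individual.
print(who_sent(shared_log, "send_alert"))    # {'stateesf8.planning'}

# With per-user credentials, the same query names a specific account, which
# can also simply be disabled the moment someone leaves the organization.
print(who_sent(per_user_log, "send_alert"))  # {'jdoe@example.gov'}
```

That gap, with an IP address standing in for an identity, is exactly the inference the warrant leans on.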

Still, it sounds like we may end up seeing a classic CFAA-style case here, regarding "unauthorized access." Unfortunately, there are cases on the books holding that logging into a system with a password you still have, after you've been told you're no longer allowed to use it, means you've violated the CFAA. This is kind of stupid, because it should be on the organization itself to actually change the password, rather than putting the burden on the user... but if there's real evidence here that she did access the system, she could be in serious CFAA trouble.

Even so, that's no excuse for raiding her home with guns drawn.

119 Comments | Leave a Comment..

Posted on Free Speech - 7 December 2020 @ 10:47am

Georgia Court Streams Ridiculous 'Kraken' Lawsuit Hearing On YouTube; Then Tells People They Can't Repost Recordings

from the not-how-it-works dept

We have lots of concerns about the lack of court transparency, and have long argued that more transparent court systems would be a good thing. One of the more interesting consequences of the pandemic, in which many court hearings are now done virtually, is that courts have been much more open to allowing realtime access to those hearings. In one of the more high profile (and more ridiculous, if that's possible) lawsuits challenging the election results -- the so-called "Kraken" lawsuit in Georgia -- there was a hearing earlier today. The court announced that the audio would stream on YouTube:

That says that the audio will be streamed on YouTube and provides you with a link. However, beneath it, it says the following:

The U.S. District Court for the Northern District of Georgia is participating in an audio pilot program permitting a limited number of district courts to livestream audio of certain civil proceedings with the consent of the parties. Under the pilot program, audio of qualifying civil proceedings will be livestreamed on the court’s YouTube channel.

Audio recordings will not be available for playback on YouTube after proceedings have ended. Audio, in full or in part, from any proceeding may not be recorded, broadcast, posted or reproduced in any form.

And, uh, what? I kind of understand (even if I seriously disagree with) rules saying that people in the courtroom are not allowed to record, but I cannot fathom any possible way in which the court can say that audio it has streamed out on the open web cannot be recorded or used in any form.

And already there seems to be some crackdown on those who did make use of the recordings. Reuters legal reporter Jan Wolfe was told to delete her tweets with the recording of Judge Timothy Batten shutting down the lawsuit:

And, if you go to the original YouTube video where the court hearing was officially streamed, you now see this:

This seems absolutely ridiculous. I also cannot conceive of any possible basis on which the courts can force someone, especially a reporter, not to record or republish the publicly available audio stream. And it's not that difficult to find the audio stream reposted elsewhere.

As reporter Brad Heath notes, this seems both short-sighted and beyond the authority of the courts:

I'd go beyond short-sighted. It's ridiculous. And demanding people take down such things seems to raise serious 1st Amendment issues.

44 Comments | Leave a Comment..

Posted on Techdirt - 4 December 2020 @ 1:33pm

Nancy Pelosi Sells Out The Public: Agrees To Put Massive Copyright Reform In 'Must Pass' Spending Bill

from the why? dept

I know everyone is focused on Trump's attempt to take away Section 230 in the NDAA, but an equally important issue is that members of Congress have been trying to do Hollywood's bidding and sneak massive copyright reform into a must-pass government appropriations bill. The CASE Act has many problems that we've discussed, including the fact that it would unleash a wave of copyright trolling aimed at people accidentally or innocently sharing works they don't realize are covered by copyright. There are also significant Constitutional problems with it, in that it routes around the Article III courts by handing disputes about private rights to the executive branch. That's not allowed.

But rather than actually discussing and debating those issues, and fixing the bill to make sure it is constitutional and protects the public, we've heard from three different sources that Nancy Pelosi has given the go-ahead in the House to include the CASE Act in the spending bill. As we said earlier this week, if you're trying to ram through a bill by adding it to an appropriations bill, it's because you know it has problems and will cause major issues, and you just don't care because the politics of pleasing donors is too important. Hollywood has been screaming for this bill, and Pelosi has agreed to put it in.

If you're a constituent of Pelosi, I would highly recommend reaching out to her office and making it clear that you absolutely oppose any effort to attach the CASE Act to any appropriations bill. To ignore the many concerns that have been raised about the bill and what it will do to people across the country (especially in the middle of a pandemic) is a travesty. There is no need to pass this bill now, and there is certainly no need to do it in this way. Even if you're not a Pelosi constituent, it's worth reaching out to your Congressional Representatives. The good folks over at EFF have a handy dandy form under the accurate headline: "Don't let a quasi-court bankrupt internet users."

Many people have raised thoughtful critiques of the CASE Act, and there are many suggestions out there for how the bill can be fixed. To date, Congress has ignored those fixes. This is a bill that is highly controversial and should not be put into law through a sneaky, underhanded move like this. Make sure that Congress understands this.

102 Comments | Leave a Comment..

Posted on Techdirt - 4 December 2020 @ 9:23am

Trump Doubles Down On Threat To Defund Military Because People Are Mean To Him Online; Republicans Threaten To Override His Veto

from the what-a-way-to-end-the-presidency dept

On Tuesday, we highlighted that it looked like Congressional Republicans were willing to finally stand up to their party's insecure and whiny lame duck president and refuse to include a Section 230 repeal as part of the military authorization bill, the NDAA.

Senator Jim Inhofe, who heads the Senate Armed Services Committee and who led the negotiations on the bill, has been a longtime supporter of the President, and has said that the two talk by phone every couple of days. But on Wednesday, Inhofe apparently made the call telling Trump that the 230 repeal wasn't going into the NDAA on speakerphone while walking down the hallway of a Senate building, meaning that people overheard him tell Trump that the repeal wasn't going to happen.

On Thursday, the negotiations closed and a deal was made on the NDAA that does not include anything on Section 230 because, as Inhofe rightly notes, that's got nothing to do with the military at all. In response, Trump continued his temper tantrum and claims he really will veto the bill, putting the military he always claims to support so much at risk of severe cuts.


That's Trump saying that because the NDAA doesn't revoke Section 230, which Trump falsely says is "so bad for our National Security and Election Integrity" (it's not), he will veto. The thing is, everyone knows he's full of shit. And Republicans are not only saying that they have the votes to override a veto, they seem to be getting snippy with the President about it. Here's Republican Congressman Adam Kinzinger saying he'll vote to override the veto and concluding with the kind of thing you don't often hear from Republicans these days when talking to Trump: "Because it's really not about you."

No, it's not about him. But it is about him throwing a total whiny tantrum because people made fun of him online, and wanting to punish the entire internet and free speech in response. The idea that it's worth undermining the military (which he claims to support, and which frequently supports him) is... quite something.

33 Comments | Leave a Comment..

Posted on Techdirt - 3 December 2020 @ 10:44am

Senator Tillis Is Mad That Twitter Won't Testify About Copyright Infringement; Since When Is Twitter A Piracy Problem?

from the weird-all-around dept

After writing about the MPA/RIAA's ever-shifting targets of who to freak out about regarding copyright infringement, it helps to take each new target with a grain of salt. They were mad about Napster, then LimeWire, then YouTube, then cyberlockers/cloud storage. And now, apparently the target is... random social media sites? There's been plenty of attention recently over the RIAA turning its attention to... background music in Twitch streams. But who the hell thinks that Twitter is some den of piracy? Apparently, the recording industry does.

Senator Thom Tillis, who is leading a new effort to completely overhaul copyright law, is apparently angry that Twitter chose not to send someone to a hearing he's holding in mid-December.

The letter that Tillis sent to Twitter in response to this decision is way over the top. Unless subpoenaed, appearing before Congress is very much voluntary. People and companies refuse to appear all the time. And even if there was a subpoena, it seems worth noting that it's Tillis' party that has decided that ignoring Congressional subpoenas is just fine and dandy.

But, really, the bigger issue here is why is Twitter even a target at all? No one thinks about Twitter as a source for copyright infringing materials. And Twitter has always been known to be responsive to DMCA takedowns. They have a whole section in their transparency report about copyright takedown notices. That certainly shows that Twitter is very responsive to DMCA notices. It does highlight how it has refused to comply with some notices, but those are in cases where it's clearly abuse of the DMCA for censorship, such as when a bunch of DMCA notices were sent to try to silence critics of the Ecuadorian government.

In fact, if anything, we've often seen Twitter be too responsive to questionable DMCA takedown notices, like the time it pulled down a Trump campaign video (remember, Tillis is a big Trump supporter) over a highly questionable copyright claim.

And yet, here's Tillis trying to make it sound like Twitter is a den of piracy that ignores copyright takedowns.

But Twitter has been less engaged in working with copyright owners on voluntary measures and technological tools, and now has rebuffed my request to testify. The only reasonable conclusion one can draw from your actions is that Twitter simply does not take copyright piracy seriously.

Or, maybe, the nature of Twitter (mostly short bursts of text) is not at all conducive to some RIAA-supported show trial about piracy. But it's really in the detailed questions that Tillis gives away the game. The RIAA wants to force every website that hosts 3rd party content to buy a sitewide license. This is what Article 17 was all about in the EU, and Tillis more or less admits it with this question:

I have heard that Twitter has been slow to respond to copyright infringement on its platform and also refused to negotiate licenses or business agreements with music publishers or record labels. In contrast, other major-social media companies have done the right thing and mitigated infringing activity on their platfoms by entering into negotiated license agreements to allow uses of music. Does Twitter seek licenses for the use of music? If so, in what instances? Has Twitter made efforts to negotiate license agreements with music publishers and record labels to ensure songwriters- and artists are compensated?

No one is going to Twitter as a way to get music. If music appears in video snippets on Twitter it's almost entirely incidental. And Twitter has shown that it's absolutely responsive to DMCA notices (see the Trump campaign ad takedown mentioned above). This is entirely about the RIAA trying to get the US government to force every website to just write them a big check every year.

Despite the tremendous value that music brings to Twitter’s business, your platform continues to host and permit rampant infringement of music files on its platform

What? No. That's literally not happening, and music is not providing any significant value to Twitter's business.

Twitter has not taken meaningful steps to address the scale of the problem.

It clearly has taken steps and is incredibly responsive to DMCA notices (sometimes too responsive).

Instead, your company claims that it already goes above and beyond what the law requires. What steps has Twitter taken to ensure no unlicensed music is made available?

This is such a dumb question. It is literally impossible to "ensure no unlicensed music is made available." Some always will be: because of the broken nature of today's copyright law, nearly everything anyone does involves some form of copying content without a license. In fact, many unlicensed uses of music are legal because of things like fair use or de minimis use. Demanding "no unlicensed music" is not only impossible, it literally is not required by law.

How many takedown notices has Twitter received each year since it launched in 2006?

This is a really bad question as well. This was the key tactic the labels have used against Google/YouTube, using the number of notices received as a proxy for how bad the sites are. But this is a number the labels have control over since they get to send the notices.

There are more questions, but the whole thing is clearly driven by the RIAA's interests to force Twitter into just writing them a giant check every year. I mean, I guess it worked against YouTube and Facebook (where at least there was some argument to be made that music was a bigger deal), so now they've moved on to other sites like Twitch and Twitter. But forcing every website to sign a license is crazy, not required by law, and acting as if the failure to sign a license is some indictment of how Twitter feels about copyright is complete nonsense.

38 Comments | Leave a Comment..

Posted on Techdirt - 2 December 2020 @ 5:43pm

Congress Decides To Ignore Trump's Ridiculous Veto Threat If Military Authorization Doesn't Wipe Out Section 230

from the good-for-them dept

This always seemed like the most likely outcome, but Trump had complicated things with his temper tantrum demands and his threat to veto the National Defense Authorization Act (NDAA) if it didn't include a clause wiping out Section 230. However, Congress has come to its senses and leaders of both parties have said they'll ignore his impotent veto threat and move forward with the bill as is.

The final version of the National Defense Authorization Act that will soon be considered by the House and Senate won’t include Trump’s long-sought repeal of the legal immunity for online companies, known as Section 230, according to lawmakers and aides.

Key to this was Senate Armed Services Chair Jim Inhofe pointing out the obvious:

"First of all 230 has nothing to do with the military."

That's both first of all and last of all. The whole attempt to use the NDAA to attack CDA 230 was just bizarre.

Inhofe did say he still thinks that 230 should go, but not as a part of the NDAA. A few other Republicans are finally speaking up as well.

Still, Republicans on Wednesday showed some signs of exasperation with the president’s latest effort. As one GOP lawmaker put it: “Republicans are sick of this shit.”

Sen. Mike Rounds (R-S.D.), who sits on the Senate Armed Services Committee, put it more delicately. While he said he understood the president’s frustrations with Section 230, it was not worth imperiling the broader defense bill.

“The NDAA is so important to the men and women that wear the uniform that this should not be an item to veto the act over,” he said. “So I would hope he would reconsider his position on it.”

And Senate Majority Whip John Thune (R-S.D.) said his “preference” would be to pass the NDAA and then address Section 230 separately.

Democratic critics of Section 230 were equally annoyed. Remember, Senator Richard Blumenthal has been one of the most vocal critics of Section 230 going back to before he was a Senator, when he was stymied by Section 230 in trying to sue Craigslist (he was upset that sex workers use Craigslist, and wanted to blame Craigslist for the fact that sex workers exist).

Sen. Richard Blumenthal (D-Conn.), a co-sponsor of the only bipartisan bill targeting Section 230 to advance out of committee this Congress, called the veto threat "deeply dangerous and just plain stupid.”

He added, “Reforming Section 230 deserves its own debate — one that I’ve helped lead in Congress, and which I look forward to continuing with a more serious, thoughtful administration in January.”

In another article, Rep. Frank Pallone stated the obvious:

House Energy and Commerce Committee Chairman Frank Pallone said in a statement that Trump is "holding a critical defense bill hostage in a petulant attempt to punish Twitter for fact-checking him. Our military and national security should not suffer just because Trump's ego was bruised."

There is still plenty of appetite to attack Section 230. And there will be lots of dumb fights about it, but it's not going down this way.

70 Comments | Leave a Comment..

Posted on Techdirt - 2 December 2020 @ 9:16am

Trump Promises To Defund The Entire Military, If Congress Won't Let Him Punish The Internet For Being Mean To Him

from the this-is-why-we-can't-have-nice-things dept

President Trump has continued to throw his little temper tantrum in response to #DiaperDon trending on Twitter. When that happened, he suddenly demanded a full repeal of Section 230 -- which would not stop Twitter from showing #DiaperDon trending when the President throws a temper tantrum like a 2 year old. Then, yesterday, we heard that the White House was really pushing for the Senate to include a 230 repeal in the must pass NDAA bill that funds the military.

Late last evening I heard from people in touch with various Congressional offices saying that this entire effort by the White House was dead in the water, because almost no one had an appetite to even try to attempt it, and despite the whackadoodle conspiracy theories from the President and Senators Ted Cruz, Marsha Blackburn, and Josh Hawley, it turns out that Senate Majority Leader Mitch McConnell doesn't care about 230 reform.

Of course, even later last night, things took an even stupider turn, as Trump declared on Twitter that unless the NDAA included a full repeal of Section 230, he would veto it. This is all sorts of stupid and we'll break it all down in a moment, so bear with me.

That says:

Section 230, which is a liability shielding gift from the U.S. to “Big Tech” (the only companies in America that have it - corporate welfare!), is a serious threat to our National Security & Election Integrity. Our Country can never be safe & secure if we allow it to stand..... Therefore, if the very dangerous & unfair Section 230 is not completely terminated as part of the National Defense Authorization Act (NDAA), I will be forced to unequivocally VETO the Bill when sent to the very beautiful Resolute desk. Take back America NOW. Thank you!

We'll get into why nearly everything in that statement is wrong, dangerous, and stupid, but I want to be crystal clear about what is happening here.

President Donald J. Trump is threatening to defund the US military, because he's upset that enough people mocked him on Twitter that it started trending.

That's it. That's the reality. This is the world we live in. And it's so insane, it needs to be repeated.

President Donald J. Trump is threatening to defund the US military, because he's upset that enough people mocked him on Twitter that it started trending.

Oh, and it's even stupider. On so many levels. First off, taking away Section 230 wouldn't stop #DiaperDon from trending on Twitter, because that's protected by the 1st Amendment and has nothing to do with Section 230. If anything, it would give Twitter much more incentive to remove Donald Trump's and his followers' accounts entirely to avoid the suddenly increased legal liability.

But, now, let's take a deep breath, take a step back, and look at how incredibly stupid Trump's statement is.

Section 230, which is a liability shielding gift from the U.S. to “Big Tech” (the only companies in America that have it - corporate welfare!), is a serious threat to our National Security & Election Integrity.

None of this is even close to reality. This is pure nonsense. Section 230 applies to all websites for any 3rd party content they host. The claim that "big tech" are the only companies that have it is belied by this simple point: Donald Trump himself has invoked Section 230 in court. Multiple times. Incredibly, in 2017, he argued that he shouldn't be liable for the content of a retweet he did, because of Section 230. In fact, in court, Trump argued that Section 230 "should be given an 'expansive' reading" in order to protect himself from defamation claims. He's right. Section 230 should protect him in those cases, but it also highlights how it's absolutely bullshit to claim that it only protects "Big Tech" and that big tech companies are "the only companies in America that have it." It's just not true.

As for the claim that Section 230 is a "threat to our National Security," let's play a little thought exercise: which is a bigger threat to our national security: a law that says internet websites are not liable for the actions of their users or defunding the entire military? I'll give you a minute to think about it.

Because here's the point where I remind you that President Donald J. Trump is threatening to defund the US military, because he's upset that enough people mocked him on Twitter that it started trending.

Oh, and then there's the claim about "election integrity" and... what? What the fuck does election integrity have to do with Section 230? The answer is absolutely nothing. He's just spewing words.

I could go on, but it's all just incredibly stupid. It's one thing to say that Trump is a blundering fool, but here is a legitimate threat to national security, entirely because people are making fun of him. It's frightening beyond all belief.

And this is the point that in a functioning Congress, everyone would stand up to the President and say "no, this is not how this works." Congressional Republicans need to stop enabling this utterly dangerous nonsense. Because President Donald J. Trump is threatening to defund the US military, because he's upset that enough people mocked him on Twitter that it started trending. That should not be allowed to happen.

162 Comments | Leave a Comment..

Posted on Techdirt - 1 December 2020 @ 3:33pm

'Tis The Season: Congress Looks To Sneak In Unconstitutional Copyright Reform Bill Into 'Must Pass' Spending Bill

from the if-you-have-to-sneak-around... dept

If you have to sneak your transformational copyright bill into a "must pass" government spending bill, it seems fairly evident that you know the bill is bad. Earlier we talked about how the White House is trying to slip a Section 230 repeal into the NDAA (military appropriations) bill, and now we've heard multiple people confirm that there's an effort underway to slip the CASE Act into the "must pass" government appropriations bill (the bill that keeps the government running).

What does keeping the government running have to do with completely overhauling the copyright system to enable massive copyright trolling? Absolutely nothing, but it's Christmas season, and thus it's the time for some Christmas tree bills in which Senators try to slip in little favors to their funders by adding them to must-pass bills.

We've detailed the many problems with the CASE Act, including how it would ratchet up copyright trolling in a time when we should actually be looking for ways to prevent copyright trolling. But the much larger issue is the fact that the bill is almost certainly unconstitutional. It involves the executive branch trying to route around the courts to set up a judicial body to handle disputes about private rights. That's not allowed.

At the very least, however, there are legitimate concerns about the overreach of the CASE Act, and, as such, those supporting it should at least be willing to discuss those issues honestly and debate them fairly. Slipping them into a must-pass government spending bill certainly suggests that they know that they cannot defend the bill legitimately, and need to cheat to make it law.


Posted on Techdirt - 1 December 2020 @ 12:10pm

World's Worst Copyright Troll, Richard Liebowitz, Suspended From Practicing Law

from the revealed-in-a-new-benchslap dept

I had meant to write an update last month on the never-ending clusterfuck that is copyright troll Richard Liebowitz, as things appeared to be going badly in the two cases where the judges had clearly grown completely tired of the games he was playing with the court: Usherson v. Bandshell and Chevrestt v. Barstool. In both cases, judges had gotten very, very angry at Liebowitz for continuing to lie, play games, mislead and so on. In the Chevrestt case, the judge actually let him off kind of easy last month, saying that for the next two years, any time that he is ordered to show cause for why he shouldn't be sanctioned again (basically, any time he gets in trouble with a judge), he has to share the details of what happened in the Chevrestt case (in which he does not come out looking good).

But the bigger story is in the Usherson case, where this week, Judge Jesse Furman mentions in passing that the Southern District of New York's Grievance Committee had issued an order suspending Liebowitz "from the practice of law before this Court." This is temporary, pending "final adjudication of the charges against [him]," so it's likely to get worse. Also, it only applies to SDNY, but that's where he's filed so many of his cases, and the stink over his practically non-stop sketchy behavior in court will follow him everywhere else. It's not clear exactly which of the many problems that Liebowitz has brought upon himself resulted in the Grievance Committee acting, but the list is very long.

In fact, it's rather convenient that it's Judge Furman who is revealing the suspended license, given that he was the one who catalogued the dozens upon dozens of times that Liebowitz had been caught lying to courts or sanctioned for doing so.

As you may recall, Judge Furman laid out those details in an order telling Liebowitz to file a copy of that order with every case that he was involved with. Liebowitz, in true Liebowitz fashion, waited until the last minute to whine that this was unfair and a violation of his rights. The judge was not impressed and neither was the appeals court.

Liebowitz then had one day to send a copy of Judge Furman's order to every one of his clients and to every court in which his cases were being heard. At the time, we pointed to at least one case where the order had not been filed, and we had heard from lawyers in a few other cases that no such filing had been made there either. And those lawyers weren't just telling me: they told Judge Furman as well. At the beginning of October, Judge Furman asked Liebowitz to file a declaration addressing why he hadn't filed the order in some cases (and why he had filed it late in others). Liebowitz then filed quite an amazing declaration on October 15th, explaining how and why he had failed to file the order in 113 different cases. In typical Liebowitz fashion, he had excuses for all of them. He blamed PACER (which we agree is a terrible service), but he also admitted that he never thought to use his case management system -- the one he'd been forced to install a year earlier as part of sanctions in another case (the one where he blamed the death of his grandfather for failing to appear in court, and then lied about the actual date of his grandfather's death). That case also involved the judge referring Liebowitz to the Grievance Committee.

Other excuses Liebowitz gave for not filing the order were that he thought some cases were completely over and just missed that they had motions pending. Some cases he closed out between the time the original order was made and his attempted compliance. And then there were some cases in which he argued he was more peripherally than directly involved.

Either way, Judge Furman finds this literally unbelievable.

Had Mr. Liebowitz failed to file the Opinion and Order in a handful of cases, the failure to comply might have been understandable and excusable. But the failure to file it in 113 cases is astonishing and suggests contumaciousness, an egregiously disorganized case management system, or both. It is all the more astonishing in light of Mr. Liebowitz’s record, set forth in painstaking detail in the Court’s Opinion and Order, and his repeated representations to Judges — in this District and beyond — that he had taken steps to improve his case management practices.

Contumaciousness is a good word. Look it up.

Basically, Judge Furman notes that Liebowitz has not shown any real evidence that he's changed. At all. And thus, it's clear that the judge believes that Liebowitz deserves further sanctions. However, as he notes, the sanctions should be designed to lead to correction of the bad behavior -- and thanks to the Grievance Committee's suspension of Liebowitz's license, there's no current threat of this behavior continuing.

That said, the ultimate purpose of sanctions is deterrence... and, as Mr. Liebowitz’s extraordinary record of both sanctions and noncompliance with court orders demonstrates, it is far from clear that there is any additional sanction that would serve to deter him. Moreover, on November 30, 2020, this Court’s Grievance Committee — noting Mr. Liebowitz’s “repeated disregard for orders from this Court and his unwillingness to change despite 19 formal sanctions and scores of other admonishments and warnings from judges across the country” — entered an Amended Order immediately suspending Mr. Liebowitz “from the practice of law before this Court pending final adjudication of the charges against [him].” In re Liebowitz, No. M-2-238, at 1-2 (S.D.N.Y. Nov. 30, 2020). Thus, for the time being, there will be nothing to deter when it comes to Mr. Liebowitz.

Accordingly, and in light of the Grievance Committee’s Order of November 25, 2020, the Court, exercising its discretion, determines that additional sanctions are not appropriate at this time.

However, just in case, Judge Furman clarifies that when he said that Liebowitz had to file the order in all of his cases, he did mean all of them, and amends the original order to make that abundantly clear and to make sure that Liebowitz cannot wriggle free from complying.

All in all, there seems to be a decent chance that Richard Liebowitz will no longer be practicing law.


Posted on Techdirt - 1 December 2020 @ 10:52am

Utter Insanity: Trump Lawyer Suggests Former Trump Cybersecurity Official Should Be 'Taken Out And Shot' For Saying The Election Was Secure

from the what-is-wrong-with-these-people? dept

Every day that I think I can't be shocked and horrified by anything being done in the name of politics today, I end up being more shocked and more horrified. The latest is that one of the President's campaign lawyers, Joe diGenova, who has been involved in a wide range of politically motivated conspiracy theory mongering, went on the Howie Carr show to say that fired CISA director Chris Krebs should be "taken out and shot."

There's a lot to unpack here. First off, we wrote about Krebs being fired by Trump for daring to contradict the narrative that the election was rigged. Krebs is one of a very few Trump appointees who was widely respected across the political spectrum. In his years running the newly created Cybersecurity and Infrastructure Security Agency (CISA), he'd been praised by many for the job he had done in actually dealing with cybersecurity threats and coordinating the sharing of information about those threats with the private sector.

But because he told the truth and debunked the politically motivated nonsense the President and his dwindling team of supporters are trying to spew, Krebs has apparently been cast out as the enemy. Making matters worse (for Trump and his supporters), on Sunday 60 Minutes had Krebs on, and he made a very credible case that the President was just making shit up in claiming that there was interference or malfeasance in the election. In fact, in that interview, Krebs highlighted the death threats being made against election officials, rightly calling it "a travesty" that public servants are put through this nonsense.

And it's, in my view, a travesty what's happening right now with all these death threats to election officials, to secretaries of state. I want everybody to look at Secretary Boockvar in Pennsylvania, Secretary Benson in Michigan, Secretary Cegavske in Nevada, Secretary Hobbs in Arizona. All strong women that are standing up, that are under attack from all sides, and they're defending democracy. They're doin' their jobs. Look at-- look at Secretary Raffensperger in Georgia, lifelong Republican. He put country before party in his holding a free and fair election in that state. There are some real heroes out there. There are some real patriots.

And now Krebs is facing the same nonsense.

Howie Carr, the host of the show, is a longtime Boston-based, Trump-supporting talk show host and columnist. He had diGenova on his show, which was simulcast to Newsmax (one of the two Trump-loving TV networks trying to take over the insane conspiracy-theory-pushing crown from Fox News), and allowed diGenova to say that Krebs should be killed. Carr doesn't appear to have the video of it up on his own YouTube channel yet, but MediaMatters has the clip, which you can see for yourself.

diGenova: This was not a coincidence. This was all planned. And anybody who thinks the election went well, like that idiot Krebs who used to be the head of cybersecurity for DHS.

Carr: Oh yeah, the guy who was on 60 Minutes last night.

diGenova: That guy... that guy is a class A moron. He should be drawn and quartered. Taken out at dawn and shot.

Carr then chuckles for a bit before changing the subject.

Let's be totally clear: this is offensive and dangerous. It would be offensive and dangerous coming from anyone, but the fact that it's coming from a lawyer currently representing the President of the United States is completely and utterly terrifying. No, it almost certainly doesn't meet the Supreme Court's "true threats" test for speech unprotected by the 1st Amendment, but that doesn't mean it's not wildly inappropriate and dangerous.

I understand that Trump's circle of grifters and hangers-on will not let truth, accuracy, or common decency stand in the way of spreading their cult of bullshit, lies, FUD, and nonsense, but the rest of the country ought to speak up and make it clear that this is totally unacceptable. And that includes Republicans in Congress who have continued to try to look the other way or pretend that what Trump and his band of legal misfits are doing is totally normal and acceptable. It is not.

