Mike Masnick’s Techdirt Profile

mmasnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of the Copia Institute and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick




Posted on Techdirt - 25 March 2021 @ 9:33am

Congressional Panel On Internet And Disinformation... Includes Many Who Spread Disinformation Online

from the because-of-course dept

We've pointed out a few times how silly all these Congressional panels on content moderation are, but the one happening today is particularly silly. One of the problems, of course, is that while everyone seems to be mad about Section 230, they seem to be mad about it for opposite reasons, with Republicans wanting the companies to moderate less, and Democrats wanting the companies to moderate more. That's only one of many reasons why today's hearing, like those in the past, is so pointless. These hearings tend to bog down in silly "but what about this particular moderation decision" questions, which will then be presented in a misleading or out-of-context fashion, allowing the elected official to grandstand about how they "held big tech's feet to the fire" or some such nonsense.

However, Cat Zakrzewski, over at the Washington Post, has highlighted yet another reason why this particular "investigation" into disinformation online is so disingenuous: a bunch of the Republicans on the panel -- the one exploring how these sites deal with mis- and disinformation -- are themselves guilty of spreading disinformation online.

A Washington Post analysis found that seven Republican members of the House Energy and Commerce Committee who are scheduled to grill the chief executives of Facebook, Google and Twitter about election misinformation on Thursday sent tweets that advanced baseless narratives of election fraud, or otherwise supported former president Donald Trump’s efforts to challenge the results of the presidential election. They were among 15 of the 26 Republican members of the committee who voted to overturn President Biden’s election victory.

Three Republican members of the committee, Reps. Markwayne Mullin (Okla.), Billy Long (Mo.) and Earl L. “Buddy” Carter (Ga.), tweeted or retweeted posts with the phrase “Stop the Steal” in the chaotic aftermath of the 2020 presidential election. Stop the Steal was an online movement that researchers studying disinformation say led to the violence that overtook the U.S. Capitol on Jan. 6.

Cool cool.

Actually, this highlights one of the many reasons why we should be concerned about all of these efforts to force these companies into a particular path for dealing with disinformation online. Because once we head down the regulatory route, we're going to reach a point at which the government is, in some form, determining what is okay and what is not okay online. And do we really want elected officials, who themselves were spreading disinformation and even voted to overturn the results of the last Presidential election, to be determining what is acceptable and what is not for social media companies to host?

As the article itself notes, rather than have a serious conversation about disinformation online and what to do about it, this is just going to be yet another culture war. Republicans are going to push demands to have these websites stop removing their own efforts at disinformation, and Democrats are going to push the websites to be more aggressive in removing information (often without concern for the consequences of such demands -- which often lead to the over-suppression of speech).

One thing I think we can be sure of is that Rep. Frank Pallone, who is heading the committee for today's hearing, is being laughably naïve if he actually believes this:

Rep. Frank Pallone Jr. (N.J.), the Democrat who chairs the committee, said any member of Congress using social media to spread falsehoods about election fraud was “wrong,” but he remained optimistic that he could find bipartisan momentum with Republicans who don’t agree with that rhetoric.

“There’s many that came out and said after Jan. 6 that they regretted what happened and they don’t want to be part of it at all,” Pallone said in an interview. “You have to hope that there’s enough members on both sides of the aisle that see the need for some kind of legislative reform here because they don’t want social media to allow extremism and disinformation to spread in the real world and encourage that.”

Uh huh. The problem is that those who spread disinformation online don't think of it as disinformation. And they (wrongly) see any attempt to cut back on their ability to spread it as "censorship." Just the fact that the two sides can't even agree on what is, and what is not, disinformation should give pause to anyone seeking "some kind of legislative reform" here. While the Democrats may be in power now, that may not last very long, and they should recognize that if it's the Republicans who get to define what is and what is not "disinformation" it may look very, very different than what the Democrats think.


Posted on Techdirt - 24 March 2021 @ 12:05pm

Beware Of Facebook CEOs Bearing Section 230 Reform Proposals

from the good-for-facebook,-not-good-for-the-world dept

As you may know, tomorrow Congress is having yet another hearing with the CEOs of Google, Facebook, and Twitter, in which various grandstanding politicians will seek to rake Mark Zuckerberg, Jack Dorsey, and Sundar Pichai over the coals regarding things that those grandstanding politicians think Facebook, Twitter, and Google "got wrong" in their moderation practices. Some of the politicians will argue that these sites left up too much content, while others will argue they took down too much -- and either way they will demand to know "why" individual content moderation decisions were made differently than they, the grandstanding politicians, wanted them to be made. We've already highlighted one approach that the CEOs could take in their testimony, though that is unlikely to actually happen. This whole dog and pony show seems all about no one being able to recognize one simple fact: that it's literally impossible to have a perfectly moderated platform at the scale of humankind.

That said, one thing to note about these hearings is that each time, Facebook's CEO Mark Zuckerberg inches closer to pushing Facebook's vision for rethinking internet regulations around Section 230. Facebook, somewhat famously, was the company that caved on FOSTA, and bit by bit, Facebook has effectively led the charge in undermining Section 230 (even as so many very wrong people keep insisting we need to change 230 to "punish" Facebook). That's not true. Facebook is now perhaps the leading voice for changing 230, because the company knows that it can survive without it. Others? Not so much. Last February, Zuckerberg made it clear that Facebook was on board with the plan to undermine 230. Last fall, during another of these Congressional hearings, he more emphatically supported reforms to 230.

And, for tomorrow's hearing, he's driving the knife further into 230's back by outlining a plan to further cut away at 230. The relevant bit from his testimony is here:

One area that I hope Congress will take on is thoughtful reform of Section 230 of the Communications Decency Act.

Over the past quarter-century, Section 230 has created the conditions for the Internet to thrive, for platforms to empower billions of people to express themselves online, and for the United States to become a global leader in innovation. The principles of Section 230 are as relevant today as they were in 1996, but the Internet has changed dramatically. I believe that Section 230 would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing—sometimes for contradictory reasons—that the law is doing more harm than good.

Although they may have very different reasons for wanting reform, people of all political persuasions want to know that companies are taking responsibility for combatting unlawful content and activity on their platforms. And they want to know that when platforms remove harmful content, they are doing so fairly and transparently.

We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection—that would be impractical for platforms with billions of posts per day—but they should be required to have adequate systems in place to address unlawful content.

Definitions of an adequate system could be proportionate to platform size and set by a third-party. That body should work to ensure that the practices are fair and clear for companies to understand and implement, and that best practices don’t include unrelated issues like encryption or privacy changes that deserve a full debate in their own right.

In addition to concerns about unlawful content, Congress should act to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal. While this approach would not provide a clear answer to where to draw the line on difficult questions of harmful content, it would improve trust in and accountability of the systems and address concerns about the opacity of process and decision-making within companies.

As reform ideas go, this is certainly less ridiculous and braindead than nearly every bill introduced so far. It attempts to deal with the largest concerns that most people have -- what happens when illegal, or even "lawful but awful," activity is happening on websites and those websites have "no incentive" to do anything about it (or, worse, incentive to leave it up). It also responds to some of the concerns about a lack of transparency. Finally, to some extent it makes a nod at the idea that the largest companies can handle some of this burden, while other companies cannot -- and it makes it clear that it does not support anything that would weaken encryption.

But that doesn't mean it's a good idea. In some ways, this is the flip side of the discussion that Mark Zuckerberg had many years ago regarding how "open" Facebook should be regarding third party apps built on the back of Facebook's social graph. In a now infamous email, Mark told someone that one particular plan "may be good for the world, but it's not good for us." I'd argue that this 230 reform plan that Zuckerberg lays out "may be good for Facebook, but not good for the world."

But it takes some thought, nuance, and predictions of how this will all play out to understand why.

First, let's go back to the simple question of what problem we're actually trying to solve. Based on the framing of the panel -- and of Zuckerberg's testimony -- it certainly sounds like there's a huge problem of companies not having any incentive to clean up the garbage on the internet. We've certainly heard many people claim that, but it's just not true. It's only true if you think that the only incentives in the world are the laws of the land you're in. But that's not true and has never been true. Websites do a ton of moderation/trust & safety work not because of what legal structure is in place but because (1) it's good for business, and (2) very few people want to be enabling cesspools of hate and garbage.

If you don't clean up garbage on your website, your users get mad and go away. Or, in other cases, your advertisers go away. There are plenty of market incentives to make companies take charge. And of course, not every website is great at it, but that's always been a market opportunity -- and lots of new sites and services pop up to create "friendlier" places on the internet in an attempt to deal with those kinds of failures. And, indeed, lots of companies have to keep changing and iterating in their moderation practices to deal with the fact that the world keeps changing.

Indeed, if you read through the rest of Zuckerberg's testimony, it's one example after another of things that the company has already done to clean up messes on the platform. And each one describes putting huge resources in terms of money, technology, and people to combat some form of disinformation or other problematic content. Four separate times, Zuckerberg describes programs that Facebook has created to deal with those kinds of things as "industry-leading." But those programs are incredibly costly. He talks about how Facebook now has 35,000 people working in "safety and security," which is more than triple the 10,000 people in that role five years ago.

So, these proposals to create a "best practices" framework, judged by some third party, in which you only get to keep your 230 protections if you meet those best practices, won't change anything for Facebook. Facebook will argue that its practices are the best practices. That's effectively what Zuckerberg is saying in this testimony. But that will harm everyone else who can't match that. Most companies aren't going to be able to do this, for example:

Four years ago, we developed automated techniques to detect content related to terrorist organizations such as ISIS, al Qaeda, and their affiliates. We’ve since expanded these techniques to detect and remove content related to other terrorist and hate groups. We are now able to detect and review text embedded in images and videos, and we’ve built media-matching technology to find content that’s identical or near-identical to photos, videos, text, and audio that we’ve already removed. Our work on hate groups focused initially on those that posed the greatest threat of violence at the time; we’ve now expanded this to detect more groups tied to different hate-based and violent extremist ideologies. In addition to building new tools, we’ve also adapted strategies from our counterterrorism work, such as leveraging off-platform signals to identify dangerous content on Facebook and implementing procedures to audit the accuracy of our AI’s decisions over time.

And, yes, he talks about making those rules "proportionate to platform size," but there's a whole lot of trickiness in making that work in practice. Size of what, exactly? Userbase? Revenue? How do you determine and where do you set the limits? As we wrote recently in describing our "test suite" of internet companies for any new internet regulation, there are so many different types of companies, dealing with so many different markets, that it wouldn't make any sense to apply a single set of rules or best practices across each one. Because each one is very, very different. How do you apply similar "best practices" to a site like Wikipedia -- where all the users themselves do the moderation -- and to a site like Notion, in which people are setting up their own database/project management setups, some of which may be shared with others? Or how do you set up the same best practices that will work in fan fiction communities that will also apply to something like Cameo?

And, even the "size" part can be problematic. In practice, it creates so many wacky incentives. The classic example of this is in France, where stringent labor laws kick in only for companies at 50 employees. So, in practice, there are a huge number of French companies that have 49 employees. If you create thresholds, you get weird incentives. Companies will seek to limit their own growth in unnatural ways just to avoid the burden, or if they're going to face the burden, they may make a bunch of awkward decisions in figuring out how to "comply."

And the end result is just going to be a lot of awkwardness and silly, wasteful lawsuits arguing that companies somehow fail to meet "best practices." At worst, you end up with an incredible level of homogenization. Platforms will feel the need to simply adopt content moderation policies identical to those that have already been adjudicated acceptable. It may create market opportunities for extractive third-party "compliance" companies who promise to run your content moderation practices exactly the way Facebook does, since those will, of course, be deemed "industry-leading."

The politics of this obviously make sense for Facebook. It's not difficult to understand how Zuckerberg gets to this point. Congress is putting tremendous pressure on him and continually attacking the company's perceived (and certainly, sometimes real) failings. So, for him, the framing is clear: set up some rules to deal with the fake problem that so many insist is real, of there being "no incentive" for companies to do anything to deal with disinformation and other garbage, knowing full well that (1) Facebook's own practices will likely define "best practices" or (2) that Facebook will have enough political clout to make sure that any third party body that determines these "best practices" is thoroughly captured so as to make sure that Facebook skates by. But all those other platforms? Good luck. It will create a huge mess as everyone tries to sort out what "tier" they're in, and what they have to do to avoid legal liability -- when they're all already trying all sorts of different approaches to deal with disinformation online.

Indeed, one final problem with this "solution" is that you don't deal with disinformation by homogenization. Disinformation and disinformation practices continually evolve and change over time. The amazing and wonderful thing that we're seeing in the space right now is that tons of companies are trying very different approaches to dealing with it, and learning from those different approaches. That experimentation and variety is how everyone learns and adapts and gets to better results in the long run, rather than saying that a single "best practices" setup will work. Indeed, zeroing in on a single best practices approach, if anything, could make disinformation worse by helping those with bad intent figure out how to best game the system. The bad actors can adapt, while this approach could tie the hands of those trying to fight back.

Indeed, that alone is the very brilliance of Section 230's own structure. It recognizes that the combination of market forces (users and advertisers getting upset about garbage on the websites) and the ability to experiment with a wide variety of approaches is the best way to fight back against the garbage: by letting each website figure out what works best for its own community.

As I started writing this piece, Sundar Pichai's testimony for tomorrow was also released. And it makes the key point that 230, as is, is the best way to deal with misinformation and extremism online. In many ways, Pichai's testimony is similar to Zuckerberg's. It details all these different (often expensive and resource-intensive) steps Google has taken to fight disinformation. But when it gets to the part about 230, Pichai's stance is the polar opposite of Zuckerberg's. Pichai notes that Google was able to do all of these things because of 230, and that changing it would put many of these efforts at risk:

These are just some of the tangible steps we’ve taken to support high quality journalism and protect our users online, while preserving people’s right to express themselves freely. Our ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act.

Section 230 is foundational to the open web: it allows platforms and websites, big and small, across the entire internet, to responsibly manage content to keep users safe and promote access to information and free expression. Without Section 230, platforms would either over-filter content or not be able to filter content at all. In the fight against misinformation, Section 230 allows companies to take decisive action on harmful misinformation and keep up with bad actors who work hard to circumvent their policies.

Thanks to Section 230, consumers and businesses of all kinds benefit from unprecedented access to information and a vibrant digital economy. Today, more people have the opportunity to create content, start a business online, and have a voice than ever before. At the same time, it is clear that there is so much more work to be done to address harmful content and behavior, both online and offline.

Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability. We are, however, concerned that many recent proposals to change Section 230—including calls to repeal it altogether—would not serve that objective well. In fact, they would have unintended consequences—harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.

We might better achieve our shared objectives by focusing on ensuring transparent, fair, and effective processes for addressing harmful content and behavior. Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time. With this in mind, we are committed not only to doing our part on our services, but also to improving transparency across our industry.

That's standing up for the law that helped enable the open internet, not tossing it under the bus because it's politically convenient. It won't make politicians happy. But it's the right thing to say -- because it's true.


Posted on Techdirt - 24 March 2021 @ 9:33am

If Trump Ever Actually Creates A Social Network Of His Own, You Can Bet It Will Rely On Section 230

from the i-mean,-come-on dept

There have been rumors for ages that former President Donald Trump might "start" a social network of his own, and of course, that talk ramped up after he was (reasonably) banned from both Twitter and Facebook. Of course Trump is not particularly well known for successfully "starting" many businesses. Over the last few decades of his business career, he seemed a lot more focused on just licensing his name to other businesses, often of dubious quality. So it was no surprise when reports came out last month that, even while he was President, he had been in talks with Parler to join that site in exchange for a large equity stake in the Twitter-wannabe-for-Trumpists. For whatever reason, that deal never came to fruition.

But, over the weekend, Trump spokesperson (and SLAPP suit filer) Jason Miller told Fox News that Trump was preparing to launch his own social network in the next few months. Amusingly, right before Miller made this claim, he noted exactly what I had said about how Trump being banned from Twitter and Facebook wasn't censorship, since Trump could get all the press coverage he wanted:

“The president’s been off of social media for a while,” he told Fox News Media Buzz host Howard Kurtz, “[but] his press releases, his statements have actually been getting almost more play than he ever did on Twitter before.”

But he then followed that up with an offhand comment saying:

I do think that we’re going to see President Trump returning to social media in probably about two or three months here with his own platform.

And this is something that I think will be the hottest ticket in social media, it’s going to completely redefine the game, and everybody is going to be waiting and watching to see what exactly President Trump does. But it will be his own platform.

Many, many people have assumed that -- just like revealing his tax returns, infrastructure week, and his shiny new healthcare plan -- this announcement was just bluster and nonsense, with no actual expectation that anything will ever be done. And that is perhaps likely. Even Trump's normal allies seem less than thrilled with the idea, though mainly because it may lead to further fragmenting among the "social media websites for MAGA conspiracy theorists." Others have, quite reasonably, pointed out that a social media site built on Trump's cult of personality is likely to be crazy boring and just not that interesting.

However, I kind of do hope that it actually comes to be, if only to see just how quickly Trump's new social network has to rely on Section 230 to defend itself in court. Remember, Trump spent the last year of his presidency slamming Section 230 (which he completely misrepresented multiple times and never seemed to actually understand). You may recall that one of the parting shots of his presidency was to try to block military funding if Congress wouldn't completely repeal Section 230.

But, of course, if a TrumpBook ever came into actual existence, you can bet that (1) it, like Parler, would need to speedrun the content moderation learning curve, and (2) would certainly be subject to some lawsuits regarding whatever insane crap its users would post. Trump's own comments on his own site would not be protected by Section 230, as that would be content created by an "employee" of the site itself, but the site would be protected from liability from whatever nonsense his sycophantic fans posted. And you can bet that his lawyers (assuming he could find any who would work for him) would very quickly lean on Section 230 to protect the company from any such lawsuits.

I mean, we've already seen Trump rely on anti-SLAPP laws in court, despite demands to "open up our libel laws." So he's already got a precedent for relying on the very same laws he hates in court. Hell, Trump has even relied on Section 230 in court to argue that he wasn't legally liable for his own retweets.

So, sure, let him start his own social network, and then be forced to recognize how Section 230 is actually something that he needs.


Posted on Techdirt - 23 March 2021 @ 10:50am

Senator Mark Warner Doesn't Seem To Understand Even The Very Basic Fundamentals Of Section 230 As He Seeks To Destroy It

from the this-is-astounding dept

On Monday morning, Protocol hosted an interesting discussion on Reimagining Section 230 with two of its reporters, Emily Birnbaum and Issie Lapowsky. It started with those two reporters interviewing Senator Mark Warner about his SAFE TECH Act, which I've explained is one of the worst 230 bills I've seen and would effectively end the open internet. For what it's worth, since posting that I've heard from a few people that Senator Warner's staffers are now completely making up lies about me to discredit my analysis, while refusing to engage on the substance, so that's nice. Either way, I was curious to see what Warner had to say.

The Warner section begins at 12 minutes into the video, if you want to just watch that part, and it's... weird. It's hard to watch this and not come to the conclusion that Senator Warner doesn't understand what he's talking about. At all. It's clear that some people have told him about two cases in which he disagrees with the outcome (Grindr and Armslist), but that no one has bothered to explain to him any of the specifics of either of those cases, or what his law would actually do. He also doesn't seem to understand how 230 works now, or how various internet websites actually handle content moderation. It starts out with him (clearly reading off a talking point list put in front of him) claiming that Section 230 has "turned into a get out of jail free card for large online providers to do nothing for foreseeable, obvious and repeated misuse of their platform."

Um. Who is he talking about? There are, certainly, a few smaller platforms -- notably Gab and Parler -- that have chosen to do little. But the "large online platforms" -- namely Facebook, Twitter, and YouTube -- all have huge trust & safety efforts to deal with very difficult questions. Not a single one of them is doing "nothing." Each of them has struggled, obviously, in figuring out what to do, but it's not because of Section 230 giving them a "get out of jail free card." It's because they -- unlike Senator Warner, apparently -- recognize that every decision has tradeoffs and consequences and error bars. And if you're too aggressive in one area, it comes back to bite you somewhere else.

One of the key points that many of us have tried to raise over the years is that any regulation in this area should be humble in recognizing that we're asking private companies to solve big societal problems that governments have spent centuries trying, and failing, to solve. Yet, Warner just goes on the attack -- as if Facebook is magically why bad stuff happens online.

Warner claims -- falsely -- that his bill would not restrict anyone's free speech rights. Warner argues that Section 230 protects scammers, but that's... not true? Scammers still remain liable for any scam. Also, I'm not even sure what he's talking about because he says he wants to stop scamming by advertisers. Again, scamming by advertisers is already illegal. He says he doesn't want the violation of civil rights laws -- but, again, that's already illegal for those doing the discriminating. The whole point of 230 is to put the liability on the actual responsible party. Then he says that we need Section 230 to correct the flaws of the Grindr ruling -- but it sounds like Warner doesn't even understand what happened in that case.

His entire explanation is a mess, which also explains why his bill is a mess. Birnbaum asks Warner who from the internet companies he consulted with in crafting the bill. This is actually a really important question -- because when Warner released the bill, he said that it was developed with the help of civil rights groups, but never mentioned anyone with any actual expertise or knowledge about content moderation, and that shows in the clueless way the bill is crafted. Warner's answer is... not encouraging. He says he talked with Facebook and Google's policy people. And that's a problem, because as we recently described, the internet is way more than Facebook and Google. Indeed, this bill would help Facebook and Google by basically making it close to impossible for new competitors to exist, while leaving the market to those two. Perhaps the worst way to get an idea of what any 230 proposal would do is to only talk to Facebook and Google.

Thankfully, Birnbaum immediately pushed back on that point, saying that many critics have noted that smaller platforms would inevitably be harmed by Warner's bill, and asking if Warner had spoken to any of these smaller platforms. His answer is revealing. And not in a good way. First, he ignores Birnbaum's question, and then claims that when Section 230 was written it was designed to protect startups, and that now it's being "abused" by big companies. This is false. And Section 230's authors have said this is false (and one of them is a colleague of Warner's in the Senate, so it's ridiculous that he's flat out misrepresenting things here). Section 230 was passed to protect Prodigy -- which was a service owned by IBM and Sears. Neither of those were startups.

Birnbaum: Critics have said that small platforms and publishers will be disproportionately harmed by some of these sweeping Section 230 reforms, including those contained within your bill. So did you have an ongoing conversation with some of those smaller platforms before the bill was introduced? Are you open to any changes that would ensure that they are not disproportionately harmed while Facebook just pays more, which they can afford?

Warner: Section 230 in the late '90s was then about protecting those entrepreneurial startups. What it has transformed into is a "get-out-of-jail-free" card for the largest companies in the world, to not moderate their content, but frankly, to ignore repeated misuse abuse in a way that we've tried to address.

What an odd way to respond to a question about smaller websites -- to immediately focus on the largest companies, and not ever address the question being raised.

Lapowsky jumps in to point out that Warner is not answering the question, and that to just focus on the (false) claim that the "big tech" platforms use 230 as a "get out of jail free card" ignores all the many smaller sites who use it to help deal with frivolous and vexatious litigation. Lapowsky follows that up by noting, correctly, that it's really the 1st Amendment that protects many of the things that Warner is complaining about, and that Section 230 has the procedural benefits that help get such cases kicked out of court earlier. Her question on this is exactly right and really important: Facebook and Google can spend the money to hire the lawyers to succeed on 1st Amendment grounds on those cases. Smaller platforms (like, say, ours) cannot.

Warner, looking perturbed, completely misses the point, and stumbles around with a bunch of half sentences before finally trying to pick a direction to go in. But one thing Warner does not do is actually answer Lapowsky's question. He just repeats what he claims his law will do (ignoring the damage it will actually do). He also claims that the law is being used against the wishes of the authors (the authors have explicitly denied this). He also claims -- based on nothing -- that the courts have "dramatically expanded" what 230 covers, and that other lawmakers don't understand the difference between the 1st Amendment and 230.

And then things go completely off the rails. Lapowsky pushes back, gently, on Warner's misunderstanding of the point and intent of 230, and Warner cuts her off angrily, again demonstrating his near total ignorance of the issue at hand and refusing to address her actual point, but just slamming the table insisting that the big companies are somehow ignoring all bad stuff on their websites. This is (1) simply not true and (2) completely unrelated to the point Lapowsky is making about every other website. What's incredible is how petulant Warner gets when asked to defend just the very basics of his terrible law.

Lapowsky: There's also another part of your bill, though, that deals with affirmative defense requirements. And the idea is basically so defendants couldn't just immediately fast track to the 230 defense to get cases quickly dismissed. And this is something a lot of critics say, effectively, guts the main purpose of Section 230 protections. So tell me a little bit about why you introduced this requirement.

Warner: Are you saying that the original intent of Section 230 was to in a sense, wipe away folks' legal rights?

Lapowsky: Not the intent, but certainly—

Warner: But if we're gonna go back to the intent of the legislation, versus the way the courts have so dramatically expanded what was potentially the original intent, I think it's one of the reasons why we're having this debate. And candidly, some policymakers may not be as familiar with the nuance and the differential between First Amendment rights, which we want to stand by and protect, and what we think has been the misuse of this section and the over-expansion of the court's rulings. We want to draw it back in and to make sure that things that are already illegal — like for example, illegal paid scams that take place on a lot of these platforms, I actually think there should be an ability to bring a suit against those kinds of illegal scams. The idea that you can flash your "get-out-of-jail" Section 230 card up front, before you even get to the merits of any of those discussions, I just respectfully think ought to not be the policy of the United States.

Lapowsky: My understanding of the intent was that this was a bill that was meant to encourage good faith efforts to moderate content, but also protect companies when they get things wrong, when they don't catch all the content or when they take something down that they shouldn't have. And obviously, this was written at a time when the internet—

Warner: Can I just ask, are you saying that Section 230 has reinforced good faith intent on moderation? Again, if that's your view of how it's been used by the large platforms, we just have a fundamental disagreement. I think Section 230 has been used and abused by the most powerful companies in the world.

Lapowsky: I wouldn't—

Warner: [They've been allowed] to not act responsibly, and instead it has allowed whether it's abuse of civil rights, abuse of individuals' personal behaviors as in the Grindr case, whether it's for large platforms to say, "Well, I know this scam artist is paying me to put up content that probably is ripping people off, but I'm going to use Section 230 as a way to prevent me from acting responsibly and actually taking down that content." So if you don't believe those things are happening, then that's a position to have, again, respectfully, I would just fundamentally disagree with you.

Lapowsky: It's not my position—

Warner: Emily, are there other questions? I thought we were gonna hear from a variety of questions. I'm happy to do this debate but I thought that—

There's so much to comment on here. First, Lapowsky is asking a specific question that Warner either does not understand or does not want to answer. She's pointing out, accurately, what 230 actually does and how it protects lots and lots of internet users and sites, beyond the "big" guys. And Warner is obsessing over some perceived problem that he fundamentally does not seem to understand. First of all, no large online platform wants scammers on their website. They don't need to hide behind Section 230, because public pressure in the form of angry users, journalists exposing the bad behavior, and just common sense has every major online site seeking to take down scams.

Warner's bill doesn't do anything to help in that situation other than make sure that if a smaller platform fucks up and misses a scammer, then suddenly they'll face crippling liability. The big platforms -- that Warner is so sure are doing nothing at all -- have massive trust and safety operations on the scale that no other site could possibly match. And they're going to miss stuff. You know why? Because that's the nature of large numbers. You're going to get stuff wrong.

As for the Grindr case, that actually proves the opposite point. The reason the Grindr case was a problem was not that Grindr fucked up, but that law enforcement ignored Matthew Herrick's complaint against his vengeful ex for too long. And eventually they got it right and arrested his ex who had abused Grindr to harass Herrick. Making Grindr liable doesn't fix law enforcement's failures. It doesn't fix anything. All it does is make sure that many sites will be much more aggressive in stifling all sorts of good uses of their platform to make sure they don't miss the rare abusive uses. This is fundamentally why 230 is so important. It creates the framework that enables nearly every platform to work to minimize the mistakes without fearing what happens if they get it wrong (exactly as Lapowsky pointed out, and which Warner refuses to address).

At this point, Lapowsky again tries to explain in more detail what she's asking, and a clearly pissed off Warner cuts her off, ignores her and turns to the other reporter, Birnbaum, to ask if she has any other questions for him, snottily noting that he expected questions from listeners. Lapowsky tries again, pointing out that she thinks it's important to hear Warner respond to the actual criticisms of his bill (rather than just repeating his fantasy vision of what is happening and what his bill does).

Finally, Lapowsky is able to raise one of the key problems we raised in our article: that the SAFE TECH Act, by wiping out 230 protections for any content for which money exchanges hands, is way too broad and would remove 230 for things like web hosting or any kind of advertising. Warner goes on a long rambling rant about how he thinks this should be debated around conference tables as they "iterate," but then also says that the companies should be forced to come to hearings to defend their content moderation practices. Then, rather than actually responding to the point that the language is incredibly broad, he immediately focuses in on one extreme case, the Armslist case, and demands to know Lapowsky's view on what should happen with that site.

But... notice that he never actually answers her question about the incredibly broad language in the bill. It's incredibly ridiculous to focus on an extreme outlier to defend language that would basically impact every website out there by removing any 230 protections for web hosts. This is the worst kind of political grandstanding. Take one extreme example, and push for a law that will impact everyone, and if anyone calls you on the broad reach, just keep pointing at that extreme example. It's disgusting.

At the end, Warner states that he's open to talking to smaller platforms, which is kind of laughable, considering that his staffers have been going around trashing and lying about people like myself that have pointed out the problems with his bill.

Either way, the interview makes clear that Warner does not understand how content moderation works, or what his bill actually does. Clearly, he's upset about a few extreme cases, but he doesn't seem to recognize that in targeting what he believes are two bad court rulings, he would completely upend how every other website works. And when pushed on that, he seems to get angry about it. That's not a good way for legislation to be made.


Posted on Techdirt - 22 March 2021 @ 10:48am

Senators Leahy And Tillis -- Both Strongly Supported By Hollywood -- Ask Merrick Garland To Target Streaming Sites

from the because-of-course dept

As you'll likely recall, at the very end of last year, Senator Thom Tillis, the head of the intellectual property subcommittee in the Senate, slipped a felony streaming bill into the grand funding omnibus. As we noted at the time, this bill -- which was a pure gift to Hollywood -- was never actually introduced, debated, or voted on separately. It was simply slipped directly into the omnibus. This came almost a decade after Senators had tried to pass a similar bill connected to the SOPA/PIPA fight. You may even recall that when Senator Amy Klobuchar introduced such a bill in 2011, Justin Bieber actually suggested that maybe Senator Klobuchar should be locked up for trying to turn streaming into a felony.

Of course, this whole thing was a gift to the entertainment industry, which has been a big supporter of Senator Tillis. With the flipping of the Senate, Senator Leahy has now become the chair of the IP subcommittee. As you'll also likely recall, he was the driving force behind the PIPA half of SOPA/PIPA, and has also been a close ally of Hollywood. So close, in fact, that they give him a cameo in every Batman film. Oh, and his daughter is literally one of Hollywood's top lobbyists in DC.

So I guess it's no surprise that Tillis and Leahy have now teamed up to ask new Attorney General Merrick Garland to start locking up those streamers. In a letter sent to Garland, they claim the following:

Unlawful streaming services cost the U.S. economy an estimated $29 billion per year. This illegal activity impacts growth in the creative industries in particular, which combined employ 2.6 million Americans and contribute $229 billion to the economy per year. In short, unlawful streaming is a threat to our creative industries and the economic security and well-being of millions of Americans.

If you've been following these stories long enough, you know where this number comes from. It's from a report put out by the US Chamber of Commerce's "The Global IP Center" and written by NERA Consulting. The US Chamber of Commerce has always been a huge backer of stronger copyright -- mainly because the MPA pays them to be -- and NERA Consulting releases reports for Hollywood all the time. This report is not nearly as bad as some of their earlier reports, but it still makes a ton of assumptions about consumption that seem unlikely to be anywhere close to reality.

Either way, Tillis and Leahy want Garland to get down to doing exactly what Hollywood wants:

Now that you have been confirmed, will you commit to making prosecutions under the PLSA a priority? If so, what steps will you take during your first one hundred days to demonstrate your commitment to combating copyright piracy?

How quickly do you intend to update the U.S. Attorneys manual to indicate prosecutors should pursue actions under the PLSA?

Hurry up and throw streamers in jail!

As if recognizing just how bad this looks, they did include one final point as a sort of nod towards the fact that the DOJ probably shouldn't be going after ordinary everyday streamers.

When updating the U.S. Attorneys manual, what type of guidance do you intend to provide to make clear that prosecutions should only be pursued against commercial piracy services? Such guidance should make clear that the law does not allow the Department to target the ordinary activities of individual streamers, companies pursuing licensing deals in good faith, or internet service providers (ISPs) and should be reflective of congressional intent as reflected in our official record.

Just the fact that they need to include this certainly suggests that they know how dangerous the law they passed was, and how it could easily be misinterpreted and/or abused to go after such individuals or companies.

Hopefully, AG Garland realizes that he's got more important things to do than being Hollywood's latest cop on the beat.


Posted on Free Speech - 22 March 2021 @ 9:42am

Appeals Court Judge Attacks Fundamental Principle Of 1st Amendment Law, Because He Thinks The Media Likes Democrats Too Much

from the ooooooh-boy dept

Two years ago, Supreme Court Justice Clarence Thomas shocked a lot of people by arguing -- somewhat out of nowhere -- that the Supreme Court should revisit the NY Times v. Sullivan ruling. If you're unaware, that 1964 ruling is perhaps the most important and fundamental Supreme Court ruling regarding the 1st Amendment. It's the case that established a few key principles and tests that are incredibly important in stopping vexatious, censorial SLAPP suits -- often by those in power, against those who criticize.

Now, a DC Circuit appeals court judge -- and close friend of Thomas's -- is suggesting that the court toss that standard. And his reasons are... um... something quite incredible. Apparently, he's mad that the media and big tech are mean to Republicans, and he's worried that Fox News and Rupert Murdoch aren't doing enough to fight back against those evil libs, who are "abusing" the 1st Amendment to spew lies about Republicans. As you'll see, the case in question isn't even about the media, the internet, or Democrats/Republicans at all. It's about a permit in Liberia to drill for oil. Really. But there's some background to go through first.

The key part of the Sullivan case is that, if the plaintiff is considered a "public figure," then they need to show "actual malice" to prove defamation. The actual malice standard is widely misunderstood. As I've heard it said, "actual malice" requires no actual malice. It doesn't mean that the person making the statements really dislikes who they're talking about. It means that the person making the statements knew that the statements were false, or made the statements "with reckless disregard for the truth." Once again, "reckless disregard for the truth" has a specific meaning that is not what you might think. In various cases, the Supreme Court has made it clear that this means that the person either had a "high degree of awareness" that the statements are probably false or "entertained serious doubts as to the truth" of the statements. In other words, it's not just that they didn't do due diligence. It's that they did, found evidence suggesting the content was false, and then still published anyway.

This is, obviously, a high bar to get over. But that's on purpose. That's how defamation law fits under the 1st Amendment (some might argue that defamation law itself should violate the 1st Amendment as it is, blatantly, law regarding speech -- but by limiting it to the most egregious situations, the courts have carved out how the two can fit together). Five years ago, 1st Amendment lawyer Ken White noted that there was no real concerted effort to change this standard, and it seemed unlikely that many judges would consider it.

Unlike, say, Roe v. Wade, nobody's been trying to chip away at Sullivan for 52 years. It's not a matter of controversy or pushback or questioning in judicial decisions. Though it's been the subject of academic debate, even judges with philosophical and structural quarrels with Sullivan apply it without suggesting it is vulnerable. Take the late Justice Scalia, for example. Scalia thought Sullivan was wrongly decided, but routinely applied it and its progeny in cases like the ones above. You can go shopping for judicial candidates whose writings or decisions suggest they will overturn Roe v. Wade, but it would be extremely difficult to find on... chemtrail-level, but several firm strides in that direction. Nor is the distinction between fact and opinion controversial — at least not from conservatives. There's been some back and forth over whether opinion is absolutely protected (no) or whether it might be defamatory if it implies provably false facts (yes) but there's no conservative movement to make insults and hyperbole subject to defamation analysis. The closest anyone gets to that are liberal academics who want to reinterpret the First Amendment to allow prohibitions of "hate speech" and other "hurtful" words. It seems unlikely that Trump would appoint any of these.

In short, there's no big eager group of "overturn Sullivan" judges waiting in the wings to be sent to the Supreme Court. The few academics who argue that way are likely more extreme on other issues than Trump would want.

And that's why Clarence Thomas's attack on the Sullivan standard was so shocking two years ago. It came basically out of nowhere. Thomas tried to make it all about "originalism", suggesting that if the framers of the Constitution didn't set up different standards for public figures, neither should the Supreme Court. Indeed, what was motivating Thomas' anger at the Sullivan standard seemed to be... that it let too many people be mean to public figures. He even seemed to argue that defamation law should be flipped to be more protective of public figures, since apparently those public figures are delicate little flowers who can't be forced to face pointed criticism. From his statement:

Far from increasing a public figure’s burden in a defamation action, the common law deemed libels against public figures to be, if anything, more serious and injurious than ordinary libels. See 3 Blackstone *124 (“Words also tending to scandalize a magistrate, or person in a public trust, are reputed more highly injurious than when spoken of a private man”); 4 id., at *150 (defining libels as “malicious defamations of any person, and especially a magistrate, made public by either printing, writing, signs, or pictures, in order to provoke him to wrath, or expose him to public hatred, contempt, and ridicule” (emphasis added)). Libel of a public official was deemed an offense “‘most dangerous to the people, and deserv[ing of] punishment, because the people may be deceived and reject the best citizens to their great injury, and it may be to the loss of their liberties.’”

In the two years since he wrote that, thankfully, there's been little other movement in the courts to attack the Sullivan standard. Indeed, as White had suggested, any move to do so seems to be viewed as blatantly conspiratorial. However, now an appeals court judge has done exactly what Thomas seemed to be signaling he wanted. And, perhaps not surprisingly, that judge happens to be not just a close friend of Clarence Thomas, but the judge who convinced Clarence Thomas to become a judge in the first place.

Judge Laurence Silberman has been on the DC Circuit since 1985, and has been on "senior status" since 2000. But apparently he's got a real bone to pick with the Sullivan standard. In an absolutely incredible back-and-forth majority opinion and dissent in a defamation case, it is made quite clear that Silberman hates the Sullivan actual malice standard, believes the media is super biased and mean to conservatives, and is no fan of the two other judges on the panel, Judge Sri Srinivasan (currently the Chief Judge on the DC Circuit) and Judge David Tatel.

Both the majority opinion, by Tatel with Srinivasan joining, and the dissent, snipe at the other side in quite pointed ways. But we'll get to that. First, the details of the case. Without going too deep into the weeds, it involves a deal in which Exxon sought to buy an oil drilling license from Liberia. There had been concerns about corruption regarding oil licensing deals in Liberia in the past -- including the very specific plot that Exxon was seeking to drill in. Liberia had put together a committee to help oversee these kinds of negotiations. After the deal -- the largest ever for Liberia -- was completed, the National Oil Company of Liberia awarded bonuses to the negotiators on the committee. Two of those negotiators, Christiana Tah and Randolph McClain, were Liberia's Minister of Justice and the CEO of the National Oil Company of Liberia. Each received a $35,000 bonus.

Global Witness, a non-profit that tries to highlight corruption and human rights violations related to "natural resource exploitation," put out a report alleging that these bonuses were bribes to get the deal to go through. Accusing someone of accepting a bribe is, at least on its face, a much more serious claim and could actually be defamatory (unlike many cases we see where people scream defamation over opinions). However, this case ran into a big problem: the lack of actual malice, which allowed the district court to dismiss the case relatively quickly (as an aside, Global Witness also sought to use DC's anti-SLAPP law, but unfortunately, since the DC Circuit has said for years that DC's anti-SLAPP law cannot be used in federal court, that failed at both the district and the appeals court level).

Here, the majority opinion explains (in quite readable fashion!) the actual malice standard, and why Tah and McClain failed to establish it. For those who want a nice summary of how actual malice works, the opinion is a good summation:

The actual malice standard is famously “daunting.” McFarlane v. Esquire Magazine, 74 F.3d 1296, 1308 (D.C. Cir. 1996). A plaintiff must prove by “clear and convincing evidence” that the speaker made the statement “with knowledge that it was false or with reckless disregard of whether it was false or not.” Jankovic III, 822 F.3d at 589–90 (second part quoting New York Times Co., 376 U.S. at 279–80). “[A]lthough the concept of reckless disregard cannot be fully encompassed in one infallible definition,” the Supreme Court has “made clear that the defendant must have made the false publication with a high degree of awareness of probable falsity,” or “must have entertained serious doubts as to the truth of his publication.” Harte-Hanks Communications, Inc. v. Connaughton, 491 U.S. 657, 667 (1989) (alteration omitted) (internal quotation marks omitted); see also id. at 688 (using these formulations interchangeably). The speaker’s failure to meet an objective standard of reasonableness is insufficient; rather the speaker must have actually “harbored subjective doubt.” Jankovic III, 822 F.3d at 589.

But soon after this, the barbs at Silberman begin. The ruling notes that Silberman seems to have his own objective in dissenting -- even highlighting that the plaintiffs in the case didn't even make the argument Silberman so desperately seems to want them to make.

The dissent thinks this is an easy case. “In Global Witness’s story,” the dissent asserts, “Exxon was the briber,” Dissenting Op. at 1, yet the report admits that “Global Witness ha[d] no evidence that Exxon directed NOCAL to pay Liberian officials, nor that Exxon knew such payments were occurring,” Report at 31.

Critically, however, neither Tah nor McClain advances this theory—in their briefing to us, they never even mention the sentence on which the dissent relies. They make four specific arguments in support of their claim that Global Witness possessed actual malice, supra at 8, not one of which is that Global Witness had no evidence that Exxon was the briber, and for good reason. At most, the report implies that NOCAL, not Exxon, was the briber, thus rendering any lack of evidence as to Exxon’s direction or knowledge of the payments totally irrelevant.

The opinion then even calls out Silberman for trying to coax the lawyers to make the argument he wanted them to make instead of the argument they were actually making:

Indeed, when our dissenting colleague surfaced his theory at oral argument, it was so foreign to appellants’ counsel that our colleague had to spoon-feed him after he failed to get the initial hint. See Oral Arg. Tr. at 10 (“Well, no, it’s worse. Isn’t it stronger than that, counsel? We have no evidence.”). As our dissenting colleague himself has made clear, “we do not consider arguments not presented to us.” Diamond Walnut Growers, Inc. v. NLRB, 113 F.3d 1259, 1263 (D.C. Cir. 1997) (en banc). Or put another way, “appellate courts do not sit as self-directed boards of legal inquiry and research, but essentially as arbiters of legal questions presented and argued by the parties before them.” Carducci v. Regan, 714 F.2d 171, 177 (D.C. Cir. 1983).

Ooof. And, indeed, when you read the dissent, you can see why Tatel was so annoyed. Silberman pretty clearly has a point he wants to make, and he's going to make it whether or not Tah and McClain raised the issue in the case. And that point is (1) the actual malice standard is bad, (2) mainstream media companies are bad because they support Democrats, (3) big tech is bad because it supports Democrats, and (4) to some extent, Silberman thinks his colleagues on the bench are bad. Oh, but Fox News, Rupert Murdoch, and his buddy Clarence Thomas are all good. It's... quite incredible. I mean, check out this statement:

My disagreement with the district court is limited to the actual malice question (my disagreement with the Majority is much broader).

A key part of the disagreement is whether Exxon or NOCAL was considered the "briber" in this case, though the reason that's important seems fairly tortured, so I won't even get into it here. Suffice it to say, Silberman believes that the story Global Witness wrote is "inherently implausible" and therefore that should satisfy the standard for defamation. But in discussing it, Silberman again throws tremendous shade on his colleagues:

The Majority’s assertion that this argument was never made by the Appellants leads me to wonder whether we received the same briefs. In my copy, Appellants argue that “Global Witness subjectively knew that it had not been able to determine whether the payments of $35,000 to Christiana Tah and Randolph McClain were corrupt bribery payments. Yet . . . Global Witness proceeded to present to readers the defamatory message that in fact [] Tah and [] McClain had taken bribes.” Appellant Br. 36 (emphasis in original). That sounds to me a whole lot like accusing Global Witness of publishing its story with no evidence to back it up. The Majority, moreover, faults me for assessing the inherent (im)plausibility of Global Witness’s story, without a specific request from Tah and McClain to do so. But (as discussed) “inherently implausible” is a legal standard by which we assess Appellants’ arguments—not an argument to be advanced.

And from there, Silberman is off to the races: he spends a few pages accusing the majority of making stuff up before finally getting around to the point he really wants to make. He wants to take Justice Thomas up on the offer to get rid of the actual malice standard entirely:

After observing my colleagues’ efforts to stretch the actual malice rule like a rubber band, I am prompted to urge the overruling of New York Times v. Sullivan. Justice Thomas has already persuasively demonstrated that New York Times was a policy-driven decision masquerading as constitutional law. See McKee v. Cosby, 139 S. Ct. 675 (2019) (Thomas, J., concurring in denial of certiorari). The holding has no relation to the text, history, or structure of the Constitution, and it baldly constitutionalized an area of law refined over centuries of common law adjudication. See also Gertz v. Robert Welch, Inc., 418 U.S. 323, 380–88 (1974) (White, J., dissenting). As with the rest of the opinion, the actual malice requirement was simply cut from whole cloth. New York Times should be overruled on these grounds alone.

He at least acknowledges that it would be "difficult" to get the Supreme Court to "overrule such a 'landmark' decision," noting correctly that it would "incur the wrath of press and media." And it would, because it would open up the media (and basically everyone else) to a bunch of censorial SLAPP suits. Silberman then reminisces about pushing the Supreme Court to overrule another "similarly illegitimate constitutional decision" -- one that has been quite important in allowing people whose civil rights were violated by police to seek redress. He goes on to whine that other judges, including then-Supreme Court Justice Kennedy, got upset with him for urging such an overturning of precedent. Kennedy, responding to Silberman, suggested that "we must guard against disdain for the judicial system." Silberman seems to relish his contrarian position:

To the charge of disdain, I plead guilty. I readily admit that I have little regard for holdings of the Court that dress up policymaking in constitutional garb. That is the real attack on the Constitution, in which—it should go without saying—the Framers chose to allocate political power to the political branches. The notion that the Court should somehow act in a policy role as a Council of Revision is illegitimate. See 1 The Records of the Federal Convention of 1787, at 138, 140 (Max Farrand ed., 1911). It will be recalled that maintaining the Brezhnev doctrine strained the resources and legitimacy of the Soviet Union until it could no longer be sustained.

He then goes through the details of the Sullivan ruling, arguing that it was clear judicial activism, and insists that such a ruling would never have happened today. Then he complains that it has given the press way too much power:

There can be no doubt that the New York Times case has increased the power of the media. Although the institutional press, it could be argued, needed that protection to cover the civil rights movement, that power is now abused. In light of today’s very different challenges, I doubt the Court would invent the same rule.

As the case has subsequently been interpreted, it allows the press to cast false aspersions on public figures with near impunity.

And then it's all "those media orgs are so mean to my friends."

Although the bias against the Republican Party—not just controversial individuals—is rather shocking today, this is not new; it is a long-term, secular trend going back at least to the ’70s. (I do not mean to defend or criticize the behavior of any particular politician). Two of the three most influential papers (at least historically), The New York Times and The Washington Post, are virtually Democratic Party broadsheets. And the news section of The Wall Street Journal leans in the same direction. The orientation of these three papers is followed by The Associated Press and most large papers across the country (such as the Los Angeles Times, Miami Herald, and Boston Globe). Nearly all television—network and cable—is a Democratic Party trumpet. Even the government-supported National Public Radio follows along.

Uh... what?

Also, big tech is bad:

As has become apparent, Silicon Valley also has an enormous influence over the distribution of news. And it similarly filters news delivery in ways favorable to the Democratic Party. See Kaitlyn Tiffany, Twitter Goofed It, The Atlantic (2020) (“Within a few hours, Facebook announced that it would limit [a New York Post] story’s spread on its platform while its third-party fact-checkers somehow investigated the information. Soon after, Twitter took an even more dramatic stance: Without immediate public explanation, it completely banned users from posting the link to the story.”).

What does this have to do with a case regarding oil drilling in Liberia? You know as much as I do. But don't worry, Judge Silberman wants you to know that at least there's Rupert Murdoch to step in and balance the scales at least somewhat. Really. I'm not kidding.

To be sure, there are a few notable exceptions to Democratic Party ideological control: Fox News, The New York Post, and The Wall Street Journal’s editorial page. It should be sobering for those concerned about news bias that these institutions are controlled by a single man and his son. Will a lone holdout remain in what is otherwise a frighteningly orthodox media culture? After all, there are serious efforts to muzzle Fox News. And although upstart (mainly online) conservative networks have emerged in recent years, their visibility has been decidedly curtailed by Social Media, either by direct bans or content-based censorship.

He also has another footnote attacking the 1st Amendment rights of the internet companies, which he insists -- without any actual evidence, because none exists -- are "biased" against his Republican friends.

Of course, I do not take a position on the legality of big tech’s behavior. Some emphasize these companies are private and therefore not subject to the First Amendment. Yet—even if correct— it is not an adequate excuse for big tech’s bias. The First Amendment is more than just a legal provision: It embodies the most important value of American Democracy. Repression of political speech by large institutions with market power therefore is—I say this advisedly—fundamentally un-American. As one who lived through the McCarthy era, it is hard to fathom how honorable men and women can support such actions. One would hope that someone, in any institution, would emulate Margaret Chase Smith.

He then proceeds to complain about how the media and big tech are helping Democrats.

There can be little question that the overwhelming uniformity of news bias in the United States has an enormous political impact. That was empirically and persuasively demonstrated in Tim Groseclose’s insightful book, Left Turn: How Liberal Media Bias Distorts the American Mind (2011). Professor Groseclose showed that media bias is significantly to the left. Id. at 192–197; see also id. at 169–77. And this distorted market has the effect, according to Groseclose, of aiding Democratic Party candidates by 8–10% in the typical election. Id. at ix, 201–33. And now, a decade after this book’s publication, the press and media do not even pretend to be neutral news services.

It should be borne in mind that the first step taken by any potential authoritarian or dictatorial regime is to gain control of communications, particularly the delivery of news. It is fair to conclude, therefore, that one-party control of the press and media is a threat to a viable democracy. It may even give rise to countervailing extremism. The First Amendment guarantees a free press to foster a vibrant trade in ideas. But a biased press can distort the marketplace. And when the media has proven its willingness—if not eagerness—to so distort, it is a profound mistake to stand by unjustified legal rules that serve only to enhance the press’ power.

And that's how it closes. Even if there are legitimate reasons to question the "actual malice" standard, to go on an unhinged Fox News-style rant about "anti-conservative bias" seems particularly ridiculous. It sure looks like Silberman has been spending a bit too much time believing propaganda, and is seeking to torpedo a free press in response.


Posted on Techdirt - 19 March 2021 @ 12:11pm

House Republicans Want To Flip Section 230 On Its Head, Bring Back Distributor Liability

from the yikes dept

There was a time when a key part of the Republicans' political platform was "tort reform" and reducing the ability of civil lawsuits to be brought against companies. The argument they made (and to which they still give lip service) is that too much liability leads to a barrage of frivolous nuisance litigation, which only benefits greedy trial lawyers. Apparently, that concept has been tossed out the window -- as with so many Republican principles -- if you mention "big tech." The latest example of this is a new Section 230 reform bill introduced by Representative Jim Banks called the "Stop Shielding Culpable Platforms Act," which would massively increase liability for any company that hosts user content online.

Banks trumpeted his own confusion on this issue earlier in the week by tweeting -- falsely -- that "Section 230 knowingly lets Big Tech distribute child pornography without fear of legal repercussions." This is wrong. Child sexual abuse material (CSAM) is very, very, very much illegal and any website hosting it faces serious liability issues. Section 230 does not cover federal criminal law, and CSAM violates federal criminal law. Furthermore, federal law requires every website to report the discovery of CSAM to the CyberTipline run by NCMEC.

The law is pretty clear here and you'd think that a sitting member of Congress could, perhaps, have had someone look it up?

(a) Duty To Report.—

(1) In general.—

(A) Duty.—In order to reduce the proliferation of online child sexual exploitation and to prevent the online sexual exploitation of children, a provider—

(i) shall, as soon as reasonably possible after obtaining actual knowledge of any facts or circumstances described in paragraph (2)(A), take the actions described in subparagraph (B); and

(ii) may, after obtaining actual knowledge of any facts or circumstances described in paragraph (2)(B), take the actions described in subparagraph (B).

(B) Actions described.—The actions described in this subparagraph are—

(i) providing to the CyberTipline of NCMEC, or any successor to the CyberTipline operated by NCMEC, the mailing address, telephone number, facsimile number, electronic mailing address of, and individual point of contact for, such provider; and

(ii) making a report of such facts or circumstances to the CyberTipline, or any successor to the CyberTipline operated by NCMEC.

And yet, Banks seems to ignore all of this. And that leads to this new bill. To be fair, the bill itself is not as insane and disconnected from reality as so many other Section 230 bills, but it's still ridiculous. It's all built on the false argument that websites are free to knowingly host this kind of content. In fact, the bill is mostly performative: the vast majority of it is Banks misrepresenting news stories to make it sound -- falsely -- like websites are free to knowingly host CSAM.

The actual change to 230 is much shorter -- but the impact would basically flip Section 230's role on its head, and would lead to two things I thought Republicans were against: widespread suppression of speech online and a massive influx of frivolous and vexatious litigation. Here's the change. It would add in this paragraph to Section 230:

‘‘(B) NO EFFECT ON TREATMENT AS DISTRIBUTOR.—Nothing in subparagraph (A) shall be construed to prevent a provider or user of an interactive computer service from being treated as the distributor of information provided by another information content provider.’’.

To understand all this, it helps to understand the different kinds of liability that existed pre-Section 230. This history is well documented in Jeff Kosseff's excellent book on the history of Section 230. The key case here was Smith v. California, which involved a bookstore that was found to have violated a city ordinance against obscenity for having a book on its shelves that was deemed obscene (that book, which is currently listed on Amazon, though out of stock, was apparently fairly tame by modern standards, but did involve some scenes where -- gasp -- sex happens).

Either way, the Supreme Court ruled in the Smith case that while obscene books are not constitutionally protected, you can't hold the bookseller liable if it did not have knowledge of the book's obscene contents. And thus, the Supreme Court established a somewhat messy "distributor liability" standard, in which you could be liable for books you distributed... but there had to be some knowledge by the distributor of the illegality of the material. The court -- somewhat explicitly -- refused to discuss what "mental element is requisite" to prove knowledge. This distributor liability was considered different from "publisher liability," because the assumption was that if you're the actual publisher, then you obviously have knowledge of the material in question.

This resulted in a lot of confusion in the ensuing years, and pre-Section 230 there was a lot of concern about how that standard would play out on the early internet (or even with other distributors). Eventually, with the ruling in the Stratton Oakmont v. Prodigy case, a judge leapt right past distributor liability and said that Prodigy actually had publisher liability for defamatory material, simply because it did some moderation.

Section 230 was written, explicitly, to overrule the decision in the Prodigy case. However, since the Prodigy case focused on actual publisher liability, and didn't even get into the weeds of distributor liability, there was some early confusion as to whether or not Section 230 actually protected against distributor liability as well. Indeed, some observers of internet law were initially unimpressed by Section 230, suggesting that it might be useful, but not until courts really had weighed in on the "jumbled mess" of secondary liability frameworks and how 230 impacted them. That changed after the first big case involving Section 230, Zeran v. AOL, which read 230 broadly to say that it prohibited all such civil liability -- including distributor liability.

Since then, Section 230's authors -- Chris Cox and Ron Wyden -- have repeatedly said that the court in Zeran got it exactly right. They have noted, correctly, that any other interpretation of 230 would make it close to useless, because it would lead to a bunch of frivolous lawsuits involving wasteful fighting over discovery to prove "knowledge."

But, apparently, that's what Jim Banks and the Republican Study Committee he leads want. A lot more liability and costly legal fights over discovery to prove knowledge and create liability for distributors. I mean, it's so ridiculous that it might even lead trial lawyers -- a group that has historically backed Democrats -- to start stumping for Republicans since this will open up the field to tons of costly litigation. And, of course, adding back in distributor liability won't magically fix the issues that Banks claims he's trying to fix because -- as already noted -- allowing CSAM on any website is already very, very much illegal, and a huge liability. So none of that changes.

The only actual change created by this bill is that it... will enable lawsuits against tons of websites. And, in order to avoid some of that costly litigating, many websites will also enable the heckler's veto. All anyone will ever have to do to remove content they dislike from the internet is send a notice claiming the content violates some law (defamation being the easiest, but there are others as well). Then, they'll be able to claim "knowledge" if the website refuses to remove it. That means that most websites will be quick to remove any content that someone claims is defamatory, no matter how ridiculous.

We already know how this works out, because it's kind of the way the DMCA works today -- except that at least the DMCA has some built-in counternotice provisions. But already the DMCA is abused to try to hide information, and Banks' change, should it become law, would make that kind of abuse much more widely available. At least under the DMCA, sites can more easily see that, say, a negative review is obviously not copyright infringing. Whether or not it's defamatory is not something a website can easily judge -- and therefore it's much more likely to just pull down the content.

This bill wouldn't just change Section 230, it would flip the entire logic of 230 on its head. Rather than giving websites the flexibility and personal responsibility to moderate in a manner that fits their own community, it would cause nearly every website to start pulling down content at the first whiff of controversy. Rather than enabling free speech online, it would stifle it. Indeed, one might argue that under this law, Twitter would be forced to pull down Banks' tweet about this law. After all, it could be argued that the tweet defames Twitter itself by falsely claiming the company knowingly hosts CSAM. Under Banks' law, Twitter would become liable for those false claims... about Twitter.

It's a weird flex for a Republican to push for greater suppression of speech and more frivolous lawsuits, but that's what Jim Banks is doing here. What amazing times we live in.


Posted on Free Speech - 19 March 2021 @ 10:44am

China Warns Microsoft That LinkedIn Isn't Suppressing Enough Voices

from the now-that's-censorship dept

As a bunch of US lawmakers keep threatening new laws that would force websites to remove more content, we should note just how much such moves reflect what is happening in China. The NY Times reports that Microsoft is in hot water in China, because LinkedIn apparently has been too slow to block content that displeases the Chinese government. As the article notes, LinkedIn is the one major US social network that is allowed in China -- but only if it follows China's Great Firewall censorship rules.

If you're not familiar with how that works, it's not that the government tells you what to take down -- it's just that the government makes it clear that if you let something through that you shouldn't, you're going to hear about it, and risk punishment. And it appears that's exactly what's happened to Microsoft:

China’s internet regulator rebuked LinkedIn executives this month for failing to control political content, according to three people briefed on the matter. Though it isn’t clear precisely what material got the company into trouble, the regulator said it had found objectionable posts circulating in the period around an annual meeting of China’s lawmakers, said these people, who asked for anonymity because the issue isn’t public.

As a punishment, the people said, officials are requiring LinkedIn to perform a self-evaluation and offer a report to the Cyberspace Administration of China, the country’s internet regulator. The service was also forced to suspend new sign-ups of users inside China for 30 days, one of the people added, though that period could change depending on the administration’s judgment.

Or, Microsoft/LinkedIn could do the right thing, tell the Chinese government "sorry," and just stop doing business in China. The NY Times article notes that LinkedIn doesn't even get that much usage in China. So why bother with this hassle in a way that makes the company look so bad?

Also, I'll just note the grand irony of Microsoft doing this just a week or so after Microsoft's President Brad Smith testified before Congress on how "technology companies" must support "democracy." Of course, in that context, Smith was just doing it to attack Google and the open web. But, hey, as long as it can get money from China, apparently all that "democracy" stuff isn't so important to Microsoft any more.


Posted on Techdirt - 18 March 2021 @ 9:38am

The Internet Is Not Just Facebook, Google & Twitter: Creating A 'Test Suite' For Your Great Idea To Regulate The Internet

from the test-it-out dept

A few weeks ago, Stanford's Daphne Keller -- one of the foremost experts on internet regulation -- highlighted how so much of the effort at internet reform seems to treat "the internet" as if it were entirely made up of Facebook, Google and Twitter. These may be the most visible sites to some, but they still make up only a small part of the overall internet (granted: sometimes it seems that Facebook and, to an only slightly lesser extent, Google, would like to change that, and become "the internet" for most people). Keller pointed out that the more that people -- especially journalists -- talk about the internet as if it were just those three companies, the more it becomes a self-fulfilling prophecy, in part because it drives regulation that is uniquely focused on the apparent "problems" associated with those sites (often mis- and disinformation).

I was reminded of this now, with the reintroduction of the PACT Act. As I noted in my writeup about the bill, one of the biggest problems is that it treats the internet as if every website is basically Google, Facebook, and Twitter. The demands that it puts on websites aren't a huge deal for those three companies -- as they mostly meet the criteria already. The only real change it would make for those sites is that they'd maybe have to beef up their customer support staff to have telephone support.

But for tons of other companies -- including Techdirt -- the bill is an utter disaster. It treats us the same as it treats Facebook, and acts like we need to put in place a massive, expensive customer service/content moderation operation that wouldn't make any sense, and would only serve to enable our resident trolls to demand that we have to provide a detailed explanation why the community voted down their comments.

In that same thread, Keller suggested something that I think would be quite useful: a sort of "test suite" of websites that anyone proposing internet regulation would have to run their proposal against, exploring how the regulations would affect each of those sites.

She suggested that the test suite could include Wikipedia, Cloudflare, Automattic, Walmart.com and the NY Times.

I'd extend that list significantly. Here would be mine:

  • Wikipedia
  • Github
  • Cloudflare
  • Zoom
  • Clubhouse
  • Automattic
  • Amazon
  • Shopify
  • NY Times / WSJ
  • Patreon
  • Internet Archive
  • Mastodon
  • Reddit
  • Nextdoor
  • Steam (Valve)
  • Eventbrite
  • Discord
  • Dropbox
  • Yelp
  • Twilio
  • Substack
  • Matrix
  • Glitch
  • Kickstarter
  • Slack
  • Stack Overflow
  • Notion
  • Airtable
  • WikiHow
  • ProductHunt
  • Instructables
  • All Trails
  • Strava
  • Bumble
  • Ravelry
  • DuoLingo
  • Shapeways
  • Coursera
  • Kahoot
  • Threadless
  • Bandcamp
  • Magic Cafe
  • Wattpad
  • Figma
  • LibraryThing
  • Fandom
  • Geocaching
  • VSCO
  • BoardGameGeek
  • DnDBeyond
  • GuitarMasterClass
  • Metafilter
  • BoingBoing
  • Cameo
  • OnlyFans
  • Archive of Our Own
  • Itch.io
  • Etsy
  • Tunecore
  • Techdirt

This list may feel a bit long for a "test suite" (and, indeed, as I started to put it together, I expected it to be much shorter). But, as I thought about each of these sites, I realized that they all deal with user generated content and content moderation questions -- and for each one, the moderation questions are handled in vastly different ways. The list could be a lot longer. These are just ones that I came up with quickly.

And... that's kind of the point. The great thing about Section 230 is that it allows each of these websites to take their own approach to content moderation, an approach that fits their community. Some of them rely on users to moderate. Some of them rely on a content moderation team. But if you run through this list and explore something like the PACT Act -- or the even worse SAFE-TECH Act -- you quickly realize that it would create impossible demands for many, many of these sites.

Incredibly, all this would do is move most of the functions of many of these sites -- especially the small, niche, targeted communities... over to the internet giants of Facebook and Google. Does anyone legitimately think that a site like LibraryThing needs to issue twice-a-year transparency reports on its content moderation decisions? Or that All Trails should be required to set up a live call center to respond to complaints about content moderation? Should Matrix be required to create an Acceptable Use Policy? Should the NY Times have to release a transparency report regarding what comments it moderated?
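To make the "test suite" metaphor literal, you could imagine sketching a handful of these sites as rough profiles and checking each proposed obligation against them. The sketch below is purely hypothetical -- the site attributes and the "live complaint hotline" obligation are illustrative stand-ins, not taken from any bill -- but it shows how quickly a one-size-fits-all mandate stops making sense:

```python
# Purely hypothetical sketch: run one proposed obligation against a tiny
# "test suite" of very different sites. Attributes are illustrative only.

test_suite = {
    "Wikipedia":    {"moderation": "volunteer community",        "paid_mod_org": False},
    "LibraryThing": {"moderation": "tiny in-house team",         "paid_mod_org": False},
    "Techdirt":     {"moderation": "reader voting + spam filter", "paid_mod_org": False},
    "Facebook":     {"moderation": "large paid trust & safety org", "paid_mod_org": True},
}

def live_hotline_burden(site: dict) -> str:
    # A hypothetical "staff a live complaint hotline" mandate: a marginal
    # add-on for a company with a big paid moderation org, an existential
    # cost for everyone else.
    return "marginal added cost" if site["paid_mod_org"] else "major new cost"

for name, profile in test_suite.items():
    print(f"{name}: {live_hotline_burden(profile)}")
```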

For many of the companies -- especially the more niche community sites -- the likely response is that there's no way that they can even do that. And so many of those sites will go away, or will vastly curtail their community features. And, that takes us right back to the point that we started with, as raised by Keller. When we treat the internet as if it's just Facebook, Google, and Twitter, and regulate it as such, then it's going to drive all communities to Facebook, Google, and Twitter as the only companies which can actually handle the compliance.

And why would anyone (other than perhaps Facebook, Google, and Twitter!) want that?


Posted on Techdirt - 17 March 2021 @ 12:18pm

PACT Act Is Back: Bipartisan Section 230 'Reform' Bill Remains Mistargeted And Destructive

from the second-verse,-same-as-the-first dept

Last summer we wrote about the PACT Act from Senators Brian Schatz and John Thune -- one of the rare bipartisan attempts to reform Section 230. As I noted then, unlike most other 230 reform bills, this one seemed to at least come with good intentions, though it was horribly confused about almost everything in actual execution. If you want to read a truly comprehensive takedown of the many, many problems with the PACT Act, Prof. Eric Goldman's analysis is pretty devastating and basically explains how the drafters of the bill tried to cram in a bunch of totally unrelated things, and did so in an incredibly sloppy fashion. As Goldman concludes:

This bill contains a lot of different policy ideas. It adds multiple disclosure obligations, regulates several aspects of sites’ editorial processes, makes three different changes to Section 230, and asks for two different studies. Any one of these policy ideas, standing alone, might be a significant policy change. But rather than proposing a narrow and targeted solution to a well-identified problem, the drafters packaged this jumble of ideas together to create a broad and wide-ranging omnibus reform proposal. The spray-and-pray approach to policymaking betrays the drafters’ lack of confidence that they know how to achieve their goals.

Daphne Keller also has a pretty thorough explanation of problems in the original -- noting that the bill contains some ideas that seem reasonable, but often seems sorely lacking in important details or recognition of the complexity involved.

And, to their credit, staffers working on the bill did seem to take these and other criticisms at least somewhat seriously. They reached out to many of the critics of the PACT Act (including me) to have fairly detailed conversations about the bill, its problems, and other potential approaches. Unfortunately, the new version released today does not suggest that they took many of those criticisms to heart. Instead, they took the same basic structure of the bill and just played around at the margins, leaving the new bill a problematic mess, though a slightly less problematic mess than last year's version.

The bill still suffers from the same point that Goldman made originally. It throws a bunch of big (somewhat random) ideas into one bill, with no clear explanation of what problem it's actually trying to solve. So it solves for things that are not problems, and calls other things problems that are not clearly problems, while creating new problems where none previously existed. That's disappointing to say the least.

Like the original, the bill requires that service providers publish an "Acceptable Use Policy," and then puts in place a convoluted complaint and review process, along with transparency reporting on all of this. This entire section demonstrates the fundamental problem with those writing the PACT Act -- and it's a problem that I know people explained to them: it treats this issue as if it's the same across basically every website. But, it's not. This bill will create a mess for a shit ton of websites -- including Techdirt. Forcing every website that accepts content from users to post an "acceptable use policy" leads us down the same stupid road as requiring every website to have a privacy policy. It's a nonsensical approach -- because the only reasonable way to write up such a policy is to keep it incredibly broad and vague, to avoid violating it. And that's why no one reads them or finds them useful -- they only serve as a potential way to avoid liability.

And writing an "acceptable use" policy that "reasonably informs users about the types of content that are allowed on the interactive computer service" is a fool's errand. Because what is and what is not acceptable depends on many, many variables, including context. Just by way of example, many websites famously felt differently about having Donald Trump on their platform before and after the January 6th insurrection at the Capitol. Do we all need to write into our AUPs that such-and-such only applies if you don't encourage insurrection? As we've pointed out a million times, content policy involves constant changes to your policies as new edge cases arise.

People who have never done any content moderation seem to assume that most cases are obvious and maybe you have a small percentage of edge cases. But the reality is often the opposite. Nearly every case is an edge case, and every case involves different context or different facts, and no "Acceptable Use Policy" can possibly cover that -- which is why big companies are changing their policies all the time. And for smaller sites? How the fuck am I supposed to create an Acceptable Use Policy for Techdirt? We're quite open with our comments, but we block spam, and we have our comment voting system -- so part of our Acceptable Use Policy is "don't write stuff that makes our users think you're an asshole." Is that what Schatz and Thune want?

The bill then also requires this convoluted notice-takedown-appeal process for content that violates our AUP. But how the hell are we supposed to do that when most of the moderation takes place by user voting? Honestly, we're not even set up to "put back" content if it has been voted trollish by our community. We'd have to re-architect our comments. And, the only people who are likely to complain... are the trolls. This would enable trolls to keep us super busy having to respond to their nonsense complaints. The bill, like its original version, requires "live" phone-in support for these complaints unless you're a "small business" or an "individual provider." But, the terms say that you're a small business if you "received fewer than 1,000,000 unique monthly visitors" and that's "during the most recent 12-month period." How do they define "unique visitors"? The bill does not say, and that's just ridiculous, as there is no widely accepted definition of a unique monthly visitor, and every tracking system I've seen counts it differently. Also, does this mean that if you receive over 1 million visitors once in a 12-month period you no longer qualify?
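To see just how squishy that threshold is, here's a quick illustrative sketch. The traffic numbers are entirely made up, and the two "readings" are my own -- the bill doesn't pick one -- but the same site comes out as a "small business" under one interpretation and not the other:

```python
# Hypothetical sketch: the same 12 months of traffic, judged two ways against
# an undefined "fewer than 1,000,000 unique monthly visitors" threshold.

THRESHOLD = 1_000_000

# Made-up "unique visitor" counts for one site over the most recent 12 months,
# with a single viral month in the middle.
monthly_uniques = [650_000, 700_000, 720_000, 680_000, 1_050_000, 900_000,
                   800_000, 760_000, 940_000, 870_000, 820_000, 790_000]

# Reading 1: every single month must stay under the threshold.
every_month_under = all(m < THRESHOLD for m in monthly_uniques)

# Reading 2: the average month must stay under the threshold.
average_under = sum(monthly_uniques) / len(monthly_uniques) < THRESHOLD

print("small business (strictest reading):", every_month_under)  # False: one spike disqualifies
print("small business (average reading):  ", average_under)      # True
```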

Either way, under this definition, it might mean that Techdirt no longer qualifies as a small business, and there's no fucking way we can afford to staff up a live call center to deal with trolls whining that the community voted down their trollish comments.

This bill basically empowers trolls to harass companies, including ours. Why the hell would Senator Schatz want to do that?!?

The bill also requires transparency reports from companies regarding the moderation they do, though it says they only have to come out twice a year instead of four times. As we've explained, transparency is good, and transparency reports are good -- but mandated transparency reports are a huge problem.

For both of these, it's unclear what exactly is the problem that Schatz and Thune think they're solving. The larger platforms -- the ones that everyone talks about -- basically do all of this already. So it won't change anything for them. All it will do is harm smaller companies, like ours, by putting a massive compliance burden on us, accomplishing nothing but... helping trolls annoy us.

The next big part of the bill involves "illegal content." Again, it's not at all clear what problem this is solving. The issue that the drafters of the bill would likely highlight is that some argue that there's a "loophole" in Section 230: if something is judged to be violating a law, Section 230 still allows a website to keep that content up. That seems like a problem... but only if you ignore the fact that nearly every website will take down such content. The "fix" here seems only designed to deal with the absolute worst actors -- almost all of which have already been shut down on other grounds. So what problem is this actually solving? How many websites are there that won't take down content upon receiving a court ruling on its illegality?

Also, as we've noted, we've already seen many, many examples of people faking court orders or filing fake defamation lawsuits against "John Does" who magically show up the next day to "settle" in order to get a court ruling that the content violated the law. Enabling more such activity is not a good idea. The PACT Act tries to handwave this away by giving the companies 4 days (in the original version it was 24 hours) to investigate and determine if they have "concerns about the legitimacy of the notice." But, again, that fails to take reality into account. Courts have no realistic time limit on adjudicating legality, but websites will have to review every such complaint in 4 days?!

The bill also expands the exemptions to Section 230. Currently, federal criminal law is exempt, but the bill would expand that to federal civil law as well. This is to deal with complaints from government agencies like the FTC and HUD and others who worried that they couldn't take civil action against websites due to Section 230 (though, for the most part, the courts have held that 230 is not a barrier in those cases). But, much more problematic is that it extends the federal-law exemption to state Attorneys General, allowing them to enforce those laws if their states have comparable laws. That is a potentially massive change.

State AGs have long whined about how Section 230 blocks them from suing sites -- but there are really good reasons for this. First of all, state AGs have an unfortunate history of abusing their position to basically shake down companies that haven't broken any actual law, but where they can frame them as doing something nefarious... just to get headlines that help them seek higher office. Giving them more power to do this is immensely problematic -- especially when you have industry lobbyists who have capitalized on the willingness of state AGs to act this way, and used it as a method for hobbling competitors. It's not at all clear why we should give state AGs more power over random internet companies, when their existing track record on these issues is so bad.

Anyway, there is still much more in the bill that is problematic, but on the whole this bill repeats all of the mistakes of the first -- even though I know that the drafters know that these demands are unrealistic. The first time may have been due to ignorance, but this time? It's hard to take Schatz and Thune seriously on this bill when it appears that they simply don't care how destructive it is.


Posted on Techdirt - 16 March 2021 @ 10:53am

Google's Efforts To Be Better About Your Privacy, Now Attacked As An Antitrust Violation

from the wait,-what? dept

We've talked a lot in the past about how almost no one seems to actually understand privacy, and that leads to a lot of bad policy-making, including policy-making that impacts the 1st Amendment and other concepts that we hold sacred. Sometimes, it creates truly bizarre scenarios, like the arguments being made by Texas's Attorney General in the latest amended antitrust complaint against Google.

As you'll likely recall, back in December, Texas's Attorney General Ken Paxton -- along with nine other states -- filed an antitrust lawsuit against Google. There were some bits in the lawsuit that suggested some potentially serious claims, but the key parts were heavily redacted. The non-redacted parts contained some really embarrassing mistakes, including claiming that Facebook allowing WhatsApp users to back up their accounts to Google Drive was giving Google a "backdoor" into WhatsApp communications.

That makes the latest amended complaint even more bizarre. It attacks Google for doing more to protect its users' privacy. As you may remember, a couple weeks ago, Google noted that as it got rid of 3rd party cookies in Chrome, it wasn't going to replace them with some other form of tracking. This is, clearly, good for privacy. It is also good for Google, since it's better positioned to weather a changing ad market that doesn't rely on 3rd party cookies tracking you everywhere you go.
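For anyone fuzzy on the jargon: a "third party" cookie is simply a cookie set by a domain other than the site you're actually visiting, which is what lets an ad exchange embedded on hundreds of sites recognize you across all of them. Here's a rough sketch of that distinction -- not Chrome's actual logic (real browsers compare registrable domains using the Public Suffix List), and the ad-tech domain is made up, with The Dallas Morning News's domain used only because the complaint leans on that paper as its example:

```python
# Toy sketch of what makes a cookie "third party": the domain setting the
# cookie differs from the domain of the page the user is visiting.

def registrable_domain(host: str) -> str:
    # Crude approximation: take the last two dot-separated labels.
    # (Browsers actually use the Public Suffix List for this.)
    return ".".join(host.lower().split(".")[-2:])

def is_third_party(page_host: str, cookie_host: str) -> bool:
    return registrable_domain(page_host) != registrable_domain(cookie_host)

# A cookie set by the site you're on is first party...
print(is_third_party("www.dallasnews.com", "dallasnews.com"))              # False
# ...while one set by an embedded ad exchange is third party -- the kind of
# cross-site tracking cookie Chrome plans to stop supporting.
print(is_third_party("www.dallasnews.com", "tracker.example-adtech.com"))  # True
```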

So the new amended complaint takes a move that is clearly good for everyone's privacy and whines that this is an antitrust violation.

Google’s new scheme is, in essence, to wall off the entire portion of the internet that consumers access through Google’s Chrome browser. By the end of 2022, Google plans to modify Chrome to block publishers and advertisers from using the type of cookies they rely on to track users and target ads. Then, Google, through Chrome, will offer publishers and advertisers new and alternative tracking mechanisms outlined in a set of proposals that Google has dubbed Privacy Sandbox. Overall, the changes are anticompetitive because they raise barriers to entry and exclude competition in the exchange and ad buying tool markets, which will further expand the already dominant market power of Google’s advertising businesses.

Google’s new scheme is anticompetitive because it coerces advertisers to shift spend from smaller media properties like The Dallas Morning News to large dominant properties like Google’s. Chrome is set to disable the primary cookie-tracking technology almost all non-Google publishers currently use to track users and target ads. A small advertiser like a local car dealership will no longer be able to use cookies to advertise across The Dallas Morning News and The Austin Chronicle. But the same advertiser will be able to continue tracking and targeting ads across Google Search, YouTube, and Gmail—amongst the largest sites in the world—because Google relies on a different type of cookie (which Chrome will not block) and alternative tracking technologies to offer such cross-site tracking to advertisers. By blocking the type of cookies publishers like The Dallas Morning News currently use to sell ads, but not blocking the other technologies that Google relies on for cross-site tracking, Google’s plan will pressure advertisers to shift to Google money otherwise spent on smaller publishers.

No good deed goes unpunished. Yes, it is true that Google's move will undoubtedly harm companies that rely on intrusive 3rd party cookies. But that's good for privacy. And it's funny that this is coming in the very same antitrust lawsuit that whines that one of Google's antitrust problems is that it snoops on WhatsApp (when it doesn't). Here, Google is clearly taking a stand -- the same stand that Mozilla and Apple took earlier -- against creepy and problematic 3rd party cookies, and it gets spun by Texas's AG as an attack on "small advertisers."

I'm kind of curious what the AGs bringing this lawsuit are aiming for here. Yes, I get that the entire point is just to attack Google, but if they win, do they want to require Google to be worse about privacy? Because that's ridiculous.

The complaint does try to argue that Google's Privacy Sandbox isn't really about privacy, and that it's all a "ruse." It even (selectively) quotes an old EFF blog post that rightly calls out some of the problems with the Privacy Sandbox approach. But the EFF blog post is not -- as the amended complaint implies -- suggesting that Google should allow more 3rd party cookie tracking. It's just calling out some of the other problems of the Privacy Sandbox. No one is arguing that Privacy Sandbox is a panacea for privacy issues. And no one is arguing that Google is magically all "good" for online privacy -- indeed there are plenty of legitimate reasons to be concerned about Google's impact on privacy. But the announcement it made recently was, quite clearly, a step in the right direction for privacy, and while it may make life difficult for intrusive advertisers who rely on other methods of advertising, that's really on those advertisers to be better.

There are other things in these lawsuits that may be damning for Google and its practices. I'm still really interested in learning about the redacted sections, which might reveal actual bad practices. But, it's not encouraging at all that Texas's AG is taking a step towards better protection of user privacy as some sort of evidence of nefariousness -- and doing so in the very same complaint arguing that part of the reason Google violates antitrust laws is that it "violates the privacy" of Android users (via the WhatsApp backup feature).


Posted on Techdirt - 15 March 2021 @ 9:24am

Amazon's Refusal To Let Libraries Lend Ebooks Shows Why Controlled Digital Lending Is So Important

from the libraries-need-books dept

The Washington Post tech columnist Geoffrey Fowler recently had a very interesting article about how Amazon won't allow the ebooks it publishes to be lent out from libraries. As someone who regularly borrows ebooks from my local libraries, I find this disappointing -- especially since, as Fowler notes, Amazon really is the company that made ebooks popular. But, when it comes to libraries, Amazon won't let libraries lend those ebooks out:

When authors sign up with a publisher, it decides how to distribute their work. With other big publishers, selling e-books and audiobooks to libraries is part of the mix — that’s why you’re able to digitally check out bestsellers like Barack Obama’s “A Promised Land.” Amazon is the only big publisher that flat-out blocks library digital collections. Search your local library’s website, and you won’t find recent e-books by Amazon authors Kaling, Dean Koontz or Dr. Ruth Westheimer. Nor will you find downloadable audiobooks for Trevor Noah’s “Born a Crime,” Andy Weir’s “The Martian” and Michael Pollan’s “Caffeine.”

I've seen a lot of people responding to this article with anger towards Amazon, which is understandable. I do hope Amazon changes this policy. But there's a much bigger culprit here: our broken copyright laws. In the physical world, this kind of thing isn't a problem. If a library wants to lend out a book, it doesn't need the publisher's permission. It can just buy a copy and start lending it out. Fowler's correct that a publisher does get to decide how it wants to distribute a work, but with physical books, there's the important first sale doctrine, which lets anyone who buys a book go on and resell it. And that means libraries have never needed "permission" to lend out a book. They just needed to buy it.

Unfortunately, courts seem to take a dim view of the first sale doctrine when it comes to digital goods.

However, a few years back, some very smart librarians and copyright professors and experts got together and created a system called Controlled Digital Lending (CDL), which aimed to (1) rectify this massive gap in public access to knowledge while (2) remaining on the correct side of copyright law. In its most basic form, CDL involves libraries buying physical copies of books (as they did in the past), scanning those books (which has already been ruled to be fair use for libraries), and then lending out the digital copy only if they have the matching physical copy on the shelf. As the libraries and copyright experts correctly note, it's difficult to argue that this is any different than lending out the copy of the book that they bought.
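If it helps to see the mechanics, here's a toy sketch of the "owned to loaned" ratio that CDL is built around. This is my own simplification, not any library's actual system: the point is simply that a scanned copy circulates only while a purchased physical copy sits, unlent, on the shelf to back it.

```python
# Minimal sketch of Controlled Digital Lending's owned-to-loaned invariant.

class CDLTitle:
    def __init__(self, owned_physical_copies: int):
        self.owned = owned_physical_copies   # copies the library actually bought
        self.digital_loans = 0               # scanned copies currently checked out

    def lend_digital(self) -> bool:
        # Lend only if an owned physical copy is still on the shelf to
        # "back" this loan; otherwise the patron waits, just as with print.
        if self.digital_loans < self.owned:
            self.digital_loans += 1
            return True
        return False

    def return_digital(self) -> None:
        if self.digital_loans > 0:
            self.digital_loans -= 1

book = CDLTitle(owned_physical_copies=2)
print(book.lend_digital())  # True  -- first loan, one purchased copy backs it
print(book.lend_digital())  # True  -- second loan, both copies now backing loans
print(book.lend_digital())  # False -- no unloaned owned copy left to back a third
```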

But, of course, publishers have always hated libraries' ability to lend out books in the first place -- and have been itching to use the power of copyright to block that. Already, they charge libraries insanely high prices for ebooks to lend, and put ridiculous limits on how those books can be lent.

So, no surprise, last year, in the middle of a once-in-a-century pandemic, the publishers sued the Internet Archive, arguing that its Open Library project, which operates on CDL principles, violated copyright law. And, incredibly, a ton of people have cheered on this nonsense lawsuit -- even those who hate Amazon.

Yet, if you really want to stop Amazon from being able to block libraries from lending out ebooks, there's a simple answer: fix copyright law. Make it clear that Controlled Digital Lending is legal. Or, go even further and say that the First Sale Doctrine also applies to digital goods, as it absolutely should.


Posted on Free Speech - 12 March 2021 @ 9:33am

Judge Tosses Laughably Stupid SLAPP Lawsuit The Trump Campaign Filed Against The NY Times

from the because-it-was-never-meant-to-win dept

A little over a year ago we wrote about a laughably stupid SLAPP suit that the Trump campaign, represented by Charles Harder, filed against the NY Times. As we noted at the time, the lawsuit appeared to have no intention of succeeding -- it was purely performative nonsense. The lawsuit claimed that an opinion piece by Max Frankel was defamatory because it noted that whether or not there was any explicit collusion between the Trump Campaign and Russia, it didn't matter, since both sides seemed to expect certain outcomes and acted accordingly.

We also pointed out that the lawsuit completely misrepresented the article, pretending that Frankel's thesis -- again, that there didn't need to be any explicit deal -- was Frankel saying that there was "collusion" between the two. The case made no sense no matter how you looked at it. Frankel's article was an opinion piece -- and opinions aren't defamatory. It didn't allege what the campaign's lawsuit says it alleged, and there was no way in hell it could possibly meet the actual malice standard necessary for defamation.

It took a year, but the Supreme Court of New York (which, contrary to its name, is more like a district court) has tossed out the lawsuit, though it denied the NY Times' request for sanctions against Harder. As we expected, this was not a difficult decision for the court to come to. First, the piece was obviously opinion, and thus not defamatory:

First, while the complaint alleges that the terms used in the article, such as “deal” and “quid pro quo,” are defamatory and false, Mr. Frankel’s commentary in his article is nonactionable opinion, and the overall context in which the article was published, in the opinion section of the newspaper, signaled to the reader that “the broader social context and surrounding circumstances [indicate] that what is being read . . . is likely to be opinion, not fact.” Gross v. N.Y. Times Co., 82 N.Y.2d 146, 153 (1993) (internal quotation marks and citation omitted). This is because “[t]he dispositive inquiry, under either Federal or New York law, is ‘whether a reasonable [reader] could have concluded that [the articles were] conveying facts about the plaintiff.’” 83 N.Y.2d at 152 (alterations in original) (quoting 600 W. 115th St. Corp. v. Von Gutfeld, 80 N.Y.2d 130, 139 (1992)); see also Gertz v. Robert Welch, Inc., 418 U.S. 323, 339 (1974) (finding that statements of opinion are not actionable because “there is no such thing as a false idea”).

Then there's the actual malice problem -- in that the campaign failed to make any real arguments that would show that Frankel knew his statements were false or made them with reckless disregard for the truth (meaning he entertained serious doubts about their truth):

Third, even if Mr. Frankel’s commentary was actionable as factual assertions, and even if such assertions were of and concerning the Trump campaign, the complaint fails to allege facts sufficient to support the requirement that the Times published the challenged statements with actual malice, meaning “knowledge that [the statements] were false, or [made] with reckless disregard for the truth.”... In this regard, bias, or ulterior motive does not constitute actual malice... This heavy burden exists because news organizations function as a platform for facilitating constitutionally protected speech on issues of public concern and courts will not impose defamation liability against these entities absent a clear showing of actual malice.

There was one other reason for the dismissal... which I am a bit confused about. The judge claims that the Campaign (the plaintiff in this case) has no standing, since the comments in the article were not about it:

Second, the challenged statements are not “of and concerning” plaintiff, which is a necessary element for a defamation action. For example, in Lazore v. NYP Holdings, Inc., 61 A.D.3d 440 (1st Dep’t 2009), the Appellate Division, First Department dismissed a complaint alleging defamation because “the offending statements were directed against a governing body . . . , rather than against its individual members.” 61 A.D.3d at *1. Further, a corporate entity has no standing to sue over statements that concern an entity’s employees or affiliates, but not the entity itself.... Here, the focus of Mr. Frankel’s column was the former President’s associates and family members, not the Trump campaign itself.

I find that a lot less compelling, since the thrust of the article was about the Campaign, as represented by the President's associates and family members, but either way this case was getting dismissed for the other reasons.

As for sanctions, the judge rejects them... with no explanation at all. However, with New York now having a shiny new, more useful anti-SLAPP law, I do wonder if the Times might now use that to seek fees...


Posted on Techdirt - 11 March 2021 @ 1:39pm

Court Allows Lawsuit Over Abusive Copyright Trolling DMCA Notices To Move Forward

from the keep-an-eye-on-this-one dept

Last summer we wrote about an interesting case involving the latest evolution of copyright trolling and Jon Nicolini, whom some copyright troll watchers may recognize from his participation in an earlier generation of copyright trolling, when he was a sketchy "forensic expert" for the trolling firm CEG TEK. These days, Nicolini runs his own firm, Okularity, which appears to have created a new form of copyright trolling. According to the lawsuit, rather than file lawsuits as the pressure point (as was common in the past), Okularity sends a ton of DMCA takedown notices to social media companies, and then, once your account gets taken down, Nicolini pounces and demands huge sums to rescind the notices so you can get your account back.

As we wrote over the summer, one of Okularity's targets was the well-known Paper Magazine, put out by the publisher Enttech Media Group. Enttech said that Okularity sought to have Paper Magazine's Instagram account shut down, and then offered to "settle," demanding a pretty massive sum in the process. The lawsuit alleged violations of DMCA 512(f), which is the (unfortunately) mostly toothless part of the DMCA that is supposed to allow those on the receiving end of bogus DMCA takedowns to fight back. In practice, however, courts have mostly rejected all 512(f) claims, or made it so they're basically impossible to do anything useful with. Because of that, any time we see a 512(f) claim that has legs, we pay attention.

The original complaint also tried to argue that Okularity violated the RICO statute, and long time readers here know what we think of RICO claims. While there did appear to be some unauthorized practice of law happening, there didn't seem to be nearly enough to make a RICO claim -- because there's basically never enough to make a RICO claim. We predicted that the RICO claim would get tossed out, but that the 512(f) claim might live on.

Turns out, we were right.

While the case has had some twists and turns, this week the judge tossed out the RICO claims, but is allowing the 512(f) claims to move forward. Nicolini and Okularity had argued that Enttech's lawyer, Robert Tauler, should face Rule 11 sanctions for ignoring evidence regarding their fair use analysis, but the court rejected those as well. Tauler did have to file a third amended complaint, however, to get to this point, as the court found the first two complaints somewhat deficient.

But on the key point -- 512(f) -- the court notes that the case can continue, even under the confused Lenz standard in the 9th Circuit, which basically said (1) DMCA filers have to "subjectively" consider fair use for a filing to be in "good faith," but (2) automated takedowns may be okay... because we say so. Nicolini and Okularity argued that they do consider fair use before sending notices, while Enttech argued the notices appeared to be totally automated. The court basically says Enttech has met the initial burden, and the case can move forward.

One key point of contention in this: the takedown letters sent by Okularity do contain a "discussion of infringement and fair use," which Okularity claims shows that it does consider fair use. Enttech responded that every single notice Okularity sends contains an exact copy of this discussion, suggesting no actual analysis is done and it's just a cut-and-paste job. This is the point the judge focused on:

ENTTech’s allegation that the DMCA notices contained an analysis of infringement and fair use presents a question of first impression with respect to the standard for pleading a claim under § 512(f). Is it sufficient for ENTTech to allege that, notwithstanding the takedown notices’ explicit and extensive fair-use analysis, Defendants did not actually or sufficiently consider fair use before issuing the takedown notices? At first blush, the fact that the DMCA takedown notices contain fair-use analyses—even if those analyses are identical and pro forma—seems to satisfy the requirement to “consider” fair use before issuing a takedown notice. See Lenz, 815 F.3d at 1154. The presence of the purported fair-use analysis in each takedown notice also distinguishes this case from Lenz where the plaintiff alleged that the defendant did not consider fair use at all. Cf. id.

Is ENTTech required to allege additional facts, in view of the appearance that Defendants considered fair use? For example, must ENTTech allege evidentiary facts concerning Defendants’ analytical process or subjective state of mind (the type of facts which, in most cases, are not available to a plaintiff before discovery is taken)? Does the Iqbal/Twombly plausibility standard require ENTTech to aver its own analysis of fair use to support an inference that Defendants merely paid “lip service” to the consideration of fair use? Cf. id. at 1163. Having considered these questions, the Court concludes that ENTTech’s allegations in the TAC are sufficient at this stage of the litigation.

The court points out that the ruling in Lenz supports allowing this case to move forward, saying that it's a factual question whether or not the takedown notice sender had a "good faith belief" that the notice was legit, and therefore, it's up to a jury to decide.

Although Lenz involved a motion for summary judgment, that decision is nevertheless instructive with respect to the issue presently before the Court. Lenz supports the conclusion that the question of whether a copyright owner formed a subjective good faith belief that an alleged infringer’s copying of the work did not constitute fair use is, in most instances, a factual issue that is not appropriate for resolution on a motion to dismiss. “Because the DMCA requires consideration of fair use prior to sending a takedown notification,” the Ninth Circuit held that “a jury must determine whether [the defendant’s] actions were sufficient to form a subjective good faith belief about the [allegedly infringing] video’s fair use or lack thereof.” Id. at 1154. In response to the arguments in the dissenting opinion regarding the propriety of granting summary judgment, the Lenz panel majority explained that the relevant question was “whether the analysis [the defendant] did conduct of the [alleged infringing material] was sufficient, not to conclusively establish as a matter of law that the . . . use of the [copyrighted material] was fair, but to form a subjective good faith belief that the video was infringing on [the] copyright.” Id. at 1154 n.3.

Therefore, because it is generally a factual issue whether the analysis that the defendant did conduct of the alleged infringing material was sufficient, see id., it necessarily follows that to plead a claim under § 512(f), it is enough for ENTTech to allege that Defendants did not consider fair use (sufficiently or at all) before issuing the takedown notices. And that is exactly what ENTTech alleges here. Requiring ENTTech to allege more would effectively impose a heightened pleading standard, see Fed. R. Civ. P. 9(b), and no authority holds that claims under § 512(f) must be pleaded with particularity. Thus, although it may be advisable for a plaintiff like ENTTech to aver additional facts (such as its own analysis of fair use) to support the allegation that a defendant’s fair use analysis was merely pro forma, the Court cannot conclude that ENTTech is required to plead such facts in order to state a plausible claim for relief under § 512(f).

In the grand scheme of things, this is only a small step forward, but it is a step forward for 512(f) -- a part of the law that rarely sees any positive news. This doesn't mean that Enttech is likely to win, but it does mean that the courts may crack the door open ever so slightly, letting people and companies fight back against abusive DMCA notices.

Read More | 6 Comments | Leave a Comment..

Posted on Techdirt - 11 March 2021 @ 10:53am

It's Not Just Republican State Legislators Pushing Unconstitutional Content Moderation Bills

from the pointing-the-finger-at-the-other-side-of-the-aisle dept

Over the last month we've written quite a few times about various state legislatures (and Governors) picking up on the nonsensical and unsupported claims that (1) "conservatives" face too much bias in social media content moderation decisions and (2) Section 230 is somehow to blame for this. They've pushed a whole bunch of blatantly unconstitutional state laws that would seek to limit how social media companies can moderate content -- effectively compelling them to host content they disagree with (which would violate the 1st Amendment). Of course, as we've noted for quite some time now, both Republicans and Democrats seem to be very mad at Section 230, but for totally contradictory reasons. Republican bills seek to make social media companies moderate less content, while Democratic bills seek to make social media companies moderate more content.

Both approaches are unconstitutional violations of the 1st Amendment. While most of the fights over the past few years have happened in Congress, now that these bad bills are moving to the state legislatures, it appears that Democrats don't want to be left behind. Over in Colorado, a bill from state Senate President Pro Tempore Kerry Donovan would seek to force companies to moderate "hate speech," "fake news," and "conspiracy theories."

The full bill is really, really bad. Websites would need to register (for a fee) with a "digital communications commission" in Colorado, and that Commission would accept complaints against social media websites if they were used for hate speech, undermining election integrity, disseminating intentional disinformation, conspiracy theories, or fake news. There's a big problem with this: most of that is protected under the 1st Amendment. I know that many people don't like that those things are protected speech, but you actually should like it, because if "fake news" or "undermining election integrity" were not protected under the 1st Amendment, just imagine how the Trump administration would have abused both concepts.

After all, it spent four years arguing that any criticism of the administration was "fake news" and claimed, repeatedly (despite the total lack of evidence), that the processes and procedures that helped make the 2020 election fair actually "undermined election integrity." This is why we don't let the government punish people for speech on those issues: because the government will define those terms in ways we dislike.

As Eugene Volokh notes, beyond the fact that all of this is pretty clearly unconstitutional, the bill doesn't even bother to define "hate speech." Or "undermine election integrity." Or "fake news." Or "conspiracy theories." Or "intentional disinformation."

Kerry Donovan is now running for US Congress as well (against conspiracy theorist Lauren Boebert). One would hope that she would have first learned how the 1st Amendment works before seeking to run for Congress. We might agree that Boebert clearly doesn't belong anywhere near Capitol Hill, but that's no excuse for misunderstanding some fairly basic principles in the Bill of Rights.

Read More | 56 Comments | Leave a Comment..

Posted on Techdirt - 10 March 2021 @ 10:44am

Utah Legislature Wraps Up Session By Passing Two Unconstitutional Internet Bills

from the nice-work,-everyone dept

Last week we wrote about the many, many, many constitutional problems with a bill proposed in Utah to try to tell internet companies how they can moderate content. As we noted, the bill clearly violates the 1st Amendment, the Commerce Clause, and is also pre-empted by Section 230.

So, of course, it passed.

The Salt Lake Tribune report has a stunning set of paragraphs demonstrating that supporters of the bill not only ignored many, many experts telling them about the constitutional problems with the bill, but then pretended no one had notified them of those concerns (which is blatantly false):

“What we are talking about here are large, private forums that are free to moderate themselves and to put up what they want to put up and censor and kick off those people they choose to,” added House Minority Leader Brian King, D-Salt Lake City. “If we pass this bill, the Utah taxpayers are going to pay large amounts of money to defend the constitutionality of this bill against a lot of large entities that have many resources.”

Brammer shot back that he was not made aware of any constitutional issues with the legislation. However, a legal analysis from the Office of Legislative Research and General Counsel shared with The Salt Lake Tribune raises several potential constitutional and legal problems.

Legislative attorneys advised that HB228 may violate the First Amendment by compelling speech through requiring these companies to provide information about their moderation practices, although that may not be an impermissible burden given their vast resources.

The memo also warns the bill could violate the Constitution by placing an “undue burden on interstate commerce.”

Finally, the legislation might be unenforceable because of provisions in the federal Communications Decency Act.

It's one thing to ignore me -- I'm just a loudmouth blogger. But to flat-out ignore the points raised by legislative attorneys, making it clear that you're going to waste a ton of taxpayer money? That's just obnoxious. Rep. Brady Brammer should be ashamed.

And that wasn't the only unconstitutional tech-related bill the Utah legislature passed as it wrapped up its session. It also passed a porn filter bill that would mandate a porn filter on any phone, computer, tablet or other electronic device.

Just like the many, many, many other attempts at such bills, this one is also blatantly unconstitutional. In the key case that made all of the Communications Decency Act (minus Section 230) unconstitutional, Reno v. ACLU, the Supreme Court (with a 9-0 vote) made it quite clear that governments cannot mandate the blocking of pornographic material online. In that case, the Supreme Court went through many reasons why governments don't get to mandate filters for indecent content.

Utah's legislators haven't even attempted to address any of those concerns. Incredibly, the Salt Lake Tribune quotes even those who voted for the bill as saying that it has serious problems and will require follow-up legislation to fix.

“As much as the intentions of this bill are good, logistically it just won’t work,” Anderegg, R-Lehi, said. “And I think if we pass this bill, it sends a good message. ... But we absolutely will be back here at some point in the future, maybe even in a special session to fix this.”

Anderegg ultimately voted in support of the bill, saying that while he has “a lot of trepidation” about the bill, he doesn’t “want to be the guy” who opposes an attempt to shield children from graphic content.

Incredible. Admitting you voted for a bill that you know won't work... and saying you had to in order to protect the children, is quite an admission. The bill not only won't work, it's unconstitutional. And that's not the kind of thing you fix in a "special session." You just don't pass unconstitutional bills.

And if the goal is to keep children from looking at porn, why not... let parents do their jobs: if they want to install a filter, let them do so. Remember "personal responsibility"?

54 Comments | Leave a Comment..

Posted on Techdirt - 9 March 2021 @ 10:42am

Tennessee Lawmakers' Latest Attack On Section 230 Would Basically Ban All Government Investment

from the seems-counterproductive dept

We've been highlighting a wide variety of state bills from Republican-led legislatures that all attempt to attack Section 230. Nearly all of them are blatantly unconstitutional attacks on the 1st Amendment. Somewhat incredibly, the latest one from Tennessee might not actually be unconstitutional. That doesn't mean it's good. In fact, it's not just incredibly stupid; it demonstrates that the bill's authors/sponsors are so fucking clueless that they have no idea what they're doing. In effect, they'd be banning the state from investing any money it holds. To spite Section 230.

The bill -- introduced as House Bill 1441 and Senate Bill 1011 -- from Representative Tim Rudd and Senator Janice Bowling represents such a lack of understanding of how literally anything works that it should embarrass both elected officials and anyone who ever voted for either of them. The bill is pretty simple: it bans the state from investing in any entity protected by Section 230. The problem with this? Almost every single person and every single company is, in some way, protected by Section 230. So, in effect, the bill bans the state from investing any of its money.

Let's dig in on the specifics. The bill is pretty short and sweet. Here's the key part:

On or after August 1, 2021, monies within the pooled investment fund must not be invested in any entity that receives immunity under Section 230 of the Communications Decency Act (47 U.S.C. § 230). Any monies from the fund that are invested in such an entity as of the effective date of this act, must be divested prior to August 1, 2021. Written notice of the divestment must be provided to any such entity at the earliest practicable time after the effective date of this act.

To be clear, this text would be inserted into Tennessee Code Title 9, Chapter 4, Part 6, which covers the disbursement and investment of state funds. Amusingly, Section 602 says that "It is the policy of the state of Tennessee that all funds in the state treasury shall be invested by the state treasurer to the extent practicable." The problem is that if this new law passes, there is almost no one who would be allowed to receive those funds, so "to the extent practicable" would be... non-existent.

Part of the issue, it seems, is that in their rush to attack "Section 230," neither Senator Bowling nor Representative Rudd bothered to, you know, read Section 230. I'll cover the essential part for our discussion here today:

No provider or user of an interactive computer service shall be held liable...

No provider or user. Section 230 protects both the users and providers of an interactive computer service. So, basically any entity -- person or organization -- that uses an interactive computer service is protected under Section 230. And, based on the text of this bill, that means... just about anyone. If you use email, you're protected. So no entity that has email can receive investments from Tennessee state funds. No organization that has a website. I'm sure there might still be some neo-luddite organizations out there that don't use any computers at all, so perhaps the state of Tennessee will invest its funds in, like, a toy shop with an old-fashioned cash register and a rotary telephone or something. But that seems pretty limiting, and not a particularly good investment.

Obviously, this bill comes out of the very, very false belief that Section 230 only protects "big tech." That's a favored myth of Section 230 haters, but it has never been the case. And you'd think that before writing legislation about it, someone elected to a state legislature would at the very least actually read the law they're attacking. But, I guess that's too much to ask of Representative Tim Rudd and Senator Janice Bowling.

People of Tennessee, I beg of you: stop electing fools who are so focused on culture warrioring that they can't even be bothered to understand the bills they've introduced.

Read More | 32 Comments | Leave a Comment..

Posted on Techdirt - 8 March 2021 @ 10:47am

Trump Appointee Who Wanted To Turn Voice Of America Into Breitbart Spent Millions Of Taxpayer Dollars Investigating His Own Staff

from the holy-shit dept

Remember Michael Pack? That's the Steve Bannon protégé whom Trump appointed last year to head the US Agency for Global Media. USAGM is the organization that oversees Voice of America, Radio Free Europe/Radio Liberty, Radio Free Asia, Middle East Broadcasting and the Open Technology Fund. It was an open secret that Pack was appointed to turn those widely respected, independent news organizations into pure Breitbart-style propaganda outfits. He wasted little time causing a huge fucking mess, firing a ton of people in a manner so upsetting that even Republican Senators were concerned. It also turned out that many of the people he fired... he legally had no right to fire.

In the fall, things got even more ridiculous as it came out that Pack had been investigating VOA journalists to see if they were "anti-Trump" and then moved to get more power to directly dictate how VOA should be reporting. One of President Biden's first official acts in office... was to fire Pack, who laughably claimed that his being fired was "a partisan act" that would harm the credibility of USAGM.

Meanwhile, the latest story, as revealed by NPR, is that Pack spent millions of taxpayer dollars investigating staff throughout the various organizations to try to come up with reasons they could be fired. This was in response to the courts pointing out that he couldn't just randomly fire people in these organizations.

Last summer, an appointee of former President Donald Trump was irate because he could not simply fire top executives who had warned him that some of his plans might be illegal.

Michael Pack, who was CEO of the U.S. Agency for Global Media that oversees Voice of America, in August suspended those top executives. He also immediately ordered up an investigation to determine what wrongdoing the executives might have committed.

Instead of turning to inspectors general or civil servants to investigate, Pack personally signed a no-bid contract to hire a high-profile law firm with strong Republican ties.

The bill — footed by taxpayers — exceeded a million dollars in just the first few months of the contract.

And hiring an outside law firm is an abuse of his position, according to the Government Accountability Project, which discovered the details of this contract via a FOIA request:

"The engagement constitutes gross mismanagement, gross waste of taxpayer dollars and abuse of authority," David Seide of the Government Accountability Project, wrote in a letter Thursday to Congressional committees with oversight of the committee.

"The 'deliverables' provided by McGuireWoods are — always were — of questionable value," he wrote. "The investigations produced nothing that could justify the kind of discipline Mr. Pack sought to impose on current USAGM employees he did not like — he wanted them fired (they have since been reinstated). Investigations of former employees also yielded nothing."

It seems almost cartoonish what Pack did here:

The group's analysis of the new documents, shared with NPR, found the law firm McGuireWoods charged more than $320 per hour for 3,200 billable hours from August through October alone. It devoted five partners, six associates, two lawyers "of counsel," two staff attorneys, seven paralegals, three case assistants, 14 other timekeepers, and 11 "outsourced attorneys" to the work.

[....]

The invoices reflect that McGuireWoods' legal team, among other duties, reviewed social media posts, "news articles relating to Michael Pack" and an "[Office of Inspector General] audit on Hillary Clinton's email breach."

It truly is insane how obsessed Trumpists are with Hillary's emails.

But the crux of the "investigation" appears to have been to cook up any reason at all to justify Pack firing all the non-Trump people he wanted to fire:

The nonprofit group's review found the McGuireWoods team spent nearly 2,000 hours in a massive review of documents and emails, 400 hours on fact investigation, and nearly 700 hours on what was labeled as "analysis/strategy." The records also show the legal team conducted voluminous legal research on federal ethics regulations and U.S. statutes. Such tasks for federal departments are typically, though not exclusively, undertaken by government attorneys, inspectors general, and human resources employees.
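
Do the quick math on those figures (my own back-of-the-envelope arithmetic, not a number from the documents): the itemized categories account for roughly 2,000 + 400 + 700 ≈ 3,100 of the 3,200 hours billed, and 3,200 hours at more than $320 per hour works out to at least $1,024,000, which lines up with the million-dollar-plus bill NPR reported for just the first few months of the contract.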

Incredible.

24 Comments | Leave a Comment..

Posted on Techdirt - 5 March 2021 @ 12:05pm

Reporter Sues DOJ To See If It Is Trying To Help Devin Nunes Unmask @DevinCow Twitter Account

from the is-the-fbi-investigating-a-satirical-cow? dept

As I'm pretty sure most of you know, Rep. Devin Nunes has been filing a ton of blatant SLAPP lawsuits trying to silence criticism and mockery of him, as well as critical reporting. Kind of ironic for a guy who co-sponsored a bill to discourage frivolous lawsuits and who has regularly presented himself as a free speech supporter. What kicked off those lawsuits, somewhat incredibly, was a satirical Twitter account, @DevinCow (mocking Devin Nunes for repeatedly holding himself out as a "dairy farmer" from Tulare, California, when it turns out his family farm moved to Iowa years ago).

You may also know that at the time Nunes sued the satirical cow for making fun of him online, the @DevinCow account had a grand total of 1,204 followers. Within a couple of days, @DevinCow had 400k followers and had surpassed Nunes himself. Today the Cow has 772k followers and is one of the most interesting Twitter accounts online, with a huge pasture of followers. Pretty incredible.

What a lot of people don't realize is that the case against the cow is still going on, and Nunes and his lawyer, Steven Biss, have constantly gone to fairly extreme lengths just to try to figure out who is behind the Cow account. Craziest of all, Biss took a totally unrelated case that did not involve Nunes and abused his subpoena powers to ask Twitter to reveal who was behind @DevinCow, despite the Cow being totally unrelated to that case. Biss and Nunes made up some nonsense about how the cow was connected, but it was clearly ridiculous, and a judge rejected it.

Of course, that raised lots of concerns about whether or not Nunes might abuse other methods to try to uncover the cow. Freelance journalist Shawn Musgrave filed a FOIA request with the Justice Department and the FBI to see if Nunes might have sought to use either organization to try to uncover the Cow's identity. After all, Nunes was (incredibly) the chair of the House Intelligence Committee and would have greater access to the FBI and its surveillance tools than just about any other Congressional Representative. Musgrave made it abundantly clear in his FOIA that he was not seeking to identify the Cow and did not want any information that might reveal the Cow's identity. He just wanted to know if the DOJ or the FBI had sought to uncover the Cow's identity.

However, the DOJ and FBI have failed to comply, so now Musgrave is suing the DOJ to try to get them to properly respond to the FOIA request.

As a result of the repeated efforts by Cong. Nunes and his legal team to unmask @DevinCow—of which the above instances are merely examples—Musgrave filed two FOIA requests—with the permission of @DevinCow’s owner—to ascertain the degree to which FBI and DOJ—with or without Cong. Nunes’s involvement—have attempted to identify the anonymous owner of the @DevinCow Twitter account.

The lawsuit explains what the FOIA request sought:

On 9 November 2020, Musgrave submitted to FBI a request for five categories of records: (1) all main file records about the @DevinCow Twitter account; (2) all cross-references in the Central Records System about the @DevinCow Twitter account; (3) all internal emails or other correspondence records created or maintained by the Office of Congressional Affairs mentioning the @DevinCow Twitter account; (4) all emails in the FBI email system(s) or personal email folders on personal computers used by the Washington Field Office and San Francisco Field Office mentioning the @DevinCow Twitter account; and (5) all emails in the FBI email system(s) or personal email folders on personal computers used by the Criminal, Cyber, Response, and Services Branch mentioning the @DevinCow Twitter account.

Musgrave added, “Please note that we do not wish to know the identity of the owner of the @DevinCow account, so the FBI should automatically redact any personally identifying information about that individual while releasing the contextual information which would show that PII was redacted. We are only interested in records discussing the account and/or the possible identification of its owner, not in the identity itself.”

Just a week later (which is insanely fast for FBI FOIA requests), the FBI gave a Glomar response, saying it could "neither confirm nor deny" the existence of such records, and also claiming it was doing so to protect the privacy of third-party individuals. But, of course, that's nonsense, since the request made it abundantly clear that the FBI should redact any such information. Musgrave appealed:

“It is nonsensical to issue a (b)(6) Glomar response to a request for records about an anonymous Twitter account, especially when the request has formally indicated that we have no interest in learning the identity of the user.”

On February 1st, the FBI rejected the appeal and stuck with its Glomar response.

So now, Musgrave has sued (with some very good FOIA lawyers) to try to force the FBI and DOJ to actually respond to the FOIA request. Should be an interesting case to follow.

Read More | 24 Comments | Leave a Comment..

Posted on Techdirt - 5 March 2021 @ 9:38am

Parler Drops Its Loser Of A Lawsuit Against Amazon In Federal Court, Files Equally Dumb New Lawsuit In State Court

from the that-ain't-gonna-work-champ dept

As you may recall, Parler had filed a ridiculously weak antitrust lawsuit against Amazon the day after it had its AWS account suspended. A judge easily rejected Parler's request for an injunction, and made it pretty clear Parler's chances of succeeding were slim to none. Parler, which has since found a new host, had indicated it would file an amended complaint, but instead it chose to drop that lawsuit in federal court and file an equally laughable lawsuit in state court in Washington (though with some additional lawyers).

Rather than claiming antitrust violations (which was never going to work), the new complaint claims breach of contract, defamation, and deceptive and unfair practices. The complaint makes a big deal of the fact that in December Twitter and Amazon signed an agreement to use AWS for hosting some Twitter content, and hints repeatedly that Amazon's move a month later was to help Twitter stomp out a competitor. But this is all just random conspiracy theory nonsense, and not at all how any of this actually works.

The defamation claim is particularly silly.

On January 9, 2021, AWS sent an email to Parler declaring that AWS would indefinitely suspend Parler’s service, claiming that Parler was unable or unwilling “to remove content that encourages or incites violence against others.” AWS or one of its employees publicly leaked that email in bad faith to BuzzFeed at or around the same time AWS sent the email to Parler.

AWS’s email was false and AWS knew it was false. Parler was willing and able to remove such content and AWS knew that, because there was a lengthy history between the parties of Parler removing such content as quickly as AWS brought it to Parler’s attention. What is more, AWS was well aware that Parler was testing a new AI-based system to remove such content before it was even posted, that Parler had success with initial testing of the program, and that Parler had in fact shared those testing results with AWS.

The AWS email received wide play in the media.

This itself is kind of fascinating, because even though we had highlighted that Parler takes down content, publicly it had claimed over and over again that it did not take down content. In fact, nearly all of Parler's brand was built on its (misleading) claim to not do content moderation. So, how the hell could Amazon claiming that Parler wasn't doing content moderation be defamatory when that's the very reputation Parler tried to cultivate for itself?!?

Also, uh, this isn't going to fly in any court:

Parler is not a public figure and the success of its “content moderation” policies was not a matter of public concern until Google and AWS decided to make it one, but the defendant cannot by a defamatory statement turn a private matter into a public one or all matters would be public in nature.

No, sorry, Parler was very much a public figure way before Google and AWS's decisions.

The complaint also argues, repeatedly, that AWS had always been happy with Parler until it told Parler it was suspending the account, but the filings in the original lawsuit in federal court clearly indicated otherwise, noting that Amazon had reached out to Parler months earlier. I'm confused as to why Parler's lawyers think Amazon won't immediately point that out. Also, it seems decently likely that Amazon is going to try to get the case removed right back to federal court, so it's not clear why Parler thinks it can avoid federal court with this case. The whole thing, once again, seems performative and stands about as much chance as the original did.

Read More | 22 Comments | Leave a Comment..

