Cathy Gellis’s Techdirt Profile

Posted on Techdirt - 4 March 2021 @ 4:06pm

Washington State Also Spits On Section 230 By Going After Google For Political Ads

from the guys-it's-still-the-law dept

In the post the other day about Utah trying to ignore Section 230 so it could regulate internet platforms, I explained why it was important that Section 230 pre-empted these sorts of state efforts:

Just think about the impossibility of trying to simultaneously satisfy, in today's political climate, what a Red State government might demand from an Internet platform and what a Blue State might. That readily foreseeable political catch-22 is exactly why Congress wrote Section 230 in such a way that no state government gets to demand appeasement when it comes to platform moderation practices.

We don't have to strain our imaginations very hard, because with this lawsuit, brought by prosecutors in King County, Washington, against Google, we can see a Blue State do the same thing Utah is trying to do and come after a platform for how it handles user-generated content.

Superficially there are of course some differences between the two state efforts. Utah's bill ostensibly targets social media posts whereas Washington's law goes after political ads. What's wrong with Washington's law may also be a little more subtle than the abjectly unconstitutional attempt by Utah to trump internet services' expressive and associative rights. But these are not meaningful distinctions. In both cases it still basically all boils down to the same thing: a state trying to force a platform to handle user-generated content (which online ads generally are) the way the state wants by imposing requirements on platforms that will inevitably shape how they do so.

In the Washington case prosecutors are unhappy that Google has apparently not followed closely enough the prescriptive rules Washington State established to help the public follow the money behind political ads. One need not quibble with the merit of what Washington State is trying to do, which, at least on first glance, seems perfectly reasonable: make campaign finance more transparent to the public. Nor is it necessary to take issue with the specific rules the state came up with to try to vindicate this goal. The rules may or may not be good ones, but whether they are good or not is irrelevant. That there are rules at all is the problem, and one that Section 230 was purposefully designed to avoid.

As discussed in that other post, Congress went with an all-carrot, no-stick approach in regulating internet content, giving platforms the most leeway possible to do the best they could to help achieve what Congress wanted overall: the most beneficial and least harmful content online. But this approach falls apart once sticks get introduced, which is why Congress included pre-emption in Section 230, so that states couldn't introduce any. Yet that's what Washington is trying to do with its disclosure rules surrounding political ads: introduce sticks by imposing regulatory requirements that burden how platforms can facilitate user-generated content, in spite of Congress's efforts to relieve platforms of these burdens.

The burden is hardly incidental or slight. Remember that if Washington could enforce its own rules, then so could any other state or locality, even when those rules were far more demanding, or when they ultimately compromised this or any other worthy policy goal—either inadvertently or even deliberately. Furthermore, even if every state had good rules, the differences between them would likely make compliance unfeasible for even the best-intentioned platform. Indeed, even by the state's own admission, Google already had policies aimed at helping the public learn who had sponsored the ads appearing on its services.

Per Google’s advertising policies, advertisers are required to complete advertiser identity verification. Advertisers seeking to place election advertisements through Google’s advertising networks are required to complete election advertisement verification. Google notifies all verified advertisers, including, but not limited to sponsors of election advertisements, that Google will make public certain information about advertisements placed through Google’s advertising networks. Google notifies verified sponsors of election advertisements that information concerning their advertisements will be made public through Google’s Political Advertising Transparency Report.

Google’s policy states:

With the information you provide during the verification process, Google will verify your identity and eligibility to run election ads. For election ads, Google will [g]enerate, when possible, an in-ad disclosure that identifies who paid for your election ad. This means your name, or the name of the organization you represent, will be displayed in the ad shown to users. [And it will p]ublish a publicly available Political Advertising transparency report and a political ads library with data on funding sources for election ads, the amounts being spent, and more.

Google notifies advertisers that in addition to the company’s online Political Advertising Transparency Report, affected election advertisements "are published as a public data set on Google Cloud BigQuery[,]" and that users "can export a subset of the ads or access them programmatically." Google notifies advertisers that the downloadable election ad "dataset contains information on how much money is spent by verified advertisers on political advertising across Google Ad Services. In addition, insights on demographic targeting used in political advertisement campaigns by these advertisers are also provided. Finally, links to the actual political advertisement in the Google Transparency Report are provided." Google states that public access to "Data for an election expires 7 years after the election." [p. 14-15]
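As an aside for the technically curious, the "access them programmatically" language refers to BigQuery's standard query interface. Here is a minimal sketch of what such a query might look like using the BigQuery Python client; the dataset, table, and column names are assumptions based on Google's public political ads dataset and may have changed since the complaint was filed:

```python
# Minimal sketch (not from the complaint): querying Google's public
# political ads transparency dataset on BigQuery. The dataset, table,
# and column names here are assumptions and may have changed.
from google.cloud import bigquery

client = bigquery.Client()  # requires a Google Cloud project with BigQuery enabled

query = """
    SELECT advertiser_name, spend_usd
    FROM `bigquery-public-data.google_political_ads.advertiser_stats`
    ORDER BY spend_usd DESC
    LIMIT 10
"""

# Print the ten highest-spending verified political advertisers.
for row in client.query(query).result():
    print(row.advertiser_name, row.spend_usd)
```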

Yet Washington is still mad at Google anyway because Google didn't handle user-generated content exactly the way the state demanded. And that's a problem, because if Washington can sanction Google for not handling user-generated content exactly the way it wants, then (1) so could any other state or any of the infinite number of local jurisdictions Google inherently reaches, (2) to enforce an unlimited number of rules, and (3) governing any sort of user-generated content that may happen to catch a local regulator's attention. Utah may today be fixated on social media content and Washington State political ads, but once they've thrown off the pre-emptive shackles of Section 230 they or any other state, county, city or smaller jurisdiction could go after platforms hosting any of the myriad other sorts of expression people use internet services to facilitate.

Which would sabotage the internet Congress was trying to foster with Section 230. Again, Congress deliberately gave platforms a free hand to decide how best to moderate user content so that they could afford to do their best at keeping the most good content up and taking the most bad content down. But with all these jurisdictions threatening to sanction platforms, trying to do either of these things can no longer be platforms' priority. Instead they will be forced to devote all their resources to the impossible task of trying to avoid a potentially infinite amount of liability. While perhaps at times this regulatory pressure might result in nudging platforms to make good choices for certain types of moderation decisions, it would be more out of coincidence than design. Trying to stay out of trouble is not the same thing as trying to do the best for the public—and often can turn out to be in direct conflict.

Which we can see from Washington's law itself. In 2018 prosecutors attempted to enforce an earlier version of this law against Google, which led Google to declare that it would refuse all political ads aimed at Washington voters.

Three days later, on June 7, 2018, Google announced that the company’s advertising networks would no longer accept political advertisements targeting state or local elections in Washington State. Google’s announced policy was not required by any Washington law and it was not requested by the State. [p. 7]

Prosecutors may have been surprised by Google's decision, but no one should have been. Such a decision is an entirely foreseeable consequence, because if a law makes it legally unsafe for platforms to facilitate expression, then they won't.

Even the complaint itself, albeit perhaps inadvertently, makes clear what a loss for discourse and democracy it is when expression is suppressed.

As an example of Washington political advertisements Google accepted or provided after June 4, 2018, Google accepted or provided political advertisements purchased by Strategies 300, Inc. on behalf of the group Moms for Seattle that ran in July 2019, intended to influence city council elections in Seattle. Google also accepted or provided political advertisements purchased by Strategies 300, Inc. on behalf of the Seattle fire fighters that ran in October 2019, intended to influence elections in Seattle. [p. 9]

While prosecutors may frame it as scurrilous that Google accepted ads "intended to influence elections," influencing political opinion is at the very heart of why we have a First Amendment to protect speech in the first place. Democracy depends on discourse, and it is hardly surprising that people would want to communicate in ways designed to persuade on political matters.

Nor is it salient that users may pay for the opportunity to express themselves. Every internet service needs some way of keeping the lights on and the servers running. That a service may sometimes charge people to use its systems to convey their messages doesn't alter the fact that it is still a service facilitating user-generated content, which Section 230 exists to protect and needs to protect.

Of course, even in the face of unjust sanction sometimes platforms may try to stick it out anyway, and it appears from the Washington complaint that Google may have started accepting ads again at some point after it had initially stopped. It also agreed to pay $217,000 to settle a 2018 enforcement effort—although, notably, without admitting to any wrongdoing, which is a crucial fact prosecutors omit in their current pleading.

On December 18, 2018, the King County Superior Court entered a stipulated judgment resolving Google’s alleged violations of RCW 42.17A.345 from 2013 through the date of the State’s June 4, 2018, Complaint filing. Under the terms of the stipulated judgment, Google agreed to pay the State $200,000.00 as a civil penalty and an additional $17,000.00 for the State’s reasonable attorneys’ fees, court costs, and costs of investigation. A true and correct copy of the State’s Stipulation and Judgment against Google entered by the King County Superior Court on December 18, 2018, is attached hereto as Exhibit B. [p. 8. See p. 2 of Exhibit B for Google expressly disclaiming any admission of liability.]

Such a settlement is hardly a confession. Google could have opted to settle rather than fight for any number of reasons. Even platforms as well-resourced as Google still need to choose their battles. It's not just a question of being able to afford to hire all the lawyers you may need; you also need to be able to effectively manage them all, along with every skirmish on every front that may now be vulnerable if Section 230 no longer effectively preempts those attacks. Being able to afford a fight means being able to afford it in far more ways than just financially, and thus it is hardly unusual for those threatened with legal process to simply try to purchase relief from the onslaught instead of fighting for the just result.

Without Section 230, or its preemption provision, however, that's what we'll see a lot more of: unjust results. We'll also see less effective moderation as platforms redirect their resources from doing better moderation to avoiding liability instead. And we'll see what Google foreshadowed: platforms withdrawing their services from the public entirely as it becomes financially prohibitive to pay off all the local government entities that might like to come after them. It will not get us a better internet or more innovative online services, nor will it solve any of the problems these state regulatory efforts hope to fix. It will only make everything much, much worse.


Posted on Techdirt - 2 March 2021 @ 4:22pm

The Unasked Question In Tech Policy: Where Do We Get The Lawyers?

from the they-don't-grow-on-trees dept

When we criticize Internet regulations like the CCPA and GDPR, or lament the attempts to roll back Section 230, one of the points we almost always raise is how unduly expensive these policy decisions can be for innovators. Any law that increases the risk of legal trouble increases the need for lawyers, whose services rarely come cheap.

But bare cost is only part of the problem. All too often, policymakers seem to assume an infinite supply of capable legal counsel, and it's an assumption that needs to be questioned.

First, there are not an infinite number of lawyers. For better or worse, the practice of law is a heavily regulated profession with significant barriers to entry. The legal industry can be fairly criticized, and often is, for making it more difficult and expensive to become a lawyer than perhaps it should be, but there is at least some basic threshold of training, competence, and moral character we should want all lawyers to have attained given the immense responsibility they are regularly entrusted with. These requirements will inevitably limit the overall lawyer population.

(Of course, there shouldn't be an infinite number of lawyers anyway. As discussed below, lawyers play an important role in society, but theirs is not the only work that is valuable. In the field of technology law, for example, our need for people to build new things should well outpace our need for lawyers to defend what has been built. We should be wary of creating such a need for the latter that the legal profession siphons off too much of the talent able to do the former.)

But even where we have lawyers we still need the right kind of lawyers. Lawyers are not really interchangeable. Different kinds of lawyering need different types of skills and subject-matter expertise, and lawyers will generally specialize, at least to some extent, in what they need to master for their particular practice area. For instance, a lawyer who does estate planning is not generally the one you'd want to defend you against a criminal charge, nor would one who does family law ordinarily be the one you'd want writing your employment manual. There are exceptions, but generally because that particular lawyer went out of their way to develop parallel expertise. The basic fact remains: simply picking any old lawyer out of the yellow pages is rarely likely to lead to good results; you want one experienced with dealing with the sorts of legal issues you actually have, substantively and practically.

True, lawyers can retrain, and it is not uncommon for lawyers to switch their focus and develop new skills and expertise at some point in their careers. But it's a problem if a disproportionate number start to specialize in the same area because, just as we need people available to work professions other than law, even within the law we still need other kinds of lawyers available to work on other areas of law outside these particular specialized areas.

And we also need to be able to afford them. We already have a serious "access to justice" problem, where only the most resourced are able to obtain legal help. A significant cause of this problem is the expense of law school, which makes it difficult for graduates to resist the siren call of more remunerative employment, but it's a situation that will only get worse if lawyer-intensive regulatory schemes end up creating undue demand for certain legal specializations. For example, as we increasingly pass a growing thicket of complex privacy regulations we create the need for more and more privacy lawyers to help innovators deal with these rules. But as the need for privacy lawyers outstrips the ready availability of lawyers with this expertise, it threatens to raise the costs for anyone needing any sort of lawyering at all. It's a basic issue of supply and demand: the more privacy lawyers that are needed, the more expensive it will be to attract them. And the more these lawyers are paid a premium to do this work, the more it will lure lawyers away from other areas that still need serving, thus making it all the more expensive to hire those who are left to help with it.

Then there is the question of where lawyers even get the expertise they need to be effective counsel in the first place. The dirty little secret of legal education is that, at least until recently, it probably wasn't at their law schools. Instead lawyers have generally been trained up on the job, and what newbie lawyers ended up learning has historically depended on what sort of legal job it was (and how good a legal job it was). Recently, however, there has been the growing recognition that it really doesn't make sense to graduate lawyers unable to competently do the job they are about to be fully licensed to do, and one way law schools have responded is by investing in legal clinics.

By and large, clinics are a good thing. They give students practical legal training by letting them basically do the job of a lawyer, with the benefit of supervision, as part of their legal education. In the process they acquire important skills and start to develop subject-matter expertise in the area the clinic focuses on, which can be in almost every practice area, including, as is relevant here, technology law. Meanwhile, clinics generally let students provide these legal services to clients far more affordably than clients would normally be able to obtain them, which partially helps address the access to justice problem.

However, there are still some significant downsides to clinics, including the inescapable fact that it is students who are basically subsidizing the legal services they are providing by having to pay substantial amounts of money in tuition for the privilege of getting to do this work. A recurrent theme here is that law schools are notoriously expensive, often underwritten with loans, which means that students, instead of being paid for their work, are essentially financing the client's representation themselves.

And that arrangement matters as policymakers remain inclined to impose regulations that increase the need for legal services without better considering how that need will be met. It has been too easy for too many to assume that these clinics will simply step in to fill the void, with an endless supply of students willing and able to pay to subsidize this system. Even if this supposition were true, it would still prompt the question of who these students are. The massive expense of law school is already shutting plenty of people out of the profession and robbing it of needed diversity by making it financially out of reach for too many, as well as making it impossible for those who do make it through to turn down more lucrative legal jobs upon graduation and take ones that would be more socially valuable instead. The last thing we need is a regulatory environment dependent on this teetering arrangement to perpetuate it.

Yet that's the upshot of much of the policy lawmakers keep crafting. For instance, in the context of Section 1201 Rulemakings, it has been openly presumed that clinics would always be available to do the massive amount of work necessary to earn back for the public the right to do something it was already supposed to be legally allowed to do. But it's not just these cited examples of copyright or privacy law that are a problem; any time a statute or regulatory scheme establishes an unduly onerous compliance requirement, or reduces any of the immunities and safe harbors innovation has depended on, it puts a new strain on the legal profession, which now has to come up with the help from somewhere.

At the same time, however, good policy doesn't necessarily mean eliminating the need for lawyers entirely, like the CASE Act tries to do. The bottom line is that legal services are not like other professional services. Lawyers play a critical role in upholding due process, and laws like the CASE Act that short-circuit those protections are a problem. But so are any laws that have the effect of interfering with that greater Constitutional purpose of the legal profession.

For a society that claims to be devoted to the "rule of law," ensuring that the public can realistically obtain the legal help it needs should be a policy priority at least on par with anything else driving tech regulation. Lawmakers therefore need to take care in how they make policy to ensure they do not end up distorting the availability and affordability of legal services in the process. Such care requires (1) carefully calibrating the burden of any imposed policy to not unnecessarily drive up the need for lawyers, and (2) specifically asking the question of who will do the work. They cannot continue to simply leave "insert lawyers here" in their policy proposals and expect everything to be fine. If they don't also pointedly address exactly where these lawyers will come from, then it won't be.


Posted on Techdirt - 1 March 2021 @ 3:30pm

Utah Prematurely Tries To Dance On Section 230's Grave And Shows What Unconstitutional Garbage Will Follow If We Kill It

from the not-dead-yet dept

As Mike has explained, just about every provision of the social media moderation bill being proposed in the Utah legislature violates the First Amendment by conditioning platforms' editorial discretion over what appears on their services—discretion that the First Amendment protects—on meeting a bunch of extra requirements Utah has decided to impose. This post is about how everything Utah proposes is also barred by Section 230, and why it matters.

It may seem like a fool's errand to talk about how Section 230 prohibits state efforts to regulate Internet platforms while the statute currently finds itself on life support, with fading vital signs as legislators on both sides of the aisle keep taking aim at it. After all, if it goes away, then it won't matter how it blocks this sort of state legislation. But the fact that it currently does preclude what we're seeing out of Utah is exactly why it would be bad if Section 230 went away and we lost it as a defense against this sort of speech-chilling, Internet-killing regulatory nonsense from state governments. To see why, let's talk about how and why Section 230 currently forbids what Utah is trying to do.

We often point out in our advocacy that Congress wanted to accomplish two things with Section 230: encourage the most good content online, and the least bad. We don't even need to speak to the law's authors to know that's what the law was intended to do; we can see that's what it was for with the preamble text in subsections (a) and (b), as well as the operative language of subsection (c) providing platforms protection for the steps they take to vindicate these goals, making it safe for them to leave content up as well as safe for them to take content down.

It all boils down to Congress basically saying to platforms, "When it comes to moderation, go ahead and do what you need to do; we've got you covered, because giving you the statutory protection to make these Constitutionally-protected choices is what will best lead to the Internet we want." The Utah bill, however, tries to directly mess with that arrangement. While Congress wanted to leave platforms free to do the best they could on the moderation front by making it legally possible, as a practical matter, for them to do it however they chose, Utah does not want platforms to have that freedom. It wants to force platforms to moderate the way Utah has decided they should moderate. None of what the Utah bill demands is incidental or benign; even the requirements for transparency and notice impinge on platforms' ability to exercise editorial and associative discretion over what user expression they facilitate by imposing significant burdens on the exercise of that discretion. Doing so, however, runs headlong into the main substance of Section 230, which specifically sought to alleviate platforms of burdens that would affect their ability to moderate content.

It also contravenes the part of the statute that expressly prevented states from interfering with what Congress was trying to accomplish with this law. The pre-emption provision can be found at subsection (e)(3): "No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section." Even where Utah's law does not literally countermand Section 230's statutory language, what Utah proposes to do is nevertheless entirely inconsistent with it. While Congress essentially said with Section 230, "You are free to moderate however you see fit," Utah is trying to say, "No, you're not; you have to do it our way, and we'll punish you if you don't." Utah's demand is incompatible with Congress's policy and thus, per this pre-emption provision, not Constitutionally enforceable on this basis either.

And for good reason. As a practical matter, Congress and Utah can't both speak on this issue and have it yield coherent policy, at least not without subordinating Congress's mission to get the best online ecosystem possible by letting platforms feel safe to do what they can to maximize the good content and minimize the bad. Every new threat of liability is a new pressure diverting platforms' efforts away from being good partners in meeting Congress's goal and instead towards doing only what is needed to avoid the trouble for themselves these new forms of liability threaten. There is no way to satisfy both regulators; Congress's plan to regulate platform moderation via carrots rather than sticks is inherently undermined once sticks start to be introduced. Which is part of the reason why Congress wrote in the pre-emption provision: to make sure that states couldn't introduce any.

Section 230's drafters knew that if states could impose their own policy choices on Internet platforms there would be no limit to what sort of obligations they might try to dream up. They also knew that if states could each try to regulate Internet platforms it would lead to messy, if not completely irreconcilable, conflicts among states. That resulting confusion would smother the Internet Congress was trying to foster with Section 230 by making it impossible for Internet platforms to lawfully exist. Because even if Utah were right, and its policy happened to be Constitutional and not a terrible idea, if any state were free to impose a good policy on content moderation it would still leave any other state free to impose a bad one. Such a situation is untenable for a technology service that inherently crosses state boundaries because it means that any service provider would somehow have to obey both the good state laws and also the bad ones at the same time, even when they might be in opposition. Just think about the impossibility of trying to simultaneously satisfy, in today's political climate, what a Red State government might demand from an Internet platform and what a Blue State might. That readily foreseeable political catch-22 is exactly why Congress wrote Section 230 in such a way that no state government gets to demand appeasement when it comes to platform moderation practices.

The only solution to the regulatory paralysis Congress rightly feared is what it originally devised: writing pre-emption into Section 230 to get the states out of the platform regulation business and leave it all instead to Congress. Thanks to that provision, the Internet should be safe from Utah's attack on platform moderation and any other such state proposals. But only so long as Section 230 remains in effect as-is. What Utah is trying to do should therefore stand as a warning to Congress to think very carefully before doing anything to reverse course and alter Section 230 in any way that would invite the policy gridlock it had the foresight to foreclose twenty-five years ago with this prescient statute.


Posted on Techdirt - 22 February 2021 @ 3:42pm

What Landing On Mars Again Can Teach Us, Again

from the humanity-pep-talk dept

It seems I'm always writing about Section 230 or copyright or some sort of regulatory effort driven by antipathy toward technology. But one of my favorite posts I've ever written here is this one, "We Interrupt All The Hating On Technology To Remind Everyone We Just Landed On Mars." Given that we just landed on Mars again, it seems a good time to revisit it, because it is no less important today than it was in 2018 when I originally wrote it. Just as it is no less important that we have landed on Mars again. In fact, it all may matter even more now.

Today we find ourselves even more mired in a world full of technological nihilism. It has become a well-honed reflex: if it involves technology, it must be bad. And in the wake of this prevailing distrust we've developed a political culture that is, at best, indifferent to innovation if not often downright eager to stamp it out.

It's a poisonous attitude that threatens to trap us in our currently imperfect world, with no way to innovate our way out of our problems. But recognizing what an amazing achievement it was to successfully land on Mars can work as an antidote, in at least two important ways:

First, it can remind us of what wonder feels like. To dream the most fantastic dreams, and then to go make those dreams happen. Mankind hasn't gazed at the stars in ambivalence; the heavens have been one of our greatest sources of inspiration throughout the ages. That we have now managed, for the first time in the history of human civilization, to put another planet within our grasp should not extinguish that wonder, with a glib "been there, done that" shrug. Rather, it is a cause for enormous celebration and should do nothing but inspire us to keep dreaming, next time even bigger.

Because if there's one thing this landing teaches us, apart from the tangible fruits of our exploration, it is to believe in ourselves. Our failures and disappointments here on Earth are serious indeed. But what this success demonstrates is that we can overcome what was once thought impossible. It may take diligence, hard work, and faithful adherence to science. And our human imperfections can sometimes make it hard to manage these things.

But landing on Mars reminds us that we can and provides us with an amazing example of how.


Posted on Techdirt - 18 February 2021 @ 10:45am

Is Section 230 Just For Start-ups? History Says Nope

from the original-intentions dept

One of the arguments for changing Section 230 is that even if we needed it a long time ago when the Internet was new, now that the Internet has been around for a while and some of the companies providing Internet services are quite big, we don't need it anymore. This view is simply untrue: Internet service providers of every size still need it, including and perhaps even especially the big ones because they are the ones handling the greatest volume of user expression.

Furthermore, Section 230 was never specifically aimed at start-ups. Indeed, from the outset it was intended to address the legal problems faced by an established incumbent.

The origin story for Section 230 begins with the Stratton Oakmont v. Prodigy case. In this case a New York state court allowed Prodigy to be sued over speech a user had posted. By doing so the court not only hurt Prodigy right then, in that case, but it threatened to hurt it in the future by opening the door to more lawsuits against it or any other online service provider—which would also be bad news for online expression more broadly. In the shadow of this decision services weren't going to be able to facilitate the greatest amount of valuable user expression, or minimize the greatest amount of detrimental expression. Even back then Prodigy was handling far too many posts by users for it to be possible to vet all, or even most, of them. While that volume might today seem like a drop in the bucket compared to how much expression Internet services handle now, the operative point is that use of online services like Prodigy had already surpassed the point where a service provider could possibly review everything that ever appeared on its systems and make decisions about what to leave up or take down perfectly. If that's what they needed to do to avoid being crushed by litigation, then they were looking at a future of soon being crushed by litigation.

And that was the case for Prodigy even though it was an established service. As an *Internet* service provider Prodigy may have been new to the game because the Internet had only just left the realm of academia and become something that commercial service providers could provide access to. But it was hardly new as an "interactive computer service" provider, which is what Section 230 actually applies to. True, Section 230 contemplates that interactive computer service providers may likely provide Internet-based services, but it doesn't condition its statutory protection on being connected to the Internet. (From 47 U.S.C. Section 230(f)(2): "The term 'interactive computer service' means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server…"). To be eligible for Section 230 the service provider simply needs to be in the business of providing some form of interactive computer service, and Prodigy had been doing that for well over a decade as a dial-up service with its own proprietary network—just like CompuServe had long done and eventually America Online did as well.

Furthermore, Prodigy was a service provider started by Sears and IBM (and briefly also CBS). At the time these were some of the largest companies in America. The "big tech" of the era was "Big Blue" (as IBM was known). And while Sears may have managed to bungle itself into irrelevance in the years since, at the time Section 230 was passed there were few companies more expert in remote commerce than it was. Nevertheless it was the needs of these big companies that Congress was addressing with Section 230, even as Congress recognized that it wasn't only their needs at stake. The legal rules that kept start-ups from being obliterated by litigation were the same ones needed to keep the bigger players from being obliterated as well.

The irony, of course, is that Section 230 may have ultimately ended up hurting the bigger players, because in the long run it opened the door to competition that ultimately ate these companies' lunch. Of course, that's what we would still want Section 230 to do: open the door to service providers that can do a better job than the large existing incumbents. It can hardly be said that Section 230 was or is a subsidy to "big tech," then or now, when building that on-ramp for something better is what it has always done and needs to be allowed to continue to do.


Posted on Techdirt - 17 February 2021 @ 12:03pm

Why We Filed A Comment With Facebook's Oversight Board

from the less-is-more dept

Back when Facebook's Oversight Board was just getting organized, a colleague suggested I represent people before it as part of my legal practice. As a solo lawyer, my entrepreneurial ears perked up at the possibility of future business opportunities. But the rest of me felt extremely uncomfortable with the proposition. I defend free speech, but I am a lawyer and I defend it using law. If Facebook removes you or your content, that is an entirely lawful choice for it to make. It may or may not be a good decision, but there is nothing for law to defend you from. So it didn't seem a good use of my legal training to spend my time taking issue with how a private entity made the moderation decisions it was entirely within its legal rights to make.

It also worried me that people were regarding Facebook's Oversight Board as some sort of lawmaking body, and I was hesitant to use my lawyering skills to somehow validate and perpetuate that myth. No matter how successful the Board turns out to be, it is still limited in its authority and reach, and that's a good thing. What is not good is when people expect that this review system should (a) have the weight of actual law or (b) be the system that gets to evaluate all moderation decisions on the Internet.

Yet here I am, having just written a comment for the Copia Institute in one of its cases. Not because I changed my mind about any of my previous concerns, but because that particular high-profile case seemed like a good opportunity to help reset expectations about the significance of the Oversight Board's decisions.

As people who care about the online ecosystem we want those decisions to be as good as they can be because they will have impact, and we want that impact to be as good as it can be. With our comment we therefore tried to provide some guidance on what a good result would look like. But whether the Board gets its decisions right or wrong, it does no good for the public, or even the Board itself, to think its decisions mean more than they do. Nor is it necessary: the Oversight Board already has a valid and even valuable role to play. And it doesn't need to be any more than what it actually is for it to be useful.

It's useful because every platform makes moderation decisions. Many of these decisions are hard to make perfectly, and many are made at incredible scale and speed. Even with the best of intentions it is easy for platforms to make moderation decisions that would have been better decided the other way.

And that is why the basic idea of the Oversight Board is a good one. It's good for it to be able to provide independent review of Facebook's more consequential decisions and recommend how to make them better in the future. Some have alleged that the board isn't sufficiently independent, but even if this were true, it wouldn't really matter, at least as far as Facebook is concerned. What is important is that there is any operational way to give Facebook's moderation decisions a second look, especially in a way that can be informed by additional considerations that may not have been included in the original decision. That the Oversight Board is designed to provide such review is an innovation worth cheering.

But all the Oversight Board can do is decide what moderation decision might have been better for Facebook and its user community. It can't articulate, and it certainly can't decree, a moderation rule that could or should apply at all times on every platform anywhere, including platforms that are much different, with different reaches, different purposes, and different user communities than Facebook has. It would be impossible to come up with a universally applicable rule. And it's also not a power this Board, or any similar board, should ever have.

As we said in our comment, and have explained countless times on these pages, platforms have the right to decide what expression to allow on their systems. We obviously hope that platforms will use this right to make these decisions in a principled way that serves the public interest, and we stand ready to criticize them as vociferously as warranted when they don't. But we will always defend their legal right to make their moderation choices however perfectly or imperfectly they may make them.

What's important to remember in thinking about the Oversight Board is that this is still Facebook making moderation decisions. Not because the Board may or may not be independent from Facebook, but because Facebook's decision to defer to the Board's judgment is itself a moderation decision. It is not Facebook waiving its legal right to make moderation choices but rather it exercising that very right to decide how to make those choices, and this is what it has decided. Deferring to the Board's judgment does not obviate real-world law protecting its choice; it's a choice that real world law pointedly allows Facebook to make (and, thanks to Section 230, even encourages Facebook to try).

The confusion about the mandate of the Oversight Board seems to stem in part from the way the Board has been empowered and operates. In many ways it bears the hallmarks of a self-contained system of private law, and in and of itself that's fine. Private law is nothing new. For instance, when you hear the term "arbitration," that's basically what arbitration is: a system of private law. Private law can exist alongside regular, public, democratically-generated law just fine, although sometimes there are tensions because for it to work all the parties need to agree to abide by it instead of public law, and sometimes that consent isn't sufficiently voluntary.

But consent is not an issue here: before the Oversight Board came along Facebook users had no legal leverage of any kind over Facebook, so this is now a system of private law that Facebook has agreed can give them some. We can and should of course care that this system of private law is a good one, well-balanced and equitable, and thus far we've seen no basis for any significant concern. We instead see a lot of thoughtful people working very hard to try to get it right and open to being nudged to do better if such nudging should happen to be needed. But even if they were getting everything all wrong, in the big picture it doesn't really matter either, because ultimately it is only Facebook's oversight board, inherently limited in its authority and reach to that platform.

The misapprehension that this Board can or should somehow rule over all moderation decisions on the Internet is also not helped by the decision to call it the "Oversight Board," rather than the "Facebook Oversight Board." Perhaps it could become a model for other platforms to use, and maybe, just maybe, if it really does become a fully spun-off independent, sustainable, self-contained private law system it might someday be able to supply review services to other platforms too—provided, of course, that the Board is equipped to address these platforms' own particularities and priorities, which may differ significantly from Facebook's.

But right now it is only a solution for Facebook and only set up to consider the unique nature of the Facebook platform and what Facebook and its user community want from it. It is far from a one-size-fits-all solution for Internet content moderation generally, and our comment said as much, noting that the relative merit of the moderation decision in question ultimately hinged on what Facebook wanted its platform to be.

Nevertheless, it is absolutely fine for it to be so limited in its mission, and far better than if it were more. Just as Facebook had the right to acquiesce to this oversight board, other platforms equally have the right, and need to have the right, to say no to it or any other such board. It won't stop being important for the First Amendment to protect this discretion, regardless of how good a job this or any other board might do. While the Oversight Board can, and likely should, try to incorporate First Amendment values into its decisions to the extent it can, actual First Amendment law operates on a different axis than this system of private law ever would or could, with different interests and concerns to be balanced.

It is a mistake to think we could simply supplant all of those considerations with the judgment of this Oversight Board. No matter how thoughtful its decisions, nor how great the impact of what it decides, the Oversight Board is still not a government body. Neither it (nor even Facebook) has the sort of power the state has, nor any of the Constitutional limitations that would check it. Facebook remains a private actor, a company with a social media platform, and Facebook's Oversight Board simply an organization built to help it make its platform better. We should be extremely wary of expecting it to be anything other than that.

Especially because that's already plenty for it to be in order for it to be able to do some good.


Posted on Techdirt - 12 February 2021 @ 12:01pm

The Copia Institute To The Oversight Board Regarding Facebook's Trump Suspension: There Was No Wrong Decision

from the context-driven-coin-flip dept

The following is the Copia Institute's submission to the Oversight Board as it evaluates Facebook's decision to remove some of Trump's posts and his ability to post. While addressed to the Board, it's written for everyone thinking about how platforms moderate content.

The Copia Institute has advocated for social media platforms to permit the greatest amount of speech possible, even when that speech is unpopular. At the same time, we have also defended the right of social media platforms to exercise editorial and associative discretion over the user expression they permit on their services. This case illustrates why we have done both. We therefore take no position on whether Facebook's decision to remove former-President Trump's posts and disable his ability to make further posts was the right decision for Facebook to make, because either choosing to do so or choosing not to is defensible. Instead our goal is to explain why.

Reasons to be wary of taking content down. We have long held the view that the reflex to remove online content, even odious content, is generally not a healthy one. Not only can it backfire and lead to the removal of content undeserving of deletion, but it can have the effect of preserving a false monoculture in online expression. Social media is richer and more valuable when it can reflect the full fabric of humanity, even when that means enabling speech that is provocative or threatening to hegemony. Perhaps especially then, because so much important, valid, and necessary speech can so easily be labeled that way. Preserving different ideas, even when controversial, ensures that there will be space for new and even better ones, whereas policing content for compliance with current norms only distorts those norms' development.

Being too willing to remove content also has the effect of teaching the public that when it encounters speech that provokes, the way to respond is to demand its suppression. Instead of a marketplace of ideas, this burgeoning tendency means that discourse becomes a battlefield, where the view that will prevail is the one that can amass enough censorial pressure to remove its opponent—even if it's the view with the most merit. The more Facebook feeds this unfortunate instinct by removing user speech, the more vulnerable it will be to further pressure demanding still more removals, even of speech society would benefit from. The reality is that there will always be disagreements over the worth of certain speech. As long as Facebook assumes the role of arbiter, it will always find itself in the middle of an unwinnable tug-of-war between conflicting views. To break this cycle, removals should be made with reluctance and only with limited, specific, identifiable, and objective criteria to justify the exception. It may be hard to employ them consistently at scale, but more restraint will in the long run mean less error.

Reasons to be wary of leaving content up. The unique challenge presented in this case is that the Facebook user at the time of the posts in question was the President of the United States. This fact cuts in multiple ways: as the holder of the highest political office in the country Trump's speech was of particular relevance to the public, and thus particularly worth facilitating. After all, even if Trump's posts were debauched, these were the views of the President, and it would not have served the public for him to be of this character and the public not to know.

On the other hand, as the then-President of the United States his words had greater impact than any other user's. They could do, and did, more harm, thanks to the weight of authority they acquired from the imprimatur of his office. And those real-world effects provided a perfectly legitimate basis for Facebook to take steps to (a) mitigate that damage by removing posts and (b) end the association that had allowed him to leverage Facebook for those destructive ends.

If Facebook concludes that anyone's use of its services is not in its interests, the interests of its user community, or the interests of the wider world Facebook and its users inhabit, it can absolutely decide to refuse that user continued access. And it can reach that conclusion based on wider context, beyond platform use. Facebook could for instance deny a confessed serial killer who only uses Facebook to publish poetry access to its service if it felt that the association ultimately served to enable the bad actor's bad acts. As with speech removals, such decisions should be made with reluctance and based on limited, specific, identifiable, and objective criteria, given the impact of such terminations. Just as continued access to Facebook may be unduly empowering for users, denying it can be equally disempowering. But in the case of Trump, as President he did not need Facebook to communicate to the public. He had access to other channels and Facebook no obligation to be conscripted to enable his mischief. Facebook has no obligation to enable anyone's mischief, whether they are a political leader or otherwise.

Potential middle-grounds. When it comes to deciding whether to continue to provide Facebook's services to users and their expression, there is a certain amount of baby-splitting that can be done in response to the sorts of challenges raised by this case. For instance, Facebook does more than simply host speech that can be read by others; it provides tools for engagement such as comments and sharing and amplification through privileged display, and in some instances allows monetization. Withdrawing any or all of these additional user benefits is a viable option that may go a long way toward minimizing the problems of continuing to host problematic speech or a problematic user without the platform needing to resort to removing either entirely.

Conclusion. Whether removing Trump's posts and further posting ability was the right decision or not depends on what sort of service Facebook wants to be and which choice it believes best serves that purpose. Facebook can make these decisions any way it wants, but to minimize public criticism and maximize public cooperation, how it makes them is what matters. These decisions should be transparent to the user community, scalable to apply to future situations, and predictable in how they will be applied, to the extent they can be, since circumstances and judgment will inevitably evolve. Every choice will have consequences, some good and some bad. The task for Facebook is really to affirmatively choose which ones it wants to favor. There may not be any one right answer, or even any truly right answer. In fact, in the end the best decision may have less to do with the actual choice that results than with the process used to get there.


Posted on Techdirt - 10 February 2021 @ 1:46pm

How To Think About Online Ads And Section 230

from the oversimplification-avoidance dept

There's been a lot of consternation about online ads, sometimes even for good reason. The problem is that not all of the criticism is sound or well-directed. Worse, the antipathy towards ad tech, regardless of whether it is well-founded or not, is coalescing into yet more unwise, and undeserved, attacks on Section 230 and other expressive discretion the First Amendment protects. If these attacks are ultimately successful none of the problems currently lamented will be solved, but they will create lots of new ones.

As always, effectively addressing actual policy challenges first requires a better understanding of what these challenges are. The reality is that there are at least three separate issues that are raised by online ads: those related to ad content itself, those related to audience targeting, and those related to audience tracking. They all require their own policy responses—and, as it happens, none of those policy responses call for doing anything to change Section 230. In fact, to the extent that Section 230 is even relevant, the best policy response will always require keeping it intact.

With regard to ad content, Section 230 applies, and should apply, to the platforms that run advertiser-supplied ads for the same reasons it applies, and should apply, to the platforms hosting the other sorts of content created by users. After all, ad content is, in essence, just another form of user generated content (in fact, sometimes it's exactly like other forms of user content). And, as such, the principles behind having Section 230 apply to platforms hosting user-generated content in general also apply – and need to apply – here.

For one thing, as with ordinary user-generated content, platforms are not going to be able to police all the ad content that may run on their site. One important benefit of online advertising versus offline is that it enables far more entities to advertise to far larger audiences than they would be able to afford in the offline space. Online ads may therefore sometimes be cheesy, low-budget affairs, but it's ultimately good for the consumer if it's not just large, well-resourced, corporate entities who get to compete for public attention. We should be wary of implementing any policy that might choke off this commercial diversity.

Of course, the flip side to making it possible for many more actors to supply many more ads is that the supply of online ads is nearly infinite, and thus the volume is simply too great for platforms to be able to scrutinize all of them (or even most of them). Furthermore, even in cases where platforms might be able to examine an ad, it is still unlikely to have the expertise to review it for all possible legal issues that might arise in every jurisdiction where the ad may appear. Section 230 exists in large part to alleviate these impossible content policing burdens to make it possible for platforms to facilitate the appearance of any content at all.

Nevertheless, Section 230 also exists to make it possible for platforms to try to police content anyway, to the extent that they can, by making it clear that they can't be held liable for any of those moderation efforts. And that's important if we want to encourage them to help eliminate ads of poor quality. We want platforms to be able to do the best they can to get rid of dubious ads, and that means we need to make it legally safe for them to try.

The more we think they should take these steps, the more we need policy to ensure that it's possible for platforms to respond to this market expectation. And that means we need to hold onto Section 230 because it is what affords them this practical ability.

What's more, Section 230 affords platforms all this critical protection regardless of whether they profit from carrying content or not. The statute does not condition its protection on whether a platform facilitates content in exchange for money, nor is there any sort of constitutional obligation for a platform to provide its services on a charitable basis in order to benefit from the editorial discretion the First Amendment grants it. Sure, some platforms do pointedly host user content for free, but every platform needs to have some way of keeping the lights on and servers running. And if the most effective way to keep their services free for some users to post their content is to charge others for theirs, it is an absolutely constitutionally permissible decision for a platform to make.

In fact, it may even be good policy to encourage as well, as it keeps services available for users who can't afford to pay for access. Charging some users to facilitate their content doesn't inherently make the platform complicit in the ad content's creation, or otherwise responsible for imbuing it with whatever quality is objectionable. Even if an advertiser has paid for algorithmic display priority, Section 230 should still apply, just as it applies to any other algorithmically driven display decision the platform employs.

But on the off-chance that the platform did take an active role in creating that objectionable content, Section 230 has never stood in the way of holding the platform responsible. Section 230 simply says that making it possible to post unlawful content is not the same as creating that content; for the platform to be liable as an "information content provider," aka a content creator, it has to have done something significantly more to give the content its wrongful essence than simply be a vehicle for someone else to express it.

That remains true even if the platform allows the advertiser to choose its audience. After all, by that point the content has already been created. Audience targeting is something else entirely, but it's also something we should be wary of impinging upon.

There may, of course, be situations where advertisers try to target certain types of ads (e.g., jobs or housing offers) in harmful ways. And when they do, it may be appropriate to sanction the advertiser for what may amount to illegally discriminatory behavior. But not every such targeting choice is wrongful; sometimes choosing narrow audiences based on protected status may even be beneficial. If we change the law to allow platforms to be held equally liable with advertisers for the advertisers' wrongful targeting choices, we will take away platforms' ability to offer audience targeting for any reason, even good ones, by making it legally unsafe in case an advertiser uses it for bad ones.

Furthermore, doing so will upend all advertising as we've known it, and in a way that's offensive to the First Amendment. There's a reason that certain things are advertised during prime time, or during sports broadcasts, or on late-night TV, just as there's a reason that ads appearing in the New York Times are not necessarily the same ones running in Field & Stream or Ebony. The Internet didn't suddenly make those choices possible; advertisers have always wanted the most bang for their buck, to reach the people most likely to become their customers as cost-effectively as possible. As a result they have always made choices about where to place their ads based on the demographics those ads would likely reach. To now say it should be illegal to ever allow advertisers to make such choices, simply because they may sometimes make them wrongfully, would disrupt decades upon decades of past practice and likely run afoul of the First Amendment, which generally protects the choice of whom to speak to. In fact, it protects that choice regardless of the medium in question, and there is no principled reason why an online platform should be any less protected than a broadcaster or a printed periodical (especially not the former).

Even if it would be better if advertisers weren't so selective—and it's a fair argument to make, and a fair policy to pursue—it's not an outcome we should use the weight of legal liability to try to force. It won't work, and it impinges on important constitutional freedoms we've come to count on. Rather, if any affirmative policy response to ad tech is warranted, it likely lies with the third constituent part: audience tracking. But even so, any policy response will still need to be a careful one.

There is nothing new about marketers wanting to fully understand their audiences; they have always tried to track them as well as the technology of the day would allow. What's new is how much better they now can. And the reality is that some of the tracking ability is intrusive and creepy, especially to the degree it happens without the audience being aware of how much of their behavior is being silently learned by strangers. There is room for policy to at minimum encourage, and potentially even require, such systems to be more transparent in how they learn about their audiences, tell others what they've learned, and give those audiences a chance to say no to much of it.

But in considering the right regulatory response there are some important caveats. First, take Section 230 off the table. It has nothing to do with this regulatory problem, apart from enabling platforms that may use ad tech to exist at all. You don't fix ad tech by killing the entire Internet; any regulatory solution is only a solution when it targets the actual problem.

Which leads to the next caution: the regulatory schemes we've seen attempted so far (GDPR, CCPA, Prop. 24) are, even if well-intentioned, clunky and conflicting, and they come with plenty of overhead that compromises their effectiveness and imposes its own unintended and chilling costs, including on expression itself (and on more expression than just that of advertisers).

Still, when people complain about online ads, this is frequently the area they are actually complaining about, and it is worth focused attention to solve. But it is tricky: given how easy it is for all online activity to leave digital footprints, as well as the many reasons we might want to allow those footprints to be measured and those measurements used (even potentially for advertising), care is required to make sure we don't foreclose the good uses while aiming to suppress the bad. But for the right law, one that recognizes and reasonably reacts to the complexity of this policy challenge, there is an opportunity for a constructive regulatory response to this piece of the online ad tech puzzle. There is no quick fix – and ripping apart the Internet by doing anything to Section 230 is certainly not any kind of fix at all – but if something must be done about online advertising, this is the something that's worth the thoughtful policy attention to try to get right.


Posted on Techdirt - 9 February 2021 @ 10:45am

If We're Going To Talk About Discrimination In Online Ads, We Need To Talk About Roommates.com

from the deja-vu-all-over-again dept

It has been strange to see people speak about Section 230 and illegal discrimination as if the issue were somehow a new one. In fact, one of the seminal court cases articulating the parameters of Section 230, the Roommates.com case, did so in the context of housing discrimination. It's worth taking a look at what happened in that litigation and how it bears on the current debate.

Roommates.com was (and apparently remains) a specialized platform that does what it says on the tin: it lets people advertise for roommates. Back when the lawsuit began, it allowed people posting for roommates to include racial preferences in their ads, and it did so in two ways: (1) through a text box, where people could write anything about the roommate situation they were looking for, and (2) through answers to mandatory questions about roommate preferences.

Roommates.com got sued by the Fair Housing Councils of the San Fernando Valley and San Diego for violating federal (FHA) and state (FEHA) fair housing law by allowing advertisers to express these discriminatory preferences. It pled a Section 230 defense, because the allegedly offending ads were user ads. But, in a notable Ninth Circuit decision, it both won and lost.

In sum, the court found that Section 230 indeed applied to the user expression supplied through the text box. That expression, for better or worse, was entirely created by the user. If something was wrong with it, it was the user who had made it wrongful and the user, as the information content provider, who could be held responsible—but not, per Section 230, the Roommates.com platform, which was the interactive computer service provider for purposes of the statute and therefore immune from liability for it.

But the mandatory questions were another story. The court was concerned that, if these ads were illegally discriminatory, the platform had been a party to the creation of that illegality by prompting the user to express discriminatory preferences. And so the court found that Section 230 did not provide the platform a defense to any claim predicated on the content elicited by these questions.

Even though it was a split and somewhat messy decision, the Roommates.com case has held up over the years and provided subsequent courts with some guidance for figuring out when Section 230 should apply. There are still fights around the edges, but the analysis has basically boiled down to determining who imbued the content with its allegedly wrongful quality. If the platform did, then it's on the hook as much as the user may be. But its contribution to the wrongful content's creation still has to be more substantive than merely offering the user the opportunity to express something illegal.

The fact that Roommate encourages subscribers to provide something in response to the prompt is not enough to make it a "develop[er]" of the information under the common-sense interpretation of the term we adopt today. It is entirely consistent with Roommate's business model to have subscribers disclose as much about themselves and their preferences as they are willing to provide. But Roommate does not tell subscribers what kind of information they should or must include as "Additional Comments," and certainly does not encourage or enhance any discriminatory content created by users. Its simple, generic prompt does not make it a developer of the information posted. [p. 1174].

The reason it is so important to hold onto that distinction is that the Roommates.com litigation has a punchline. The case didn't end there, with that first Ninth Circuit decision. After several more years of litigation there was another Ninth Circuit decision in the case, this time on the merits of the discrimination claim.

And the claim failed. Per the Ninth Circuit, roommate situations are so intimate that the First Amendment rights of free association must be allowed to prevail, and people must be able to choose whom they live with by any means they like, even if it's xenophobic prejudice.

Because of a roommate's unfettered access to the home, choosing a roommate implicates significant privacy and safety considerations. The home is the center of our private lives. Roommates note our comings and goings, observe whom we bring back at night, hear what songs we sing in the shower, see us in various stages of undress and learn intimate details most of us prefer to keep private. Roommates also have access to our physical belongings and to our person. As the Supreme Court recognized, "[w]e are at our most vulnerable when we are asleep because we cannot monitor our own safety or the security of our belongings." Minnesota v. Olson, 495 U.S. 91, 99, 110 S.Ct. 1684, 109 L.Ed.2d 85 (1990). Taking on a roommate means giving him full access to the space where we are most vulnerable. [p. 1221]

[…].

Government regulation of an individual's ability to pick a roommate thus intrudes into the home, which "is entitled to special protection as the center of the private lives of our people." Minnesota v. Carter, 525 U.S. 83, 99, 119 S.Ct. 469, 142 L.Ed.2d 373 (1998) (Kennedy, J., concurring). "Liberty protects the person from unwarranted government intrusions into a dwelling or other private places. In our tradition the State is not omnipresent in the home." Lawrence v. Texas, 539 U.S. 558, 562, 123 S.Ct. 2472, 156 L.Ed.2d 508 (2003). Holding that the FHA applies inside a home or apartment would allow the government to restrict our ability to choose roommates compatible with our lifestyles. This would be a serious invasion of privacy, autonomy and security. [id.].

[…].

Because precluding individuals from selecting roommates based on their sex, sexual orientation and familial status raises substantial constitutional concerns, we interpret the FHA and FEHA as not applying to the sharing of living units. Therefore, we hold that Roommate's prompting, sorting and publishing of information to facilitate roommate selection is not forbidden by the FHA or FEHA. [p. 1223]

This ruling is important on a few fronts. In terms of substance, it means that any law that tries to ban discrimination may itself have constitutional problems. It may be just, proper, and even affirmatively constitutional to ban discrimination in many or even most contexts. But, as this decision explains, it isn't necessarily so in all contexts, and ignoring this nuance risks harm to people and to the liberty interests that protect them.

Meanwhile, from a Section 230 perspective, the decision meant that a platform got dragged through years and years of expensive litigation only to ultimately be exonerated. It's amazing it even managed to survive, as many platforms needlessly put through the litigation grinder don't. And that's a big reason why we have Section 230: we want to make sure platforms can't get bled dry before being found not liable. It is not ultimate liability that crushes them; it's the litigation itself that can tear them to pieces and force them to shut down or at least severely restrict even lawful content.

Section 230 is designed to avoid these outcomes, and it's important that we not let our distaste, however justified, for some of the content internet users may create prompt us to make the platforms they use vulnerable to such ruin – not if we want internet services to remain available to facilitate the content we would prefer they carry instead.


Posted on Techdirt - 5 February 2021 @ 1:47pm

Senators Warner, Hirono, And Klobuchar Demand The End Of The Internet Economy

from the daft-drafting dept

Just because Senators Warner, Hirono, and Klobuchar are apparently oblivious to how their SAFE TECH bill would destroy the Internet doesn't mean everyone else should ignore how it does. These are Senators drafting legislation, and they should understand the effect the words they employ will have.

Mike has already summarized much of the awfulness they propose, and why it is so awful, but it's worth taking a closer look at some of the individually odious provisions. This post focuses in particular on how their bill obliterates the entire Internet economy.

In sum, and without exaggeration: this bill would require that every Internet service be a self-funded, charitable venture, always offered for free.

The offending language is here:

(iii) by inserting before the period at the end [subsection (c)(1)] the following: ‘‘, unless the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech…’’

Subsection (c)(1), for reference, is the "twenty-six words that created the Internet." It's the clause that does nearly all the heavy lifting to give Section 230 its meaning and value. And what these Senators propose is that any value that it could still somehow manage to provide, after all the other changes they propose turn it into swiss cheese, now be conditional. And that condition: that the site never, ever make any money.

It's the first part of that bill text that is most absurd, but even the second part is plenty destructive too. To the extent that the latter part is even necessary – because if a platform did create the offending content then Section 230 wouldn't apply anyway – it would still have a huge impact. For instance, could Patreon be liable for helping fund someone's expression? If these Senators have their way, quite possibly.

But it's the first part that nukes the entire Internet from orbit, because it prohibits any site from acquiring money in any way to subsidize its existence as a platform others can use. That's what "accepted payment to make the speech available" means. It doesn't care if the platform actually earns a profit, or runs at a loss. It doesn't care if it's even a commercial venture out to make money in the first place. It doesn't care how big or small it is. It doesn't even care how the site acquired money so that it could exist to enable others' expression. Wikipedia, for instance, is subsidized by donors, who provide "payment" so that Wikipedia can exist to make its users' speech available. But if this bill should pass, then no more Section 230 protection for that site, or for any other site that didn't have an infinite pot of money at the outset to fund it forever. Any site that wants to be economically sustainable, or even simply recoup some of the costs of operation – let alone actually profit – would have to do so without the benefit of Section 230.

It's possible, of course, that some of this effect is just the result of bad drafting, and the Senators really mean to tie payment to the specific speech in question that may be unlawful. But (A) if they can't even draft this part correctly to not have these enormously destructive collateral effects, then there's little reason to believe their other provisions won't be equally ruinous, carelessly if not deliberately.

And (B), it would still be a problem constitutionally, because it would make platforms' own First Amendment rights contingent on their financial arrangements, which has never before been the case. It is, after all, the First Amendment that allows a platform to choose to carry or refuse any particular content, not actually Section 230. Section 230 only helps make that First Amendment protection meaningful.

That money might influence a platform's decision does not obviate its constitutional protection. Editorial discretion is editorial discretion, regardless of whether it is affected by financial interest. Of course it always is affected, and always has been: newspapers run articles they think people will read because those articles sell more papers, and media outlets refuse ads they think will offend. The First Amendment has never been contingent on charitable altruism, and any bill that would now try to make it so deeply offends it.

The sad irony is that it was Trump who declared that he wanted to "open up" First Amendment law and weaken its protections. But with bills like these it's the Democrats who are actually doing it.


Posted on Techdirt - 11 January 2021 @ 12:01pm

Dear Section 230 Critics: When Senators Hawley And Cruz Are Your Biggest Allies, It's Time To Rethink

from the strangest-bedfellows dept

Last week Senators Hawley and Cruz used their platform and power as United States Senators to deliberately spread disinformation they knew or should have known to be false in order to undermine public confidence in the 2020 Presidential Election results. Their actions gave oxygen to a lawless and violent insurrection that nearly overran—literally and physically—our democratic government.

They should have known better, and there is every reason to believe they did. There is every reason to believe they intended their actions to further their craven attempt to solidify their own desired political power, even though it came at the expense of our Constitutional order and democratic norms – and likely *because* it did, since those norms would otherwise have stood against their ambition. They are, after all, highly educated people, bearing credentials from some of our most esteemed academic institutions. It is impossible to believe they did not know what they were doing.

Just as it is impossible to believe they did not know what they were doing when they railed against Section 230. At first glance it may seem like an irrelevant quibble to take issue with their position on Internet policy compared with the actual, violent insurrection they also invited. But it is indeed worth the attention, for the same reasons their other anti-democratic behavior is so troubling. One of the reasons we have rights of free expression in America, and the Constitution to guarantee them, is that free expression is such a necessary check against tyranny. And for those like Hawley and Cruz who are rooting for the tyranny, getting rid of those speech protections is a necessary first step to advancing that anti-democratic end.

Which is what gutting Section 230 would do. While the First Amendment would, of course, in theory still be there to protect speech, in practice those rights would become illusory. When it comes to online expression, Section 230 is what makes those speech rights the First Amendment protects real and meaningful. And that's exactly what Hawley and Cruz want to prevent.

They want to prevent it because they can see how Section 230 stands in their way. They can see how platforms exercising their First Amendment rights to choose which user speech to facilitate could lead to those platforms choosing not to facilitate their poisonous propaganda, and they understand how stripping platforms of their Section 230 immunity effectively takes away platforms' ability to make that choice by making it too legally precarious to try.

They also can see how stripping platforms of their statutory immunity could force platforms to suppress user speech that challenges them. Section 230 allows platforms to have a free hand in enabling user speech because it means they don't have to fear enduring an expensive legal challenge over it. Without the statute's currently unequivocal protection, however, they won't be able to accommodate it so willingly. They will be forced to say no to plenty, including plenty of socially valuable, Constitutionally-protected speech against government officials like Hawley and Cruz, lest these platforms make themselves vulnerable to expensive litigation over it—which, even if unmeritorious, would do nothing but drain their resources. Every change to Section 230 that Hawley and Cruz have demanded would erode this critical protection platforms depend on to enable all this user expression and lead them to second guess whether they could continue to. And it would thus leave Hawley, Cruz, and their corrupt compatriots free to continue their nefarious efforts to consolidate their control over the nation without much fear of complaint.

The problem is, though, so would all the changes to Section 230 also being championed by Democrats. Campaigning against Section 230 has not been the exclusive domain of Republicans. Plenty of Democrats have joined them, from Senator Blumenthal (D-CT), to Rep. Eshoo (D-CA) and Rep. Malinowski (D-NJ), to even Senator Schatz (D-HI). And, of course, perhaps the most prominent Democrat of them all: President-Elect Joe Biden. Their reasons for agitating against Section 230 may be different than those cited by Republicans, and their proposed changes may vary in specifics as well, but, whether they realize it or not, the effect of all these changes would be the same as what Hawley and Cruz have advocated for: the erosion of First Amendment protections online. Which will only grease the skids for the Hawleys and Cruzes of the world as much as all the changes they themselves have been calling for.

Every policymaker appalled by what has just transpired and eager to preserve our system of self-government must take heed. Our inherently fragile democracy cannot survive without free speech, and no policymaker who wishes to ensure its survival can afford to do anything to undermine it. But when it turns out, as it does now, that the policy demanded by democracy's saviors is the same exact policy sought by its enemies, those who wish to save it need to think again about what they really ought to be asking for.


Posted on Techdirt - 30 December 2020 @ 7:39pm

Senators Tell The USPTO To Remove The Arbitrary Obstacles Preventing Inventors (Especially Women Inventors) From Getting Patents

from the patent-barred dept

There are plenty of issues with the patent system as we know it today, but one big one is with the system we use to award patents. It's a problem because the more important we think patents are, the more important it is to ensure that the mechanism we use to grant them can recognize all the invention patent law is intended to protect. Unfortunately, right now the patent-review system is architected in a way that makes it miss all too many patent-worthy inventions – including, especially, those invented by women.

The lack of diversity among patent recipients has now caught the attention of a few Senators, who in December wrote to USPTO Director Iancu to express their concern. There may be several reasons for why women inventors are, by and large, not being granted patents, but one conspicuous one that the Senators focused on in their letter is the commensurate lack of women allowed to do the specialized work of filing for patents:

In today’s increasingly competitive global economy, we must leverage the creativity and talents of all Americans—including women, minorities, and people from low-income and other disadvantaged communities—to maintain the United States’ place as the world’s leading innovator. The patent system has long played a critical role in fostering American innovation. As you well know, the USPTO faces a significant gender gap among named inventors. According to a 2020 USPTO report, only 12.8% of named patent inventors are women. The USPTO has undertaken laudable efforts in recent years to recognize and start addressing this gender gap. These efforts are good first steps.

However, we fear that the USPTO’s efforts will be undercut by an apparent gender gap among patent practitioners. While recent data on the demographic make-up of the patent bar is not publicly available, studies from 2011 and 2014 suggest women made up as little as 18% of patent agents and patent attorneys with little growth over time. Unless there has been a significant increase in the number of women admitted to the patent bar in the ensuing years, female membership lags far behind the share of women earning degrees in either science, technology, engineering, or math (“STEM”) fields (~36%) or the law (~50%).

Quoting a letter submitted by Eric Goldman and Jess Miers (disclosure: I signed it), the Senators further explained why the paucity of women in the patent bar may in turn be limiting the number of women patent recipients:

[A]ccess to women patent prosecutors can increase women’s patenting activity in several ways. Women patent prosecutors can bring extra substantive expertise on goods and services catering to women customers. This expertise can help inventors recognize patentable inventions and better describe them in patent applications. Women patent prosecutors use their unique social networks to cultivate and support women inventors, and they make it easier for women inventors to “see” themselves in the patent system. Also, women patent prosecutors may develop more effective client relationships with women inventors than would develop with male patent prosecutors. That, in turn, can help women inventors feel comfortable seeking patent prosecution assistance and produce the evidence necessary to succeed with their patent applications.

In other words, if you want to patent more inventions, and to make sure women's inventions are among them, you are going to need more women able to help inventors (including those women inventors) obtain their patents. And right now they are being kept out of the profession at arbitrarily high rates and for, as the letter also explained, equally arbitrary, if not outright absurd, reasons.

For those confused by some of the vernacular being used here, the "patent bar" is a fancy way of describing the legal professionals allowed to help inventors try to obtain patent grants from the USPTO. The fancy name for this activity is "patent prosecution" and those who do it are "patent prosecutors." Patent prosecutors don't necessarily have to be lawyers able to practice in any other jurisdiction, and most lawyers are not permitted to do the specialized work of patent prosecution. Instead, to be allowed to prosecute patents you need to take, and pass, a separate exam to be able to join the patent bar. It is that exam that is at the heart of the problem.

The problem, however, isn't necessarily with the exam itself. The real issue is that only certain people are eligible to sit for it, and these limitations on exam eligibility are unduly limiting the patent bar by pointlessly excluding otherwise qualified people, including, especially, women:

The USPTO sets the requirements for patent practitioners and, as such, serves as a gatekeeper to the patent bar. To ensure a high level of patent quality, it requires that all candidates pass a six-hour, 100-question exam in order to practice before the USPTO. However, this exam is not open to all. It is reserved for those who possess certain “scientific” and “technical” qualifications. Currently, the USPTO allows college graduates with degrees in only thirty-two specific majors to automatically qualify to sit for the exam (so-called “Category A”). This list includes a wide array of majors in engineering and the physical sciences—degrees that disproportionately go to men. However, it excludes several other majors, such as mathematics, that are highly relevant to modern-day innovation and are earned by women at a rate much closer to their share of overall undergraduate degrees. The list also excludes students who major in industrial and fashion design—fields highly relevant to design patents and for which women make up a majority of students.

True, sometimes those who are not automatically allowed to sit for the exam can still establish their eligibility in other ways, but those rules are even more Byzantine, and the result is still the same: the doors to the profession end up pointlessly closed to people capable of passing the exam and doing the work of patent prosecution. And this limitation on patent prosecutors has an echo effect on patent diversity, because it means that only a small subset of patentable inventions are ever likely to be successfully prosecuted.

These exam-eligibility rules therefore need to be reconsidered, the Senators told the USPTO. And to make sure the USPTO takes the matter seriously, the Senators also asked it to answer additional questions on the matter, due back to them by January 15.


Posted on Techdirt - 29 December 2020 @ 3:39pm

Section 230 Isn't A Subsidy; It's A Rule Of Civil Procedure

from the make-section-230-boring-again dept

The other day Senator Schatz tweeted, "Ask every Senator what Section 230 is. Don’t ask them if they want to repeal it. Ask them to describe it."

It's a very fair point. Most of the political demands to repeal Section 230 betray a profound ignorance of what Section 230 does, why, or how. That disconnect between policy understanding and policy demands means that those demands to repeal the law will only create more problems while not actually solving any of the problems currently being complained about.

Unfortunately, however, Senator Schatz's next tweet revealed his own misunderstanding. [Update: per this tweet, it wasn't his misunderstanding his next tweet revealed but rather the misunderstanding of other Senators who have proposed other sorts of "reforms" he was taking issue with. Apologies to Senator Schatz for misstating.] "I have a bipartisan bill that proposes changes to 230, but repeal is absurd. The platforms are irresponsible, but we should not have a government panel handing out immunity like it's a hunting license. We must rein in big tech via 230 reform and antitrust law, not lazy stunts."

There's a lot to unpack in that tweet, including the bit about antitrust law, but commenting on that suggestion is for another post. The issue here is that no, Section 230 is nothing like the government "handing out immunity like a hunting license," and misstatements like that matter because they egg on "reform" efforts that will ruin rather than "reform" the statute, and in the process ruin plenty more that the Constitution – and our better policy judgment – requires us to protect.

The point of this post is thus to try to dispel all such misunderstandings, which tend to regard Section 230's statutory protection as some sort of tangible prize the government hands out selectively, when in reality it is nothing of the sort. On the contrary, it reads like a rule of civil procedure that, like any rule of civil procedure, applies to any potential defendant that meets its broadly articulated criteria.

For non-lawyers "rules of civil procedure" may sound arcane and technical, but the basic concept is simple. When people want to sue other people, these are the rules that govern how those lawsuits can proceed so that they can proceed fairly, for everyone. They speak to such things as who can sue whom, where someone can be sued, and, if a lawsuit is filed, whether and how it can go forward. They are the rules of the road for litigation, but they often serve as more than a general roadmap. In many cases they are the basis upon which courts may dispense with cases entirely. Lawsuits only sometimes end with rulings on the merits after both parties have fully presented their cases; just as often, if not more often, courts will evaluate whether the rules of civil procedure even allow a case to continue at all, and litigation frequently ends when courts decide that they don't.

Which is important because litigation is expensive, and the longer it goes on the more cost-prohibitive it becomes. And that's a huge problem, especially for defendants with good defenses, because even if those defenses should mean that they would eventually win the case, the crippling cost involved in staying in the litigation long enough for that defense to prevail might bankrupt them long before it ever could.

Such a result hardly seems fair, and we want our courts to be fair. They are supposed to be about administering justice, but there's nothing just about letting courts be used as tools to obliterate innocent defendants. One reason we have rules of civil procedure is to lessen the danger that innocent defendants can be drained dry by unmeritorious litigation against them. And that is exactly what Section 230 is designed to do as well.

An important thing to remember is that most of what people complain about when they complain about Section 230 are things the First Amendment allows to happen. The First Amendment is likely to insulate platforms from liability for their users' content, and it's also likely to insulate them from liability for their moderation decisions. Section 230 helps drive those points home explicitly for providers of "interactive computer services" (which, it should be noted, include far more than just "big tech" platforms; they also include much smaller and non-commercial ICS providers, and even individual people), but even if there were no Section 230 the First Amendment would still be there to do the job of protecting platforms in this way. At least in theory.

In practice, however, defendant platforms would first have to endure an onslaught of litigation and all its attendant costs before the First Amendment could provide any useful benefit, which would likely be too little, too late for most if not all of them. The purpose of Section 230 is therefore to make sure those First Amendment rights can be real, and meaningful, and something that every sort of interactive computer service provider can be confident in exercising without having to fear being crushed by unconstitutional litigation if they do.

What people calling for any change to Section 230 need to realize is how these changes will do nothing but open the floodgates to this sort of crushing litigation against so much that the Constitution is otherwise supposed to protect. It is a flood that will inevitably chill platforms by effectively denying them the protection their First Amendment rights were supposed to afford, and in the process also chill all the expressive user activity they currently feel safe to enable. It is not an outcome that any policymaker should be so eager to tempt; rather, it is something to studiously avoid. And the first step to avoiding it is to understand how these proposed changes will do nothing but invite it.


Posted on Techdirt - 4 December 2020 @ 7:39pm

Reform The DMCA? OK, But Only If It's Done Really, Really Carefully

from the running-into-minefields dept

The DMCA is a weird law. It comprises two almost completely unrelated provisions: Section 512, with its platform safe harbors, and Section 1201, which forbids circumventing technological measures. Both parts are full of problems, but to the extent that the DMCA provides platforms with liability protection via the safe harbors, it is also a critically important law. We are therefore fans of the DMCA because of this platform protection, but it's like being fans of a terrible actor who had one absolutely fantastic performance in a classic movie we can't stop loving, even though the rest of his work is unwatchable dreck. In other words, we can't pretend the law is without its appeal, but we nevertheless fervently wish it were a whole lot better, since we're stuck having to deal with the rest of it.

Which brings us to Senator Tillis, who has expressed interest in reforming the DMCA and has already started to lay the groundwork. We dread where this reform effort might go, because we know (see the Copyright Office's 512 study) that many people are clamoring for the things already terrible about it to be made worse. But at the same time it would be great to fix those terrible things so that the DMCA could actually become an unequivocally good law that does what copyright law is supposed to do: stimulate expression and promote the spread of knowledge.

Last month Senator Tillis put out a call for stakeholder input on the reforms he is thinking about, and earlier this week the Copia Institute submitted its response. Instead of answering his specific questions, which all seemed to presume way too much about what allegedly needs fixing in the DMCA, and not necessarily correctly, we made two larger points that need to apply to any reform measures: (1) There needs to be a clear, data-supported understanding of what needs to be fixed and why so that any implemented change actually helps, rather than hurts, creators, and (2) the statute must scrupulously comply with the First Amendment, which unfortunately it currently falls way too short of in way too many ways.

On the latter front we made several points. First, for the DMCA to be First Amendment-compliant, fair use cannot continue to be treated as an afterthought. It is not a minor technicality; it is a fundamental limit on the reach of a copyright and therefore needs to limit the power of what a copyright holder can do to advance that right. Thus, as we wrote in our submission, Section 1201 should no longer obstruct a fair use, and Section 512 should no longer enable the censoring of a fair use either. Protecting fair uses must be a central tenet of any DMCA revision in order to ensure that fair use can remain meaningful in the digital age.

There are also a number of problems that have emerged over the years in the way the Section 512 system operates that have turned it into an impermissible system of prior restraint. Platform protection is hugely critical for fostering online expression, but the irony is that this protection comes at the expense of the very expression it is supposed to foster. The basic problem is that, unlike Section 230, the platform protection the DMCA provides is conditional. And the even bigger problem is that the protection is conditioned on platforms acting against speakers and speech based only on allegations of infringement, even though those allegations may be unfounded. When a law causes speech to be sanctioned before a court has ever adjudicated it to be wrongful, that is prior restraint, and it is anathema to the First Amendment. But current judicial interpretations of the DMCA have made Section 512's critical platform protection contingent on just this sort of thing, with dire consequences for speakers and their speech. Reform is therefore needed so that platform protection no longer depends on this sort of constitutional violation.

Similarly, we noted that Section 512 also undermines the First Amendment right to anonymous speech, given the operation of Section 512(g) (governing counter-notices) and Section 512(h) (establishing a special type of federal subpoena). But an even more significant constitutional defect with the DMCA overall is with Section 1201. As we've talked about before, Section 1201, and its prohibition against circumventing technical measures, chills security research and innovation and forecloses fair uses. None of these things are constitutionally permissible, and all undermine the overall goal of promoting progress.

Which brings us to our second main point. The whole point of copyright law is to promote progress. And that means encouraging expression so that the public can enjoy the fruits of it. But not every proposed change to the DMCA will lead to that result. In fact, many would do the exact opposite.

The problem is, many of the proposed changes presume that strengthening the power of a copyright holder automatically advances that greater interest. But in reality it doesn't. And we suggested that the reform effort was being sidetracked by a string of misguided assumptions that needed challenging.

First there is the idea that digital technologies are causing economic harm to copyright holders, but it is an idea that should be treated with skepticism. For one thing, it treats the consumption of every "pirated" digital copy as a lost sale. It also ignores that some works are only consumable at a price of $0 and overlooks that copyright holders have historically flourished even when works were available for free, such as in libraries or on over-the-air radio. In other words, the consumption of copyrighted works for free does not automatically equate to economic harm to copyright holders.

And then there is the presumption that copyright holders and creators are one and the same, and thus that economic harm to the former means economic harm to the latter. In fact, copyright holders and creators may frequently be entirely different entities – and even stuck being entirely different entities – with entirely different economic interests. Furthermore, advancing the interests of copyright holders may actually be adverse to the interests of creators, with the former potentially wanting to maximize profit from specific works, and the latter potentially better off economically overall if they can develop robust market interest in their works.

Next we pointed out that hobbling digital technologies imposes its own economic harm, which should be the last thing for copyright law to encourage. We noted that it is not good that Veoh Networks got financially obliterated by the process of trying to assert its DMCA safe harbor defense (which was ultimately vindicated), because the Section 512 safe harbor provision is so needlessly cumbersome to deploy. It is not good that we now have one less competitor to YouTube and one less outlet available for creators. Nor is it ok that Seeqpod, a search engine dedicated to helping locate creative works, is now no longer available for people to use to find the works of artists they might then choose to support. The loss of these companies, their jobs, their innovation, and their economic energy is a loss that copyright law, including the DMCA, should lament, not exacerbate.

The loss of these platforms also directly harms the economic interests of creators. Here we challenged two assumptions: one, that the economic interests of platforms are somehow in conflict with creators'; and two, that creators and platform users are somehow distinct. In reality creators are platform users. When the DMCA causes platforms facilitating user expression to disappear, or even the expression itself to disappear, the users affected are themselves creators. And that harms their economic interests by depriving them of outlets to promote their works or even directly monetize them. None of these consequences are consistent with what copyright law is intended to accomplish, and any reform effort should make sure to avoid them too.


Posted on Techdirt - 21 October 2020 @ 9:31am

Trademark Genericide And One Big Way The DOJ Admits That Its Antitrust Lawsuit Against Google Is Utter Garbage

from the admitting-their-own-bullshit dept

Don't misread the title of this post to think there's only one thing wrong with the DOJ's antitrust complaint against Google. There's plenty. But on the list is this particular self-defeating argument included in the complaint – the complaint in which the DOJ basically has but one job: show that Google is a monopoly.

To understand it, we need to first understand the idea of "trademark genericide." That's what happens when your brand name is, well, just too good and people start using your branding as the default word to describe the product or service in general. Famous examples include "Band-Aid," "Thermos," "Xerox," and plenty of other words we're all used to using in lower-case form to describe things that aren't actually produced by the companies that had those trademarks.

The issue here is not actually whether Google has lost its trademark rights due to genericide, which is a technical question particular to the operation of trademark law and not relevant to the issues raised here. The DOJ isn't arguing that it has anyway. But what the DOJ is arguing is that the same basic dynamic has occurred, with the branded name becoming a widely adopted synonym for other people's similar goods and services. In doing so, however, it has blown up its own argument, because that means there are other similar goods and services. Which means that Google is not a monopoly.

Look at what it argued (emphasis added):

Google has thus foreclosed competition for internet search. General search engine competitors are denied vital distribution, scale, and product recognition—ensuring they have no real chance to challenge Google. Google is so dominant that “Google” is not only a noun to identify the company and the Google search engine but also a verb that means to search the internet. [complaint p. 4]

This argument makes no sense. On the one hand it asserts that Google has foreclosed competition for Internet search, and in almost the next breath it asserts (bizarrely, as an attempt to prove the first assertion) that "Google" has now become the generic word for Internet searching offered by everyone. If "Google" is now being used by consumers to describe the use of competing goods and services, it means that there are competing goods and services. Ergo, Google is not a monopoly, and thus the alleged premise for bringing this antitrust action is unsound.

There are, of course, many reasons why this antitrust action against Google is unsound, but it does seem odd that the DOJ would so candidly confess such a notable one in the introduction of its own complaint.

Especially because the DOJ itself admitted later in the complaint that there actually are competing search engines, namely Bing, Yahoo!, and DuckDuckGo.

Google has monopoly power in the United States general search services market. There are currently only four meaningful general search providers in this market: Google, Bing, Yahoo!, and DuckDuckGo. According to public data sources, Google today dominates the market with approximately 88 percent market share, followed far behind by Bing with about seven percent, Yahoo! with less than four percent, and DuckDuckGo with less than two percent. [p. 29]

But the argument it made in this later section to try to wish away the import of these competitors did not do much better than the previous one in the logic department.

There are significant barriers to entry in general search services. The creation, maintenance, and growth of a general search engine requires a significant capital investment, highly complex technology, access to effective distribution, and adequate scale. For that reason, only two U.S. firms—Google and Microsoft—maintain a comprehensive search index, which is just a single, albeit fundamental, component of a general search engine. Scale is also a significant barrier to entry. Scale affects a general search engine’s ability to deliver a quality search experience. The scale needed to successfully compete today is greater than ever. Google’s anticompetitive conduct effectively eliminates rivals’ ability to build the scale necessary to compete. Google’s large and durable market share and the significant barriers to entry in general search services demonstrate Google’s monopoly power in the United States. [p. 31]

Once again, with its rushed and unthoughtful lawyering, the DOJ has managed to swing and miss in trying to argue that Google is a monopoly. Google obviously isn't one, not with actual competitors, and the DOJ's apparent fallback argument – that Google is somehow a monopoly due to monopolistic effect – similarly fails. It whines that scale is important for a search engine's success, and that there are significant barriers to entry to becoming a competitive player in the search engine space. But the DOJ offers nothing more than "it must be antitrust!" to hand-wave away why Google has managed to succeed better than its rivals, including rivals like Yahoo! that entered the market long before Google (and for whom barriers to entry should not have been an issue), and rivals like Microsoft (which the DOJ acknowledges is able to achieve the same scale as Google). The market has had choices – choices that even the DOJ cannot ignore, no matter how desperate it is to, given how their existence undermines its case.

And so, with its "la-la-la-I-can't-hear-you" approach to antitrust enforcement, the DOJ tries to wish these inconvenient facts away, arguing that Google's size and share of the market somehow magically evince an antitrust violation, with little more support than "because we said so."

Which is not nearly a good enough basis for this sort of extraordinary action.


Posted on Techdirt - 20 October 2020 @ 9:37am

Section 230 Basics: There Is No Such Thing As A Publisher-Or-Platform Distinction

from the foundational-understanding dept

We've said it before, many times: there is no such thing as a publisher/platform distinction in Section 230. But in those posts we also said other things about how Section 230 works, and perhaps doing so obscured that basic point. So just in case we'll say it again here, simply and clearly: there is no such thing as a publisher/platform distinction in Section 230. The idea that anyone could gain or lose the immunity the statute provides depending on which one they are is completely and utterly wrong.

In fact, the word "platform" does not even show up in the statute. Instead the statute uses the term "interactive computer service provider." The idea of a "service provider" is a meaningful one, because the whole point of Section 230 is to make sure that the people who provide the services that facilitate others' use of the Internet are protected in order for them to be able to continue to provide those services. We give them immunity from the legal consequences of how people use those services because without it they wouldn't be able to – it would simply be too risky.

But saying "interactive computer service provider" is a mouthful, and it also can get a little confusing because we sometimes say "internet service provider" to mean just a certain kind of interactive computer service provider, when Section 230 is not nearly so specific. Section 230 applies to all kinds of service providers, from ISPs to email services, from search engines to social media providers, from the dial-up services we knew in the 1990s back when Section 230 was passed to whatever new services have yet to be invented. There is no limit to the kinds of services Section 230 applies to. It simply applies to anyone and everyone, including individual people, who are somehow providing someone else the ability to use online computing. (See Section 230(f)(2).)

So for shorthand people have started to colloquially refer to protected service providers as "platforms." Because statutes are technical creatures it is not generally a good idea to use shorthand terms in place of the precise ones used by the statutes; often too much important meaning can be lost in the translation. But in this case "platform" is a tolerable synonym for most of our policy discussions because it still captures the essential idea: a Section 230-protected "platform" is the service that enables someone else to use the Internet.

Which brings us to the term "publisher," which does appear in the statute. In particular it appears in the critically important provision at Section 230(c)(1), which does most of the work making Section 230 work:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

In this provision the term "publisher" (or "speaker") refers to the creator of the content at issue. So who created it? Was it the provider of the computer service, aka the platform itself? Or was it someone else? Because if it was someone else, if the information at issue had been "provided by another information content provider," then we don't get to treat the platform as the "publisher or speaker" of that information – and the platform is therefore immune from liability for it.

Where the confusion has arisen is in the use of the term "publisher" in another context as courts have interpreted Section 230. Sometimes the term "publisher" means the "facilitator" or "distributor" of someone else's content. When courts first started thinking about Section 230 (see, e.g., Zeran v. AOL) they sometimes used the term because it helped them understand what Section 230 was trying to accomplish. It was trying to protect the facilitator or distributor of others' expression – or, in other words, the platform people used to make that expression – and using the term "publisher" from our pre-Section 230 understanding of media law helped the courts recognize the legal effect of the statute.

Using the term did not, however, change that effect. Or the basic operation of the statute. The core question in any Section 230 analysis has always been: who originated the content at issue? That a platform may have "published" it by facilitating its appearance on the Internet does not make it the publisher for purposes of determining legal responsibility for it, because "publishing" is not the same as "creating." And Section 230 – and all the court cases interpreting it – have made clear that it is only the creator who can be held liable for what was created.

There are plenty of things we can still argue about regarding Section 230, but whether someone is a publisher versus a platform should not be one of them. It is only the creator v. facilitator distinction that matters.


Posted on Techdirt - 15 October 2020 @ 3:34pm

We Interrupt This Hellscape With A Bit Of Good News On The Copyright Front

from the carl-malamud-versus-the-world dept

We've written about this case – or rather, these cases – a few times before: Carl Malamud published the entire Code of Federal Regulations at Public.Resource.org, including all the standards that the CFR incorporated and thus gave the force of law. Several organizations that had originally promulgated these various standards then sued Public Resource – in two separate but substantially similar cases later combined – for copyright infringement stemming from his having included them.

In a set of really unfortunate decisions, the district court upheld the infringement claims, finding that the standards were copyrightable (and also actually owned by the standards organizations claiming them, despite reason to doubt those ownership claims), and that Public Resource including them as part of its efforts to post the law online was not a fair use. But then the DC Circuit reversed that decision. While it generally left the overall question of copyrightability for another day, it did direct the district court to re-evaluate whether the publication of the standards was fair use.

Now back at the district court, the cases had proceeded to the summary judgment stage and were awaiting a new ruling from the court. One case still remains pending – ASTM v. Public.Resource.Org – but the other, American Educational Research Association et al. v. Public.Resource.Org, has now been dismissed by the plaintiffs with prejudice. Effectively that means that Public Resource wins and can continue to host these standards online. Which is good news for Public Resource and its users. But it does still leave anyone else's ability to repost standards incorporated into law up in the air. Hopefully when the court eventually rules in the remaining case it will find such use fair, and in a way that lets others similarly avail themselves of the ability to fully publish the law.


Posted on Techdirt - 14 October 2020 @ 1:42pm

An Update On The Pretty Crummy Supreme Court Term So Far On Issues We Care About

from the bad-beginnings dept

As the Senate hearings continue over what the future United States Supreme Court may do, it's worth taking a moment to talk about what the current Court has just done. The RBG-less Supreme Court is now back in session, and in view of the actions it's taken in at least four separate cases, it has not been an auspicious beginning.

Even some of the best news still managed to be awful. For instance, cert was denied in the Enigma Software v. Malwarebytes case. The denial is bad news because it leaves a terrible Ninth Circuit Section 230 decision on the books. On the other hand, it may also have dodged a bullet. Section 230 is already in the cross-hairs of Congress and the Executive Branch; inviting the Supreme Court to go to town on it too seemed like a risky proposition, and Justice Thomas's unprompted statement respecting the denial, ripping Section 230 jurisprudence to shreds, makes clear how much damage the Court could do to this critically important law if it took on this case.

And the risk of cert being granted here might just not have been worth it. For one thing, the case may continue. The Ninth Circuit had overturned the original grant of defendant Malwarebytes' motion to dismiss, which sent the case back to the district court, meaning there could be another opportunity later in the litigation for Malwarebytes to challenge the lousy reasoning the Ninth Circuit employed to revive the case. Of course, it's possible that the parties might settle and leave the Ninth Circuit decision on the books, unchallenged. Even if that happens, however, it's a precedent already called into question by the more recent Supreme Court decision in Bostock v. Clayton County, Ga. So while it's not great that future defendants will have to argue around the Ninth Circuit's ruling, and by no means a certainty that the Bostock statutory construction argument would prevail, at least there is something of substance to let them make a good run at it.

Meanwhile, at least two other cert denials left us with even more bad news. One of these cases was Austin v. Illinois. Supreme Court review was sought after the Illinois Supreme Court left in place Illinois's revenge porn law. As we pointed out at the time – and as the lower court in Illinois had recognized – the Illinois revenge porn law is not content neutral, which means it must survive the strict scrutiny the First Amendment requires, and it is not sufficiently narrowly tailored to do so. The law also doesn't take the intent of the defendant into account. Unfortunately the Illinois Supreme Court did not seem bothered by these constitutional infirmities and upheld the law. We were hoping the United States Supreme Court would recognize the problems and grant review to address them – but it didn't. The law now remains on the books. And while it might indeed punish some of the deserving people that a constitutional revenge porn law would also catch, the problem with unconstitutional laws is that they tend to catch other people too, including those whose speech should have been constitutionally protected.

The other cert denial of note is in G&M Realty v. Castillo. This cert petition sought review of the shockingly awful Second Circuit decision doubling down on the terribly troubling EDNY decision awarding a multi-million dollar judgment ostensibly for violating the Visual Artists Rights Act (VARA). Never mind that, despite its apparent policy intent, VARA will actually lead to less public art and thus hurt artists, and ignore for the moment the short shrift the law gives to real property rights: these decisions managed to offend the Constitution in several other outrageous ways. As we explained previously, there were multiple due process issues raised by how this particular case was adjudicated and by the extraordinarily punitive penalty awarded against the defendant property owner, who had simply painted over his own building after the district court told him he could.

But the problem isn't just that this particular case was a travesty; what this case also illustrated is how badly VARA offends both the First Amendment and the equal protection clause of the Constitution. It gratuitously awards an extra benefit to only certain expression, based in some way on the content of that expression, which is not supposed to happen. (Put another way: it also denies a benefit to certain expression based on its content.) It is an utterly irredeemable law, and it is a great shame that the Supreme Court refused to grant review, not just to overturn the Second Circuit's galling miscarriage of justice but to free us all from this law's unconstitutional reach. Assuming Congress refuses to repeal it, we will have to await a new victim of the law with the means and ability to challenge their injury before we have any chance of being rid of it.

The reality is that Supreme Court jurisprudence is always at best a mixed bag when it comes to copyright. Earlier this year it did produce a good decision in Georgia v. Public.Resource.org, but it missed a rare opportunity to restore sanity when it comes to the VARA amendment to the copyright statute. Now the question is whether it will restore sanity when it comes to how copyright in software works.

Oral argument at the Supreme Court was finally held last week in Google v. Oracle, after having been postponed from its original March hearing date due to the pandemic. It's impossible to read the tea leaves and know how the Court will rule, but it was hard to come away with much optimism. What was concerning about the hearing was the undercurrent in the justices' questions suggesting that if the Court rules in Google's favor it would somehow be doing Google a favor and diminishing Oracle's copyright, when in actuality it is Oracle's copyright claim that is far broader than the law has ever allowed.

Copyrights have always (or at least until recently) been understood as limited monopolies granting their owners a limited set of exclusive rights for limited periods. Over the years these periods have become less limited, and interpretations of what these exclusive rights cover have tended to get broader. But the basic monopoly has still always been curtailed by the subject-matter limitation of Section 102(b) of the statute, which limits what can actually be subject to copyright in the first place, and fair use, which limits what uses of the work the copyright owner can exclude.

Both of these limitations are at issue before the Court: whether Oracle could even claim a copyright monopoly over the API in the first place, and, even if it could, whether that copyright could allow it to prevent other people from freely using the software's API to make their own interoperable software, or whether that would be fair use. The complication in this case is that there's a special section in the copyright statute – Section 117 – that enunciates other exceptions to the reach of software copyrights, owing to the unique nature that makes software different from other sorts of copyrightable works. Oracle argued that this section exhaustively articulated the limits to its software copyright, but if this view were correct it would mean that software copyright would not be subject to any of the other limitations that have always applied to every other form of copyright.

Worse, not only would such a conclusion be bad policy that would deter future software development – the sort of authorship a software copyright is supposed to incentivize – but it would also constitute a significant change from the status quo.

The one bit of good news to report is that at least Justice Sotomayor recognized this issue. In particular she observed how out of step the Federal Circuit's decisions had been with those of most other courts that had considered whether APIs could be subject to copyright. Their consensus had been no, and the freedom this view afforded software developers to make their software interoperable has enabled an entire industry to take root. In her questioning Justice Sotomayor appeared to recognize how badly the Supreme Court would threaten that industry if it adopted the Federal Circuit's decisions in favor of Oracle's copyright claims, given how significantly they departed from the previously understood reach of a software copyright.

JUSTICE SOTOMAYOR: Counsel, at the --in your beginning statement, you had the sky falling if we ruled in favor of Google. The problem with that argument for me is that it seems that since 1992, and Justice Kagan mentioned the case, the Second Circuit case, a Ninth Circuit case, an Eleventh Circuit case, a First Circuit case, that a basic principle has developed in the case law, up until the Federal Circuit's decision. I know there was a Third Circuit decision earlier on in the 1980s. But the other circuits moved away from that. They and the entire computer world have not tried to analogize computer codes to other methods of expression because it's sui generis. They've looked at its functions, and they've said the API, the Application Programming Interface, of which the declaring code is a part, is not copyrightable. Implementing codes are. And on that understanding, industries have built up around applications that know they can -- they can copy only what's necessary to run on the application, but they have to change everything else. That's what Google did here. That's why it took less than 1 percent of the Java code. So I guess that's the way the world has run in every other system. Whether it's Apple's desktop or Amazon's web services, everybody knows that APIs are not -- declaring codes are not copyrightable. Implementing codes are. So please explain to me why we should now upend what the industry has viewed as the copyrightable elements and has declared that some are methods of operation and some are expressions. Why should we change that understanding? [transcript p. 52-53.]
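For readers outside the software world, it may help to see what the "declaring code" and "implementing code" Justice Sotomayor refers to actually look like. Below is a minimal, purely hypothetical Java sketch – invented for illustration, not drawn from the Java SE libraries at issue in the case – showing the distinction:

```java
// Hypothetical illustration only: a simplified analogue of the kind of
// method found in a class library, showing which part is "declaring
// code" and which is "implementing code."
public final class MathUtils {

    // Declaring code: the method's name, parameter types, and return
    // type. A developer building an interoperable library must
    // reproduce this line exactly, or programs written against the
    // original library will not run on the new one.
    public static int max(int a, int b) {

        // Implementing code: the logic that does the actual work.
        // A reimplementer writes this part from scratch and can
        // structure it however they like.
        return (a >= b) ? a : b;
    }
}
```

Under the consensus view Justice Sotomayor describes, only the implementing code is copyrightable expression; the declaration is a method of operation that others must remain free to reuse for compatibility, which is why Google could copy the declarations while writing its own implementations.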

The question is whether her skepticism about Oracle's copyright claim is one that will be adopted by the rest of the justices, or whether sometime later this term we'll be writing even more posts about how the Supreme Court has let everyone down on this front too.


Posted on Techdirt - 23 September 2020 @ 1:37pm

Busting Still More Myths About Section 230, For You And The FCC

from the human-readable-advocacy dept

The biggest challenge we face in advocating for Section 230 is how misunderstood it is. Instead of getting to argue about its merits, we usually have to spend our time disabusing people of their mistaken impressions about what the statute does and how. If people don't get that part right then we'll never be able to have a meaningful conversation about the appropriate role it should have in tech policy.

It's particularly a problem when it's a federal agency getting these things wrong. In our last comment to the FCC we therefore took issue with some of the worst falsehoods the NTIA had asserted in its petition demanding that the FCC somehow seize imaginary authority it doesn't actually have to change Section 230. But after reading a number of the public comments filed in support of that petition, it became clear there was more to say to address these misapprehensions about the law.

The record developed in the opening round of comments in the [FCC's] rulemaking reflects many opinions about Section 230. But opinions are not facts, and many of these opinions reflect a fundamental misunderstanding of how Section 230 works, why we have it, and what is at risk if it is changed.

These misapprehensions should not become the basis of policy because they cannot possibly be the basis of *good* policy. To help ensure they will not be the predicate for any changes to Section 230, the Copia Institute submits this reply comment to address some of the recurrent myths surrounding Section 230, which should not drive policy, and reaffirm some fundamental truths, which should.

Our exact reply comment is attached below. But because it isn't just these agencies we want to make sure understand how this important law works, instead of merely summarizing the comment we're including a version of it in full here.

As we told the FCC, there are several complaints that recur throughout the criticism leveled at Section 230. Unfortunately, most of these complaints are predicated on fundamental misunderstandings of why we have Section 230 or how it works. What follows is an attempt to dispel many of these myths and to explain what is at risk in making changes to Section 230 – especially any changes born out of these misunderstandings.

To begin with, one type of flawed argument against Section 230 tends to be premised on the incorrect notion that Section 230 was intended to be some sort of Congressional handout designed to subsidize a nascent Internet. The thrust of the argument is that now that the Internet has become more established, Section 230 is no longer necessary and thus should be repealed. But there are several problems with this view.

For one thing, it is technically incorrect. Prodigy, the platform jeopardized by the Stratton Oakmont decision, which prompted the passage of Section 230, was already more than ten years old by that point and handling large amounts of user-generated content. It was also owned by large corporate entities (Sears and IBM). It is true that Congress was worried that if Prodigy could be held liable for its users' content it would jeopardize the ability for new service providers to come into being. But the reason Congress had that concern was because of how that liability threatened the service providers that already existed. In other words, it is incorrect to frame Section 230 as a law designed to only foster small enterprises; from the very beginning it was intended to protect entrenched corporate incumbents, as well as everything that would follow.

Indeed, the historical evidence bears out this concern. For instance, in the United States, where, at least until now, platform protection has been much more robust, investment in new technologies and services has vastly outpaced that in Europe. (See the Copia Institute's whitepaper Don't Shoot the Message Board for more information along these lines.) Even within the United States there is a correlation between the success of new technologies and services and the strength of the available platform protection: those that rely on the much more robust Section 230 immunity do much better than those that depend on the much weaker Digital Millennium Copyright Act safe harbors.

Next, it is also incorrect to say that Section 230 was intended to be a subsidy for any particular enterprise, or even any particular platform. Nothing in the language of Section 230 causes it to apply only to corporate interests. Per Section 230(f)(2) the statute applies to anyone meeting the definition of a service provider, as well as any user of a service provider. Many service providers are small or non-profit, and, as we've discussed before, can even be individuals. Section 230 applies to them all, and all will be harmed if its language is changed.

Indeed, the point of Section 230 was not to protect platforms for their own sake but to protect the overall health of the Internet itself. Protecting platforms was simply the step Congress needed to take to achieve that end. It is clear from the preamble language of Section 230(a) and (b), as well as the legislative history, that what Congress really wanted to do with Section 230 was simultaneously encourage the most good online expression and the least bad. It accomplished this by creating a two-part immunity that shields platforms both from liability arising from carrying speech and from liability for removing it.

By pursuing a regulatory approach that was essentially carrot-based, rather than stick-based, Congress left platforms free to do the best they could to vindicate both goals: intermediating the most beneficial speech and allocating their resources most efficiently to minimize the least desirable. As we and others have pointed out many times, including in our earlier FCC comment, even being exonerated of liability for user content can be cripplingly expensive. Congress did not want platforms to be obliterated by the costs of having to defend themselves against liability for their users' content, or to have their resources co-opted by the need to minimize their own liability instead of being able to direct them to running a better service. If platforms had to fear liability for either their hosting or their moderation efforts, it would force them to do whatever they needed to protect themselves, at the expense of being effective partners in achieving Congress's twin aims.

This basic policy math remains just as true in 2020 as it was in the 1990s, which is why it is so important to resist these efforts to change the statute. Undermining Section 230's strong platform protections will only undermine the overall health of the Internet and do nothing to help there be more good content and less bad online, which even the statute's harshest critics often at least ostensibly claim to want.

While some have argued that platforms that fail to be optimal partners in meeting Congress's desired goals should lose the benefit of Section 230's protection, there are a number of misapprehensions baked into this view. One misapprehension is that Section 230 contains any sort of requirement for how platforms moderate their user content; it does not. Relatedly, it is a common misconception that Section 230 hinges on some sort of "platform v. publisher" distinction, immunizing only "neutral platforms" and not anyone who would qualify as a "publisher." People often mistakenly believe that a "publisher" is the developer of the content, and thus not protected by Section 230. In reality, however, as far as Section 230 is concerned, platforms and publishers are one and the same, and therefore all are protected by the statute. The term "publisher" that appears in certain court decisions merely reflects the understanding of the word "publisher" to mean "one that makes public," which is of course the essential function a platform performs in distributing others' speech. But content distribution is not the same thing as content creation. Section 230 would not apply to the latter, but it absolutely applies to the former, even if the platform has made editorial decisions with respect to that distribution. Those choices still do not amount to content creation.

In addition, the idea that a platform's moderation choices can jeopardize its Section 230 protection misses the fact that it is not Section 230 that gives platforms the right to moderate however they see fit. As we explained in our previous comment and on many other occasions, the editorial discretion behind content moderation decisions is protected by the First Amendment, not Section 230. Eliminating Section 230 will not take away platforms' right to exercise that discretion. What it will do, however, is make it practically impossible for platforms to avail themselves of this right, because it will force them to expend their resources defending themselves. They might eventually win, but, as we explained earlier, even exoneration can be an extinction-level event for a platform.

Furthermore, it would effectively eviscerate the benefit of the statute if its protection were conditional. The point of Section 230 is to protect platforms from the crippling costs of litigation; if they had to litigate to find out whether they were protected or not, there would be no benefit and it would be as if there were no Section 230 at all. Given the harms to the online ecosystem Section 230 was designed to forestall, this outcome should be avoided.

All of this boils down to an essential truth: the NTIA petition should be rejected, and so should any other effort to change Section 230, especially one that embraces these misunderstandings.


Posted on Techdirt - 15 September 2020 @ 3:34pm

Because Too Many People Still Don't Know Why The EARN IT Bill Is Terrible, Here's A Video

from the AV dept

The biggest problem with all the proposals to reform Section 230 is that way too many people don't understand *why* they are a terrible idea. And the EARN IT bill is one of the worst of the worst, because it breaks not just Section 230 but so much more, yet too many people remain oblivious to the issues.

Obviously there's more education to be done, and towards that end Stanford's Riana Pfefferkorn and I recently gave this presentation at the Crypto and Privacy Village at Defcon. The first part is a crash course in Section 230 and how it does the important work it does in protecting the online ecosystem. The second part is an articulation of all the reasons the EARN IT bill in particular is terrible and the specific damage it would do to encryption and civil liberties, along with ruining Section 230 and everything important that it advances.

We'll keep explaining in every way we can why Section 230 should be preserved and the EARN IT bill should be repudiated, but if you're the kind of person who prefers AV explanations, then this video is for you.

(Note: there's a glitch in the video at the beginning. Once it goes dark, skip ahead to about 3 minutes 20 seconds and it will continue.)



