Eric Goldman's Techdirt Profile

Posted on Techdirt - 26 June 2023 @ 11:55am

California’s Journalism Protection Act Is An Unconstitutional Clusterfuck Of Terrible Lawmaking

The California legislature is competing with states like Florida and Texas to see who can pass laws that will be more devastating to the Internet. California’s latest entry into this Internet death-spiral is the California Journalism Protection Act (CJPA, AB 886). CJPA has passed the California Assembly and is pending in the California Senate.

The CJPA engages with a critical problem in our society: how to ensure the production of socially valuable journalism in the face of the Internet’s changes to journalists’ business models? The bill declares, and I agree, that a “free and diverse fourth estate was critical in the founding of our democracy and continues to be the lifeblood for a functioning democracy…. Quality local journalism is key to sustaining civic society, strengthening communal ties, and providing information at a deeper level that national outlets cannot match.” Given these stakes, politicians should prioritize developing good-faith and well-researched ways to facilitate and support journalism. The CJPA is none of that.

Instead, the CJPA takes an asinine, ineffective, unconstitutional, and industry-captured approach to this critical topic. The CJPA isn’t a referendum on the importance of journalism; instead, it’s a test of our legislators’ skills at problem-solving, drafting, and helping constituents. Sadly, the California Assembly failed that test.

Overview of the Bill

The CJPA would make some Big Tech services pay journalists for using snippets of their content and providing links to the journalists’ websites. This policy approach is sometimes called a “link tax,” but that’s a misnomer. Tax dollars go to the government, which can then allocate the money to (in theory) advance the public good—such as funding journalism.

The CJPA bypasses the government’s intermediation and supervision of these cash flows. Instead, it pursues a policy worse than socialism. CJPA would compel some bigger online publishers (called “covered platforms” in the bill) to transfer some of their wealth directly to other publishers, which the bill calls “digital journalism providers” (DJPs)—intended to be journalistic operations, but most of the dollars will go to vulture capitalists’ stockholders and MAGA-clickbait outlets like Breitbart.

In an effort to justify this compelled wealth transfer, the bill manufactures a new intellectual property right—sometimes called an “ancillary copyright for press publishers“—in snippets and links and then requires the platforms to pay royalties (euphemistically called “journalism usage fee payments”) for the “privilege” of publishing ancillary-copyrighted material. The platforms aren’t allowed to reject or hide DJPs’ content, so they must show the content to their audiences and pay royalties even if they don’t want to.

The bill contemplates that the royalty amounts will be set by an “arbitrator” who will apply baseball-style “arbitration,” i.e., the valuation expert picks one of the parties’ proposals. “Arbitrator” is another misnomer; the so-called arbitrators are just setting valuations.

DJPs must spend 70% of their royalty payouts on “news journalists and support staff,” but that money won’t necessarily fund NEW INCREMENTAL journalism. The bill explicitly permits the money to be spent on administrative overhead instead of actual journalism. With the influx of new cash, DJPs can divert their current spending on journalists and overhead into the owners’ pockets. Recall how the COVID stimulus programs directly led to massive stock buybacks that put the government’s cash into the hands of already-wealthy stockholders—same thing here. Worse, journalist operations may become dependent on the platforms’ royalties, which could dry up with little warning (e.g., a platform could drop below CJPA’s statutory threshold). We should encourage journalists to build sustainable business models. CJPA does the opposite.

Detailed Analysis of the Bill Text

Who is a Digital Journalism Provider (DJP)? 

A print publisher qualifies as a DJP if it:

  • “provide[s] information to an audience in the state.” Is a single reader in California an “audience”? By mandating royalty payouts despite limited ties to California, the bill ensures that many/most DJPs will not be California-based or have any interest in California-focused journalism.
  • “performs a public information function comparable to that traditionally served by newspapers and other periodical news publications.” What publications don’t serve that function?
  • “engages professionals to create, edit, produce, and distribute original content concerning local, regional, national, or international matters of public interest through activities, including conducting interviews, observing current events, analyzing documents and other information, or fact checking through multiple firsthand or secondhand news sources.” This is an attempt to define “journalists,” but what publications don’t “observe current events” or “analyze documents or other information”?
  • updates its content at least weekly.
  • has “an editorial process for error correction and clarification, including a transparent process for reporting errors or complaints to the publication.”
  • has:
    • $100k in annual revenue “from its editorial content,” or
    • an ISSN (good news for me; my blog ISSN is 2833-745X), or
    • is a non-profit organization
  • 25%+ of content is about “topics of current local, regional, national, or international public interest.” Again, what publications don’t do this?
  • is not foreign-owned, terrorist-owned, etc.

If my blog qualifies as an eligible DJP, the definition of DJPs is surely over-inclusive.
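
To make the over-inclusiveness concrete, here is a minimal sketch of the print-publisher test as I read the criteria above. It is illustrative only; the field names and the sample “small blog” record are my assumptions, not statutory text.

```python
# Illustrative sketch of the CJPA's print-publisher DJP test described above.
# Field names and the sample record are hypothetical, not statutory language.

def qualifies_as_print_djp(pub: dict) -> bool:
    california_audience = pub["california_readers"] > 0       # "audience in the state" is undefined
    public_info_function = pub["covers_public_interest"]      # nearly every publication
    engages_professionals = pub["interviews_or_analyzes"]     # nearly every publication
    updates_weekly = pub["updates_per_week"] >= 1
    error_process = pub["has_correction_process"]
    size_or_status = (pub["annual_editorial_revenue"] >= 100_000
                      or pub["has_issn"]
                      or pub["is_nonprofit"])
    public_interest_share = pub["share_public_interest_content"] >= 0.25
    not_excluded = not pub["foreign_or_terrorist_owned"]
    return all([california_audience, public_info_function, engages_professionals,
                updates_weekly, error_process, size_or_status,
                public_interest_share, not_excluded])

# A single-author law blog with an ISSN and a handful of California readers qualifies:
small_blog = dict(california_readers=5, covers_public_interest=True,
                  interviews_or_analyzes=True, updates_per_week=3,
                  has_correction_process=True, annual_editorial_revenue=0,
                  has_issn=True, is_nonprofit=False,
                  share_public_interest_content=1.0,
                  foreign_or_terrorist_owned=False)
print(qualifies_as_print_djp(small_blog))  # True
```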

Broadcasters qualify as DJPs if they:

  • have the specified FCC license,
  • engage journalists (like the factor above),
  • update content at least weekly, and
  • have error correction processes (like the factor above).

Who is a Covered Platform?

A service is a covered platform if it:

  • Acquires, indexes, or crawls DJP content,
  • “Aggregates, displays, provides, distributes, or directs users” to that content, and
  • Either
    • Has 50M+ US-based MAUs or subscribers, or
    • Its owner has (1) net annual sales or a market cap of $550B+ OR (2) 1B+ worldwide MAUs.

(For more details about the problems created by using MAUs/subscribers and revenues/market cap to measure size, see this article).
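
As a rough illustration of how the size test works, here is a minimal sketch using the thresholds listed above; the parameter names are my shorthand, and treating “MAUs or subscribers” as a single number is a simplifying assumption.

```python
def is_covered_platform(handles_djp_content: bool,
                        us_maus_or_subscribers: int,
                        owner_net_sales_or_market_cap: float,
                        owner_worldwide_maus: int) -> bool:
    """Illustrative size test per the criteria above (not statutory text)."""
    if not handles_djp_content:  # must acquire/index/crawl AND aggregate/display/direct users to DJP content
        return False
    big_service = us_maus_or_subscribers >= 50_000_000
    big_owner = (owner_net_sales_or_market_cap >= 550_000_000_000
                 or owner_worldwide_maus >= 1_000_000_000)
    return big_service or big_owner

print(is_covered_platform(True, 60_000_000, 10e9, 200_000_000))  # True: 50M+ US MAUs
print(is_covered_platform(True, 5_000_000, 600e9, 0))            # True: owner exceeds $550B
print(is_covered_platform(True, 5_000_000, 10e9, 200_000_000))   # False: under every threshold
```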

How is the “Journalism Usage Fee”/Ancillary Copyright Royalty Computed?

The CJPA creates a royalty pool of the “revenue generated through the sale of digital advertising impressions that are served to customers in the state through an online platform.” I didn’t understand the “impressions” reference. Publishers can charge for advertising in many ways, including ad impressions (CPM), clicks, actions, fixed fee, etc. Does the definition only include CPM-based revenue? Or all ad revenue, even if impressions aren’t used as a payment metric? There’s also the standard problem of apportioning ad revenue to “California.” Some readers’ locations won’t be determinable or will be wrong; and it may not be possible to disaggregate non-CPM payments by state.

Each platform’s royalty pool is reduced by a flat percentage, nominally to convert ad revenues from gross to net. This percentage is determined by a valuation-setting “arbitration” every 2 years (unless the parties reach an agreement). The valuation-setting process is confusing because it contemplates that all DJPs will coordinate their participation in a single “arbitration” per platform, but the bill doesn’t provide any mechanisms for that coordination. As a result, it appears that different groups of DJPs could band together and initiate their own customized “arbitrations,” which could multiply the proceedings and produce inconsistent results.

The bill tells the valuation-setter to:

  • Ignore any value conferred by the platform to the DJPs due to the traffic referrals, “unless the covered platform does not automatically access and extract information.” This latter exclusion is weird. For example, if a user posts a link to a third-party service, the platform could argue that this confers value to the DJP only if the platform doesn’t show an automated preview.
    • Note: In a typical open-market transaction, the parties always consider the value they confer on each other when setting the price. By unbalancing those considerations, the CJPA guarantees the royalties will overcompensate DJPs.
  • “Consider past incremental revenue contributions as a guide to the future incremental revenue contribution” by each DJP. No idea what this means.
  • Consider “comparable commercial agreements between parties granting access to digital content…[including] any material disparities in negotiating power between the parties to those commercial agreements.” I assume the analogous agreements will come from music licensing?

Each DJP is entitled to a percentage, called the “allocation share,” of the “net” royalty pool. It’s computed using this formula: (the number of pages linking to, containing, or displaying the DJP’s content to Californians) / (the total number of pages linking to, containing, or displaying any DJP’s content to Californians). Putting aside the problems with determining which readers are from California, this formula ignores that a single page may have content from multiple DJPs. Because each such page counts once in the denominator but once per DJP in the numerators, the allocation share percentages can cumulatively add up to more than 100% of the net royalty pool calculated by the valuation-setters. In other words, the formula ensures the unprofitability of publishing DJP content. For-profit companies typically exit unprofitable lines of business.
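
Here is a toy computation of the pool and the allocation shares described above. Every number is invented; the point is only to show how pages carrying multiple DJPs’ content push the shares past 100%.

```python
# Toy illustration of the allocation-share formula described above. All numbers
# are invented. Pages carrying content from multiple DJPs count once in the
# denominator but once per DJP in the numerators, so the shares can exceed 100%.

california_ad_revenue = 1_000_000          # hypothetical gross pool
arbitrated_reduction = 0.40                # hypothetical gross-to-net percentage
net_pool = california_ad_revenue * (1 - arbitrated_reduction)   # 600,000

# Pages shown to Californians that link to / contain / display each DJP's content.
pages_by_djp = {"DJP A": 800, "DJP B": 600, "DJP C": 400}
# Suppose 300 of those pages carry all three DJPs' content at once, so the
# number of distinct pages carrying any DJP's content is:
pages_with_any_djp_content = 800 + 600 + 400 - 2 * 300          # 1,200

shares = {d: n / pages_with_any_djp_content for d, n in pages_by_djp.items()}
payouts = {d: s * net_pool for d, s in shares.items()}

print(round(sum(shares.values()), 4))   # 1.5 -> the shares total 150% of the net pool
print(round(sum(payouts.values()), 2))  # 900000.0 -> exceeds the 600000.0 net pool
```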

Elimination of Platforms’ Editorial Discretion

The CJPA has an anti-“retaliation” clause that nominally prevents platforms from reducing their financial exposure:

(a) A covered platform shall not retaliate against an eligible digital journalism provider for asserting its rights under this title by refusing to index content or changing the ranking, identification, modification, branding, or placement of the content of the eligible digital journalism provider on the covered platform.

(b) An eligible digital journalism provider that is retaliated against may bring a civil action against the covered platform.

(c) This section does not prohibit a covered platform from, and does not impose liability on a covered platform for, enforcing its terms of service against an eligible journalism provider.

This functions as a must-carry mandate. It forces platforms to carry content they don’t want to carry and don’t think is appropriate for their audience—at peril of being sued for retaliation. In other words, any editorial decision that is adverse to any DJP creates a non-trivial risk of a lawsuit alleging that the decision was retaliatory. It doesn’t really change the calculus if the platform might ultimately prevail in the lawsuit; the costs and risks of being sued are enough to prospectively distort the platform’s decision-making.

[Note: section (c) doesn’t negate this issue at all. It simply converts a litigation battle over retaliation into a battle over whether the DJP violated the TOS. Platforms could try to eliminate the anti-retaliation provision by drafting TOS provisions broad enough to provide them with total editorial flexibility. However, courts might consider such broad drafting efforts to be bad faith non-compliance with the bill. Further, unhappy DJPs will still claim that broad TOS provisions were selectively enforced against them due to the platform’s retaliatory intent, so even tricky TOS drafting won’t eliminate the litigation risk.]

Thus, CJPA rigs the rules in favor of DJPs. The financial exposure from the anti-retaliation provision, plus the platform’s reduced ability to cater to the needs of its audience, further incentivizes platforms to drop all DJP content entirely or otherwise substantially reconfigure their offerings.

Limitations on DJP Royalty Spending

DJPs must spend 70% of the royalties on “news journalists and support staff.” Support staff includes “payroll, human resources, fundraising and grant support, advertising and sales, community events and partnerships, technical support, sanitation, and security.” This indicates that a DJP could spend the CJPA royalties on administrative overhead, spend a nominal amount on new “journalism,” and divert all other revenue to its capital owners. The CJPA doesn’t ensure any new investments in journalism or discourage looting of journalist organizations. Yet, I thought supporting journalism was CJPA’s raison d’être.
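
A toy example of the substitution problem, with invented figures: a DJP can satisfy the 70% rule entirely with overhead it already planned to incur, keep its journalism spending flat, and pass the full royalty through to its owners.

```python
# Invented figures illustrating the substitution problem described above.
existing_journalism_budget = 1_000_000   # hypothetical pre-CJPA spend on journalists and support staff
cjpa_royalties = 500_000                 # hypothetical CJPA payout

required_spend = cjpa_royalties * 70 // 100   # 350,000 must go to "news journalists and support staff"
# Because "support staff" covers payroll, HR, fundraising, sales, sanitation, security, etc.,
# the DJP can aim the 350,000 at overhead it already planned to incur and cut its own budget
# by the same amount, leaving total journalism-and-support spending unchanged.
freed_from_existing_budget = required_spend
total_journalism_and_support_spend = existing_journalism_budget   # unchanged
owner_distribution = freed_from_existing_budget + (cjpa_royalties - required_spend)

print(total_journalism_and_support_spend)  # 1000000 -> no new journalism funded
print(owner_distribution)                  # 500000  -> the entire royalty flows to the owners
```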

Why CJPA Won’t Survive Court Challenges

If passed, the CJPA will surely be subject to legal challenges, including:

Restrictions on Editorial Freedom. The CJPA mandates that the covered platforms must publish content they don’t want to publish—even anti-vax misinformation, election denialism, clickbait, shill content, and other forms of pernicious or junk content.

Florida and Texas recently imposed similar must-carry obligations in their social media censorship laws. The Florida social media censorship law specifically restricted platforms’ ability to remove journalist content. The 11th Circuit held that the provision triggered strict scrutiny because it was content-based. The court then said the journalism-protection clause failed strict scrutiny—and would have failed even lower levels of scrutiny because “the State has no substantial (or even legitimate) interest in restricting platforms’ speech… to ‘enhance the relative voice’ of… journalistic enterprises.” The court also questioned the tailoring fit. I think CJPA raises the same concerns. For more on this topic, see Ashutosh A. Bhagwat, Why Social Media Platforms Are Not Common Carriers, 2  J. Free Speech L. 127 (2022).

Note: the Florida bill required platforms to carry the journalism content for free. CJPA would require platforms to pay for the “privilege” of being forced to carry journalism content, wanted or not. CJPA’s skewed economics denigrate editorial freedom even more grossly than Florida’s law.

Copyright Preemption. The CJPA creates copyright-like protection for snippets and links. Per 17 USC 301 (the copyright preemption clause), only Congress has the power to provide copyright-like protection for works, including works that do not contain sufficient creativity to qualify as an original work of authorship. Content snippets and links individually aren’t original works of authorship, so they don’t qualify for federal copyright protection, and states cannot grant them copyright-like protection either; any compilation copyright is within federal copyright’s scope and therefore is also off-limits to state protection.

The CJPA governs the reproduction, distribution, and display of snippets and links, and federal copyright law governs those same activities in 17 USC 106. CJPA’s provisions thus overlap with Section 106’s exclusive rights, and the works fall within the subject matter of federal copyright law. That combination is exactly what federal copyright preemption forbids states from regulating.

Section 230. Most or all of the snippets/links governed by the CJPA will constitute third-party content, including search results containing third-party content and user-submitted links where the platform automatically fetches a preview from the DJP’s website. Thus, CJPA runs afoul of Section 230 in two ways. First, it treats the covered platforms as the “publishers or speakers” of those snippets and links for purposes of the allocation share. Second, the anti-retaliation claim imposes liability for removing/downgrading third-party content, which courts have repeatedly said is covered by Section 230 (in addition to the First Amendment).

DCC. I believe the Dormant Commerce Clause should always apply to state regulation of the Internet. In this case, the law repeatedly contemplates platforms determining which users are within California’s virtual borders, a determination that will always have an irreducible error rate. Those errors guarantee that the law reaches activity outside of California.

Takings. I’m not a takings expert, but a government-compelled wealth transfer from one private party to another sounds like the kind of thing our country’s founders would have wanted to revolt against.

Conclusion

Other countries have attempted “link taxes” like CJPA. I’m not aware of any proof that those laws have accomplished their goal of enhancing local journalism. Knowing the track record of global futility, why do the bill’s supporters think CJPA will achieve better results? Because of their blind faith that the bill will work exactly as they anticipate? Their hatred of Big Tech? Their desire to support journalism, even if it requires using illegitimate means?

Our country absolutely needs a robust and well-functioning journalism industry. Instead of making progress towards that vital goal, we’re wasting our time futzing with crap like CJPA.

For more reasons to oppose the CJPA, see the Chamber of Progress page.

Originally posted to Eric Goldman’s Technology & Marketing Law Blog, reposted here with permission, and (thankfully, for the time being) without having to pay Eric to link back to his original even though he qualifies as a “DJP” under this law.

Posted on Techdirt - 18 May 2023 @ 01:42pm

Two Common But Disingenuous Phrases About Section 230

This blog post is about the following two phrases:

  • “[T]he Communications Decency Act was not meant to create a lawless no-man’s-land on the Internet.” This phrase originated in Kozinski’s Roommates.com en banc opinion. Including Roommates.com, I found 22 cases (25 opinions total) using the phrase (see Appendix A).
  • “Congress has not provided an all purpose get-out-of-jail-free card for businesses that publish user content on the internet.” This phrase originated in the Doe v. Internet Brands opinion. Including the Internet Brands case, I found six cases using the phrase (see Appendix B).

Both phrases suffer from the same defect: they “refute” strawman arguments. In fact, no one has ever advanced the propositions the phrases purport to rebut; and if anyone did advance those propositions, the speaker would demonstrate their ignorance about Section 230. So the rhetorical flourishes in the phrases are just that; they aren’t substantive arguments.

Why are the refuted arguments strawmen? Let me explain:

OF COURSE Section 230 did not create a “lawless no-man’s-land” zone. Section 230, from day 1, always (1) retained all legal obligations for content originators, who definitely do not operate in a lawless zone, and (2) contained statutory exclusions for IP, ECPA, and federal criminal prosecutions, which means Internet services have always faced liability pursuant to those doctrines.

[Note: The “no-man’s-land” reference is also weird. Putting aside the gender skew, this concept usually refers to territory between opposing forces’ trenches in World War I, where any person entering the zone would be machine-gunned to death by the opposing force and thus humans cannot survive there for long. How does that relate to Section 230? It doesn’t. If anything, Section 230’s immunity creates a zone that’s overpopulated with content that might not otherwise exist online–the opposite of a “no-man’s-land.” The metaphor makes no sense.]

Similarly, OF COURSE Section 230 does not provide an “all purpose get-out-of-jail-free card.” In fact, Section 230 has always excluded federal criminal prosecutions and their potential risk of jailtime, so Section 230 literally is the opposite of a “get-out-of-jail-free” card. (The First Amendment significantly limits prosecutions against publishers, so the First Amendment acts more like a get-out-of-jail-free card than Section 230). Section 230 does block state criminal prosecutions, but that’s not an “all purpose” card and it’s the entire point of a statutory immunity (i.e., to remove liability that otherwise exists to facilitate other socially beneficial activity while preserving criminal liability for the primary wrongdoer).

[Note: the phrase “get-out-of-jail-free card” is generally associated with the board game Monopoly, but the Wikipedia page lists a reference back to the 16th Century.]

I hope this post makes clear why I get so irritated whenever I see the phrases referenced in a court opinion or invoked by a grandstanding politician. By attacking a strawman argument, they confirm the weakness of their argumentation and that they don’t have more persuasive arguments to make–a good tipoff that they are embracing a dubious result and are grasping for any justification, no matter how tenuous. Indeed, many of the cases enumerated below are best remembered for their contorted reasoning to reach questionable rulings for the plaintiffs (and the fact that several opinions show up in both appendices is a strong indicator of how shaky those specific rulings were).

In my dream world, a post like this proves the folly of using these phrases and discourages their further invocation. If you’ve ever uttered one of these phrases, I hope that ends today.

* * *

BONUS: Along the lines of the Internet as a “lawless” zone, I am similarly perplexed by characterizations of the Internet as a “Wild West.” I found 11 cases in Westlaw for the search query “internet /s ‘wild west’” (see, e.g., Ganske v. Mensch), though the usages vary, and I find this metaphor is more commonly used in popular rhetoric. The reference makes no sense because, if anything, there is too much law governing the Internet, not too little. Further, the “Wild West” metaphor more accurately suggests an underenforcement of existing laws, i.e., there were laws governing communities in the Western US, but sometimes they were practically unenforceable due to the scarcity of law enforcement officials and the difficulties of gathering evidence. This led to the development of alternative means of enforcing rules, including vigilantism. If you are using the “Wild West” metaphor, I assume you are implicitly calling for greater enforcement of existing laws. If you are invoking it to suggest that the Internet lacks governing laws, I strongly disagree.

* * *

Appendix A: Cases Using the Phrase “Lawless No-Man’s-Land”

Fair Housing Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157 (9th Cir. April 3, 2008)

Milo v. Martin, 311 S.W.3d 210 (Tex. Ct. App. April 29, 2010)

Hill v. StubHub, Inc., 2011 WL 1675043 (N.C. Superior Ct. Feb. 28, 2011)

Hare v. Richie, 2012 WL 3773116 (D. Md. Aug. 29, 2012)

Ascend Health Corp. v. Wells, 2013 WL 1010589 (E.D.N.C. March 14, 2013)

Jones v. Dirty World Entertainment Recordings, LLC, 965 F.Supp.2d 818 (E.D. Ky. Aug. 12, 2013)

J.S. v. Village Voice Media Holdings, L.L.C., 184 Wash.2d 95 (Wash. Sept. 3, 2015)

Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. May 31, 2016)

Fields v. Twitter, Inc., 200 F.Supp.3d 964 (N.D. Cal. Aug. 10, 2016)

Fields v. Twitter, Inc., 217 F.Supp.3d 1116 (N.D. Cal. Nov. 18, 2016)

Gonzalez v. Google, Inc., 282 F.Supp.3d 1150 (N.D. Cal. Oct. 23, 2017)

Daniel v. Armslist, LLC, 382 Wis.2d 241 (Wis. Ct. App. April 19, 2018)

Gonzalez v. Google, Inc., 335 F.Supp.3d 1156 (N.D. Cal. Aug. 15, 2018)

HomeAway.com, Inc. v. City of Santa Monica, 918 F.3d 676 (9th Cir. March 13, 2019)

Daniel v. Armslist, LLC, 386 Wis.2d 449 (Wis. April 30, 2019)

Airbnb, Inc. v. City of Boston, 386 F.Supp.3d 113 (D. Mass. May 3, 2019)

Turo Inc. v. City of Los Angeles, 2020 WL 3422262 (C.D. Cal. June 19, 2020)

Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. May 4, 2021)

In re Facebook, Inc., 625 S.W.3d 80 (Tex. June 25, 2021)

Doe v. Twitter, Inc., 555 F.Supp.3d 889 (N.D. Cal. Aug. 19, 2021)

Doe v. Mindgeek USA Inc., 574 F.Supp.3d 760 (C.D. Cal. Dec. 2, 2021)

Lee v. Amazon.com, Inc., 76 Cal.App.5th 200 (Cal. Ct. App. March 11, 2022)

Al-Ahmed v. Twitter, Inc., 603 F.Supp.3d 857 (N.D. Cal. May 20, 2022)

In re Apple Inc. App Store Simulated Casino-Style Games Litigation, 2022 WL 4009918 (N.D. Cal. Sept. 2, 2022)

Dangaard v. Instagram, LLC, 2022 WL 17342198 (N.D. Cal. Nov. 30, 2022)

* * *

Appendix B: Cases Using the Phrase “Get-Out-of-Jail-Free Card”

Doe No. 14 v. Internet Brands, Inc., 767 F.3d 894 (9th Cir. 2014), amended by Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016)

Airbnb, Inc. v. City and County of San Francisco, 217 F.Supp.3d 1066 (N.D. Cal. 2016)

Daniel v. Armslist, LLC, 382 Wis.2d 241 (Wis. Ct. App. 2018)

Doe v. Mindgeek USA Inc., 574 F.Supp.3d 760 (C.D. Cal. 2021)

Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. 2021)

In re Apple Inc. App Store Simulated Casino-Style Games, 2022 WL 4009918 (N.D. Cal. 2022)

Posted on Techdirt - 12 April 2023 @ 01:44pm

Recent Case Highlights How Age Verification Laws May Directly Conflict With Biometric Privacy Laws

California passed the California Age-Appropriate Design Code (AADC) nominally to protect children’s privacy, but at the same time, the AADC requires businesses to do an age “assurance” of all their users, children and adults alike. (Age “assurance” requires the business to distinguish children from adults, but the methodology to implement it has many of the same characteristics as age verification–it just needs to be less precise for anyone who isn’t around the age of majority. I’ll treat the two as equivalent).

Doing age assurance/age verification raises substantial privacy risks. There are several ways of doing it, but the two primary options for quick results are (1) requiring consumers to submit government-issued documents, or (2) requiring consumers to submit to face scans that allow the algorithms to estimate the consumer’s age.

[Note: the differences between the two techniques may be legally inconsequential, because a service may want a confirmation that the person presenting the government documents is the person requesting access, which may essentially require a review of their face as well.]

But are face scans really an option for age verification, or will they conflict with other privacy laws? In particular, face scanning seemingly conflicts directly with biometric privacy laws, such as Illinois’ BIPA, which impose substantial restrictions on the collection, use, and retention of biometric information. (California’s Privacy Rights Act, CPRA, which the AADC supplements, also provides substantial protections for biometric information, which is classified as “sensitive” information). If a business purports to comply with the CA AADC by using face scans for age assurance, will that business simultaneously violate BIPA and other biometric privacy laws?

Today’s case doesn’t answer the question, but boy, it’s a red flag.

The court summarizes BIPA Sec. 15(b):

Section 15(b) of the Act deals with informed consent and prohibits private entities from collecting, capturing, or otherwise obtaining a person’s biometric identifiers or information without the person’s informed written consent. In other words, the collection of biometric identifiers or information is barred unless the collector first informs the person “in writing of the specific purpose and length of term for which the data is being collected, stored, and used” and “receives a written release” from the person or his legally authorized representative

Right away, you probably spotted three potential issues:

  • The presentation of a “written release” slows down the process. I’ve explained how slowing down access to a website can constitute an unconstitutional barrier to content.
  • Will an online clickthrough agreement satisfy the “written release” requirement? Per E-SIGN, the answer should be yes, but standard requirements for online contract formation are increasingly demanding more effort from consumers to signal their assent. In all likelihood, BIPA consent would require, at minimum, a two-click process to proceed. (Click 1 = consent to the BIPA disclosures. Click 2 = proceeding to the next step. See the sketch after this list.)
  • Can minors consent on their own behalf? Usually contracts with minors are voidable by the minor, but even then, other courts have required the contracting process to be clear enough for minors to understand. That’s no easy feat when it relates to complicated and sensitive disclosures, such as those seeking consent to engage in biometric data collection. This raises the possibility that at least some minors can never consent to face scans on their own behalf, in which case it will be impossible to comply with BIPA with respect to those minors (and services won’t know which consumers are unable to self-consent until after they do the age assessment #InfiniteLoop).
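
To illustrate the two-click flow mentioned in the list above, here is a minimal sketch. The function names, disclosure text, and retention term are my own assumptions; nothing here is language mandated by BIPA or E-SIGN.

```python
# Hypothetical two-click consent flow, sketched from the analysis above. The disclosure
# fields track BIPA Sec. 15(b)'s requirement to state the specific purpose and retention
# term in writing and to obtain a written release before any biometric collection occurs.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class BipaRelease:
    user_id: str
    purpose: str
    retention_term: str
    consented_at: str


def click_one_record_release(user_id: str, accepted: bool) -> Optional[BipaRelease]:
    """Click 1: present the written disclosure and capture an electronic 'written release'."""
    if not accepted:
        return None  # no release, so no biometric collection may occur
    return BipaRelease(
        user_id=user_id,
        purpose="Estimate the user's age from a facial-geometry scan for age assurance",
        retention_term="Deleted within 24 hours of the age estimate",  # hypothetical term
        consented_at=datetime.now(timezone.utc).isoformat(),
    )


def click_two_proceed_to_scan(release: Optional[BipaRelease]) -> str:
    """Click 2: only after a stored release does the user advance to the scan step."""
    if release is None:
        return "blocked: no face scan without a written release"
    return f"proceed to face scan for {release.user_id} (release recorded {release.consented_at})"


release = click_one_record_release("user-123", accepted=True)
print(click_two_proceed_to_scan(release))
```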

[Another possible tension is whether the business can retain face scans, even with BIPA consent, in order to show that each user was authenticated if challenged in the future, or if the face scans need to be deleted immediately, regardless of consent, to comply with privacy concerns in the age verification law.]

The primary defendant at issue, Binance, is a cryptocurrency exchange. (There are two Binance entities at issue here, BCM and BAM, but BCM drops out of the case for lack of jurisdiction). Users creating an account had to go through an identity verification process run by Jumio. The court describes the process:

Jumio’s software…required taking images of a user’s driver’s license or other photo identification, along with a “selfie” of the user to capture, analyze and compare biometric data of the user’s facial features….

During the account creation process, Kuklinski entered his personal information, including his name, birthdate and home address. He was also prompted to review and accept a “Self-Directed Custodial Account Agreement” for an entity known as Prime Trust, LLC that had no reference to collection of any biometric data. Kuklinski was then prompted to take a photograph of his driver’s license or other state identification card. After submitting his driver’s license photo, Kuklinski was prompted to take a photograph of his face with the language popping up “Capture your Face” and “Center your face in the frame and follow the on-screen instructions.” When his face was close enough and positioned correctly within the provided oval, the screen flashed “Scanning completed.” The next screen stated, “Analyzing biometric data,” “Uploading your documents”, and “This should only take a couple of seconds, depending on your network connectivity.”

Allegedly, none of the Binance or Jumio legal documents make the BIPA-required disclosures.

The court rejects Binance’s (BAM) motion to dismiss:

  • Financial institution. BIPA doesn’t apply to a GLBA-regulated financial institution, but Binance isn’t one of those.
  • Choice of Law. BAM is based in California, so it argued CA law should apply. The court says no because CA law would foreclose the BIPA claim, plus some acts may have occurred in Illinois. Note: as a CA company, BAM will almost certainly need to comply with the CA AADC.
  • Extraterritorial Application. “Kuklinski is an Illinois resident, and…BIPA was enacted to protect the rights of Illinois residents. Moreover, Kuklinski alleges that he downloaded the BAM application and created the BAM account while he was in Illinois.”
  • Inadequate Pleading. BAM claimed the complaint lumped together BAM, BCM, and Jumio. The court says BIPA doesn’t have any heightened pleading standards.
  • Unjust Enrichment. The court says this is linked to the BIPA claim.

Jumio’s motion to dismiss also goes nowhere:

  • Retention Policy. Jumio says it now has a retention policy, but the court says that it may have been adopted too late and may not be sufficient.
  • Prior Settlement. Jumio already settled a BIPA case, but the court says that only could protect Jumio before June 23, 2019.
  • First Amendment. The court says the First Amendment argument against BIPA was rejected in Sosa v. Onfido and that decision was persuasive.

[The Sosa v. Onfido case also involved face-scanning identity verification for the service OfferUp. I wonder if the court would conduct the constitutional analysis differently if the defendant argued it had to engage with biometric information in order to comply with a different law, like the AADC?]

The court properly notes that this was only a motion to dismiss; defendants could still win later. Yet, this ruling highlights a few key issues:

1. If California requires age assurance and Illinois bans the primary methods of age assurance, there may be an inter-state conflict of laws that ought to support a Dormant Commerce Clause challenge. Plus, other states beyond Illinois have adopted their own unique biometric privacy laws, so interstate businesses are going to run into a state patchwork problem where it may be difficult or impossible to comply with all of the different laws.

2. More states are imposing age assurance/age verification requirements, including Utah and likely Arkansas. Often, like the CA AADC, those laws don’t specify how the assurance/verification should be done, leaving it to businesses to figure it out. But the legislatures’ silence on the process truly reflects their ignorance–the legislatures have no idea what technology will work to satisfy their requirements. It seems obvious that legislatures shouldn’t adopt requirements when they don’t know if and how they can be satisfied–or if satisfying the law will cause a different legal violation. Adopting a requirement that may be unfulfillable is legislative malpractice and ought to be evidence that the legislature lacked a rational basis for the law because they didn’t do even minimal diligence.

3. The clear tension between the CA AADC and biometric privacy is another indicator that the CA legislature lied to the public when it claimed the law would enhance children’s privacy.

4. I remain shocked by how many privacy policy experts and lawyers remain publicly quiet about age verification laws, or even tacitly support them, despite the OBVIOUS and SIGNIFICANT privacy problems they create. If you care about privacy, you should be extremely worried about the tsunami of age verification requirements being embraced around the country/globe. The invasiveness of those requirements could overwhelm and functionally moot most other efforts to protect consumer privacy.

5. Mandatory online age verification laws were universally struck down as unconstitutional in the 1990s and early 2000s. Legislatures are adopting them anyway, essentially ignoring the significant adverse caselaw. We are about to have a high-stakes society-wide reconciliation about this tension. Are online age verification requirements still unconstitutional 25 years later, or has something changed in the interim that makes them newly constitutional? The answer to that question will have an enormous impact on the future of the Internet. If the age verification requirements are now constitutional despite the legacy caselaw, legislatures will ensure that we are exposed to major privacy invasions everywhere we go on the Internet–and the countermoves of consumers and businesses will radically reshape the Internet, almost certainly for the worse.

Reposted with permission from Eric Goldman’s Technology & Marketing Law Blog.

Posted on Techdirt - 18 November 2022 @ 01:44pm

My Testimony To The Colombian Constitutional Court Regarding Online Account Terminations And Content Removals

This week, I testified remotely before the Colombian Constitutional Court in the case of Esperanza Gómez Silva c. Meta Platforms, Inc. y Facebook Colombia S.A.S. Expediente T-8.764.298. In a procedure I don’t understand, the court organized a public hearing to discuss the issues raised by the case. (The case involves Instagram’s termination of an adult film star’s account despite her account content allegedly never violating the TOS). My 15 minutes of testimony was based on this paper.

* * *

My name is Eric Goldman. I’m a professor at Santa Clara University School of Law, located in California’s Silicon Valley, where I hold the titles of Associate Dean for Research, Co-Director of the High Tech Law Institute, and Supervisor of the Privacy Law Certificate. I started practicing Internet Law in 1994 and first started teaching Internet Law in 1996. I thank the court for this opportunity to testify.

My testimony makes two points. First, I will explain the status of lawsuits regarding online account terminations and content removals in the United States. Second, I will explain why imposing legal restrictions on the ability of Internet services that gather, organize, and disseminate user-generated content (which I’ll call “UGC publishers”) to terminate accounts or remove content leads to unwanted outcomes.

Online Account Terminations and Content Removals in the US

In 2021, Jess Miers and I published an article in the Journal of Free Speech Law (run by the UCLA Law School) entitled “Online Account Terminations/Content Removals and the Benefits of Internet Services Enforcing Their House Rules.” The article analyzed all of the U.S. legal decisions we could find that addressed UGC publishers’ liability for terminating users’ accounts or removing users’ contents. When we finalized our dataset in early 2021, we found 62 opinions. There have been at least 15 more rulings since then.

What’s remarkable is how consistently plaintiffs have lost. No plaintiff has won a final ruling in court imposing liability on UGC publishers for terminating users’ accounts or removing users’ content. Though some recent regulatory developments in the U.S. seek to change this legal rule, those developments are currently being challenged in court and, in my opinion, will not survive the challenges.

It’s also remarkable why the plaintiffs have lost. Plaintiffs have attempted a wide range of common law, statutory, contract, and Constitutional claims, and courts have rejected those claims based on one or more of the following four grounds:

Prima Facie Elements. First, the claims may fail because the plaintiff cannot establish the prima facie elements of the claim. In other words, the law simply wasn’t designed to redress the plaintiffs’ concerns.

Contract. Second, the claims may fail because of the UGC publishers’ terms of service (called the “TOS”). TOSes often contain several provisions reinforcing the UGC publishers’ editorial freedom, including provisions expressly saying that (1) the UGC publisher can terminate accounts or remove content in its sole discretion, (2) it may freely change its editorial policies at any time, and (3) it doesn’t promise to apply its editorial policies perfectly. In the US, courts routinely honor such contract provisions, even if the TOS terms are non-negotiable and may seem one-sided.

Section 230. Third, the claims may fail because of a federal statute called “Section 230,” which says that websites aren’t liable for third-party content. Courts have treated the user-plaintiff content as “third-party” content to the UGC publisher, in which case Section 230 applies.

Constitution. Fourth, the claims may fail on Constitutional grounds. In the US, the Constitution restricts only the actions of government actors, not private entities. Therefore, users do not have any Constitutional protections from the editorial decisions of UGC publishers. Instead, the Constitution protects the UGC publishers’ freedoms of speech and press, and any government intrusion into their speech or press decisions must comport with the Constitution. Accordingly, in the US, UGC publishers do not “censor” users. Instead, any government effort to curtail UGC publishers’ account terminations or content removals constitutes censorship. This means the court cannot Constitutionally rule in favor of the plaintiffs.

This point bears emphasis. Any effort to force UGC publishers to publish accounts or content against their wishes would override the UGC publishers’ Constitutional protections. Unless the Supreme Court changes the law, this compelled publication is not permitted.

The Merits of UGC Publishers’ Editorial Discretion

I now turn to my second point. Giving unrestricted editorial discretion to UGC publishers may sometimes seem unfair. There is often a significant power imbalance between the “Tech Giants” and individual users, and this imbalance can leave aggrieved users without any apparent recourse for what may feel like capricious or arbitrary decisions by the UGC publisher.

I’m sympathetic to those concerns, and I hope UGC publishers continue to voluntarily adopt additional user-friendly features to reduce users’ feeling of powerlessness. However, government intrusion into the editorial process is not the solution.

When UGC publishers are no longer free to exercise editorial discretion, it means that the government hinders the UGC publisher’s ability to cater to the needs of its audience. In other words, the audience expects a certain experience from the UGC publisher, and government regulation prevents the UGC publisher from meeting those expectations.

This becomes an existential threat to UGC publishers if spammers, trolls, and other malefactors are provided mandatory legal authorization to reach the publisher’s audience despite the publisher’s wishes. That creates a poor reader experience that jeopardizes the publisher’s relationship with its audience.

If the publisher cannot sufficiently curb bad actors from overrunning the service, then advertisers will flee, users will not pay to access low-quality content, and UGC publishers will lack a tenable business model, putting the entire enterprise at risk. When UGC publishers are compelled to publish unwanted content, many UGC publishers will have to leave the industry.

Other UGC publishers will continue to publish content—just not user content, because they can’t sufficiently ensure it meets their audience’s needs. In its place, the publishers will substitute professionally produced content, which the publishers can still fully control.

This switch from UGC to professionally-produced content will fundamentally change the Internet as we know it. Today, we take for granted that we can talk with each other online; in a future where publication access is mandated, we will talk to each other less, and more frequently publishers will be talking at us.

To have enough money to pay for the professionally-produced content, publishers will increasingly adopt subscription models to access the content (sometimes called “paywalls”), which means we will enjoy less free content online. This also exacerbates the digital divide, where wealthier users will get access to more and better content than poorer users can afford, perpetuating the divisions between these groups. Finally, professionally-produced content and paywalls will entrench other divisions in our society. Those in power with majority attributes will be the most likely to get the ability to publish their content and reach audiences; those without power won’t have the same publication opportunities, and that will leave them in a continually marginalized position.

This highlights the unfortunate irony of mandatory publication obligations. Instead of expanding publication opportunities, government-compelled online publication is far more likely to consolidate the power to publish content in a smaller number of hands that do not include the less wealthy or powerful members of our society. If the court seeks to vindicate the rights of less powerful authors online, counterintuitively, protecting publishers’ editorial freedom is the best way to do so.

Closing

I keep using the term UGC publishers, and this may have created some semantic confusion. In an effort to denigrate the editorial work of UGC publishers, they are often called anything but “publishers.” Indeed, the setup for today’s event uses several euphemisms for the publishing function, such as “content intermediation platforms” and “social network management.” (I understand there may have been something lost in translation).

The nomenclature matters a lot here. Using alternative descriptors downplays the seemingly obvious consequence that compelling UGC publishers’ publication decisions is government censorship. Reduced editorial freedom provides another way for the government to abuse its power to control how its citizens talk with each other.

Thank you for the opportunity to provide this input into your important efforts.

* * *

The judges asked three questions in the Q&A:

  • can Colombian courts reach transborder Internet services? [My answer: yes, if they have a physical presence in Colombia]
  • can content moderation account for off-service activity? [My answer: yes, this is no different from publishers deciding the identity of the authors they want to publish]
  • must Internet services follow due process? [My answer: no, “due process” only applies to government actors].

Reposted with permission.

Posted on Techdirt - 19 October 2022 @ 12:22pm

The Word ‘Emoji’ Is A Protectable Trademark?

Emoji Co. GmbH has registered trademarks in the dictionary word “Emoji.” It is mostly a licensing organization, and its registrations are in a wide range of classes: “from articles of clothing and snacks to ‘orthopaedic foot cushions’ and ‘[p]atient safety restraints.’” (Raise your hand if you’ve ever seen Emojico-branded patient safety restraints). Indeed, the court essentially questions the entire basis of Emojico’s licensing business, saying:

Given the ubiquity of the word “emoji” as a reference to the various images and icons used in electronic communications, it is especially important that Plaintiff come forward with evidence demonstrating that the term is also known as an identifier of Plaintiff as a source of goods….Other than its say-so, Plaintiff offers no evidence demonstrating, for instance, that consumers actually associate Plaintiff with emoji products such as those offered for sale by Defendants

(The absence of secondary meaning sounds like a major problem with Emojico’s case, one of several problems the court spots and then essentially ignores).

As I previously documented, Emojico has likely sued about 10,000 defendants for trademark infringement. Many defendants are small-time Amazon vendors (often from China) selling items depicting emojis, who Emojico claims are infringing by using the term “emoji” in their product listings. Defendants often no-show in court, making the rulings vulnerable to obvious mistakes that never will be appealed.

Without the defendants in court to defend themselves, the court rules that the defendants violated Emojico’s trademark rights and grants a permanent injunction. The judge then turns to Emojico’s request for statutory damages, including Emojico’s assertion that infringement was willful. The court says it

finds the nature of Plaintiff’s trademark to be relevant to the willfulness inquiry, as it raises the concern that many persons might innocently use the word “emoji” in commerce without awareness of Plaintiff’s intellectual property rights. Indeed, the various images and icons commonly referred to as “emojis” have become a staple of modern communication, such that the term “emoji” is even defined in many dictionaries.

This suggests the term “emoji” is generic with respect to those dictionary meanings, in which case Emojico’s litigation empire should crumble. The trademark registrations, and the presumption of validity they carry, discourage courts from reaching that outcome.

Otherwise, “emoji” is at most descriptive of the goods in question, so there should be an air-tight descriptive fair use defense. The court says:

Fair use, however, is an affirmative defense, and none of the defaulting Defendants have appeared to assert it. But the Court believes the principle underlying the defense, “that no one should be able to appropriate descriptive language through trademark registration,” is relevant to its willfulness analysis. If Plaintiff’s mark can legitimately be used for a substantial number of descriptive purposes, it suggests that any particular Defendant might not have knowingly or recklessly disregarded Plaintiff’s rights.

That’s how the court sidesteps the elephant in the room. The defendants did not “disregard Plaintiff’s rights” because it’s completely permissible to use “emoji” in a descriptive fair use sense. But the court didn’t consider descriptive fair use in flatly declaring infringement because… well, I’m not sure why not, other than this judge apparently thinks courts can’t raise screamingly obvious defenses sua sponte?

This next passage may require tissues:

many Defendants are using the word “emoji” to describe a product that depicts one of the many digital icons commonly used in electronic communications. For example, one Defendant offered for sale a jewel encrusted pendant in the shape of the “Fire” emoji under the listing “2.00 Ct. Round Diamond Fire Emoji Charm Piece Pendant Men’s 14k Yellow Gold Over.” Especially since “Emoji” was used in conjunction with the word “Fire,” it would be reasonable to conclude that this particular Defendant honestly believed that they were using the word “Emoji” to identify the product as depicting a specific emoji, namely the Fire Emoji. Another Defendant offered for sale a pillow depicting a smiley face emoji with the listing reading “1PC 32cm Emoji Smiley Emoticon Pillow Plush Toy Doll Pillow Soft Sofa Cushion.” Again, the word emoji is used to describe the product as depicting a smiley face emoji. Further, the listing uses another word, “Emoticon,” that is commonly associated with digital representations of facial expressions. The listing’s inclusion of a word describing a similar concept to an emoji suggests that both words are simply being used to describe the product being offered.

[We now know what happens if you yell “Fire Emoji” in a crowded online marketplace. TRADEMARK INFRINGEMENT. 🔥]

The court seemingly understands the problem perfectly. Any person looking at the listings in question would instantly interpret “emoji” as describing the product’s physical attributes – AS TRADEMARK LAW PERMITS IT TO DO. Yet, somehow, the court creates a Schrödinger’s fair use defense – the usages are descriptive enough to reduce the damages, but apparently not descriptive enough to defeat the infringement finding. That’s messed up.

How messed up? The court says:

Plaintiff suggests that any person who sells a product depicting a familiar emoji is forbidden from using the one word that most closely describes the image depicted. Plaintiff’s right cannot be so expansive.

💯 How did the court find infringement again?

After questioning the foundation of Emojico’s trademark empire and reaching the obvious conclusion that the defendants engaged in descriptive fair use, the court nevertheless awards $25k of statutory damages per defendant. The court treats this as benevolence towards the defendants:

That figure is below Plaintiff’s requested awards because it accounts for the many possible fair uses of Plaintiff’s mark as well as Plaintiff’s failure to present sufficient evidence concerning many key factors relevant to the statutory damages determination. On the other hand, the award is greater than the minimum authorized by § 1117(c)(1) in light of the need for deterrence, the fact that Defendants’ infringing conduct occurred online, and Plaintiff’s evidence of its licensing efforts and efforts at enforcing its trademark rights.

So, was justice served in this case? On the one hand, it’s all for show, because Emojico will almost certainly collect zero dollars of this damages award. On the other hand, it’s a terrifying reminder of how things can go wrong in default proceedings, when the court is hearing only the plaintiff’s unrebutted advocacy. The true victims of this court’s error, and of Emojico’s litigation campaign, are consumers who love emoji-themed items but increasingly will find it harder to acquire those products in online marketplaces because Emojico keeps lawfaring vendors out of the marketplace or forcing vendors to use terms that consumers don’t recognize. Even if the defendants didn’t make the arguments, the judge should have listened to her instincts and intervened on the consuming public’s behalf. All of us, except possibly for Emojico and its lawyers, are poorer because she didn’t.

Reposted with permission from the Technology & Marketing Law Blog.

Posted on Techdirt - 16 September 2022 @ 09:40am

California’s Age Appropriate Design Code Is Radical Anti-Internet Policy

When a proposed new law is sold as “protecting kids online,” regulators and commenters often accept the sponsors’ claims uncritically (because… kids). This is unfortunate because those bills can harbor ill-advised policy ideas. The California Age-Appropriate Design Code (AADC / AB2273, just signed by Gov. Newsom) is an example of such a bill. Despite its purported goal of helping children, the AADC delivers a “hidden” payload of several radical policy ideas that sailed through the legislature without proper scrutiny. Given the bill’s highly experimental nature, there’s a high chance it won’t work the way its supporters think–with potentially significant detrimental consequences for all of us, including the California children that the bill purports to protect.

In no particular order, here are five radical policy ideas baked into the AADC:

Permissioned innovation. American business regulation generally encourages “permissionless” innovation. The idea is that society benefits from more, and better, innovation if innovators don’t need the government’s approval.

The AADC turns this concept on its head. It requires businesses to prepare “impact assessments” before launching new features that kids are likely to access. Those impact assessments will be freely available to government enforcers at their request, which means the regulators and judges are the real audience for those impact assessments. As a practical matter, given the litigation risks associated with the impact assessments, a business’ lawyers will control those processes–with associated delays, expenses, and prioritization of risk management instead of improving consumer experiences.

While the impact assessments don’t expressly require government permission to proceed, they have some of the same consequences. They put the government enforcer’s concerns squarely in the room during the innovation development (usually as voiced by the lawyers), they encourage self-censorship by the business if they aren’t confident that their decisions will please the enforcers, and they force businesses to make the cost-benefit calculus before the business has gathered any market feedback through beta or A/B tests. Obviously, these hurdles will suppress innovations of all types, not just those that might affect children. Alternatively, businesses will simply route around this by ensuring their features aren’t available at all to children–one of several ways the AADC will shrink the Internet for California children.

Also, to the extent that businesses are self-censoring their speech (and my position is that all online “features” are “speech”) because of the regulatory intervention, then permissioned innovation raises serious First Amendment concerns.

Disempowering parents. A foundational principle among regulators is that parents know their children best, so most child protection laws center around parental decision-making (e.g., COPPA). The AADC turns that principle on its head and takes parents completely out of the equation. Even if parents know their children best, per the AADC, parents have no say at all in the interaction between a business and their child. In other words, despite the imbalance in expertise, the law obligates businesses, not parents, to figure out what’s in the best interest of children. Ironically, the bill cites evidence that “In 2019, 81 percent of voters said they wanted to prohibit companies from collecting personal information about children without parental consent” (emphasis added), but then the bill drafters ignored this evidence and stripped out the parental consent piece that voters assumed. It’s a radical policy for the AADC to essentially tell parents “tough luck” if parents don’t like the Internet that the government is forcing on their children.

Fiduciary obligations to a mass audience. The bill requires businesses to prioritize the best interests of children above all else. For example: “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.” Although the AADC doesn’t use the term “fiduciary” obligations, that’s functionally what the law creates. However, fiduciary obligations are typically imposed in 1:1 circumstances, like a lawyer representing a client, where the professional can carefully consider and advise about an individual’s unique needs. It’s a radical move to impose fiduciary obligations towards millions of individuals simultaneously, where there is no individual consideration at all.

The problems with this approach should be immediately apparent. The law treats children as if they all have the same needs and face the same risks, but “children” are too heterogeneous to support such stereotyping. Most obviously, the law lumps together 17-year-olds and 2-year-olds, even though their risks and needs are completely different. More generally, consumer subpopulations often have conflicting needs. For example, it’s been repeatedly shown that some social media features provide net benefit to a majority or plurality of users, but other subcommunities of minors don’t benefit from those features. Now what? The business is supposed to prioritize the best interests of “children,” but the presence of some children who don’t benefit indicates that the business has violated its fiduciary obligation towards that subpopulation, and that creates unmanageable legal risk–despite the many other children who would benefit. Effectively, if businesses owe fiduciary obligations to diverse populations with conflicting needs, it’s impossible to serve that population at all. To avoid this paralyzing effect, services will screen out children entirely.

Normalizing face scans. Privacy advocates actively combat the proliferation of face scanning because of the potentially lifelong privacy and security risks created by those scans (i.e., you can’t change your face if the scan is misused or stolen). Counterproductively, this law threatens to make face scans a routine and everyday occurrence. Every time you go to a new site, you may have to scan your face–even at services you don’t yet know if you can trust. What are the long-term privacy and security implications of routinized and widespread face scanning? What does that do to people’s long-term privacy expectations (especially kids, who will infer that face scans are just what you do)? Can governments use the face scanning infrastructure to advance goals that aren’t in their constituents’ interests? It’s radical to motivate businesses to turn face scanning of children into a routine activity–especially in a privacy bill.

(Speaking of which–I’ve been baffled by the low-key response of the privacy community to the AADC. Many of their efforts to protect consumer privacy won’t likely matter in the long run if face scans are routine.)

Frictioned Internet navigation. The Internet thrives in part because of the “seamless” nature of navigating between unrelated services. Consumers are so conditioned to expect frictionless navigation that they respond poorly when modest barriers are erected. The Ninth Circuit just explained:

The time it takes for a site to load, sometimes referred to as a site’s “latency,” is critical to a website’s success. For one, swift loading is essential to getting users in the door…Swift loading is also crucial to keeping potential site visitors engaged. Research shows that sites lose up to 10% of potential visitors for every additional second a site takes to load, and that 53% of visitors will simply navigate away from a page that takes longer than three seconds to load. Even tiny differences in load time can matter. Amazon recently found that every 100 milliseconds of latency cost it 1% in sales.

After the AADC, before you can go to a new site, you will have to either scan your face or upload age-authenticating documents. This adds many seconds or minutes to the navigation process, plus there are the overall inhibiting effects of concerns about privacy and security. How will these barriers change people’s web “surfing”? I expect it will fundamentally change people’s willingness to click on links to new services. That will benefit incumbents–and hurt new market entrants, who have to convince users to do age assurance before users trust them. It’s radical for the legislature to make such a profound and structural change to how people use and enjoy an essential resource like the Internet.
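To give a rough sense of the stakes, here is a back-of-envelope sketch (in Python) that naively extrapolates the figure quoted above–up to 10% of potential visitors lost for every additional second–to some hypothetical age-assurance delays. The delay values are my illustrative assumptions, and the cited research measured page-load latency rather than verification flows, so treat the output as directional at best:

    # Naive extrapolation of the quoted rule of thumb (up to 10% of potential
    # visitors lost per additional second), compounded per second of delay.
    # The delay values below are hypothetical and for illustration only.
    def visitors_remaining(added_seconds: float, loss_per_second: float = 0.10) -> float:
        return (1 - loss_per_second) ** added_seconds

    for delay in (3, 10, 30):  # hypothetical seconds added by age assurance
        print(f"{delay:>2}s added: {visitors_remaining(delay):.0%} of visitors remain")
    # Prints roughly 73%, 35%, and 4%, respectively

Even under these crude assumptions, delays of ten seconds or more would cost a site most of its potential visitors, which is why this friction matters so much to new entrants.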

A final irony. All new laws are essentially policy experiments, and the AADC is no exception. But to be clear, the AADC is expressly conducting these experiments on children. So what diligence did the legislature do to ensure the “best interest of children,” just like it expects businesses to do post-AADC? Did the legislature do its own impact assessment like it expects businesses to do? Nope. Instead, the AADC deploys multiple radical policy experiments without proper diligence and basically hopes for the best for children. Isn’t it ironic?

I’ll end with a shoutout to the legislators who voted for this bill: if you didn’t realize how the bill was packed with radical policy ideas when you voted yes, did you even do your job?

Posted on Techdirt - 2 August 2022 @ 01:38pm

Is the California Legislature Addicted to Performative Election-Year Stunts That Threaten the Internet?

It’s an election year, and like clockwork, legislators around the country want to show they care about protecting kids online. This pre-election frenzy leads to performative bills that won’t actually help any kids. Today I’m blogging about one of those bills, California AB 2408, “Social media platform: child users: addiction.” (For more on how the California legislature is working to eliminate the Internet, see my posts on the pending bills AB587 and AB2273).

This bill assumes that social media platforms are intentionally addicting kids, so it creates business-ending liability to thwart those alleged addictions. The consequences depend on how the platforms choose to play it.

The platforms are most likely to toss all kids overboard. This is almost certainly what the legislators actually want given their antipathy towards the Internet, but it’s not a good outcome for anyone. It hurts the kids by depriving them of valuable social outlets and educational resources; it hurts adults by requiring age (and likely identity) verification to sort the kids from adults; and the age/identity verification hurts both kids and adults by exposing them to greater privacy and security risks. I explain all of this in my post on AB 2273 (the AADC), which redundantly also would require platforms to authenticate all users’ ages to avoid business-ending liability.

If platforms try to cater to kids, they would have to rely on an affirmative defense that hands power over to a censor (euphemistically called an “auditor” in the bill) who can declare that any feature is addictive, requiring the platform to promptly remove the feature or face business-ending liability. Handing control of publication decisions to a government-designated censor is as disrespectful to the Constitution as it sounds.

What the Bill Says

Who’s Covered? 

The bill defines “social media platform” as

a public or semipublic internet-based service or application that has users in California and that meets all of the following criteria:

(A) A substantial function of the service or application is to connect users in order to allow users to interact socially with each other within the service or application.

(B) A service or application that provides email or direct messaging services shall not be considered to meet this criterion on the basis of that function alone.

(C) The service or application allows users to do all of the following:

(i) Construct a public or semipublic profile for purposes of signing into and using the service.

(ii) Populate a list of other users with whom an individual shares a social connection within the system.

(iii) Create or post content viewable by other users, including, but not limited to, on message boards, in chat rooms, or through a landing page or main feed that presents the user with content generated by other users.

I critiqued similar language in my AB 587 blog post. Putting aside its clunky drafting, I assume this definition reaches all UGC services, subject to the statutory exclusions for:

  • email and direct messaging services (the bill doesn’t define either type).
  • tools that allow employees and affiliates to talk with each other (Slack, perhaps?).
  • businesses earning less than $100M/year in gross revenues. See my article on defining Internet service size for critiques about the pros and (mostly) cons of revenue metrics.
  • “A social media platform whose primary function is to allow users to play video games.” This is so interesting because video games have been accused of addicting kids for decades, but this bill would give them a free pass. Or maybe the legislature plans to target them in a bill sequel? If the legislature is willing to pass this bill, no business is safe.

What’s Restricted?

This is the bill’s core restriction:

A social media platform shall not use a design, feature, or affordance that the platform knew, or which by the exercise of reasonable care should have known, causes child users to become addicted to the platform.

Child = under 18.

Addiction is defined as: “(A) Indicates preoccupation or obsession with, or withdrawal or difficulty to cease or reduce use of, a social media platform despite the user’s desire to cease or reduce that use. and (B) Causes physical, mental, emotional, developmental, or material harms to the user.”

The restriction excludes third-party content and “passively displaying” that content (as we’ve discussed repeatedly, “passively publishing content” is an oxymoron). Parents cannot waive the bill’s liability for their kids.

The Affirmative Defense. The bill provides an affirmative defense against civil penalties if the platform: “(1) Instituted and maintained a program of at least quarterly audits of its practices, designs, features, and affordances to detect practices or features that have the potential to cause or contribute to the addiction of child users. [and] (2) Corrected, within 30 days of the completion of an audit described in paragraph (1), any practice, design, feature, or affordance discovered by the audit to present more than a de minimis risk of violating this subdivision.” Given that the defense would negate some, but not all, potential remedies, this defense doesn’t really help as much as it should.

Problems with the Bill

Social Media Benefits Minors

The bill enumerates many “findings” about social media’s evilness. The purported “findings” are mockably sophomoric because each fact claim is easily rebutted or disproven. However, they are a tell about the drafters’ mindset. The drafters approached the bill as if social media is never legitimate, which explains why the bill would nuke social media. Thus, with zero self-awareness, the findings say: “California should take reasonable, proportional, and effective steps to ensure that its children are not harmed by addictions of any kind.” The bill’s response is neither reasonable nor proportional–and it would be “effective” only in the sense of suppressing all social media activity, good and bad alike.

Of course, everyone (other than the bill drafters) knows that social media has many benefits for its users, adults and children alike. For example, the infamous slide showing that Instagram harmed 20% of teenage girls’ self-image also showed that it benefited 40% of teenage girls. Focusing on the 20% by eliminating the 40% is a policy choice, I guess. However, millions of unhappy Californian voters will be shocked by the legislature’s casual disregard towards something they value highly and care passionately about.

The Age Authentication Problem

The bill imposes liability for addicting children, but it doesn’t define when a platform knows that a user is a child. As I’ve discussed with other performative protect-kids-online bills, any attempt to segment kids from adults online doesn’t work because there’s no great method for age authentication. Any age authentication solution will set up barriers to moving around the Internet for both adults and children (i.e., welcome to our site, but we don’t really want you here until we’ve authenticated your age), will make errors in the classifications, and will expose everyone to greater privacy and security risks (which counterproductively puts kids at greater risk). If users have a persistent identity at a platform (necessary to avoid redundantly authenticating users’ ages each visit), then age authentication requires identity authentication, which expands the privacy and security risks (especially for minors) and subverts anonymous/pseudonymous Internet usage, which hurts users with minority characteristics and discourages critical content and whistleblowing. So protecting “kids” online comes with a huge package of unwanted consequences and tradeoffs, none of which the bill acknowledges or attempts to mitigate.

Another option is that the platform treats adults like kids, which I’m sure the bill drafters would be just fine with. However, that highlights the bill’s deceptive messaging. It isn’t really about protecting “kids.” It’s really about censoring social media.

Holding Manufacturers Liable for Addiction

This bill would hold platforms liable for addicting their customers–a very, very rare liability allocation in our legal system. Consider other addictions in our society. Cigarette manufacturers and retailers aren’t liable for the addictive nature of nicotine. Alcohol manufacturers and retailers aren’t liable for alcohol addiction. Casinos aren’t liable for gambling addiction. Those vices may be restricted to adults (but remember parents can’t waive 2408 for their kids), but virtually every marketplace product or service can “addict” some of its most fervent customers without facing liability. This bill seemingly opens up a major new frontier in tort law.

The Causation Problem

The bill sidesteps a key causation problem. If a practice is standard in the industry and a user uses multiple platforms, how do we know which platform caused the addiction? Consider something like infinite scrolling, which is used by many platforms.

This problem is easy to see by analogy. Assume that a gambling addict started gambling at Casino A, switched loyalty to Casino B, but occasionally gambles at Casino C. Which casino caused the addiction?

One possible answer is to hold all of the casinos liable. Or, in the case of this bill, hold every platform liable so long as the plaintiff can show the threshold condition of addiction (“preoccupation or obsession with, or withdrawal or difficulty to cease or reduce use of, a social media platform despite the user’s desire to cease or reduce that use”). But this also means platforms could be liable for addictions they didn’t “cause,” at least not initially.

The Impossibility of Managing the Liability Risk

There’s a fine line between standard product marketing – where the goal is to increase consumer demand for the product – and causing customers to become addicted. This bill erases the line. Platforms have no idea which consumers might become addicted and which won’t. There’s no way to segregate the addiction-vulnerable users and treat them more gently.

This means the platform must treat all of its customers as eggshell victims. Other than the affirmative defense, how can a platform manage its legal exposure to a customer base of possibly millions of California children, any one of whom may be an eggshell? The answer: it can’t.

The unmanageable risk is why platforms’ dominant countermove to the bill will be to toss children off their service.

The Affirmative Defense

Platforms that don’t toss children overboard will rely on the affirmative defense. The affirmative defense is predicated on an audit, but the bill provides no details about the auditor’s credentials. Auditor-censors don’t need to have any specific certification or domain expertise. In theory, this permits self-auditing. More likely, it sets up a race to the bottom where the platforms retain auditor-censors based on their permissiveness. This would turn the audit into a form of theater: everyone plays their statutory part, but a permissive auditor-censor nevertheless greenlights most features. In other words, auditing without certification doesn’t create any benefits for anyone.

If the auditor-censor’s report comes back clean, the platform has satisfied the defense. If the auditor-censor’s report doesn’t come back clean, the 30 day cure period is too short to fix or remove many changes. As a result, platforms will necessarily run all potential site changes by their auditor-censor before launch to preempt getting flagged in the next quarterly report. Thus, every quarterly report should come back clean because any potential auditor-censor concerns were resolved beforehand.

The affirmative defense mitigates civil penalties, but it does not address any other potential remedies created by the bill, including injunctive relief and criminal sanctions. As a result, the incomplete nature of the affirmative defense doesn’t really provide the legal protection that platforms need. This will further motivate platforms to toss kids overboard.

Section 230 Preemption

The bill has a savings clause to exclude any claims covered by Section 230, the First Amendment, and the California Constitution’s equivalent. That’s great, but what’s left of the bill after Section 230’s preemption? At their core, platforms are remixing third-party content, and any “addiction” relates to the consumption of that content. The bill tries to avoid reaching third-party content, but remixing third-party content is essentially all that platforms do. Thus, the bill should squarely fall within Section 230’s preemption.

Constitutionality

If platforms conduct the audit theater, the auditor functions as a government-designated censor. The auditor-censor’s report is the only thing potentially protecting platforms from business-ending liability, so platforms must do whatever the auditor-censor says. This gives the auditor power to decide what features the platforms publish and what they don’t. For example, imagine a government-designated censor at a newspaper, deciding if the newspaper can add a new column or feature, add a new topical section, or change the size and layout of the paper. That censor overrides the publisher’s editorial choices of what content to present and how to present it. This bill does the same.

There are also the standard problems about who is and isn’t covered by the bill and why they were included/excluded, plus the typical Dormant Commerce Clause concern.

I’ll also note the serious tort doctrine problems (like the causation problem) and questions about whether the bill actually benefits any constituency (especially with the audit theater). Even if the bill gets lesser constitutional scrutiny, it still may not survive.

Conclusion

Numerous lawsuits have been filed across the country premised on the same theory underlying this bill, i.e., social media addicts kids. Those lawsuits will run into tort law, Section 230, and constitutional challenges very soon. It would make sense for the California legislature to see how that litigation plays out and discover what, if any, room is left for the legislature to regulate. That would save taxpayers the costs of the inevitable, and quite possibly successful, court challenge to this bill if passed.

Originally posted to the Technology & Marketing Law Blog. Reposted here with permission.

Posted on Techdirt - 29 June 2022 @ 11:55am

California Legislators Seek To Burn Down The Internet — For The Children

I’m continuing my coverage of dangerous Internet bills in the California legislature. This job is especially challenging during an election year, when legislators rally behind the “protect the kids” mantra to pursue bills that are likely to hurt, or at least not help, kids. Today’s example is AB 2273, the Age-Appropriate Design Code Act (AADC).

Before we get overwhelmed by the bill’s details, I’ll highlight three crucial concerns:

First, the bill pretextually claims to protect children, but it will change the Internet for EVERYONE. In order to determine who is a child, websites and apps will have to authenticate the age of ALL consumers before they can use the service. NO ONE WANTS THIS. It will erect barriers to roaming around the Internet. Bye bye casual browsing. To do the authentication, businesses will be forced to collect personal information they don’t want to collect and consumers don’t want to give, and that data collection creates extra privacy and security risks for everyone. Furthermore, age authentication usually also requires identity authentication, and that will end anonymous/unattributed online activity.

Second, even if businesses treated all consumers (i.e., adults) to the heightened obligations required for children, businesses still could not comply with this bill. That’s because this bill is based on the U.K. Age-Appropriate Design Code. European laws are often aspirational and standards-based (instead of rule-based), because European regulators and regulated businesses engage in dialogues, and the regulators reward good tries, even if they aren’t successful. We don’t do “A-for-Effort” laws in the U.S., and generally we rely on rules, not standards, to provide certainty to businesses and reduce regulatory overreach and censorship.

Third, this bill reaches topics well beyond children’s privacy. Instead, the bill repeatedly implicates general consumer protection concerns and, most troublingly, content moderation topics. This turns the bill into a trojan horse for comprehensive regulation of Internet services and would turn the privacy-centric California Privacy Protection Agency (CPPA) into the general-purpose Internet regulator.

So the big takeaway: this bill’s protect-the-children framing is designed to mislead everyone about the bill’s scope. The bill will dramatically degrade the Internet experience for everyone and will empower a new censorship-focused regulator who has no interest or expertise in balancing complex and competing interests.

What the Bill Says

Who’s Covered

The bill applies to a “business that provides an online service, product, or feature likely to be accessed by a child.” “Child” is defined as under-18, so the bill treats teens and toddlers identically.

The phrase “likely to be accessed by a child means it is reasonable to expect, based on the nature of the content, the associated marketing, the online context, or academic or internal research, that the service, product, or feature would be accessed by children.” Compare how COPPA handles this issue; it applies when services know (not anticipate) users are under-13 or direct their services to an under-13 audience. In contrast, the bill says that if it’s reasonable to expect ONE under-18 user, the business must comply with its requirements. With that overexpansive framing, few websites and apps can reasonably expect that under-18s will NEVER use their services. Thus, I believe all websites/apps are covered by this law so long as they clear the CPRA quantitative thresholds for being a “business.” [Note: it’s not clear how this bill situates into the CPRA, but I think the CPRA’s “business” definition applies.]

What’s Required

The bill starts with this aspirational statement: “Companies that develop and provide online services, products, or features that children are likely to access should consider the best interests of children when designing, developing, and providing that service, product, or feature.” The “should consider” grammar is the kind of regulatory aspiration found in European law. Does this statement have legal consequences or not? I vote it does not because “should” is not a compulsory obligation. So what is it doing here?

More generally, this provision tries to anchor the bill in the notion that businesses owe a “duty of loyalty” or fiduciary duty to their consumers. This duty-based approach to privacy regulation is trendy in privacy circles, but if adopted, it would exponentially expand regulatory oversight of businesses’ decisions. Regulators (and private plaintiffs) can always second-guess a business’ decision; a duty of “loyalty” gives the regulators the unlimited power to insist that the business made wrong calls and impose punishments accordingly. We usually see fiduciary/loyalty obligations in the professional services context where the professional service provider must put an individual customer’s needs before its own profit. Expanding this concept to mass-market businesses with millions of consumers would take us into uncharted regulatory territory.

The bill would obligate regulated businesses to:

  • Do data protection impact assessments (DPIAs) for any features likely to be accessed by kids (i.e., all features), provide a “report of the assessment” to the CPPA, and update the DPIA at least every 2 years.
  • “Establish the age of consumers with a reasonable level of certainty appropriate to the risks that arise from the data management practices of the business, or apply the privacy and data protections afforded to children to all consumers.” As discussed below, this is a poison pill for the Internet. This also exposes part of the true agenda here: if a business can’t do what the bill requires (a common consequence), the bill drives businesses to adopt the most restrictive regulation for everyone, including adults.
  • Configure default settings to a “high level of privacy protection,” whatever that means. I think this meant to say that kids should automatically get the highest privacy settings offered by the business, whatever that level is, but that’s not what it says. Instead, this becomes an aspirational statement about what constitutes a “high level” of protection.
  • All disclosures must be made “concisely, prominently, and using clear language suited to the age of children likely to access” the service. The disclosures in play are “privacy information, terms of service, policies, and community standards.” Note how this reaches all consumer disclosures, not just those that are privacy-focused. This is the first of several times we’ll see the bill’s power grab beyond privacy. Also, if a single toddler is “likely” to access the service, must all disclosures be written at a toddler’s reading level?
  • Provide an “obvious signal” if parents can monitor their kids’ activities online. How does this intersect with COPPA?
  • “Enforce published terms, policies, and community standards established by the business, including, but not limited to, privacy policies and those concerning children.” 🚨 This language unambiguously governs all consumer disclosures, not just privacy-focused ones. Interpreted literally, it’s ludicrous to mandate that businesses enforce every provision in their TOSes. If a consumer breaches a TOS by scraping content or posting violative content, does this provision require businesses to sue the consumer for breach of contract? More generally, this provision directly overlaps AB 587, which requires businesses to disclose their editorial policies and gives regulators the power to investigate and enforce any perceived or alleged deviations in how services moderate content. See my excoriation of AB 587. This provision is a trojan horse for government censorship that has nothing to do with protecting the kids or even privacy. Plus, even if it weren’t an unconstitutional provision, the CPPA, with its privacy focus, lacks the expertise to monitor/enforce content moderation decisions.
  • “Provide prominent, accessible, and responsive tools to help children, or where applicable their parent or guardian, exercise their privacy rights and report concerns.” Not sure what this means, especially in light of the CPRA’s detailed provisions about how consumers can exercise privacy rights.

The bill would also obligate regulated businesses not to:

  • “Use the personal information of any child in a way that the business knows or has reason to know the online service, product, or feature more likely than not causes or contributes to a more than de minimis risk of harm to the physical health, mental health, or well-being of a child.” This provision cannot be complied with. It appears that businesses must change their services if a single child might suffer any of these harms, which is always? This provision especially seems to target UGC features, where people always say mean things that upset other users. Knowing that, what exactly are UGC services supposed to do differently? I assume the paradigmatic example is the concern about kids’ social media addiction, but like the 587 discussion above, the legislature is separately considering an entire bill on that topic (AB 2408), and this one-sentence treatment of such a complicated and censorial objective isn’t helpful.
  • “Profile a child by default.” “Profile” is not defined in the bill. The term “profile” is used 3x in the CPRA but also not defined. So what does this mean?
  • “Collect, sell, share, or retain any personal information that is not necessary to provide a service, product, or feature with which a child is actively and knowingly engaged.” This partially overlaps COPPA.
  • “If a business does not have actual knowledge of the age of a consumer, it shall not collect, share, sell, or retain any personal information that is not necessary to provide a service, product, or feature with which a consumer is actively and knowingly engaged.” Note how the bill switches to the phrase “actual knowledge” about age rather than the threshold “likely to be accessed by kids.” This provision will affect many adults.
  • “Use the personal information of a child for any reason other than the reason or reasons for which that personal information was collected. If the business does not have actual knowledge of the age of the consumer, the business shall not use any personal information for any reason other than the reason or reasons for which that personal information was collected.” Same point about actual knowledge.
  • Sell/share a child’s PI unless needed for the service.
  • “Collect, sell, or share any precise geolocation information of children by default” unless needed for the service–and only if providing “an obvious sign to the child for the duration of that collection.”
  • “Use dark patterns or other techniques to lead or encourage consumers to provide personal information beyond what is reasonably expected for the service the child is accessing and necessary to provide that service or product to forego privacy protections, or to otherwise take any action that the business knows or has reason to know the online service or product more likely than not causes or contributes to a more than de minimis risk of harm to the child’s physical health, mental health, or well-being.” No one knows what the term “dark patterns” means, and now the bill would also restrict “other techniques” that aren’t dark patterns? Also see my earlier point about the “de minimis risk of harm” requirement.
  • “Use any personal information collected or processed to establish age or age range for any other purpose, or retain that personal information longer than necessary to establish age. Age assurance shall be proportionate to the risks and data practice of a service, product, or feature.” The bill expressly acknowledges that businesses can’t authenticate age without collecting PI–including PI the business would choose not to collect but for this bill. This is like the CCPA/CPRA’s problems with “verifiable consumer request”–to verify the consumer, the business has to ask for PI, sometimes more invasively than the PI the consumer is making the request about. ¯\_(ツ)_/¯

New Taskforce

The bill would create a new government entity, the “California Children’s Data Protection Taskforce,” composed of “Californians with expertise in the areas of privacy, physical health, mental health, and well-being, technology, and children’s rights” as appointed by the CPPA. The taskforce’s job is “to evaluate best practices for the implementation of this title, and to provide support to businesses, with an emphasis on small and medium businesses, to comply with this title.”

The scope of this taskforce likely exceeds privacy topics. For example, the taskforce is charged with developing best practices for “Assessing and mitigating risks to children that arise from the use of an online service, product, or feature”–this scope isn’t limited to privacy risks. Indeed, it likely reaches services’ editorial decisions. The CPPA is charged with constituting and supervising this taskforce even though it lacks expertise on non-privacy-related topics.

New Regulations

The bill obligates the CPPA to come up with regulations supporting this bill by April 1, 2024. Given the CADOJ’s and CPPA’s track record of missing statutorily required timelines for rule-making, how likely is this schedule? 🤣

Problems With the Bill

Unwanted Consequences of Age and Identity Authentication. Structurally, the law tries to sort the online population into kids and adults for different regulatory treatment. The desire to distinguish between children and adults online has a venerable regulatory history. The first Congressional law to crack down on the Internet, the Communications Decency Act, had the same requirement. It was struck down as unconstitutional in part because of that requirement’s infeasibility. Yet, after 25 years, age authentication still remains a vexing technical and social challenge.

Counterproductively, age-authentication processes are generally privacy invasive. There are two primary ways to do it: (1) demand the consumer disclose lots of personal information, or (2) use facial recognition and collect highly sensitive face information (and more). Businesses don’t want to invade their consumers’ privacy these ways, and COPPA doesn’t require such invasiveness either.

Also, it’s typically impossible to do age-authentication without also doing identity-authentication so that the consumer can establish a persistent identity with the service. Otherwise, every consumer (kids and adults) will have to authenticate their age each time they access a service, which will create friction and discourage usage. But if businesses authenticate identity, and not just age, then the bill creates even greater privacy and security risks as consumers will have to disclose even more PI.

Furthermore, identity authentication functionally eliminates anonymous online activity and all unattributed activity and content on the Internet. This would hurt many communities, such as minorities concerned about revealing their identity (e.g., LGBTQ), pregnant women seeking information about abortions, and whistleblowers. This also raises obvious First Amendment concerns.

Enforcement. The bill doesn’t specify the enforcement mechanisms. Instead, it wades into an obvious and avoidable tension in California law. On the one hand, the CPRA expressly negates private rights of action (except for certain data security breaches). If this bill is part of the CPRA–which the introductory language implies–then it should be subject to the CPRA’s enforcement limits. CADOJ and CPPA have exclusive enforcement authority over the CPRA, and there’s no private right of action/PRA. On the other hand, California B&P 17200 allows for PRAs for any legal violation, including violations of other California statutes. So unless the bill is cabined by the CPRA’s enforcement limit, the bill will be subject to PRAs through 17200. So which is it?  ¯\_(ツ)_/¯

Adding to the CPPA’s Workload. The CPPA is already overwhelmed. It can’t make its rule-making deadline of July 1, 2022 (missing it by months). That means businesses will have to comply with the voluminous rules with inadequate compliance time. Once that initial rule-making is done, the CPPA will then have to build a brand-new administrative enforcement function and start bringing, prosecuting, and adjudicating enforcements. That will be another demanding, complex, and time-consuming project for the CPPA. So it’s preposterous that the California legislature would add MORE to the CPPA’s agenda, when it clearly cannot handle the work that the California voters have already instructed it to do.

Trade Secret Problems. Requiring businesses to report about their DPIAs for every feature they launch potentially discloses lots of trade secrets–which may blow their trade secret protection. It certainly provides a rich roadmap for plaintiffs to mine.

Conflict with COPPA. The bill does not provide any exceptions for parental consent to the business’ privacy practices. Instead, the bill takes power away from parents. Does this conflict with COPPA such that COPPA would preempt it? No doubt the bill’s basic scheme rejects COPPA’s parental control model.

I’ll also note that any PRA may compound the preemption problem. “Allowing private plaintiffs to bring suits for violations of conduct regulated by COPPA, even styled in the form of state law claims, with no obligation to cooperate with the FTC, is inconsistent with the treatment of COPPA violations as outlined in the COPPA statute.” Hubbard v. Google LLC, 546 F. Supp. 3d 986 (N.D. Cal. 2021).

Conflict with CPRA’s Amendment Process. The legislature may amend the CPRA by majority vote only if it enhances consumer privacy rights. As I’ve explained before, this is a trap because I believe the amendments must uniformly enhance consumer privacy rights. In other words, if some consumers get greater privacy rights, but other consumers get less privacy rights, then the legislature cannot make the amendment via majority vote. In this case, the AADC undermines consumer privacy by exposing both children and adults to new privacy and security risks through the authentication process. Thus, the bill, if passed, could be struck down as exceeding the legislature’s authority.

In addition, the bill says “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.” A reminder of what the CPRA actually says: “The rights of consumers and the responsibilities of businesses should be implemented with the goal of strengthening consumer privacy, while giving attention to the impact on business and innovation.” By disregarding the CPRA’s instructions to consider impacts on businesses, this also exceeds the legislature’s authority.

Dormant Commerce Clause. The bill creates numerous potential DCC problems. Most importantly, businesses necessarily will have to authenticate the age of all consumers, both in and outside of California. This means that the bill would govern how businesses based outside of California interact with non-Californians, which the DCC does not permit.

Conclusion

Due to its scope and likely impact, this bill is one of the most consequential bills in the California legislature this year. The Internet as we know it hangs in the balance. If your legislator isn’t paying proper attention to those consequences (spoiler: they aren’t), you should give them a call.

Originally posted to Eric Goldman’s Technology & Marketing Law blog. Reposted with permission.

Posted on Techdirt - 23 June 2022 @ 10:47am

California Seems To Be Taking The Exact Wrong Lessons From Texas And Florida’s Social Media Censorship Laws

This post analyzes California AB 587, self-described as “Content Moderation Requirements for Internet Terms of Service.” I believe the bill will get a legislative hearing later this month.

A note about the draft I’m analyzing, posted here. It’s dated June 6, and it’s different from the version publicly posted on the legislature’s website (dated April 28). I’m not sure what the June 6 draft’s redlines compare to–maybe the bill as introduced? I’m also not sure if the June 6 draft will be the basis of the hearing, or if there will be more iterations between now and then. It’s exceptionally difficult for me to analyze bills that are changing rapidly in secret. When bill drafters secretly solicit feedback, every other constituency cannot follow along or share timely or helpful feedback. It’s especially ironic to see non-public activity for a bill that’s all about mandating transparency. ¯\_(ツ)_/¯

Who’s Covered by the Bill?

The bill applies to “social media platforms” that: “(A) Construct a public or semipublic profile within a bounded system created by the service. (B) Populate a list of other users with whom an individual shares a connection within the system. [and] (C) View and navigate a list of connections made by other individuals within the system.”

This definition of “social media” has been around for about a decade, and it’s awful. Critiques I made 8 years ago:

First, what is a “semi-public” profile, and how does it differ from a public or non-public profile? Is there even such a thing as a “semi-private” or “non-public” profile?…

Second, what does “a bounded system” mean?…The “bounded system” phrase sounds like a walled garden of some sort, but most walled gardens aren’t impervious. So what delimits the boundaries the statute refers to, and what does an “unbounded” system look like?

I also don’t understand what constitutes a “connection,” what a “list of connections” means, or what it means to “populate” the connection list. This definition of social media was never meant to be used as a statutory definition, and every word invites litigation.

Further, the legislature should–but surely has not–run this definition through a test suite to make sure it fits the legislature’s intent. In particular, which, if any, services offering user-generated content (UGC) functionality do NOT satisfy this definition? Though decades of litigation might ultimately answer the question, I expect that the language likely covers all UGC services.

[Note: based on a quick Lexis search, I saw similar statutory language in about 20 laws, but I did not see any caselaw interpreting the language because I believe those laws are largely unused.]

The bill then excludes some UGC services:

  • Companies with less than $100M of gross revenue in the prior calendar year. There are many obvious problems with this standard, such as the fact that the revenue is enterprise-wide (so bigger businesses with small UGC components will be covered if they don’t turn off the UGC functionality), the lack of a phase-in period, the lack of a nexus for revenues derived from California, and the absence of why $100M was selected instead of $50M, $500M, or whatever. Every legislator really ought to read this article about how to draft size metrics for Internet services.
  • Email service providers, “direct messaging” services, and “cloud storage or shared document or file collaboration.” All social media services are, in a sense, “cloud storage,” so what does this exclusion mean? ¯\_(ツ)_/¯
  • “A section for user-generated comments on a digital news internet website that otherwise exclusively hosts content published by” entities enumerated in the California Constitution, Article I(2)(b). Entities referenced in the Constitution: a “publisher, editor, reporter, or other person connected with or employed upon a newspaper, magazine, or other periodical publication, or by a press association or wire service” and “a radio or television news reporter or other person connected with or employed by a radio or television station.” I don’t know that any service can take advantage of this exclusion because every traditional publisher publishes content from freelancers and other non-employees, so the “exclusively hosts” requirement creates a null set. Also, this exclusion opts-into the confusion about the statutory differences between traditional and new media. See some cases discussing that issue.
  • “Consumer reviews of products or services on an internet website that serves the exclusive purpose of facilitating online commerce.” Ha ha. Should we call this the “Amazon exclusion”? If so, I’m not sure they are getting their money’s worth. Does Amazon.com EXCLUSIVELY facilitate online commerce? 🤔  And if this exclusion doesn’t benefit Yelp and TripAdvisor–because they have reviews on things that don’t support e-commerce (like free-to-visit parks)–I can’t wait to see how the state explains why non-commercial consumer reviews need transparency while commercial ones do not.
  • “An internet-based subscription streaming service that is offered to consumers for the exclusive purpose of transmitting licensed media, including audio or video files, in a continuous flow from the internet-based service to the end user, and does not host user-generated content.” Should we call this the “Netflix exclusion”? I’d be grateful if someone could explain to me the differences between “licensed media” and “UGC.” 🤔

The Law’s Requirements

Publish the “TOS”

The bill requires social media platforms to post their terms of service (TOS), translated into every language they offer product features in. It defines “TOS” as:

a policy or set of policies adopted by a social media company that specifies, at least, the user behavior and activities that are permitted on the internet-based service owned or operated by the social media company, and the user behavior and activities that may subject the user or an item of content to being actioned. This may include, but is not limited to, a terms of service document or agreement, rules or content moderation guidelines, community guidelines, acceptable uses, and other policies and established practices that outline these policies.

To start, I need to address the ambiguity of what constitutes the “TOS” because it’s the most dangerous and censorial trap of the bill. Every service publishes public-facing “editorial rules,” but the published versions never can capture ALL of the service’s editorial rules. Exceptions include: private interpretations that are not shared to protect against gaming, private interpretations that are too detailed for public consumption, private interpretations that governments ask/demand the services don’t tell the public about, private interpretations that are made on the fly in response to exigencies, one-off exceptions, and more.

According to the bill’s definition, failing to publish all of these non-public “policies and practices” before taking action based on them could mean noncompliance with the bill’s requirements. Given the inevitability of such undisclosed editorial policies, it seems like every service always will be noncompliant.

Furthermore, to the extent the bill inhibits services from making an editorial decision using a policy/practice that hasn’t been pre-announced, the bill would control and skew the services’ editorial decisions. This pre-announcement requirement would have the same effect as Florida’s restrictions on updating their TOSes more than once every 30 days (the 11th Circuit held that restriction was unconstitutional).

Finally, imagine trying to impose a similar editorial policy disclosure requirement on a traditional publisher like a newspaper or book publisher. They currently aren’t required to disclose ANY editorial policies, let alone ALL of them, and I believe any such effort to require such disclosures would obviously be struck down as an unconstitutional intrusion into the freedom of speech and press.

In addition to requiring the TOS’s publication, the bill says the TOS must include (1) a way to contact the platform to ask questions about the TOS, (2) descriptions of how users can complain about content and “the social media company’s commitments on response and resolution time.” (Drafting suggestion for regulated services: “We do not promise to respond ever”), and (3) “A list of potential actions the social media company may take against an item of content or a user, including, but not limited to, removal, demonetization, deprioritization, or banning.” I identified 3 dozen potential actions in my Content Moderation Remedies article, and I’m sure more exist or will be developed, so the remedies list should be long and I’m not sure how a platform could pre-announce the full universe of possible remedies.

Information Disclosures to the CA AG

Once a quarter, the bill would require platforms to deliver to the CA AG the current TOS, a “complete and detailed description” of changes to the TOS in the prior quarter, and a statement of whether the TOS defines any of the following five terms and what the definitions are: “Hate speech or racism,” “Extremism or radicalization,” “Disinformation or misinformation,” “Harassment,” and “Foreign political interference.” [If the definitions are from the TOS, can’t the AG just read that?]. I’ll call the enumerated five content categories the “Targeted Constitutionally Protected Content.”

In addition, the platforms would need to provide a “detailed description of content moderation practices used by the social media.” This seems to contemplate more disclosures than just the “TOS,” but that definition seemingly already captured all of the service’s content moderation rules. I assume the bill wants to know how the service’s editorial policies are operationalized, but it doesn’t make that clear. Plus, like Texas’ open-ended disclosure requirements, the unbounded disclosure obligation ensures litigation over (unavoidable) omissions.

Beyond the open-ended requirement, the bill enumerates an overwhelmingly complex list of required disclosures, which are far more invasive and burdensome than Texas’ plenty-burdensome demands:

  • “Any existing policies intended to address” the Targeted Constitutionally Protected Content. Wasn’t this already addressed in the “TOS” definition?
  • “How automated content moderation systems enforce terms of service of the social media platform and when these systems involve human review.” As discussed more below, this is a fine example of a disclosure where any investigation into its accuracy would be overly invasive.
  • “How the social media company responds to user reports of violations of the terms of service.” Does this mean respond to the user or respond to notices through internal processes? At large services, the latter involves a complicated and constantly changing flowchart with lots of exceptions, so this would become another disclosure trap.
  • “How the social media company would remove individual pieces of content, users, or groups that violate the terms of service, or take broader action against individual users or against groups of users that violate the terms of service.” What does “broader action” mean? Does that refer to account-level interventions instead of item-level interventions? As my Content Moderation Remedies paper showed, this topic is way more complicated than a binary remove/leave up dichotomy.
  • “The languages in which the social media platform does not make terms of service available, but does offer product features, including, but not limited to, menus and prompts.” Given the earlier requirement to translate the TOS into these languages, this disclosure would be an admission of legal violations, no?
  • With respect to the Targeted Constitutionally Protected Content, the following data:
    • “The total number of flagged items of content.”
    • Number of items “actioned.”
    • “The total number of actioned items of content that resulted in action taken by the social media company against the user or group of users responsible for the content.” I assume this means account-level actions based on the Targeted Constitutionally Protected Content?
    • Number of items “removed, demonetized, or deprioritized.” Is this just a subset of the number reported in the second bullet above?
    • “The number of times actioned items of content were viewed by users.”
    • “The number of times actioned items of content were shared, and the number of users that viewed the content before it was actioned.” How is the second half of this requirement different from the prior bullet?
    • “The number of times users appealed social media company actions taken on that platform and the number of reversals of social media company actions on appeal disaggregated by each type of action.”
    • All of the data disclosed in response to the prior bullet points must be broken down further by:
      • Each of the five categories of the Targeted Constitutionally Protected Content.
      • The type of content (posts vs. profile pages, etc.)
      • The type of media (video vs. text, etc.)
      • How the items were flagged (employees/contractors, “AI software,” “community moderators,” “civil society partners” and “users”–third party non-users aren’t enumerated but they are another obvious source of “flags”)
      • “How the content was actioned” (same list of entities as the prior bullet)

All told, there are 7 categories of disclosures, each of which must be broken down along 5 dimensions that have, respectively, 5 options, at least 5 options, at least 3 options, at least 5 options, and at least 5 options. So I believe each service’s reports must include no fewer than 161 different categories of disclosures (7×5 + 7×5 + 7×3 + 7×5 + 7×5).
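For anyone who wants to check that arithmetic, here is a quick Python tally using the counts above (the dimension labels in the comments are my shorthand, not the bill’s statutory language):

    # Minimum number of required disclosure categories: 7 data points, each
    # broken down along 5 dimensions with at least 5, 5, 3, 5, and 5 options.
    data_points = 7
    breakdown_options = [5, 5, 3, 5, 5]  # content category, content type,
                                         # media type, flag source, action source
    total = sum(data_points * n for n in breakdown_options)
    print(total)  # 161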

Who will benefit from these disclosures? At minimum, unlike the purported justification cited by the 11th Circuit for Florida’s disclosure requirements, the bill’s required statistics cannot help consumers make better marketplace choices. By definition, each service can define each category of Targeted Constitutionally Protected Content differently, so consumers cannot compare the reported numbers across services. Furthermore, because services can change how they define each content category from time to time, it won’t even be possible to compare a service’s new numbers against prior numbers to determine if they are getting “better” or “worse” at managing the Targeted Constitutionally Protected Content. Services could even change their definitions so they don’t have to report anything. For example, a service could create an omnibus category of “incivil content/activity” that includes some or all of the Targeted Constitutionally Protected Content categories, in which case they wouldn’t have to disclose anything. (Note also that this countermove would represent a change in the service’s editorial practices impelled by the bill, which exacerbates the constitutional problem discussed below). So who is the audience for the statistics and what, exactly, will they learn from the required disclosures? Without clear and persuasive answers to these questions, it looks like the state is demanding the info purely as a raw exercise of power, not to benefit any constituency.

Remedies

Violations can trigger penalties of up to $15k/violation/day, and the penalties should at minimum be “sufficient to induce compliance with this act” but should be mitigated if the service “made a reasonable, good faith attempt to comply.” The AG can enforce the law, but so can county counsel and city DAs in some circumstances. The bill provides those non-AG enforcers with some financial incentives to chase the penalty money as a bounty.

An earlier draft of the bill expressly authorized private rights of action via B&P 17200. Fortunately, that provision got struck…but, unfortunately, in its place there’s a provision saying that this bill is cumulative with any other law. As a result, I think the 17200 PRA is still available. If so, this bill will be a perpetual litigation machine. I would expect every lawsuit against a regulated service would add 587 claims for alleged omissions, misrepresentations, etc. Like the CCPA/CPRA, the bill should clearly eliminate all PRAs–unless the legislature wants Californians suing each other into oblivion.

Some Structural Problems with the Bill

Although the prior section identified some obvious drafting errors, fixing those errors won’t make this a good bill. Here are some structural problems with the bill that can’t be readily fixed:

The overall problem with mandatory editorial transparency. I just wrote a whole paper explaining why mandatory editorial transparency laws like AB 587 are categorically unconstitutional, so you should start with that if you haven’t already read it. To summarize, the disclosure requirements about editorial policies and practices functionally control speech by inducing publishers to make editorial decisions that will placate regulators rather than best serve the publisher’s audience. Furthermore, any investigation of the mandated disclosures puts the government in the position of supervising the editorial process, an “unhealthy entanglement.” I already mentioned one such example where regulators try to validate if the service properly described when it does manual vs. automated content moderation. Such an investigation would necessarily scrutinize and second-guess every aspect of the service’s editorial function.

Because of these inevitable speech restrictions, I believe strict scrutiny should apply to AB 587 without relying on the confused caselaw involving compelled commercial disclosures. In other words, I don’t think Zauderer–a recent darling of the pro-censorship crowd–is the right test (I will have more to say on this topic). Further, Zauderer only applies when the disclosures are “uncontroversial” and “purely factual,” but the AB587 disclosures are neither. The Targeted Constitutionally Protected Content categories all involve highly political topics, not the pricing terms at issue in Zauderer; and the disclosures require substantial and highly debatable exercises of judgment to make the classifications, so they are not “purely factual.” And even if Zauderer does apply, I think the disclosure requirements impose an undue burden. For example, if 161 different prophylactic “just-in-case” disclosures don’t constitute an undue burden, I don’t know what would.

The TOS definition problem. As I mentioned, what constitutes part of the “TOS” creates a litigation trap easily exploited by plaintiffs. Furthermore, if it requires the publication of policies and practices that justifiably should not be published, the law intrudes into editorial processes.

The favoritism shown to the Targeted Constitutionally Protected Content. The law “privileges” the five categories in the Targeted Constitutionally Protected Content for heightened attention by services, but there are many other categories of lawful-but-awful content that are not given equal treatment. Why?

This distinction between types of lawful-but-awful speech sends the obvious message to services that they need to pay closer attention to these content categories over the others. This implicit message to reprioritize content categories distorts the services’ editorial prerogative, and if services get the message that they should manage the disclosed numbers down, the bill reduces constitutionally protected speech. However, services won’t know if they should be managing the numbers down. The AG is a Democrat, so he’s likely to prefer less lawful-but-awful content. Meanwhile, many county prosecutors in red counties (yes, California has them) may prefer less content moderation of constitutionally protected speech and would investigate if they see the numbers trending down. Given that services are trapped between these competing partisan dynamics, they will be paralyzed in their editorial decision-making. This reiterates why the bill doesn’t satisfy Zauderer’s “uncontroversial” prong.

The problem classifying the Targeted Constitutionally Protected Content. Determining what fits into each category of the Targeted Constitutionally Protected Content is an editorial judgment that always will be subject to substantial debate. Consider, for example, how often the Oversight Board has reversed Facebook on similar topics. The plaintiffs can always disagree with the service’s classifications, and that puts them in the role of second-guessing the service’s editorial decisions.

Social media exceptionalism. As Benkler et al.’s book Network Propaganda showed, Fox News injects misinformation into the conversation, which then propagates to social media. So why does the bill target social media and not Fox News? More generally, the bill doesn’t explain why social media needs this intervention compared to traditional publishers or even other types of online publishers (say, Breitbart?). Or is the state’s position that it could impose equally invasive transparency obligations on the editorial decisions of other publishers, like newspapers and book publishers?

The favoritism shown to the excluded services. I think the state will have a difficult time justifying why some UGC services get a free pass from the requirements. It sure looks arbitrary.

The Dormant Commerce Clause. The bill does not restrict its reach to California. This creates several potential DCC problems:

  • The bill reaches extraterritorially.
    • It requires disclosures involving activity outside of California, including countries where the Targeted Constitutionally Protected Content is illegal. This makes it impossible to properly contextualize the numbers because the legislative restrictions may vary by country. It also leaves the services vulnerable to enforcement actions claiming that their numbers are too high or too low based on dynamics the services cannot control.
    • If the bill reaches services not located in California, then it is regulating activity between a non-California service and non-California residents.
  • The bill sets up potential conflicts with other states’ laws. For example, a recent NY law defines “hateful conduct” and provides specific requirements for dealing with it. This may or may not coincide with California’s requirements.
  • The cumulative effect of different states’ disclosure requirements will surely become overly burdensome. For example, Texas’ disclosure requirements are structured differently than California’s. A service would have to build different reporting schemes to comply with the different laws. Multiply this times many other states, and the reporting burden becomes overwhelming.

Conclusion

Stepping back from the details, the bill can be roughly divided into two components: (1) the TOS publication and delivery component, and (2) the operational disclosures and statistics component. Abstracting the bill at this level highlights the bill’s pure cynicism.

The TOS publication and delivery component is obviously pointless. Any regulated platform already posts its TOS and likely addresses the specified topics, at least at some level of generality (and an obvious countermove to this bill will be for services to make their public-facing disclosures more general and less specific than they currently are). Consumers can already read those onsite TOSes if they care; and the AG’s office can already access those TOSes any time it wants. (Heck, the AG can even set up bots to download copies quarterly, or even more frequently, and I wonder if the AG’s office has ever used the Wayback Machine?) So if this provision isn’t really generating any new disclosures to consumers, it’s just creating technical traps that platforms might trip over.
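
To underscore how little new information the bill extracts, here is a minimal sketch (mine, not anything in the bill) of the kind of quarterly TOS-snapshot “bot” a regulator could already run today; the URL is a hypothetical placeholder, and the script simply saves a dated copy of the public page for later comparison.

```python
# Minimal sketch of a TOS-archiving "bot" (illustration only; the URL is a placeholder).
# Schedule it quarterly with cron or Task Scheduler -- no new statute required.
from datetime import date
from pathlib import Path

import requests

TOS_URL = "https://www.example.com/terms-of-service"  # hypothetical platform TOS page
ARCHIVE_DIR = Path("tos_archive")


def snapshot_tos(url: str = TOS_URL) -> Path:
    """Download the current public TOS and save a dated copy for later comparison."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    ARCHIVE_DIR.mkdir(exist_ok=True)
    outfile = ARCHIVE_DIR / f"tos_{date.today().isoformat()}.html"
    outfile.write_text(response.text, encoding="utf-8")
    return outfile


if __name__ == "__main__":
    print(f"Saved TOS snapshot to {snapshot_tos()}")
```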

The operational disclosures and statistics component would likely create new public data, but as explained above, it’s data that is worthless to consumers. Like the TOS publication and delivery provision, it feels more like a trap for technical enforcements than a provision that benefits California residents. It’s also almost certainly unconstitutional. The emphasis on Targeted Constitutionally Protected Content categories seems designed to change the editorial decision-making of the regulated services, which is a flat-out form of censorship; and even if Zauderer is the applicable test, it seems likely to fail that test as well.

So if this provision gets struck and the TOS publication and delivery provision doesn’t do anything helpful, it leaves the obvious question: why is the California legislature working on this and not the many other social problems in our state? The answer to that question is surely dispiriting to every California resident.

Reposted, with permission, from Eric Goldman’s Technology & Marketing Law Blog.

Posted on Techdirt - 29 September 2021 @ 06:23am

The SHOP SAFE Act Is A Terrible Bill That Will Eliminate Online Marketplaces

We’ve already posted Mike’s post about the problems with the SHOP SAFE Act that is getting marked up today, as well as Cathy’s post lamenting the lack of Congressional concern for what they’re damaging, but Prof. Eric Goldman wrote such a thorough and complete breakdown of the problems with the bill that we decided it was worth posting too.

[Note: this blog post covers Rep. Nadler’s manager’s amendment for the SHOP SAFE Act, which I think will be the basis of a committee markup hearing today. If Congress were well-functioning, draft bills going into markup would be circulated a reasonable time before the hearing, so that we can properly analyze them on a non-rush basis, and clearly marked as the discussion version so that we’re not confused by which version is actually the current text.]

The SHOP SAFE Act seeks to curb harmful counterfeit items sold through online marketplaces. That’s a laudable goal that I expect everyone supports. However, this bill is itself a giant counterfeit. It claims to focus on “counterfeits” that could harm consumer “health and safety,” but those are both lies designed to make the bill seem narrower and more balanced than it actually is.

Instead of protecting consumers, this bill gives trademark owners absolute control over online marketplaces by overturning Tiffany v. eBay. It creates a new statutory species of contributory trademark liability that applies to online marketplaces (defined more broadly than you think) selling third-party items that bear counterfeit marks and implicate “health and safety” (defined more broadly than you think), unless the online marketplace operator does the impossible and successfully navigates over a dozen onerous and expensive compliance obligations.

Because the bill makes it impossible for online marketplaces to avoid contributory trademark liability, this bill will drive most or all online marketplaces out of the industry. (Another possibility is that Amazon will be the only player able to comply with the law, in which case the law entrenches an insurmountable competitive moat around Amazon’s marketplace). If you want online marketplaces gone, you might view this as a good outcome. For the rest of us, the SHOP SAFE Act will reduce our marketplace choices, and increase our costs, during a pandemic shutdown when online commerce has become even more crucial. In other words, the law will produce outcomes that are the direct opposite of what we want from Congress.

In addition to destroying online marketplaces, this bill provides the template for how rightsowners want to reform the DMCA online safe harbor to make it functionally impossible to qualify for as well. In this respect, the SHOP SAFE Act portends how Congress will accelerate the end of the Web 2.0 era of user-generated content.


[The rest of this post is 4k+ words explaining what the bill does and why it sucks. You might stop reading here if you don’t want the gory/nerdy details.]

Who’s Covered by the Bill

The bill defines an “electronic commerce platform” as “any electronically accessed platform that includes publicly interactive features that allow for arranging the sale or purchase of goods, or that enables a person other than an operator of the platform to sell or offer to sell physical goods to consumers located in the United States.”

Clearly, the second part of that definition targets Amazon and other major marketplaces, such as eBay, Walmart Marketplace, and Etsy. I presume it also includes print-on-demand vendors that enable users to upload images, such as CafePress, Zazzle, and Redbubble (unless those vendors are considered to be retailers, not online marketplaces).

The first part of the definition includes services with “publicly interactive features that allow for arranging the sale or purchase of goods.” This is a bizarre way to describe any online marketplace, and it covers something other than enabling third-party sellers (that’s the second part of the definition), so what services does this describe? Read literally, all advertising “allow[s] for arranging the sale or purchase of goods,” so this law potentially obligates every ad-supported publisher to undertake the content moderation obligations the bill imposes on online marketplaces. That doesn’t make sense, because the bill uses the undefined term “listing” 11 times, and display advertising isn’t normally considered to be a listing. Still, this wording is unusual and broad — and you better believe trademark owners like its breadth. If the bill wasn’t meant to regulate all ads, the bill drafters should make that clear.

Like most Internet regulations nowadays, the bill distinguishes entities based on size. See my article with Jess Miers on how legislatures should do that properly. The bill applies to services that have “sales on the platform in the previous calendar year of not less than $500,000.” Some problems with this distinction:

  • The bill doesn’t define “platform,” so it’s unclear what revenues count. In Amazon’s case, is it only revenues from the marketplace or does it also include the revenues from Amazon’s retailing function? If the latter, then the definition will pick up smallish online retailers that have small marketplace components.
  • The bill also doesn’t distinguish between gross and net revenue. So, for example, assume a site takes a 10% commission on sales. If a service has $500k in merchandise sales (gross revenue), but only keeps $50k in commissions (net revenue), is it covered by the law or not? I think the bill covers gross revenue, which means the bill reaches companies with small net revenues. (The short sketch after this list shows how much turns on that reading.)
  • As usual, the bill doesn’t provide a phase-in period. A service may not know its revenues until some time after the calendar year closed, but it would be obligated to comply with the law from the beginning of the calendar year. As usual, then, this forces services below the revenue threshold to comply anticipatorily in case they clear the threshold. How hard is it for bills to include a phase-in period?
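
As a toy illustration of the gross-versus-net question flagged above (my hypothetical numbers; only the $500,000 figure comes from the bill’s “not less than $500,000” language), the same marketplace is covered under a gross-sales reading and nowhere near the threshold under a net-revenue reading:

```python
# Toy illustration of the gross-vs-net ambiguity (hypothetical marketplace;
# only the $500,000 threshold comes from the bill).
THRESHOLD = 500_000  # dollars, "not less than $500,000" in "sales on the platform"

merchandise_sales = 500_000                                  # gross: total third-party sales arranged on the platform
commission_rate = 0.10
platform_commissions = merchandise_sales * commission_rate   # net: the $50,000 the platform actually keeps

covered_if_gross = merchandise_sales >= THRESHOLD            # True  -> regulated
covered_if_net = platform_commissions >= THRESHOLD           # False -> unregulated

print(f"Gross-sales reading: covered={covered_if_gross} (${merchandise_sales:,})")
print(f"Net-revenue reading: covered={covered_if_net} (${platform_commissions:,.0f})")
```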

I’d fret more about the $500k threshold, but it’s likely to be irrelevant anyway. The bill also applies to smaller services once they receive 10 NOCI notices over their lifetimes from all sources. (Unlike the other services, these services get a six-month phase-in period.)

To qualify as a NOCI, the notice must (1) refer to the SHOP SAFE Act, (2) “include an explicit notification of the 10-notice limit and the requirement of the platform to publish” the NOCI disclosures below (I have no idea what this element means), and (3) “identify a listing on the platform that reasonably could be determined to have used a counterfeit mark in connection with the sale, offering for sale, distribution, or advertising of goods that implicate health and safety.” (So, a NOCI counts against the 10-notice threshold if it “reasonably could be determined” that the listing was counterfeit, even if the NOCI is actually wrong.)

A month after getting its first NOCI, the service must publicly post an attestation that it has less than $500k in revenue along with a running tally of the number of NOCIs received… I guess for shits and giggles so that trademark owners can compete to be the one to put the service over the 10 NOCI threshold? I mean, even tiny services will quickly accrue 10 NOCIs. Indeed, I imagine rightsowners will coordinate their NOCIs to ensure that small services clear this threshold and are obligated to comply with the law. Thus, the 10 lifetime NOCIs threshold is a ruse to mislead people that smaller services aren’t governed by the law, when of course they will be.

What’s Regulated?

The law applies to counterfeit “goods that implicate health and safety,” defined as “goods the use of which can lead to illness, disease, injury, serious adverse event, allergic reaction, or death if produced without compliance with all applicable Federal, State, and local health and safety regulations and industry-designated testing, safety, quality, certification, manufacturing, packaging, and labeling standards.” I mean, pretty much every physical product meets this definition, right? Virtually any poorly-designed or nonconforming physical item has the capacity to cause personal injury. For example, electronic items that don’t comply with industry standards can cause physical harm from electrical charges, which means every electronic item is categorically within the bill’s scope even if the allegedly counterfeited item actually complies with industry standards. Now, replicate that analysis for other goods and tell me which categories of goods lack the capacity to cause harm. Once again, the “health and safety” framing is another deceptive ruse because the bill functionally applies to all goods, not just especially risky goods.

Overturning Tiffany v. eBay

In 2010, the Second Circuit issued a watershed decision about secondary trademark infringement. Essentially, the court held that eBay wasn’t liable for counterfeit sales of Tiffany items because eBay honored takedown notices and Tiffany’s claims sought to hold eBay accountable for generalized knowledge. That ruling has produced a kind of détente in the online secondary trademark infringement field, where we just don’t see broad counterfeiting lawsuits against online marketplaces any more.

The SHOP SAFE Act ends that détente. First, it creates a new statutory contributory trademark infringement claim for selling the regulated items. Second, the bill says that the new contributory claim doesn’t preempt other plaintiff claims, so trademark owners will still bring the standard statutory direct trademark infringement claim and common law contributory trademark claims (and dilution, false designation of origin, etc.). Third, online marketplaces nominally can try to “earn” a safe harbor from the new statutory contributory liability claim (but not from the other legal claims) by jumping through an onerous gauntlet of responsibilities. Those requirements will impose huge compliance costs, but those investments won’t prevent online marketplaces from being dragged into extraordinarily expensive and high-stakes litigation over eligibility for this defense. Fourth, the law imposes a proactive screening obligation, something that Tiffany v. eBay rejected. Fifth, unlike Tiffany v. eBay, generalized knowledge can create liability, and takedown notices aren’t required as a prerequisite to liability. Sixth, in litigation over direct trademark infringement and common law contributory trademark infringement claims, trademark owners can cite compliance/non-compliance with the defense factors against the online marketplace, putting the online marketplace in a worse legal position than they currently are in.

All told, the SHOP SAFE Act will functionally repeal the Tiffany v. eBay standard that has fostered the growth of online marketplaces for the last decade-plus, and usher in a new era of online shopping that will likely exclude online marketplaces entirely.

The “Safe Harbor” Preconditions

To earn protection from the newly created contributory trademark infringement doctrine, online marketplaces must perfectly implement all of the following 13 requirements:

1. Determine, and periodically confirm, that third-party sellers have a registered US agent for service or a designated “verified” US address for service. (Just wait until other countries require the equivalent from US-based online sellers on foreign marketplaces. A new frontier for a trade war.)

2. Verify the third-party seller’s identity, principal place of business, and contact information through “reliable documentation, including to the extent possible some form of government-issued identification.” (What is “reliable” documentation, and how much risk will online marketplaces be willing to take?)

3. Require the third-party seller to take reasonable steps to verify the authenticity of its goods and attest to those steps. This requirement doesn’t apply to sellers who sell less than $5k/yr and list no more than five of the same items per year. (Is the online marketplace liable if the seller doesn’t actually take reasonable steps? How can the online marketplace “require” independent sellers to do this?)

4. Impose TOS terms that the third-party seller (1) won’t use counterfeit marks, (2) consents to US jurisdiction, and (3) designates a US agent for service or has a verified US address for service. (Can trademark owners take advantage of the US jurisdiction consent between the online marketplace and its third-party sellers? Normally trademark owners aren’t third-party beneficiaries of that contract. Also, that consent isn’t limited to jurisdiction over counterfeit claims — it’s over everything the TOS might govern.)

5. Conspicuously display on the platform:

  • the third-party seller’s verified principal place of business,
  • contact information,
  • identity of the third-party seller, and
  • the country from which the goods were originally shipped by the third-party seller

But the online marketplace isn’t required to display “the personal identity of an individual, a residential street address, or personal contact information of an individual, and in such cases shall instead provide alternative, verified means of contacting the third-party seller.”

6. Conspicuously display “in each listing the country of origin and manufacture of the goods as identified by the third-party seller, unless such information was not reasonably available to the third-party seller and the third-party seller has identified to the platform the steps it undertook to identify the country of origin and manufacture of the goods and the reasons it was unable to identify the same.” This requirement doesn’t apply to sellers who sell less than $5k/yr and list no more than five of the same items per year.

7. Require third-party sellers to “use images that accurately depict the goods sold, offered for sale, distributed, or advertised on the platform.” (Does this create an affirmative obligation to include images? Though it’s rare, I believe some marketplace sellers currently sell their items without including any photo. Also, product shots have been a constant source of copyright litigation. The manufacturer can sue the seller for copying its shots; the manufacturer can sue for false advertising if non-official shots aren’t “accurate”; and freelancers love to sue over product shots they took, as well as over shots they think are too similar to the ones they took.)

8. Undertake “reasonable proactive measures for screening goods before displaying the goods to the public to prevent the use by any third-party seller of a counterfeit mark in connection with the sale, offering for sale, distribution, or advertising of goods on the platform. The determination of whether proactive measures are reasonable shall consider the size and resources of a platform, the available technological and non-technological solutions at the time of screening, the information provided by the registrant to the platform, and any other factor considered relevant by a court.” (This is the most coveted payload for trademark owners. Every rightsowner wants UGC services to engage in proactive screening. The screening won’t be limited to harmful counterfeit goods, and consider how courts will punish online marketplaces for undertaking this proactive screening in their analysis of direct and contributory trademark infringement.)

9. Provide “reasonably accessible electronic means by which a registrant and consumer can notify the platform of suspected use of a counterfeit mark.” (What are the odds that the consumer notifications will be made in good faith? Consider, in particular, how a dissatisfied buyer could weaponize this provision for reasons having nothing to do with counterfeiting. Note also how buyer complaints of counterfeiting, when not accurate — and buyers won’t necessarily know — could create scienter on the online marketplace’s part, and the countermoves by the marketplace could work to the detriment of the marketplace, the seller, AND the manufacturer by reducing their online marketplace sales.)

10. Implement “a program to expeditiously disable or remove from the platform any listing for which a platform has reasonable awareness of use of a counterfeit mark in connection with the sale, offering for sale, distribution, or advertising of goods.” The online marketplace’s scienter may be inferred from:

  • information regarding the use of a counterfeit mark on the platform generally,
  • general information about the third-party seller,
  • identifying characteristics of a particular listing, or
  • other circumstances as appropriate.

(This differs from the DMCA online safe harbor in many ways. The most obvious is that online marketplaces can be liable for the new statutory contributory trademark claim even if trademark owners never send them takedown notices. Among other things, this factor also emboldens trademark owners to send notices like “there are counterfeits on your site — find and remove them” without identifying any specific infringing listing. It seems those generalized notices would confer scienter sufficient to impose contributory trademark infringement. This, of course, directly rejects the Tiffany v. eBay precedent, which said such generalized knowledge wasn’t enough.)

An online marketplace can restore a listing “if, after an investigation, the platform reasonably determines that a counterfeit mark was not used in the listing.” (How many services will want to do the investigation, and how confident will the service be that the trademark owner will agree that they “reasonably” determined the listing wasn’t counterfeit? In practice, once a listing is down, it ain’t going back up.)

11. Implement “a publicly available, written policy that requires termination of a third-party seller that reasonably has been determined to have engaged in repeated use of a counterfeit mark.” (Note how this combines several parts of the DMCA online safe harbor, including the obligation to adopt a repeat infringer policy, to publish the repeat infringer policy, and to reasonably implement the repeat infringer policy.)

Apparently online marketplaces are free to create their own repeat termination policy, but the bill says “Use of a counterfeit mark by a third-party seller in 3 separate listings within 1 year typically shall be considered repeated use.” (This sidesteps the obvious question of how services “know” that a seller used the counterfeit mark. Remember, in obligation #10, online marketplaces must terminate listings when the service has a “reasonable awareness,” which isn’t conclusive proof that counterfeiting actually took place. So does each removal based on that lowered scienter count as one of the three strikes?)

Online marketplaces can reinstate terminated sellers in some circumstances, none of which have any realistic chance of happening.

12. Take reasonable measures to ensure terminated sellers don’t reregister on the service. (Another item coveted by rightsowners: a permanent staydown.)

13. Provide “a verified basis to contact a third-party seller upon request by a registrant that has a bona fide belief that the seller has used a counterfeit mark.” (I didn’t understand this provision because the trademark owners should already have all of the information they need to blast counterfeiters from obligation #5).

Whew! Could trademark owners ask for anything more? These obligations are pretty much their dream wishlist.

Liability for Bogus NOCIs

The bill creates a new cause of action for bogus takedown notices sent to online marketplaces. I’m going to dig into this cause of action, but no need to master the details: Congress has learned absolutely nothing from the failure of 17 USC 512(f), so there’s no possible way for any plaintiff to benefit from this provision.

The cause of action: “Any person who knowingly makes any material misrepresentation in a notice to an electronic commerce platform that a counterfeit mark was used in a listing by a third party seller for goods that implicate health and safety shall be liable in a civil action for damages by the third-party seller that is injured by such misrepresentation, as the result of the electronic commerce platform relying upon such misrepresentation to remove or disable access to the listing, including temporary removal or disablement.” If the third-party seller declines to sue the trademark owner, the online marketplace can sue (with the third-party seller’s consent) if the trademark owner sent 10+ bogus notices. The bill provides statutory damages that range between $2,500-$75,000 per notice.

That sounds swell, but it’s useless for two reasons.

First, the bill doesn’t require trademark owners to send takedown notices in the first place. Trademark owners can sue online marketplaces for contributory trademark infringement without ever sending a takedown notice. So if trademark owners face potential liability for sending bogus takedown notices, why send them at all? Or trademark owners will send very generalized notices that don’t trigger liability for them but will trigger liability for the online marketplace.

Second, and more importantly, the cause of action requires a scienter that plaintiffs can’t prove. How can a third-party seller or online marketplace show the trademark owner knowingly made a material misrepresentation in their takedown notices? They can’t — unless they find smoking-gun evidence in discovery, but their complaints won’t survive a motion to dismiss sufficient to get to discovery. So there’s no way to win.

The “knowingly makes any material misrepresentation” standard is virtually identical to the 512(f) standard (“knowingly materially misrepresents”), so I expect courts will interpret the scienter standards the same. The Ninth Circuit killed 512(f) claims when it concluded in the Rossi case that the copyright owner’s subjective belief of infringement was good enough to defeat liability. As a result, over the past 20+ years, there has been only a small handful of 512(f) cases that have led to damages, and those few mostly involve default judgments. If trademark owners similarly can defend against this claim based on their subjective belief that counterfeiting is taking place, plaintiffs cannot win.

This provision is yet another ruse. It’s designed to make people think there’s a disincentive against trademark owner overclaims; but anyone who knows the 512(f) caselaw knows that this cause of action is completely worthless and a waste of everyone’s time.


Selected Problems with the Bill

What is “Counterfeiting”? The bill defines “counterfeit mark” as “a counterfeit of a mark” (I can’t make this up). But there’s actually a lot of confusion about what constitutes counterfeiting. See, e.g., my post about the trademark enforcements involving the “EMOJI” word mark, where the mark’s owner takes the position that a marketplace item using the term “emoji” in the product name or description “counterfeits” the mark (seriously, look at the example from their exhibit and tell me that’s not bogus). A similar issue arises with print-on-demand services, where trademark owners take the position that any variation of their mark being manufactured onto a good constitutes counterfeiting, even if it’s parodic or an obvious joke. Thus, the bill’s grammar restricting the “use of counterfeit marks” potentially covers a much wider range of activity than classic piratical counterfeiting. Trademark owners will weaponize that ambiguity.

Lack of State Preemption. The Lanham Act doesn’t preempt state trademark laws, so this law isn’t likely to preempt any state law equivalents. It also would leave in place laws like the Arkansas Online Marketplace Consumer Inform Act, which has overlapping but different requirements than the SHOP SAFE Act. That overlap jacks up compliance costs and risks even more. While the SHOP SAFE Act is terrible and should never pass, it is even more terrible without a preemption provision.

Country of Origin Problems. The mandatory reporting of products’ country of origin is a liability trap. The bill excludes the smallest sellers from making this disclosure, but plenty of small-scale sellers will be obligated nonetheless, and they (and even bigger players) are sure to botch this because the law is confusing and the information won’t always be available to resellers. Any error on country-of-origin disclosures sets up the third-party sellers for false advertising claims. (Per Malwarebytes, the online marketplace should qualify for Section 230 protection for the Lanham Act false advertising claims). This gives trademark owners a second way of targeting third-party sellers: even if those sellers aren’t engaging in counterfeiting or any trademark infringement at all, country-of-origin false advertising claims can still be weaponized to drive them out of the marketplace.

Repudiation of the 512 Deal. The DMCA online safe harbor struck a grand bargain: online copyright enforcement would be a shared responsibility. Copyright owners would identify infringing items, and service providers would then remove those items. There has never been a trademark equivalent of the DMCA, but the Tiffany v. eBay case has de facto created a similar balance. Unsurprisingly, copyright owners hate the DMCA’s shared responsibility, and they have tried to undermine that deal through lawfare in courts. Trademark owners similarly want a different deal.

This bill, as Congress’ first trademark complement to the DMCA, emphatically repudiates the DMCA deal. It gives trademark owners everything they could possibly want: turning online marketplaces into their trademark enforcement deputies, getting them to proactively screen for infringing items, making them wipe out listings without having to send listing-by-listing notices, upfront disclosure of the information needed to sue the sellers (rather than going through the 512(h) subpoena process), and permanent staydown of allegedly recidivist sellers.

Not only does this represent terrible trademark policy, but it’s a preview of how copyright owners will force DMCA safe harbor reform. They will want all of the same things: proactive monitoring of infringement, no need to send item-specific notices, authentication of users before they can upload, and staydown requirements. The SHOP SAFE Act isn’t just about counterfeits; it’s a proxy war for the next round of online copyright reform, and the open Internet doesn’t have a chance of surviving either reform.

“Reasonableness” Isn’t Reasonable to Online Marketplaces. I’ve blogged many times about how a “reasonableness” standard of liability in the online context is a fast-track to the end of UGC. As a legal standard, “reasonableness” often can’t be resolved on motions to dismiss because it’s fact-intensive and defendants can’t tell their side of the story at that procedural stage. As a result, “reasonableness” standards substantially increase the odds that lawsuits survive the motion to dismiss and get into discovery, which raises the defense costs by a factor of 10 or more.

The bill contains 21 instances of the term “reasonable” or variations of it. Each and every one of those is a fight the defendants can’t cost-justify. That means defendants will give up at the earliest opportunity or, more likely, “self-censor” to avoid any potential courtroom battle over their “reasonableness.”

Too Many Defense Factors Make the Defenses Unwinnable. More generally, to avoid the new cause of action, online marketplaces must win each and every one of the 13 preconditions (many of which have subparts). In other words, they must do everything perfectly AND prove all 13 elements to the court’s satisfaction. Safe harbors with that many prerequisites are extraordinarily costly because the plaintiffs can contest each element and engage in expensive discovery related to them. The DMCA online safe harbor has functionally failed for this reason: it’s too expensive for startups to prove they qualify, and copyright owners can weaponize those costs intentionally to drive entities out of the industry. This has turned the DMCA online safe harbor into a sport of kings, so only larger companies can afford it, which has exacerbated the concerns about “Big Tech” market consolidation. The SHOP SAFE Act replicates the structure that failed in the DMCA online safe harbor, so it’s predictable that the SHOP SAFE defenses also will fail to help out online marketplaces, leaving them highly vulnerable to the new cause of action.

Goodbye, Scalability. The Internet enables scalable operations in new and important ways. That scalability has created new functionality that never existed in the offline world — like online marketplaces. The SHOP SAFE Act blows scaling apart. Not only do the “reasonableness” requirements require careful attention to the facts, but the bill also makes it impossible to have true self-service signups of third-party sellers. Instead, there will need to be several levels of human review of new signups to satisfy the various authentication requirements. Furthermore, the proactive screening requirement will also require substantial human monitoring because determining “counterfeits” cannot be delegated solely to the machines. The absence of scalability and the need for substantial human labor will reward services that are really small, like a one-person operation, or really large, like a market-dominant player. Thus, SHOP SAFE’s elimination of scalability will exacerbate competition problems in the online retailing world.

Who Cares About Privacy? Trademark owners demanded the WHOIS system to make it easier for them to sue domain name registrants. The WHOIS system has collapsed due to the GDPR, which exposed how the WHOIS system was highly privacy-invasive. The SHOP SAFE Act doubles down on privacy invasions in two ways.

First, it requires online marketplaces to collect lots of sensitive information they don’t want, such as government-issued IDs. Those databases are honeypots for law enforcement and hackers.

Second, it requires publication of some information that sellers might consider private, especially if they are small operations with close identity between their professional and personal lives. (The bill’s exclusion of some private information incompletely addresses this concern.) For example, that information can be highly sensitive for sellers of controversial items, who can be targeted by trolls and haters for local ostracism or physical attacks like swatting, and competitors can also use this information to engage in anti-competitive harassment.

Just like WHOIS struck a lopsided balance between trademark owners’ interests and registrant privacy, the SHOP SAFE Act similarly tosses privacy concerns under the trademark owners’ bus.

Why Would Anyone Support This Bill? This bill will kill online marketplaces and make markets less efficient. Where the online marketplace owner has a retailing function, like Amazon and Walmart, they can shut down the marketplace and subsume some items into their standard retailing function. That transition cuts off the long tail of items consumers expect to find online, and it burns hundreds of thousands of independent businesses that currently thrive in the marketplace system but become irrelevant in a retailing model. Meanwhile, standalone online marketplaces, like eBay and Etsy, have to revamp their entire business or exit the industry entirely, which further reduces competition for online retailing. The net competitive effects, then, are that consumers will pay higher prices, lose their ability to find long-tail items, and incur higher search costs to do so, while existing market leaders will consolidate their dominant positions, and hundreds of thousands of people will lose their jobs.

In contrast, who wins in this situation? The only winners are trademark owners, some of whom hate online marketplaces because they are tired of seeing their goods leak out of official distribution channels into more price-discounted online marketplaces, because they hate competing against used items of the goods they sell, and because some counterfeiting does take place there (as it does in the offline world too). To address those concerns, they are willing to burn down the entire online marketplace industry. What I can’t understand is why any members of Congress would be so willing to give trademark owners their wishlist when the results would be so disadvantageous for their constituents. The trademark owner lobby is strong, but our governance systems should be strong enough to resist terrible and selfish legislation like this.

Reposted with permission from Eric Goldman’s Technology & Marketing Law Blog