gaurav.laroia's Techdirt Profile

Posted on Techdirt - 31 August 2020 @ 12:00pm

Fighting Hate Speech Online Means Keeping Section 230, Not Burying It

At Free Press, we work in coalition and on campaigns to reduce the proliferation of hate speech, harassment, and disinformation on the internet. It’s certainly not an easy or uncomplicated job. Yet this work is vital if we’re going to protect the democracy we have and also make it real for everyone — remedying the inequity and exclusion caused by systemic racism and other centuries-old harms seamlessly transplanted online today.

Politicians across the political spectrum desperate to “do something” about the unchecked political and economic power of online platforms like Google and Facebook have taken aim at Section 230, passed in 1996 as part of the Communications Decency Act. Changing or even eliminating this landmark provision appeals to many Republicans and Democrats in DC right now, even if they hope for diametrically opposed outcomes.

People on the left typically want internet platforms to bear more responsibility for dangerous third-party content and to take down more of it, while people on the right typically want platforms to take down less. Or at least less of what’s sometimes described as “conservative” viewpoints, which too often in the Trump era has been unvarnished white supremacy and unhinged conspiracy theories.

Free Press certainly aligns with those who demand that platforms do more to combat hate and disinformation. Yet we know that keeping Section 230, rather than radically altering it, is the way to encourage that. That may sound counter-intuitive, but only because of the confused conversation about this law in recent years.

Preserving Section 230 is key to preserving free expression on the internet, and to making it free for all, not just for the privileged. Section 230 lowers barriers for people to post their ideas online, but it also lowers barriers to the content moderation choices that platforms have the right to make.

Changes to Section 230, if any, have to retain this balance and preserve the principle that interactive computer services are legally liable for their own bad acts but not for everything their users do in real time and at scale.

Powerful Platforms Are Still Powering Hate, and Only Slowly Changing Their Ways

Online content platforms like Facebook, Twitter and YouTube are omnipresent. Their global power has resulted in privacy violations, facilitated civil rights abuses, given white supremacists and other violent groups a place to organize, and enabled foreign-election interference and the viral spread of disinformation, hate and harassment.

In the last few months some of these platforms have begun to address their role in the proliferation and amplification of racism and bigotry. Twitter recently updated its policies to ban links to hateful content that resides offsite. That resulted in the de-platforming of David Duke, who had systematically skirted Twitter’s rules by linking to hateful content across the internet while following some limits for what he said on Twitter itself.

Reddit also updated its policies on hate and removed several subreddits. Facebook restricted “boogaloo” and QAnon groups. YouTube banned several white supremacist accounts. Yet despite these changes and our years of campaigning for these kinds of shifts, hate still thrives on these platforms and others.

Some in Congress and on the campaign trail have proposed legislation to rein in these companies by changing Section 230, which shields platforms and other websites from legal liability for the material their users post online. That’s coming from those who want to see powerful social networks held more accountable for third-party content on their services, but also from those who want social networks to moderate less and be more “neutral.”

Taking away Section 230 protections would alter the business models of not just big platforms but every site with user-generated material. And modifying or even getting rid of these protections would not solve the problems often cited by members of Congress who are rightly focused on racial justice and human rights. In fact, improper changes to the law would make these problems worse.

That doesn’t make Section 230 sacrosanct, but the dance between the First Amendment, a platform’s typical immunity for publishing third-party speech, and that same platform’s full responsibility for its own actions, is a complex one. Any changes proposed to Section 230 should be made deliberately and delicately, recognizing that amendments can have consequences not only unintended by their proponents but harmful to their cause.

Revisionist History on Section 230 Can’t Change the Law’s Origins or Its Vitality

To follow this dance it’s important to know exactly what Section 230 is and what it does.

Written in the early web era in 1996, the first operative provision in Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

When a book or a newspaper goes to print, its publisher is legally responsible for all the words printed. If those words are plagiarized, libelous, or unlawful, then that publisher may face legal repercussions. In Section 230’s terms, such publishers are the law’s “information content provider[s].”

Wiping away Section 230 could revert the legal landscape to the pre-1996 status quo. That’s not a good thing. At the time, a pair of legal decisions had put into a bind any “interactive computer service” that merely hosted or transmitted content for others. One case held that a web platform that did moderate content could be sued for libel (just as the original speaker or poster could be) if that alleged libel slipped by the platform’s moderators. The other held that sites that did not moderate were not exposed to such liability.

Before Section 230 became law, this pair of decisions meant websites were incentivized to go in one of two directions: either don’t moderate at all, tolerating not just off-topic comments but all kinds of hate speech, defamation, and harassment on their sites; or vet every single post, leading inexorably to massive takedowns and removal of anything that might plausibly subject them to liability for statements made by their users.

The authors of Section 230 wanted to encourage the owners of websites and other interactive computer services to curate content on their sites as they saw fit. But under the case law of the time, moderating at all meant those websites could be just as responsible as newspapers for anything anyone said on their platforms.

In that state of affairs, someone like Mark Zuckerberg or Jack Dorsey would have the legal responsibility to approve every single post made on their services. Alternatively, they would have needed to take a complete, hands-off approach. The overwhelming likelihood is that under a publisher-liability standard those sites would not exist at all, at least not in anything like their present form.

There’s an awful lot we’re throwing out with the bathwater if we attack not just the abuses of ad-supported and privacy-invasive social-media giants but all sites that allow users to share content on platforms they don’t own. Smaller sites likely couldn’t make a go of it at all, even if a behemoth like Facebook or YouTube could attempt the monumental task of bracing for potential lawsuits over the thousands of posts made every second of the day by their billions of users. Only the most vetted, sanitized, and anodyne discussions could take place in whatever became of social media. Or, at the other extreme, social media would descend into an unfiltered and toxic cesspool of spam, fraudulent solicitations, porn, and hate.

Section 230’s authors struck a balance for interactive computer services that carry other people’s speech: platforms should have very little liability for third-party content, except where federal criminal law or intellectual property law applies.

As a result, websites of all sizes exist across the internet. A truly countless number of these — like Techdirt itself — have comments or content created by someone other than the owner of the website. The law preserved the ability of those websites, regardless of their size, to tend to their own gardens and set standards for the kinds of discourse they allow on their property without having to vet and vouch for every single comment.

That was the promise of Section 230, and it’s one worth keeping today: an online environment where different platforms would try to attract different audiences with varying content moderation schemes that favored different kinds of discussions.

But we must acknowledge where the bargain has failed too. Section 230 is necessary but not sufficient to make competing sites and viewpoints viable online. We also need open internet protections, privacy laws, antitrust enforcement, new models for funding quality journalism in the online ecosystem, and lots more.

Taking Section 230 off the books isn’t a panacea or a pathway to all of those laudable ends. Just the opposite, in fact.

We Can’t Use Torts or Criminal Law to Curb Conduct That Isn’t Tortious or Criminal

Hate and unlawful activity still flourish online. Even in response to activist pressure and advertiser boycotts, a platform like Facebook hasn’t done enough to further modify its policies or to consistently enforce existing terms of service that ban hateful content.

There are real harms that lawmakers and advocates see when it comes to these issues. It’s not just an academic question around liability for carrying third-party content. It’s a life and death issue when the information in question incites violence, facilitates oppression, excludes people from opportunities, threatens the integrity of our democracy and elections, or threatens our health in a country dealing so poorly with a pandemic.

Should online platforms be able to plead Section 230 if they host fraudulent advertising or revenge porn? Should they avoid responsibility for facilitating either online or real-world harassment campaigns? Or use 230 to shield themselves from responsibility for their own conduct, products, or speech?

Those are all fair questions, and at Free Press we’re listening to thoughtful proposed remedies. For instance, Professor Spencer Overton has argued forcefully that Section 230 does not exempt social-media platforms from civil rights laws when they run targeted ads that violate voting rights and perpetuate discrimination.

Sens. John Thune and Brian Schatz have steered away from a takedown regime like the automated one that applies to copyright disputes online, and towards a more deliberative process that could make platforms remove content once they get a court order directing them to do so. This would make platforms more like distributors than publishers, like a bookstore that’s not liable for what it sells until it gets formal notice to remove offending content.

However, not all amendments proposed or passed in recent times have been so thoughtful, in our view. Changes to 230 must take the possibility of unintended consequences and overreach into account, no matter how surgical proponents of a change may think an amendment would be. Recent legislation shows the need for clearly articulated guardrails. In an understandable attempt to cut down on sex trafficking, a law commonly known as FOSTA (the “Fight Online Sex Trafficking Act”) changed Section 230 to make websites liable under state criminal law for the knowing “promotion or facilitation of prostitution.”

FOSTA and the state laws it ties into did not precisely define what those terms meant, nor set the level of culpability for sites that unknowingly or negligently host such content. As a result, sites used by sex workers to share information about clients, or even used for discussions about LGBTQIA+ topics that had nothing to do with solicitation, were shuttered.

So FOSTA chilled lawful speech, but it also made sex workers less safe and the industry less accountable, harming some of the people the law’s authors fervently hoped to protect. This was the judgment of advocacy groups like the ACLU that opposed FOSTA all along, but also of academics who support changes to Section 230 yet concluded FOSTA’s final product was “confusing” and not “executed artfully.”

That kind of confusion and poor execution is possible even when some of the targeted conduct and content is clearly unlawful. But rewriting Section 230 to facilitate the takedown of hate speech that is not currently unlawful would be even trickier and fundamentally incoherent. Saying platforms ought to be liable for speech and conduct that would not expose the original speaker to liability would have a chilling impact, and likely still wouldn’t lead to sites making consistent choices about what to take down.

The Section 230 debate ought to be about when it’s appropriate or beneficial to impose legal liability on parties hosting the speech of others. Perhaps there should also be a broader debate about the legal limits of speech itself. But that debate has to happen honestly and on its own terms, not get shoehorned into the 230 debate.

Section 230 Lets Platforms Choose To Take Down Hate

Platforms still aren’t doing enough to stop hate, but what they are doing is in large part thanks to having 230 in place.

The second operative provision in the statute is what Donald Trump, several Republicans in Congress, and at least one Republican FCC commissioner are targeting right now. It says “interactive computer services” can “in good faith” take down content not only if it is harassing, obscene or excessively violent, but also if it is merely “otherwise objectionable,” whether or not that material is “constitutionally protected.”

That’s what much hate speech is, at least under current law. And platforms can take it down thanks not only to the platforms’ own constitutionally protected rights to curate, but because Section 230 lets them moderate without exposing themselves to publisher liability as the pre-1996 cases suggested.

That gives platforms a freer hand to moderate their services. It lets Free Press and its partners demand that platforms enforce their own rules against the dissemination of hateful or otherwise objectionable content that isn’t unlawful, but without tempting platforms to block a broader swath of political speech and dissent up front.

Tackling the spread of online hate will require a more flexible multi-pronged approach that includes the policies recommended by Change the Terms, campaigns like Stop Hate for Profit, and other initiatives. Platforms implementing clearer policies, enforcing them equitably, enhancing transparency, and regularly auditing recommendation algorithms are among these much-needed changes.

But changing Section 230 alone won’t answer every question about hate speech, let alone about online business models that suck up personal information to feed algorithms, ads, and attention. We need to change those through privacy legislation. We need to fund new business models too, and we need to facilitate competition between platforms on open broadband networks.

We need to make huge corporations more accountable by limiting their acquisition of new firms, changing stock voting rules so people like Mark Zuckerberg aren’t the sole emperors over these vastly powerful companies, and giving shareholders and workers more rights to ensure that companies are operated not just to maximize revenue but in socially responsible ways as well.

Preserving not just the spirit but the basic structure of Section 230 isn’t an impediment to that effort; it’s a key part of it.

Gaurav Laroia and Carmen Scurato are both Senior Policy Counsel at Free Press.

Posted on Techdirt - 5 June 2020 @ 12:00pm

Coronavirus Surveillance Is Far Too Important, And Far Too Dangerous, To Be Left Up To The Private Sector

Months into the global pandemic, governments, think tanks, and companies have begun releasing comprehensive plans to reopen the economy, even though the world will have to wait a year or longer for the universal deployment of an effective vaccine.

A big part of many of these plans is the digital tools, apps, and public-health surveillance projects that could be used to contain the spread of COVID-19. But even if they’re effective, these tools must be subject to rigorous oversight and laws preventing their abuse. Corporate America is already contemplating mandatory worker testing and tracking. Digital COVID passports that could grant those with immunity or an all-clear from a COVID test the right to enter stores, malls, hotels, and other spaces may well be on the way.

We must be ready to watch the watchers and guard against civil rights violations.

Many governments and pundits are turning to tech companies that are promising digital contact tracing applications and services to augment the capacity of manual contact tracers, as they work to identify transmission chains and isolate people exposed to the virus. Yet civil society groups are already highlighting the serious privacy implications of such tools, underscoring the need for robust privacy protections.

The potential for law enforcement and corporate actors alike to abuse these tracking systems is just too great to ignore. For their part, most democratic governments have recognized that voluntary adoption of this technology, rather than attempts at state coercion, is more likely to encourage widespread use of these apps.

But these applications are not useful unless significant percentages of cellphone users use them. An Oxford University study suggests that for a similar app to successfully suppress the epidemic in the United Kingdom, 80 percent of British cellphone users would have to use it, which equates to 56 percent of the overall UK population. If the numbers for a digital contact tracing program to succeed stateside were similar, that would mean activating more than 100 million users.
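
As a rough check on that extrapolation, here is the arithmetic spelled out (the population totals are approximate 2020 figures added for illustration; they are not drawn from the study itself):

$$ \frac{0.56}{0.80} = 0.70 \;\Rightarrow\; \text{cellphone users are implicitly about 70\% of the UK population} $$

$$ 0.56 \times 330\ \text{million (approx. US population)} \approx 185\ \text{million people} $$

On those assumptions, the “more than 100 million users” figure is, if anything, a conservative floor for the United States.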

The level of adoption will dictate just how well these technologies prevent the spread of the virus, but no matter how widespread such voluntary adoption may be, there is still potential for coercion, abuse, and targeting of specific users and communities without their consent. Some companies and universities are already planning to develop their own contact tracing systems and require their employees or students to participate. The consulting firm PricewaterhouseCoopers is advising companies on how to create these systems, and other smaller tech firms are designing Bluetooth beacons to facilitate the tracking of workers without smartphones.

An unaccountable regime of COVID surveillance could represent a great near-term threat to civil rights and privacy. Already marginalized communities suffering most from this crisis are the most exposed to the capricious whims of corporate leaders eager to restart supply chains and keep the manufacturing and service sector operating.

Essential workers are subject to serious health risks while doing their jobs during a pandemic, and employers mandating use of these technologies without public oversight creates another risk to worker rights. This paints a particularly tragic picture for the Black community, which has been disproportionately affected by the pandemic in terms of sickness, death, and unemployment.

Black and Latinx people are more likely to work as cashiers in grocery stores, in nursing homes, or in other service-industry jobs that make infection far more likely. Many such workers are already subject to pervasive and punitive workplace surveillance regimes. But now, there may be real public-health equities at play. When these workers go to work, they have to do so in close proximity to others. Employers must protect them and digital tracking tools may well be part of saving lives. But that balance ought to be struck by public-health officials and worker-safety authorities in consultation with affected employees.

This system of private health surveillance may not just affect workers. Grocery store, retail, and restaurant owners, eager to deploy this kind of technology to regain the confidence of shoppers, may well see the logic in incentivizing widespread public deployment as well.

Those same stores could offer a financial incentive to customers who can prove they have a contact-tracing app installed on their phone, or they could integrate it into already existing customer loyalty apps. Coordinated efforts from businesses to mitigate losses due to sick workers or the threat of repeated government shutdowns could make incentivizing or demanding COVID-passports worth the investment to them. We may well find ourselves in a situation where a digitally checkpointed mall, Whole Foods, or Walmart feels like an oasis — the safest place in the world outside our homes.

Unaccountable deployment of these systems threatens to create further divides between workers and consumers, the tracked and untracked, or perilous division between those who can afford repeated testing and those who can’t.

So far, few officials have weighed these tradeoffs. As of yet, the only federal legal guidance on these questions has come from the Equal Employment Opportunity Commission, which has ruled that employers can legally institute mandatory temperature checks and other medical exams as conditions of continued employment.

Lawmakers have to do more. They must provide protections against the unauthorized use of this information and not allow access to places of public accommodation – a core civil right – to be determined by a mere app. We must seriously consider what it would mean for a free society, should businesses find it makes financial sense to invest in their own health-surveillance systems or deny people access to corner markets or grocery stores if they aren’t carrying the right pass on their person.

We do not have to be resigned to the deployment of a permanent state surveillance apparatus or the capriciousness of the private sector. If our post-9/11 experience is a guide, then we know that unaccountable surveillance infrastructure implemented during a crisis is wildly difficult to dismantle.

We must not construct a recovery that casts a needless decades-long shadow over our society, entrenches the power of large corporations, and further exacerbates class and racial divides. Governments must proactively decide the permissible uses and limits of this technology and the data it collects, and they must demand that these surveillance systems, private or otherwise, be dismantled at the end of the crisis.

Gaurav Laroia is Senior Policy Counsel at the consumer group Free Press, working alongside the policy team on topics ranging from internet-freedom issues like Net Neutrality and media ownership to consumer privacy and government surveillance.
