The Tech Policy Greenhouse is an online symposium where experts tackle the most difficult policy challenges facing innovation and technology today. These are problems that don't have easy solutions, where every decision involves tradeoffs and unintended consequences, so we've gathered a wide variety of voices to help dissect existing policy proposals and better inform new ones.

Intermediary Liability And Responsibilities Post-Brexit

from the the-entire-game-has-changed dept

This is a peculiar time to be an English lawyer. The UK has one foot outside the EU, and (on present intentions) the other foot will join it when the current transitional period expires at the end of 2020.

It is unclear how closely tied the UK will be to future EU law developments following any trade deal negotiated with the EU. As things stand, the UK will not have to implement future EU legislation, and is likely to have considerable freedom in many areas to depart from existing EU legislation.

The UK government has said that it has no plans to implement the EU Copyright Directive adopted in April 2019. Nor does it seem likely that it would have to follow whatever legislation may result from the European Commission’s proposals for an EU Digital Services Act. Conversely, the government has also said that it has no current plans to change the existing intermediary liability provisions of the EU Electronic Commerce Directive, or the Directive’s approach to prohibition of general monitoring obligations.

Looking across the Atlantic, there is the prospect of a future trade agreement between the UK and the USA. That has set off alarm bells in some quarters that the US government will want the UK to adopt an intermediary liability shield modeled on S.230 of the Communications Decency Act.

Domestically, the UK government is developing its Online Harms plans. The proposed legislation would impose a legal duty on intermediaries that share user-generated content, and on search engines, to prevent or inhibit many varieties of illegal or harmful UGC. Although branded a duty of care, the proposal is more akin to a broadcast-style content regulatory regime than to a duty of care as a tort lawyer would understand it. The regime would most likely be managed and enforced by the current broadcast regulator, Ofcom. As matters stand, the legislation would not define harm, leaving Ofcom to decide (subject to some specific carve-outs) what should be regarded as harmful.

All this is taking place against the background of the techlash. This is not the place to get into the merits and demerits of that debate. The aim of this piece is to take an educational ramble around the UK and EU legal landscape, pausing en route to inspect and illuminate some significant features.

Liability Versus Responsibilities

The tour begins by drawing a distinction between liability and responsibilities.

In the mid-1990s the focus was mostly on liability: the extent to which an intermediary can be held liable for the unlawful activities and content of its users. The landmark provisions were S.230 CDA 1996 and S.512 DMCA 1998 in the US, and Articles 12 to 14 of the Electronic Commerce Directive 2000 in the EU.

Liability presupposes the user doing something unlawful on the intermediary’s platform. (Otherwise, there is nothing for the intermediary to be liable for.) The question is then whether the platform, as well as the user, should be made liable for the user’s unlawful activity – and if so, in what circumstances. The risk (or otherwise) of potential liability may encourage the intermediary to act in certain ways. Liability regimes incentivise, but do not mandate.

Over time, the policy focus has expanded to take in responsibilities: putting an intermediary under a positive obligation to take action in relation to user content or activity.

A mandatory obligation to prevent users behaving in particular ways is different from being made liable for their unlawful activity. Liability arises from a degree of involvement in the primary unlawful activity of the user. Imposed responsibility does not necessarily rest on a user’s unlawful behavior. The intermediary is placed under an independent, self-standing obligation – one that it alone can breach.

Responsibilities Imposed By Court Orders

Responsibilities first manifested themselves as mandatory obligations imposed on intermediaries by specific court orders, but still predicated on the existence of unlawful third party activities.

In the US this development withered on the vine with SOPA/PIPA in 2012. Not so in the EU, where copyright site blocking injunctions can be (and have often been) granted against internet service providers under Article 8(3) of the InfoSoc Directive. The Intellectual Property Enforcement Directive requires similar injunctions to be available for other IP rights. In the UK it is established that a site blocking injunction can be granted based on registered trade marks, and potentially in respect of other kinds of unlawful activity.

Limits to the actions that court orders can oblige intermediaries to take in respect of third party activities have been explored in numerous cases: amongst them, at EU Court of Justice level, detection and filtering of copyright infringing files in SABAM v Scarlet and SABAM v Netlog; detection and filtering of equivalent defamatory content in Glawischnig-Piesczek v Facebook; and worldwide delisting in Google v CNIL.

Such court orders tend not to be conceptualized in terms of remedying a breach by the intermediary. Rather, they are based on efficiency: the intermediary, as a choke point, should be co-opted as being in the best position to reduce unlawful activity by third parties. In UK law at least, the intermediary has no prior legal duty to assist – only to comply with an injunction if the court sees fit to grant one.

Responsibilities Imposed by Duties Of Care

Most recently the focus on intermediary responsibilities has broadened beyond specific court orders. It now includes the idea of a prior positive obligation, imposed on an intermediary by the general law, to take steps to reduce risks arising from user activities on the platform.

This kind of obligation, frequently labelled a duty of care, is contemplated by the UK Online Harms proposals and may form part of a future EU Digital Services Act.

In the form in which it has been adapted for the online sphere, a duty of care would impose positive obligations on the intermediary to prevent users from harming other users (and perhaps non-users). Putting aside the vexed question of what constitutes harm in the context of online speech, a legal responsibility to prevent activities of third parties is far from the norm. A typical duty of care is owed in respect of someone’s own acts, not to prevent acts of third parties.

Although conceptually distinct from liability, an intermediary duty of care can interact and overlap with it. For example, a damages claim framed as breach of a duty of care may in some circumstances be barred by the ECD liability shields. In McFadden the rightsowner sought to hold a Wi-Fi operator liable for damages in respect of copyright infringement by users, founded on an allegation that the operator had breached a duty to secure its network. The CJEU found that the claim for damages was precluded by the Article 12 conduit shield, even though the claim was framed as breach of a duty rather than as liability for the users’ copyright infringement as such.

At the other end of the spectrum, the English courts have held that if a regulatory sanction is sufficiently remote from specific user infringements as not to be in respect of those infringements, the sanction is not precluded by the ECD liability shields. The UK Online Harms proposals suggest that sanctions would be for breach of systemic duties, rather than penalties tied to failure to remove specific items of content.

Beyond Unlawfulness

Although intermediary liability is restricted to unlawfulness on the part of the user, responsibility is not. A self-standing duty of care is concerned with risk of harm. Harm may include unlawfulness, but is not limited to that.

The scope of such a duty of care depends critically on what is meant by harm. In English law, comparable offline duties of care are limited to objectively ascertainable physical injury and damage to physical property. The UK Online Harms proposals jettison that limitation in favor of undefined harm. Applied to lawful online speech, that is a subjective concept. As matters stand Ofcom, as the likely regulator, would in effect decide what does and does not constitute harm.

Article 15 ECommerce Directive

A preventative duty of care takes us into the territory of proactive monitoring and filtering. Article 15 ECD, which sits alongside the liability scheme enacted in Articles 12 to 14, prohibits Member States from imposing two kinds of obligation on conduits, caches or hosts: a general obligation to monitor information transmitted or stored, and a general obligation actively to seek facts or circumstances indicating illegal activity.

Article 15 does not on its face prohibit an obligation to seek out lawful but harmful activity, unless it constitutes a general obligation to monitor information. But in any event, for an EU Member State the EU Charter of Fundamental Rights would be engaged. The CJEU found the filtering obligations in Scarlet and Netlog to be not only in breach of Article 15, but also contrary to the EU Charter of Fundamental Rights. For a non-EU state such as the UK, the European Convention on Human Rights would be relevant.

So far, the scope of Article 15 has been tested in the context of court orders. The principles established are nevertheless applicable to duties of care imposed by the general law, with the caveat that Recital (48) permits hosts to be made subject to “duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities.” What those “certain types” might be is not stated. In any event the recital does not on the face of it apply to lawful activities deemed to be harmful.

The Future Post-Brexit

Both the UK and the EU are currently heading down the road of imposing responsibilities on intermediaries, while professing to leave the liability provisions of the ECD untouched. That is conceptually possible for some kinds of responsibilities, but difficult to navigate in practice. Add the prohibition on general monitoring obligations and the task becomes harder, especially if the prohibition stems not just from the ECD (which could be diluted in future legislation) but from the EU Charter of Fundamental Rights and the ECHR.

The French Loi Avia, very much concerned with imposing responsibilities, was recently partially struck down by the French Constitutional Council. Whilst no doubt it will return in a modified form, it is nevertheless a salutary reminder of the relevance of fundamental rights.

As for UK-US trade discussions, Article 19.17 of the US-Mexico-Canada Agreement has set a precedent for inclusion of intermediary liability. Whether the wording of Article 19.17 really does mandate full S.230 immunity, as some have suggested, is another matter. Damian Collins MP, asking a Parliamentary Question on 2 March 2020, said:

“the US-Mexico-Canada trade agreement required the insertion of the section 230 provisions of the United States’ Communications Decency Act, which give immunity from liability to the big social media companies.”

The Trade Secretary replied:

“I can confirm that we stand by our online harms commitment, and nothing in the US trade deal will affect that.”

Although the USMCA agreement uses language that tracks S.230, it does not fully replicate it. Notably, it does not use the magic word ‘publisher’ that appears in S.230 and which Zeran v America Online interpreted in 1997 as embracing both strict primary publisher liability and knowledge-based secondary publisher (a.k.a. distributor) liability.

Instead, Article 19.17 precludes liability as an “information content provider,” defined as “a person or entity that creates or develops, in whole or in part, information provided through the Internet or another interactive computer service.” That aptly describes a primary publisher. But if that language does not cover secondary publishers, then its appearance in a UK-US trade agreement would seem not to preclude a hosting liability regime akin to the existing ECD Article 14.

Graham Smith is Of Counsel at Bird & Bird LLP, London, England. He is the editor and main author of the English law textbook Internet Law and Regulation (5th ed 2020, Sweet & Maxwell). The views expressed in this article are the personal views of the author.


Comments on “Intermediary Liability And Responsibilities Post-Brexit”

Paul Johnson (profile) says:

The shape of the "duty of care".

From my experience in other regulated industries, it will play out like this.

Ofcom will hold a consultation exercise in which they talk to the major companies that they plan to regulate (who may well form a lobby organisation for this purpose). Out of that will come an official “guidance” document. I put the term “guidance” in quotes because, while it won’t be mandatory to follow this document, the regulator will be on record as saying that, if you do so, you have jumped high enough. “How high?” is the fundamental question that any regulated company wants answered, so in practice following the guidance will become official policy at all the regulated companies.

Hence the “guidance” is where the rubber actually meets the road. The industry will have two concerns: 1: make sure the guidance clearly specifies exactly how high they must jump, and 2: make sure the height is optimal for their businesses. This is not necessarily “as low as possible” because this height is a barrier to competition, which is always nice for an incumbent to have.

The civil servants on the other side of the table will want to make the process effective at preventing on-line harms, so it becomes a matter of horse-trading over costs and perceived benefits. The actual end users who will suffer the harms aren’t at the table, and neither are other users who will suffer from being over-moderated. Hence the result is likely to reflect industry concerns more than anything else. In theory the civil servants should be protecting the users, but they have to do so through the lens of government policy, and government policy on this issue is primarily aimed at staying out of the news.

Given the vagueness of the top-level requirement of “prevent on-line harms”, the guidance document will probably opt for a set of measurable goals, such as 95% of user flags checked by a human within 1 hour, defined levels of keyword scanning, use of image signatures, etc. What it won’t do is require anything impossible like preventing 100% of “harmful” content being posted ever. It probably won’t even put accuracy requirements on the review process, because how do you measure accuracy?

You can expect an appeals process to make an appearance here, but with much lower requirements on its effectiveness and timeliness. Nobody wants to deal with appeals, so in practice it’s going to be nigh on impossible to get anything reversed.

There is of course no democracy involved here. Parliament is 2 or 3 levels away from this level of detail. The guidance probably won’t even be a government publication: it’s more likely to be published by that lobby group I mentioned for £200 per copy. That way ordinary members of the public can’t get hold of it and start arguing that it wasn’t followed in their case.

A year or so after this system comes into force, there will probably be an exposé on Panorama about how this regulatory system is failing to prevent some online harms. It will feature silhouettes of frightened or abused women with traumatic stories set against bland official statements about the commitment of government and industry working together to stop this sort of thing.
