European Court Of Justice Suggests Maybe The Entire Internet Should Be Censored And Filtered

from the oh-come-on dept

The idea of an open "global" internet keeps taking a beating, and the worst offender is not, say, China or Russia, but rather the EU. We've already discussed things like the EU Copyright Directive and the Terrorist Content Regulation, but it seems like every day there's something new and more ridiculous. The latest may be coming from the Court of Justice of the EU (CJEU), which is frequently a bulwark against overreaching internet laws, but which sometimes (too frequently, unfortunately) gets things really, really wrong (saying the "Right to be Forgotten" applied to search engines was one terrible example).

And now, the CJEU’s Advocate General has issued a recommendation in a new case that would be hugely problematic for the idea of a global open internet that isn’t weighted down with censorship filters. The Advocate General’s recommendations are just that: recommendations for the CJEU to consider before making a final ruling. However, as we’ve noted in the past, the CJEU frequently accepts the AG’s recommendations. Not always. But frequently.

The case here involves an attempt to get Facebook to delete content critical of a politician in Austria under Austrian law. In the US, of course, social media companies are not required to delete such information. The content itself is usually protected by the 1st Amendment, and the platforms are further protected by Section 230 of the Communications Decency Act, which prevents them from being held liable even if the content in question does violate the law (though, importantly, most platforms will still remove such content if a court has determined that it violates the law).

In the EU, the intermediary liability scheme is significantly weaker. Under the E-Commerce Directive's rules, there is an exemption from liability, but it's much more similar to the DMCA's safe harbors for copyright-infringing material in the US. That is, the liability exemption applies only if the platform doesn't have knowledge of the "illegal activity," and if it does get such knowledge, it needs to remove the content. There is also a prohibition on any "general monitoring" requirement (i.e., filters).

The case at hand involved someone on Facebook posting a link to an article about an Austrian politician, Eva Glawischnig-Piesczek, and adding some comments along with the link. Specifically:

That user also published, in connection with that article, an accompanying disparaging comment about the applicant accusing her of being a 'lousy traitor of the people', a 'corrupt oaf' and a member of a 'fascist party'.

In the US — some silly lawsuits notwithstanding — such statements would be clearly protected by the 1st Amendment. Apparently not so much in Austria. But then there’s the question of Facebook’s responsibility.

An Austrian court ordered Facebook to remove the content, which Facebook did by blocking access to it for anyone in Austria. The original demand also asked that Facebook be required to prevent "equivalent content" from appearing. On appeal, a court rejected Facebook's argument that it should only have to comply within Austria, but also said that the "equivalent content" obligation could apply only where someone alerted Facebook to the "equivalent content" being posted (and, thus, it was not a general monitoring requirement).

From there, the case went to the CJEU, which was asked to determine whether such blocking needs to be global and how the "equivalent content" question should be handled.

And then basically everything goes off the rails. First up, the Advocate General seems to think (like many misguided folks concerning CDA 230) that there's some sort of "neutrality" requirement for internet platforms, and that doing any sort of monitoring might cost them their safe harbors for no longer being neutral. This is mind-blowingly stupid.

It should be observed that Article 15(1) of Directive 2000/31 prohibits Member States from imposing a general obligation on, among others, providers of services whose activity consists in storing information to monitor the information which they store or a general obligation actively to seek facts or circumstances indicating illegal activity. Furthermore, it is apparent from the case-law that that provision precludes, in particular, a host provider whose conduct is limited to that of an intermediary service provider from being ordered to monitor all (9) or virtually all (10) of the data of all users of its service in order to prevent any future infringement.

If, contrary to that provision, a Member State were able, in the context of an injunction, to impose a general monitoring obligation on a host provider, it cannot be precluded that the latter might well lose the status of intermediary service provider and the immunity that goes with it. In fact, the role of a host provider carrying out general monitoring would no longer be neutral. The activity of that host provider would not retain its technical, automatic and passive nature, which would imply that that host provider would be aware of the information stored and would monitor it.

Say what now? It's right that general monitoring is not required (and is explicitly rejected) by the law, but the corollary, that choosing to do general monitoring wipes out your safe harbors, is… crazy. Here, the AG is basically saying we can't have a general monitoring obligation (good) because that would overturn the requirement that platforms be neutral (crazy):

Admittedly, Article 14(1)(a) of Directive 2000/31 makes the liability of an intermediary service provider subject to actual knowledge of the illegal activity or information. However, having regard to a general monitoring obligation, the illegal nature of any activity or information might be considered to be automatically brought to the knowledge of that intermediary service provider and the latter would have to remove the information or disable access to it without having been aware of its illegal content. (11) Consequently, the logic of relative immunity from liability for the information stored by an intermediary service provider would be systematically overturned, which would undermine the practical effect of Article 14(1) of Directive 2000/31.

In short, the role of a host provider carrying out such general monitoring would no longer be neutral, since the activity of that host provider would no longer retain its technical, automatic and passive nature, which would imply that the host provider would be aware of the information stored and would monitor that information. Consequently, the implementation of a general monitoring obligation, imposed on a host provider in the context of an injunction authorised, prima facie, under Article 14(3) of Directive 2000/31, could render Article 14 of that directive inapplicable to that host provider.

I thus infer from a reading of Article 14(3) in conjunction with Article 15(1) of Directive 2000/31 that an obligation imposed on an intermediary service provider in the context of an injunction cannot have the consequence that, by reference to all or virtually all of the information stored, the role of that intermediary service provider is no longer neutral in the sense described in the preceding point.

So the AG comes to a good result through horrifically bad reasoning.

However, while rejecting general monitoring, the AG goes on to talk about why more specific monitoring and censorship is probably just fine and dandy, with a somewhat odd aside about how the "duration" of the monitoring can make it okay. The key point is that the AG has no problem saying that, once something is deemed "infringing," the platform can be required to remove new instances of the same content:

In fact, as is clear from my analysis, a host provider may be ordered to prevent any further infringement of the same type and by the same recipient of an information society service. (24) Such a situation does indeed represent a specific case of an infringement that has actually been identified, so that the obligation to identify, among the information originating from a single user, the information identical to that characterised as illegal does not constitute a general monitoring obligation.

To my mind, the same applies with regard to information identical to the information characterised as illegal which is disseminated by other users. I am aware of the fact that this reasoning has the effect that the personal scope of a monitoring obligation encompasses every user and, accordingly, all the information disseminated via a platform.

Nonetheless, an obligation to seek and identify information identical to the information that has been characterised as illegal by the court seised is always targeted at the specific case of an infringement. In addition, the present case relates to an obligation imposed in the context of an interlocutory order, which is effective until the proceedings are definitively closed. Thus, such an obligation imposed on a host provider is, by the nature of things, limited in time.

And then, based on nothing at all, the AG pulls out the “magic software will make this work” reasoning, insisting that software tools will make sure that the right content is properly censored:

Furthermore, the reproduction of the same content by any user of a social network platform seems to me, as a general rule, to be capable of being detected with the help of software tools, without the host provider being obliged to employ active non-automatic filtering of all the information disseminated via its platform.

This statement… is just wrong? First off, it acts as if using software to scan for the same content is somehow not a filter. But it is. And then it shows a real misunderstanding about the effectiveness of filters (and the ease with which people can trick them). And there's no mention of false positives. I mean, in this case, a politician was called a corrupt oaf. How should Facebook be forced to block that? Is any use of the phrase "corrupt oaf" now blocked? Perhaps it would have to be "corrupt oaf" and the politician's name, Eva Glawischnig-Piesczek, appearing together that triggers the block. But, in that case, does it mean that this article itself cannot be posted on Facebook? So many questions…
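To see the problem concretely, here's a minimal sketch, purely illustrative and in no way Facebook's actual system, of the two obvious ways software could try to "detect the same content," and how each one fails:

    import hashlib
    import re

    # The post a court has characterised as illegal (from this case).
    BANNED_POST = "Eva Glawischnig-Piesczek is a corrupt oaf"

    def normalize(text: str) -> str:
        """Lowercase and collapse whitespace before comparing."""
        return re.sub(r"\s+", " ", text.strip().lower())

    def digest(text: str) -> str:
        return hashlib.sha256(normalize(text).encode()).hexdigest()

    def exact_match(post: str) -> bool:
        """Approach 1: hash-compare against the adjudicated post.
        Precise, but evaded by changing a single character."""
        return digest(post) == digest(BANNED_POST)

    def phrase_match(post: str) -> bool:
        """Approach 2: block any post containing both the politician's
        name and the insult. Catches rephrasings, but also news reports,
        court filings, and articles like this one."""
        text = normalize(post)
        return "glawischnig-piesczek" in text and "corrupt oaf" in text

    # A trivially altered repost slips past the exact matcher:
    print(exact_match("Eva Glawischnig-Piesczek is a corrupt 0af"))  # False

    # While reporting on the case trips the broader matcher:
    print(phrase_match("A court ruled that calling Eva Glawischnig-Piesczek "
                       "a 'corrupt oaf' is illegal"))  # True

The narrow approach misses every trivial variation, and the broad one sweeps in discussion of the case itself. Any real filter lives somewhere on that spectrum, which is exactly the tradeoff the AG's "software tools" hand-waving glosses over.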

The AG then insists that somehow this isn't too burdensome (based on what, exactly?) and seems to make the mistake of many non-technical people, who think that filters are (a) much better than they actually are, and (b) not constantly dealing with significant gray areas.

First of all, seeking and identifying information identical to that which has been characterised as illegal by a court seised does not require sophisticated techniques that might represent an extraordinary burden.

And, I mean, perhaps that's true for Facebook, but it certainly could represent a much bigger burden for lots of other, smaller providers. Like us, for example.

Hilariously, as soon as the AG is done saying the filtering is easy, the recommendation notes that (oh right!) context may be important:

Last, such an obligation respects internet users' fundamental right to freedom of expression and information, guaranteed in Article 11 of the Charter, in so far as the protection of that freedom need not necessarily be ensured absolutely, but must be weighed against the protection of other fundamental rights. As regards the information identical to the information that was characterised as illegal, it consists, prima facie and as a general rule, in repetitions of an infringement actually characterised as illegal. Those repetitions should be characterised in the same way, although such characterisation may be nuanced by reference, in particular, to the context of what is alleged to be an illegal statement.

Next up is the question of blocking "equivalent content." The AG properly notes that determining what is, and what is not, "equivalent" represents quite a challenge, and at least seeks to limit what may be ordered blocked, saying that the obligation should only apply to content from the same user, and that any injunction must be quite specific about what needs to be blocked:

I propose that the answer to the first and second questions, in so far as they relate to the personal scope and the material scope of a monitoring obligation, should be that Article 15(1) of Directive 2000/31 must be interpreted as meaning that it does not preclude a host provider operating a social network platform from being ordered, in the context of an injunction, to seek and identify, among all the information disseminated by users of that platform, the information identical to the information that was characterised as illegal by a court that has issued that injunction. In the context of such an injunction, a host provider may be ordered to seek and identify the information equivalent to that characterised as illegal only among the information disseminated by the user who disseminated that illegal information. A court adjudicating on the removal of such equivalent information must ensure that the effects of its injunction are clear, precise and foreseeable. In doing so, it must weigh up the fundamental rights involved and take account of the principle of proportionality.

Then, finally, it gets to the question of global blocking. Here the AG basically says that nothing in EU law prevents a member state, such as Austria, from ordering global blocking, and that therefore it can do so, but that member state courts should consider the consequences of ordering such global takedowns.

… as regards the territorial scope of a removal obligation imposed on a host provider in the context of an injunction, it should be considered that that obligation is not regulated either by Article 15(1) of Directive 2000/31 or by any other provision of that directive and that that provision therefore does not preclude that host provider from being ordered to remove worldwide information disseminated via a social network platform. Nor is that territorial scope regulated by EU law, since in the present case the applicant's action is not based on EU law.

Regarding the consequences:

To conclude, it follows from the foregoing considerations that the court of a Member State may, in theory, adjudicate on the removal worldwide of information disseminated via the internet. However, owing to the differences between, on the one hand, national laws and, on the other, the protection of the private life and personality rights provided for in those laws, and in order to respect the widely recognised fundamental rights, such a court must, rather, adopt an approach of self-limitation. Therefore, in the interest of international comity, (51) to which the Portuguese Government refers, that court should, as far as possible, limit the extraterritorial effects of its injunctions concerning harm to private life and personality rights. (52) The implementation of a removal obligation should not go beyond what is necessary to achieve the protection of the injured person. Thus, instead of removing the content, that court might, in an appropriate case, order that access to that information be disabled with the help of geo-blocking.

That is a wholly unsatisfying answer, given that we all know how little many governments think about “self-limitation” when it comes to censoring critics globally.
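For what it's worth, the geo-blocking the AG suggests as the softer alternative is mechanically simple, which may be why it sounds appealing. Here's a minimal sketch (a toy lookup table stands in for a real IP-geolocation database, and the addresses are illustrative documentation ranges, not real user IPs):

    # Minimal geo-blocking sketch. A real deployment would consult an
    # IP-geolocation database rather than this toy table, and even then
    # the mapping from IP address to country is only approximate.

    BLOCKED_COUNTRIES = {"AT"}  # scope of the hypothetical Austrian order

    TOY_GEO_TABLE = {
        "198.51.100.": "AT",  # documentation range; pretend it's Austrian
        "203.0.113.": "US",   # documentation range; pretend it's a US VPN exit
    }

    def country_for(ip: str) -> str:
        """Map an IP address to a country code via the toy prefix table."""
        for prefix, country in TOY_GEO_TABLE.items():
            if ip.startswith(prefix):
                return country
        return "??"  # unknown origin

    def may_serve(ip: str) -> bool:
        """Serve the content unless the request appears to come from a
        country where a court has ordered it blocked."""
        return country_for(ip) not in BLOCKED_COUNTRIES

    print(may_serve("198.51.100.7"))  # False: the request looks Austrian
    print(may_serve("203.0.113.7"))   # True: same user behind a foreign VPN exit

The check only ever sees the request's apparent origin, so an Austrian user behind a VPN with a foreign exit node sails right through, a point the commenters below make as well.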

And now we have to wait to see what the court says. Hopefully it does not follow these recommendations. As intermediary liability expert Daphne Keller from Stanford notes, there are some serious procedural problems with how all of this shakes out. In particular, because of the nature of the CJEU, it will only hear from some of the parties whose rights are at stake (a lightly edited quote of her tweetstorm):

The process problems are: (1) National courts don't have to develop a strong factual record before referring the case to the CJEU, and (2) Once cases get to the CJEU, experts and public interest advocates can't intervene to explain the missing info. That's doubly problematic when, as in every intermediary liability case, the court hears only from (1) the person harmed by online expression and (2) the platform but NOT (3) the users whose rights to seek and impart information are at stake. That's an imbalanced set of inputs. On the massively important question of how filters work, the AG is left to triangulate between what plaintiff says, what Facebook says, and what some government briefs say. He uses those sources to make assumptions about everything from technical feasibility to costs.

And, in this case in particular, that leads to some bizarre results — including quoting a fictional movie as evidence.

In the absence of other factual sources, he also just gives up and quotes from a fictional movie, The Social Network, about the permanence of online info.

That, in particular, is most problematic here. It is literally the first line of the AG’s opinion:

'The internet's not written in pencil, it's written in ink,' says a character in an American film released in 2010. I am referring here, and it is no coincidence, to the film The Social Network.

But a quote from a film, and one that is arguably not even true, seems like an incredibly weak basis for a legal interpretation that could fundamentally lead to massive global censorship filters across the internet. Again, one hopes that the CJEU goes in a different direction, but I wouldn't hold my breath.


Comments on “European Court Of Justice Suggests Maybe The Entire Internet Should Be Censored And Filtered”

53 Comments
That One Guy (profile) says:

An advocate general with cognitive dissonance, lovely

‘General monitoring requirements are bad and prohibited.’ -CJEU Advocate General

'If however a court were to require a platform to keep specific content from being re-posted, either by the original poster or other users, something that could only be done via general monitoring/filters, that's okay.' – Also CJEU Advocate General

Anonymous Coward says:

Just allow governments and the 1% to do what they want, but stop everyone else! Remove freedom and stop the people from being allowed to discover exactly what is being done in their name, without giving them the option of changing anything. Democracy is being killed off by a certain mindset of people who are controlling governments, just like Nazi Germany wanted to do, but this way not a shot is being fired or a bomb being dropped!

TheGreatKarmacSays says:

Prediction time

The global internet becomes regional, limited to designated safe havens.

Russia links to Russia; China blocks everyone, including themselves; the EU countries allow limited content to pass between borders only after Value-Added-Taxing any news links or commerce; the United States and Canada enjoy an internet filled with false information and a lot of Disney content that has taken over every aspect of that internet; meanwhile, in Australia, no one uses the internet anymore, as the government has claimed that encryption was used on a banking interface and shut the whole thing down.

Wyrm (profile) says:

Global censorship

This is not a first, though.
As an example, I remember a court in Canada ordered that content be removed globally in pretty much the same way.

As for content recognition (whether "content similar to something previously judged illegal" or "obviously illegal content"), as long as these things look easy to people ignorant of the technology, they will keep making such poor decisions. They ignore the resources, time and effort required to build a recognition engine, as well as the false positive and false negative rates involved (a rough sketch below illustrates the tradeoff). Worse yet, something flawed will be hand-waved as "good enough" despite the impact on freedom of expression (false positives) and legal responsibility (false negatives), with requests, or more like demands, to "nerd harder" in the face of criticism.

And that's not going to improve as long as the ruling class is mostly composed of "rich old white men" (or, more generally, people who know nothing about the lives of anyone whose yearly income isn't at least a 7-digit figure) who will willfully ignore any attempt to educate them. And only vote along partisan lines.
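To make that tradeoff concrete, here is a minimal, purely hypothetical sketch of threshold-based text matching. No real recognition engine is this simple, but the dilemma it runs into is the same at any scale:

    from difflib import SequenceMatcher

    # The post a court has characterised as illegal (from the article above).
    BANNED = "Eva Glawischnig-Piesczek is a corrupt oaf"

    def similarity(a: str, b: str) -> float:
        """Rough string similarity in [0, 1], using only the stdlib."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def is_blocked(post: str, threshold: float) -> bool:
        """Block anything 'similar enough' to the adjudicated post."""
        return similarity(post, BANNED) >= threshold

    evasion = "Eva G1awischnig-Piesczek is a c0rrupt 0af"  # obfuscated repost
    report = "A court said calling Glawischnig-Piesczek a 'corrupt oaf' is illegal"

    for t in (0.95, 0.75, 0.55):
        print(t, is_blocked(evasion, t), is_blocked(report, t))

    # A strict threshold misses the obfuscated repost (a false negative),
    # while a loose one starts blocking reporting about the case itself
    # (a false positive). Tuning the threshold only moves the failure around.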

Mason Wheeler (profile) says:

Re: Re: Re:

No. That was a completely wrong interpretation when Thad said it, and it’s still wrong when you say it now.

The correct paradigm from which to understand the idea of geofencing Europe is the notion of quarantine. Right now, any given European user may or may not be a carrier of a deadly malady known as "liability", and you have no way of knowing until they infect you. From a simple self-preservation perspective, the only rational decision is to quarantine Europe: lock it down entirely until the disease burns itself out.

Anonymous Coward says:

Re: Re: Re:2 Re:

I can’t help noticing that you’re being your usual moronic self and completely missing the point of what was written. So what if it’s a literal statement? That just means it’s literally wrong about the motivation behind the desire for geofencing, which is what was being discussed.

Please try to keep up.

Anonymous Coward says:

Re: Re: Re:2 Re:

Dude, you have no room to throw around the word "deluded" after making a statement that implies that American politicians are some sort of homogeneous group. Of course there are a few idiots thinking along those lines, but pick just about any political idea, no matter how crazy, and you'll find a few politicians willing to espouse it. But no way, no how, is the idea gaining enough traction over here to be viable!

PaulT (profile) says:

Re: Re: Re:3 Re:

"making a statement that implies that American politicians are some sort of heterogeneous group"

I did no such thing. Maybe you should brush up on reading comprehension rather than getting all whiny about word choices?

"But no way no how is the idea gaining enough traction over here to be viable!"

You tell yourself that if it makes you feel better.

Adrian Lopez says:

Here is yet another reason why American companies need a safe harbor against the enforcement of foreign laws. Let the eurocrats regulate the Internet as they see fit, but not a single U.S. court should get to recognize such judgments unless the judgments are identical to what you’d get under U.S. law.

Anonymous Coward says:

Re: Re:

You talk as if EU companies weren’t forced to comply with US laws, lol.

There is some kind of reciprocity in that: you respect my laws, I respect yours.

Combine that with a lot of lobbying in the EU from US companies to pass stupid laws, like copyright laws; or visits from the US President to different EU countries to "convince" them to pass such laws.

And well, I wouldn't be so sure of who started the stupid game. Or at least, who has played it the most.

Anonymous Coward says:

Re: Re: Re:

Btw, I'd also like to add: should people from the US have the right to complain about shit they have done for years biting them now?

It isn’t as if the US never did something against someone for not following their laws, did they?

That’s when they don’t change the laws of your country with "diplomacy" or extradite you. That never happened. No.

Anonymous Coward says:

"And, then, basically everything goes off the rails. First up, the Advocate General, seems to think that — like many misguided folks concerning CDA 230 — there’s some sort of "neutrality" requirement for internet platforms, and that doing any sort of monitoring might lose their safe harbors for no longer being neutral. This is mind-blowingly stupid. "

Sorry, but either you have no idea about EU law or you’re being dishonest because it suits you.

And I'm starting to lean towards the second, as I don't believe that you don't know about the EU e-Commerce Directive.

What I'm talking about is Article 14 and its application in EU law. I'm talking about hosting providers, but my guess is that it also applies to Facebook, Twitter and others.

I'll cite you Article 16 of Ley 34/2002 de Servicios de la Sociedad de la Información. It covers hosting providers:

"1. Los prestadores de un servicio de intermediación consistente en albergar datos proporcionados por el destinatario de este servicio no serán responsables por la información almacenada a petición del destinatario, siempre que:

a) No tengan conocimiento efectivo de que la actividad o la información almacenada es ilícita o de que lesiona bienes o derechos de un tercero susceptibles de indemnización, o

b) Si lo tienen, actúen con diligencia para retirar los datos o hacer imposible el acceso a ellos.

Se entenderá que el prestador de servicios tiene el conocimiento efectivo a que se refiere el párrafo a) cuando un órgano competente haya declarado la ilicitud de los datos, ordenado su retirada o que se imposibilite el acceso a los mismos, o se hubiera declarado la existencia de la lesión, y el prestador conociera la correspondiente resolución, sin perjuicio de los procedimientos de detección y retirada de contenidos que los prestadores apliquen en virtud de acuerdos voluntarios y de otros medios de conocimiento efectivo que pudieran establecerse."

I copied the first part from the EU e-Commerce Directive Article 14, which is the same as the Spanish law. The rest comes from the Spanish interpretation of what "actual knowledge" means, but I guess it's pretty much settled in EU law too (at least in how it's interpreted):

  1. Where an information society service is provided that consists of the storage of information provided by a recipient of the service, Member States shall ensure that the service provider is not liable for the information stored at the request of a recipient of the service, on condition that:

(a) the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or

(b) the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.

Actual knowledge in Spanish law means:

  • the authorities have declared that the data is illegal, illicit, or harms third parties, and there is a ruling regarding it that the provider knows about;
  • that is without prejudice to the content detection and removal procedures the OSPs set up voluntarily,
  • or to other means of effective knowledge that may be established.

As you see, in Spanish law (and in EU law too) the foundation of those safe harbors is the fact that the provider doesn't have "actual knowledge".

If a provider actively monitors what goes on inside its network, there are a lot of hints that it has that "actual knowledge," and my guess is that a judge could rule that it has failed in its responsibility.

As you see, at least in EU law, it has nothing to do with "neutrality", but with "actual knowledge".

What the Advocate General advises is the application of EU law:

  • You don't have to monitor your network.
  • But if you do, and you do nothing about what you find, you're screwed.

Anonymous Coward says:

Just "geo-block"

"Thus, instead of removing the content, that court might, in an appropriate case, order that access to that information be disabled with the help of geo-blocking."

We complain about pollies who don't get tech, but here you've got an AG. (sigh)

The next time someone cries geo-block, we all yell … VPN!

Your move.
