The DOJ Is Conflating The Content Moderation Debate With The Encryption Debate: Don't Let Them

from the it's-not-the-same dept

As we’ve detailed repeatedly over the past week, the DOJ, after years of failing to get backdoors mandated by warning about the “terrorism” bogeyman, has decided to pick up the FOSTA playbook and focus instead on child porn, or what “serious people” now refer to as Child Sexual Abuse Material (CSAM). It did this last week with an assist from the NY Times, which published an article full of (legitimately) scary stories but somehow blamed the internet companies… because they actually report such content when they find it on their networks. I’ve seen more than a few people, even some who have generally been strong voices against backdoors in the encryption debate, waver a bit on this particular subject and suggest that maybe there shouldn’t be encryption on social media networks, because it might (as the narrative says) help awful people hide their child porn.

Except… that’s confusing a few different things. Mainly, it’s mixing up the content moderation debate with the “lawful access” or “backdoors” debate. Yes, encryption makes it harder for the police to get in and see certain things, but that’s by design. We live in a country with the 4th Amendment, in which we believe that it should be difficult for law enforcement to snoop deeply into our lives — and that’s always meant that some people will do and plot bad stuff out of the sight and hearing of law enforcement. Yet, if you look at law enforcement over the past 100 years, you can bet that it has many times more access to information about people today than it ever had in the past. The claim of “going dark” is laughable when you compare the information law enforcement can get today to what it could get even 15 or 30 years ago.

But, importantly, bringing CSAM into the debate muddies the water by pretending — incorrectly — that in an end-to-end encrypted world you can’t do any content moderation, and there’s simply no way for platforms to block or report certain kinds of content. Yet, as Princeton professor Jonathan Mayer highlights in a new paper, content moderation is not impossible in an encrypted system. It may be different than it is today, but it’s still very much possible:

Much of the public discussion about content moderation and end-to-end encryption over the past week has appeared to reflect two common technical assumptions:

  1. Content moderation is fundamentally incompatible with end-to-end encrypted messaging.
  2. Enabling content moderation for end-to-end encrypted messaging fundamentally poses the same challenges as enabling law enforcement access to message content.

In a new discussion paper, I provide a technical clarification for each of these points.

  1. Forms of content moderation may be compatible with end-to-end encrypted messaging, without compromising important security principles or undermining policy values.
  2. Enabling content moderation for end-to-end encrypted messaging is a different problem from enabling law enforcement access to message content. The problems involve different technical properties, different spaces of possible designs, and different information security and public policy implications.

You can read the whole thing, but as the paper notes, user reporting of such content still works in an end-to-end encrypted world, as does hash matching if done at the client end. There’s a lot more in there as well, but what you realize in reading the paper is that while law enforcement has now latched onto the CSAM issue as its hook to break encryption (in part, I’ve been told by someone working with the DOJ, because they found it “polled well”), it’s an entirely different problem. This is yet another “but think of the children” argument, which ignores the technical and societal realities.
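To make the client-side hash matching idea concrete, here is a minimal sketch in Python. The blocklist and exact SHA-256 matching are stand-in assumptions for illustration only; real deployments use perceptual hashes such as PhotoDNA, which survive resizing and re-encoding.

```python
import hashlib

# Hypothetical blocklist of known-bad digests. Real systems use *perceptual*
# hashes rather than exact SHA-256; this only illustrates where the check runs.
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-bad-sample").hexdigest()}

def screen_before_encrypting(attachment: bytes) -> bool:
    """Return True if the attachment may be sent.

    Runs on the sender's device, on plaintext, *before* end-to-end
    encryption, so nothing mid-stream ever needs a decryption key.
    """
    return hashlib.sha256(attachment).hexdigest() not in KNOWN_BAD_HASHES
```

A match could be blocked (and reported) client-side; everything else is encrypted and sent normally, without weakening the encryption itself.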

Companies: facebook


25 Comments
A Guy says:

Re: Re: Out of sight, out of their minds

That’s not exactly true: content moderation between users requires either the receiver reporting the sender or an automated filter on the receiver’s device.

If the sender and receiver both agree not to report the content, then it gets through more often.

An automated filter on the device can do a lot of keyword filtering and message rejection without human intervention.
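That kind of automated on-device filtering could look something like this sketch (the blocked terms are made-up placeholders for whatever list the recipient's client configures):

```python
# Hypothetical on-device filter: a couple of made-up blocked terms stand in
# for whatever list the recipient (or their client software) configures.
BLOCKED_KEYWORDS = {"spamword", "scamlink"}

def accept_message(decrypted_text: str) -> bool:
    """Reject a message automatically, after decryption, on the receiver's
    own device; no human intervention and no mid-stream access involved."""
    return not any(word in BLOCKED_KEYWORDS
                   for word in decrypted_text.lower().split())
```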

Anonymous Coward says:

Re: Out of sight, out of their minds

"Enabling content moderation for end-to-end encrypted messaging is a different problem…"

That very "problem statement" demonstrates a major difficulty: even smart, informed, well-intentioned technologists conflate the issues.

There is no issue of end-to-end encryption involved in content moderation. There’s no mid-stream moderation that requires access. Moderation happens at an end-point, operating on decrypted, clear-text content (text, images, media, etc.).

User-to-user moderation works the same way as public forum moderation, i.e., the receiving user (or his filter proxy-app) deletes undesirable content and possibly blocks the sender. Again – no need to decrypt mid-stream.
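That receiving-end flow can be sketched like so (a toy illustration; the banned phrases and the block-on-first-offense policy are made-up assumptions):

```python
class ReceiverFilter:
    """Endpoint moderation on already-decrypted content: drop unwanted
    messages and block repeat senders, with no mid-stream decryption."""

    def __init__(self, banned_phrases):
        self.banned = [p.lower() for p in banned_phrases]
        self.blocked_senders = set()

    def show(self, sender: str, plaintext: str) -> bool:
        """Decide, on the recipient's device, whether to display a message."""
        if sender in self.blocked_senders:
            return False
        if any(p in plaintext.lower() for p in self.banned):
            self.blocked_senders.add(sender)  # block this sender going forward
            return False
        return True
```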

Anonymous Coward says:

Re: Re: Out of sight, out of their minds

Let’s keep the term "moderation" for when a person or organization decides what other people can see, and "filtering" for when a person decides what content they themselves want to see. The distinction is important: moderation imposes values on an audience, which is not always bad, while filtering is a personal decision that does not affect what other people see.

urza9814 (profile) says:

Re: Re: Out of sight, out of their minds

There are different kinds of moderation, and plenty of sites DO use mid-stream moderation that requires access. Facebook, for example.

I like the idea posted by AC below where they suggest moderation vs filtering, although those words already have other uses, so they probably aren’t the best choice. I’d call it something like "policy moderation" vs "user moderation".

Policy moderation is like Facebook: you set a bunch of rules about what is and is not allowed, you let users file reports of specific content, and then hired moderators review that content and determine whether it actually violates the rules. Some sites use immediate policy moderation, where a post is reviewed by a human for compliance before it is ever visible. Some use a mix, with automated filters deciding whether a comment should be held for human review.

But all of those options require administrators at the company to be able to review the posted content. So either the company needs to be able to decrypt everything, or at the very least it needs to insert code that takes the decrypted message from the user and passes it back to the company unencrypted. Either way, they’re getting unencrypted access. And you obviously can’t count on automatic filtering at the client end: if a filter can flag a comment as requiring human review, the client can simply prevent that code from running. You can use client-side filtering to stop things from being viewed, but not from being posted and distributed.
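The report-driven part of that flow can be sketched roughly as follows (names and structure are illustrative assumptions, not any platform's actual API). The key point is that the reporter's client forwards content it has already decrypted locally, so the transport itself stays end-to-end encrypted:

```python
from collections import deque

# Toy review queue: the *reporter's* client forwards plaintext it has already
# decrypted, so moderators only ever see what a recipient chose to report.
review_queue = deque()

def report(message_id: str, decrypted_text: str, reporter: str) -> None:
    """Called on the reporting user's device with locally decrypted content."""
    review_queue.append({"id": message_id, "text": decrypted_text, "by": reporter})

def next_for_review():
    """A (hypothetical) human moderator pulls the next reported item."""
    return review_queue.popleft() if review_queue else None
```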

For "user moderation", you just count downvotes and hide anything that gets enough of them. That could be done without the company ever having direct access to the decrypted content. But it doesn’t let you set any kind of consistent rules, and it often gets abused, especially in larger communities: things get flagged because people simply don’t like the opinion expressed or the person expressing it, and there’s not much you can do to prevent that.
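A minimal sketch of that downvote-counting approach (the threshold is an arbitrary assumption); note the tally needs only message IDs and vote counts, never the plaintext:

```python
from collections import Counter

HIDE_THRESHOLD = 5  # arbitrary cutoff, purely illustrative

votes = Counter()  # message_id -> downvote count; no plaintext needed

def downvote(message_id: str) -> None:
    votes[message_id] += 1

def is_hidden(message_id: str) -> bool:
    """Hide anything the community has downvoted past the threshold."""
    return votes[message_id] >= HIDE_THRESHOLD
```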

Anonymous Coward says:

It’s true that the mere existence of encryption in social media messenger apps might help one or more child abusers evade detection. This despite the fact that said encryption might also help keep children away from those who would abuse them.

It’s also true that allowing any non-children to breathe would enable some of those non-children to sexually abuse children.

However, for some reason, I don’t see anyone proposing a breathing ban.

ECA (profile) says:

Let’s see.....

How could I really piss off people…

How about we go to the major locations in the world where child abuse is dominant, like Bangkok, install remote cameras through the cellphone system, and take pictures?
Let’s monitor those persons who take plane flights to those areas, and link the pictures with the flights.

We could also match up incoming flights into those areas and keep tabs on those persons. Why not? Well, every nation has its OWN laws, and what’s legal here may not be legal there… like chewing gum in public (yep, it’s illegal in a few places).

So with all this, and tracking CERTAIN persons, what could we find? Could we send a BOT to their phones and see who else is around this situation? Do some of these folks have enough money to PAY us off, so we don’t do anything?

How much of FOSTA is logical? The numbers cited seem to be picked out of the air and SLAMMED together from any and every source of missing children, without verifying which of those children are NOW HOME. Even when checked and verified, they tend to be less than 0.1% of the guesstimate.
This seems MORE like a lost law that will never be enforced, because it’s already enforced other ways. The only real problem is that it cuts off a way for those in need to get access to what they need, and I don’t see that changing.


Anonymous Coward says:

Gee, it must have been tough back in the day when people weren’t sending messages or sharing content that couldn’t be intercepted. I wonder how they busted child abusers, pedophiles, ephebophiles, and illegal porn producers and consumers before the net.

Anonymous Coward says:

Re: Re:

They didn’t. Where do you think the current crop of politicians, CFOs, and media moguls came from? (From the child abusers, pedophiles, and illegal porn producers.)

And why do you think "they" want to pass all these ‘save the children’ laws to prevent anyone else from doing what they have done? (their own guilt/obsessions…)

What will the public do about it? (literally gives no fucks…)

This is also why there are no serious mental-health-related ‘issues’ in the USA: we wouldn’t want to have to lock up politicians and CFOs because they are psychopaths with narcissistic personality disorders, now would we?
