Software Legend Ray Ozzie Thinks He Can Safely Backdoor Encryption; He's Very Wrong

from the and-dangerous dept

There have been ongoing debates for a while now about the stupidity of backdooring encryption, with plenty of experts explaining why there’s no feasible way to do it without causing all sorts of serious consequences (some more unintended than others). Without getting too deep into the weeds, the basic issue is that cryptography is freaking difficult and if something goes wrong, you’re in a lot of trouble very fast. And it’s very, very easy for something to go wrong. Adding in a backdoor to encryption is, effectively, making something go wrong… on purpose. In doing so, however, you’re introducing a whole host of other opportunities for many, many things to go wrong, blowing up the whole scheme and putting everyone’s information at risk. So, if you’re going to show up with a “plan” to backdoor encryption, you better have a pretty convincing argument for how you avoid that issue (because the reality is you can’t).

For at least a year (probably more), the one name that has kept coming up over and over as one of the few techies who insists that the common wisdom on backdooring encryption is wrong… is Ray Ozzie. Everyone notes that he’s Microsoft’s former Chief Software Architect and CTO, but some of us remember him from way before that, when he created Lotus Notes and Groove Networks (which was supposed to be the nirvana of collaboration software). In recent months his name has popped up here and there, often raised by FBI/DOJ folks seeking to backdoor encryption, as someone with some possible ways forward.

And, recently, Wired did a big story on his backdoor idea, where he plays right into the FBI’s “nerd harder” trope, by saying exactly what the FBI wants to hear, and which nearly every actual security expert says is wrong:

Ozzie, trim and vigorous at 62, acknowledged off the bat that he was dealing with a polarizing issue. The cryptographic and civil liberties community argued that solving the problem was virtually impossible, which “kind of bothers me,” he said. “In engineering if you think hard enough, you can come up with a solution.” He believed he had one.

This, of course, is the same sort of thing that James Comey, Christopher Wray and Rod Rosenstein have all suggested in the past few years: “you techies are smart, if you just nerd harder, you’ll solve the problem.” Ozzie, tragically, is giving them ammo. But he’s not delivering the actual goods.

The Wired story details his plan, which is not particularly unique. It takes concepts that others have proposed (and which have been shown to not be particularly secure) and puts a fresh coat of paint on them. Basically, the vendor of a device has a private key that it needs to keep secret, and under some “very special circumstances” it can send an employee into the dark chamber to do the requisite dance, retrieve the code, and give it to law enforcement. That’s been suggested many times, and it’s been explained many times why that opens up all sorts of dangerous scenarios that could put everyone at risk. The one piece that does seem different is that Ozzie wants to limit the damage his system can do if it goes wrong (in one particular way). Under his system, if the backdoor is used, it can only be used on one phone, and it then disables that phone forever:

Ozzie designed other features meant to reassure skeptics. Clear works on only one device at a time: Obtaining one phone’s PIN would not give the authorities the means to crack anyone else’s phone. Also, when a phone is unlocked with Clear, a special chip inside the phone blows itself up, freezing the contents of the phone thereafter. This prevents any tampering with the contents of the phone. Clear can’t be used for ongoing surveillance, Ozzie told the Columbia group, because once it is employed, the phone would no longer be able to be used.
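
To make the moving parts concrete, here is a minimal, purely illustrative Python sketch of the kind of vendor key escrow flow described above. It is not Ozzie’s actual “Clear” design; every name in it (EscrowedDevice, vendor_unlock, and so on) is invented for this illustration, and it uses the third-party cryptography package. The device wraps its passcode under the vendor’s public key, and the “one unlock, then brick forever” behavior is modeled as a simple flag:

```python
# Illustrative sketch only: a toy model of a vendor key escrow flow.
# Not Ozzie's actual "Clear" proposal; real designs involve secure
# hardware, attestation, and far more protocol detail.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The vendor's master escrow keypair -- the single secret everything hinges on.
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
vendor_pub = vendor_key.public_key()


class EscrowedDevice:
    """Toy phone that keeps a copy of its passcode wrapped for the vendor."""

    def __init__(self, passcode: bytes):
        self._passcode = passcode
        # Wrapped copy of the passcode; only the vendor's private key opens it.
        self.escrow_blob = vendor_pub.encrypt(passcode, OAEP)
        self.bricked = False

    def clear_unlock(self, recovered_passcode: bytes) -> bool:
        """Law-enforcement unlock path: succeeds once, then bricks the device."""
        if self.bricked:
            return False
        ok = recovered_passcode == self._passcode
        if ok:
            self.bricked = True  # the "special chip blows itself up" step
        return ok


def vendor_unlock(escrow_blob: bytes) -> bytes:
    """The ceremony inside the vendor's vault: decrypt the escrowed passcode.
    Every call here is an exposure of the master key."""
    return vendor_key.decrypt(escrow_blob, OAEP)


phone = EscrowedDevice(passcode=b"123456")
recovered = vendor_unlock(phone.escrow_blob)   # done under warrant, in theory
assert phone.clear_unlock(recovered)           # unlocks once...
assert not phone.clear_unlock(recovered)       # ...then the phone is dead
```

Even in this toy form, the critical dependency is plain: every lawful unlock means bringing the vendor’s master private key online, which is exactly the exposure the experts quoted below keep hammering on.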

So, let’s be clear. That piece isn’t what’s useful in “reassuring skeptics.” That piece is the only thing that really appears to be that unique about Ozzie’s plan. And it hasn’t done much to reassure skeptics. As the report notes, when Ozzie laid this out at a special meeting of super smart folks in the field, it didn’t take long for one to spot a hole:

The most dramatic comment came from computer science professor and cryptographer Eran Tromer. With the flair of Hercule Poirot revealing the murderer, he announced that he’d discovered a weakness. He spun a wild scenario involving a stolen phone, a second hacked phone, and a bank robbery. Ozzie conceded that Tromer found a flaw, but not one that couldn’t be fixed.

“Not one that couldn’t be fixed.” But it took this guy just hearing about the system to find the flaw. There are more flaws. And they’re going to be catastrophic. Because that’s how cryptography works. Columbia computer science professor and all-around computer security genius Steve Bellovin (who was also at that meeting) highlights how Tromer’s flaw-spotting shows why Ozzie’s plan is a fantasy with dangerous consequences:

Ozzie presented his proposal at a meeting at Columbia – I was there – to a diverse group. Levy wrote that Ozzie felt that he had "taken another baby step in what is now a two-years-and-counting quest" and that "he'd started to change the debate about how best to balance privacy and law enforcement access". I don't agree. In fact, I think that one can draw the opposite conclusion.

At the meeting, Eran Tromer found a flaw in Ozzie's scheme: under certain circumstances, an attacker can get an arbitrary phone unlocked. That in itself is interesting, but to me the important thing is that a flaw was found. Ozzie has been presenting his scheme for quite some time. I first heard it last May, at a meeting with several brand-name cryptographers in the audience. No one spotted the flaw. At the January meeting, though, Eran squinted at it and looked at it sideways – and in real-time he found a problem that everyone else had missed. Are there other problems lurking? I wouldn't be even slightly surprised. As I keep saying, cryptographic protocols are hard.

Bellovin also points out — as others have before — that there’s a wider problem here: how other countries will use whatever stupid example the US sets for much more nefarious purposes:

If the United States adopts this scheme, other countries, including specifically Russia and China, are sure to follow. Would they consent to a scheme that relied on the cooperation of an American company, and with keys stored in the U.S.? Almost certainly not. Now: would the U.S. be content with phones unlockable only with the consent and cooperation of Russian or Chinese companies? I can’t see that, either. Maybe there’s a solution, maybe not – but the proposal is silent on the issue.

And we’re just getting started on how many experts are weighing in on just how wrong Ozzie is. Errata Security’s Rob Graham pulls no punches pointing out that:

He’s only solving the part we already know how to solve. He’s deliberately ignoring the stuff we don’t know how to solve. We know how to make backdoors, we just don’t know how to secure them.

Specifically, Ozzie’s plan relies on the idea that companies can keep their master private key safe. To support the claim that this is possible, Ozzie (as the FBI has in the past) points to the fact that companies like Apple already keep their signing keys secret. And that’s true. But that incorrectly assumes that signing keys and decryption keys are the same thing and can be treated similarly. They’re not, and they cannot be. The security protocols around signing keys are intense, but part of that intensity is built around the idea that you almost never have to use a signing key.

A decryption key is a different story altogether, especially with the FBI blathering on about thousands of phones it wants to dig its digital hands into. And, as Graham notes, you quickly run into a scaling issue, and with that scale, you ruin any chance of keeping that key secure.
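
To make that difference concrete, here is a small, purely illustrative Python sketch (again using the third-party cryptography package; all names and counts are invented for the example). Verifying a signature needs only the public key, so a signing key can stay locked away and be touched once per release; an escrowed decryption key, by contrast, has to be brought out for every single unlock request:

```python
# Purely illustrative sketch of the different exposure profiles of a code
# signing key versus an escrow decryption key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ed25519, padding, rsa

# --- Code signing: the private key is touched once per release. ---
signing_key = ed25519.Ed25519PrivateKey.generate()
release = b"firmware-v42.bin contents"
signature = signing_key.sign(release)        # one ceremony, deep in the vault

verify_key = signing_key.public_key()
for _ in range(1000):                        # every device verifies the release...
    verify_key.verify(signature, release)    # ...using only the PUBLIC key

# --- Escrow decryption: the private key is touched once per request. ---
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_pub = escrow_key.public_key()

# Stand-in for a day's worth of law enforcement unlock requests.
requests = [escrow_pub.encrypt(f"device-pin-{i}".encode(), oaep) for i in range(100)]
exposures = 0
for blob in requests:
    escrow_key.decrypt(blob, oaep)           # the PRIVATE key is needed every time
    exposures += 1

print(f"signing key ceremonies: 1, escrow key ceremonies: {exposures}")
```

The point isn’t the specific numbers; it’s that the verification side never needs the secret at all, while the escrow side cannot function without dragging the secret out for every request.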

Yes, Apple has a vault where they’ve successfully protected important keys. No, it doesn’t mean this vault scales. The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.

And, even worse, if that happened, we wouldn’t even know.

If Ozzie’s master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works — but then his scheme includes none of the many protections necessary to make SSL work.

What I’m trying to show here is that in a lab, it all looks nice and pretty, but when attacked at scale, things break down — quickly. We have so much experience with failure at scale that we can judge Ozzie’s scheme as woefully incomplete. It’s not even up to the standard of SSL, and we have a long list of SSL problems.

And so Ozzie’s scheme relies on an impossibility: that you could protect a decryption key that has to be used frequently the same way that a signing key is currently protected. And that doesn’t work. And when it fails, everyone is seriously fucked.

Graham’s article also notes that Ozzie is — in true nerd harder fashion — focusing on this as a technological problem, ignoring all the human reasons why such a system will fail and such a key won’t be protected.

It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don’t know how to secure is the human element.

How do we know the law enforcement person is who they say they are? How do we know the “trusted Apple employee” can’t be bribed? How can the law enforcement agent communicate securely with the Apple employee?

You think these things are theoretical, but they aren’t.

Cryptography expert (and Johns Hopkins professor) Matt Green did a fairly thorough tweetstorm debunking of Ozzie’s plan as well. He also points out, as Graham does, the disaster scenario of what happens when (not if) the key gets out. But an even bigger point Green makes is that Ozzie’s plan relies on a special chip in every device… and assumes that we’ll design that chip to work perfectly and never get broken. And that’s ridiculous.

Green and Graham also both point to the example of GrayKey, the recently reported-on tool that law enforcement has been using to crack into all supposedly encrypted iPhones. Already, someone has hacked into the company behind GrayKey and leaked some of the code.

Put it all together, and suddenly the fawning over Ozzie’s plan doesn’t look so good anymore, does it? And, again, these are the problems that everyone who has dug into why backdoors are a bad idea has pointed out before.

Green expanded some of his tweets into a blog post as well, which is also worth reading. In it, he also points out that even if we acknowledge the difference between signing keys and decryption keys, companies aren’t even that good at keeping signing keys safe (and those are almost certainly going to be more protected than decryption keys, since they need to be accessed much less frequently):

Moreover, signing keys leak all the time. The phenomenon is so common that journalists have given it a name: it’s called “Stuxnet-style code signing”. The name derives from the fact that the Stuxnet malware – the nation-state malware used to sabotage Iran’s nuclear program – was authenticated with valid code signing keys, many of which were (presumably) stolen from various software vendors. This practice hasn’t remained with nation states, unfortunately, and has now become common in retail malware.

And he also digs deeper into the point he made in his tweetstorm about how on the processor side, not even Apple has been able to keep its secure chip from being broken — yet Ozzie’s plan is based almost entirely on the idea that such an unbreakable chip would be available:

The richest and most sophisticated phone manufacturer in the entire world tried to build a processor that achieved goals similar to those Ozzie requires. And as of April 2018, after five years of trying, they have been unable to achieve this goal – a goal that is critical to the security of the Ozzie proposal as I understand it.

Now obviously the lack of a secure processor today doesn’t mean such a processor will never exist. However, let me propose a general rule: if your proposal fundamentally relies on a secure lock that nobody can ever break, then it’s on you to show me how to build that lock.

Update: We should add that the criticisms raised here are not new either. Back in February we wrote about a whitepaper by Riana Pfefferkorn making basically all of the same points that the folks quoted above are making now. In other words, it’s a bit bizarre that Wired wrote this article as if Ozzie is doing something new and noteworthy.

So that’s a bunch of experts highlighting why Ozzie’s plan is silly. But from the policy side, it’s awful too. Because having Ozzie go around spouting this debunked nonsense, with his pedigree behind it, simply gives the “going dark” and “responsible encryption” pundits something to grasp onto to claim they were right all along, even though they weren’t. They’ve said for years that the techies just need to nerd harder, and they will canonize Ray Ozzie as proof that they were right… even though they’re not and his plan doesn’t solve any of the really hard problems.

And, as we noted much earlier in this post, cryptography is one of those areas where the hard problems really fucking matter. And if Ozzie’s plan doesn’t even touch on most of the big ones, it’s no plan at all. It’s a Potemkin village that law enforcement types will parade around for the next couple of years, insisting that backdoors can be made safely, even though Ozzie’s plan is not safe at all. I am sure that Ray Ozzie means well — and I’ve got tremendous respect for him and have for years. But what he’s doing here is actively harmful — even if his plan is never implemented. Giving the James Comeys and Chris Wrays of the world some facade they can cling to, to say that this can be done, is only going to create many more problems.


Comments on “Software Legend Ray Ozzie Thinks He Can Safely Backdoor Encryption; He's Very Wrong”

Anonymous Anonymous Coward (profile) says:

Not if, when

The thing that bothers me most about things like this is that at some point the government is going to latch onto one of them and via trumped up legislation that has no relation to sensibility, reality, or anything to do with doing things right, force folks who make encrypted things to use the bogus ‘safe’ backdoor. Then we will just go back to doing things in person, or using cash, or use carrier pigeons, or tin cans with string between them, or all of the above and others I haven’t thought of today.

Roger Strong (profile) says:

Re: Re:

a special chip inside the phone blows itself up, freezing the contents of the phone

It’s only a very tiny explosion. It’s not like it’s half a millimeter from a paper-thin bit of foil that stops a lithium-ion battery from exploding on contact with air.

Airlines won’t have any problems with the idea. No sir, none at all.

Anonymous Coward says:

Re: Re: Black Hats

It is also worth noting that security expert != encryption expert, and any encryption expert will tell you it is easy for an individual to design a system that is easily broken, and difficult for someone working within the encryption community to design a system that survives the community peer review process.

Anonymous Coward says:

Truly secure encryption

I have the solution, shared key escrow… now I know this isn’t new, but just wait for my “patent pending, trademarked, and copyrighted solution” (all your bases are belong to us)

We encrypt all messages with 2 keys, but one of the keys is an extra long 5 part kajillion bit key, and we split the parts between the following countries:
United States
Russia
China
European Nation
and North Korea

So all that’s required for ‘Government’ access to private communications is for all 5 Governments to agree and provide their keys, and the communication will be available to them.
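
For what it’s worth, the all-parties-required split the commenter describes is easy to sketch with nothing but XOR. A minimal, standard-library-only Python illustration (names invented, and obviously not a real escrow protocol), in which any four shares reveal nothing and all five are needed to reconstruct the key:

```python
# Toy version of the "split the key among five governments" idea:
# a simple XOR split where ALL shares are required to reconstruct the key.
# Standard library only; purely illustrative, not a real escrow protocol.
import secrets


def split_all_of_n(key: bytes, n: int) -> list[bytes]:
    """Split key into n XOR shares; all n are needed to reconstruct it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = bytes(key)
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares


def combine(shares: list[bytes]) -> bytes:
    """XOR all shares back together."""
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out


governments = ["US", "Russia", "China", "EU", "North Korea"]
master_key = secrets.token_bytes(32)
shares = dict(zip(governments, split_all_of_n(master_key, len(governments))))

# All five shares reconstruct the key; any four give nothing useful
# (mismatch holds with overwhelming probability).
assert combine(list(shares.values())) == master_key
assert combine([shares[g] for g in governments[:4]]) != master_key
```

Of course, the hard part is everything the XOR doesn’t cover: who guards each share, how requests are authenticated, and what happens when one party refuses or leaks.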

Any idea even slightly similar to this one being implemented will result in trademark and copyright lawsuits…

ECA (profile) says:

So, we take everyones CHEAP cellphones

And give them SMART phones with specific hardware that LOCKS up the phone,,
Any hardware Cryptography is Overridden..(KEY WORD, hardware)

Using a Dongle or Software Encryption??
We lose all our DEPENDABLE, DURABLE, Older phones..That have bounced off the roads and in toilets..
Then we have special hardware, which means a redesign of EVERY smartphone on the market..Unless you can incorporate it into the battery, as a part to spy on the whole system.

I have to ask..How do we get our phones back if we lose them?? Do you have your name on it?? How to find or send back a phone to an owner??
HOW do you lock a phone to a specific PERSON??

In all of this, there is 1 trick they cant get past.. IF I dont store my remote info/password/accounts and so forth ON THE PHONE. When my data ISNT installed on my phone unless I have Account, Name, Password..REMOTELY..
I have now broken any hardware Crack they can have..

bshock says:

This individual is offensive

Discussion of technical issues surrounding encryption is a red herring in the question of “safe backdoors” to encryption. The idea of creating an encryption backdoor that only good guys can access is an inherent contradiction — when I encrypt a message, the only good guy who can read it is the message’s intended recipient. Anyone else — whether Vladimir Putin, the FBI, or Mr. Rogers — is a “bad guy.”

You don’t have the right to read my communications unless I say so. End of discussion.

bob says:

what he is doing is a brilliant move... for himself.

Ray is doing exactly the right play to make money. It doesn’t matter if his idea works or not, doesn’t matter if it is secure or not. All that matters is that he is a lone person in a sea of experts that can use this heightened visibility to make money from government contracts.

If he can convince enough people, by giving false promises, in Congress to pursue this matter then a government agency will be allowed to provide a contract for bidding. If he is the only company to bid he could get a large sum to conduct R&D. Even if others bid, the government will just split the funding among a few companies to start designing prototypes.

His false claims about being able to provide this product/service, even if he knows he can’t, are bad, and a well-managed contracting office would hold the developers’ feet to the fire. In my own experience the government won’t do a good job managing the contract; I’m looking at you, F-35 program team.

Despite how bad this practice is, it’s pretty standard among major defense contractors to promise big, miss the mark, and then still be paid in full without punishment for delivering a substandard product. At least in those cases the end product was feasibly possible even if the company couldn’t deliver because the design wasn’t mature enough. What Ray is proposing is anything but feasible, unless you corrupt your definition of secure and/or private.

Anonymous Coward says:

Distraction

All this debate over whether they can or can’t.

But they already did. And it was a bad idea after all. Now the secrets escape.

So more distraction needed! Argue hard that it’s just impossible! Hold a huge debate! Maybe the adversary won’t notice what already’s been done. Because it’s just impossible! They can’t do that!

Anonymous Coward says:

Re: Re: Re:2 Distraction

So you have a conspiracy theory…

You know, you can argue for years about whether Oswald really shot JFK —whether Oswald acted alone —whether he had help —whether it was the Russians —the Cubans —the CIA.

But there’s one pretty undeniable fact you can hang your hat on: JFK is deader than a doornail. The evidence on that is fairly firm. JFK is dead.

Now, if you want, you can call the whole thing a looney tunes “conspiracy theory.” But JFK is dead. There’s actual evidence that he’s dead.

Uriel-238 (profile) says:

Re: Re: Re:3 JFK is dead

No we can’t. The FBI took control of the body super quickly and gave very limited access to coroners who agreed to cooperate with the FBI, even if that meant filing false documents.

JFK is alive and well, running the Liberal Deep State from beyond the grave, controlling the Islamic State through Obama and Clinton.

You have no idea how deep it goes.

Anonymous Coward says:

Re: Re: Distraction

If there is a way in, just stay silent, as discussing…

Why should I stay silent? I don’t really give much of a flying fuck at this point.

Right now, most people are going to believe whatever it is they’re going to believe. Eventually, the facts may or may not become so blindingly obvious that the truth gets splashed on the front pages.

But I’m not holding my breath on that.

Anonymous Coward says:

Re: strawman

… a ‘bad guy’…

An appropriately paranoid tourist won’t trust anything anyhow.

The really, truly, severely ‘bad guys’, though, are just consumers around the world who might not buy enough gadgets from overseas manufacturers and suppliers. Those ‘bad guys’ might do actual damage to the global economy.

Anonymous Coward says:

All of this misses a couple of points:

1: any phone they use this on will be permanently destroyed. Meaning: if you have a phone and the powers that be want to see what’s on it, you have a choice of providing them with the password or having your $1,000 phone permanently destroyed, with them getting the data anyway. Chilling effects anyone?

2: On protection of signing certificates: usually signing certificates have an expiration date. Is this PIN going to have an expiration date too? After it expires, is my phone no longer going to be unlockable via the PIN, or is it going to be permanently disabled? If the first, then I just spoof a timestamp and my phone’s no longer unlockable by the government. If the second, I’d be pretty upset that hardware I bought came with a “use before” date like that.

Or can the PIN be replaced? If THIS is the case, then you’ve just opened up a whole new security can of worms. Because if it can be replaced by someone, it can be replaced by anyone… so the government/company no longer has control of the PIN, and someone else with darker motives *could* have control of the PIN.

There really is no way to fix these issues. Any fix will just create more problems.

That One Guy (profile) says:

Any idiot can 'mean well'

I am sure that Ray Ozzie means well — and I’ve got tremendous respect for him and have for years.

Still? Because after screwing over everyone by handing over ‘evidence’ that it’s absolutely possible to ‘nerd harder’ and solve the encryption problem, I’d have lost all respect for him.

I’d never even heard of the guy before, but this article alone is enough for me to classify him in the category of ‘dangerous fool’. Whether or not he’s done impressive stuff in the past the idea he’s pushing now is both incredibly stupid and dangerous, and those are not things I’m willing to give a pass on.

Anonymous Coward says:

Another problem with the ‘brick the phone’ idea… Even if the idea ever did work, people are still often misidentified.

If someone was misidentified as committing a crime, there is a good chance their phone would be searched. As a result that phone may be bricked…

So an innocent person would have their phone destroyed and would likely have to pay for a new one out of pocket, regardless of their innocence.

Nemo says:

The term ‘backdoor’ may have become counter-productive due to the dilution and skewing of its original meaning, and this article’s an example of just that. By using “backdoor” when it means one thing to you, but something else to the other guy, you’re essentially halfway conceding that the other guy’s got a valid argument, even if it’s wrong.

Instead, I suggest changing the approach to a more descriptive one, f’rex: ‘Comey insists that it’s possible to design a secure system featuring a way in that’s easy for him, and hard for the rest of humanity – and he wants this system used on phones en masse.’.

I know it’s wordy, but the point is to use descriptions that highlight the oxymoronic ‘it’s both easy and hard at the same time’ premise.

And really, this issue, as important as it is, serves in part as a smokescreen for an even uglier reality, that Comey wants a way to casually search cell phones on the flimsiest of premises.

How’s that relevant? Because that’s normally how the warrant system works. The magistrates/judges who approve them are generally little more than rubber stamps of approval. Thus warrants provide little protection to we, the people, especially in situations like this.

So the bottom line is that the Comeys out there want an easy way to browse your data, and your protection from that consists of the gov’t getting an approval that’s so routine that it’s virtually automatic. In short, presuming arguendo that such a ‘perfect’ system could be deployed, your data would very nearly be in plain sight to any gov’t official who wants to see it.

But back to the original point, the Comeys out there need to be called out on their desire for an access point that’s both easy and hard to get through at the same time.

‘C’mon, guys, it just needs to be easy and hard to get through at the same time, if you just nerded a little harder, you can make that happen.’ Oy.

Anonymous Coward says:

Counterpart to Key Escrow

Reminds me of one crazy idea that is still better than this senile moron.

Include ‘hostages’ with every key escrowed: in this case, each hostage being some secret document they don’t want released, verified by a ‘trusted third party’ before it is accepted. If a key is ever compromised, retrieved through abusive means, an investigation into said abuse is stonewalled, or a breach occurs due only to flaws in their systems, then a ‘hostage’ is killed and released to the public for every incident.

Sadly, said crazy idea is still probably one of the best key escrow schemes out there, since then we’d see just how confident they are in their terrible idea, and if it breaches, we’ll have finally worked through the declassification backlog up until the inevitable incident.

Uriel-238 (profile) says:

Re: Hostages

If you could assure a fair review of each incident so that it doesn’t get brushed away like murders by police officers, and if you could assume the hostages are genuinely embarrassing and will cause the current regime (but not the general public) lasting damage, then you might have a system.

Of course, it still doesn’t stop others from cracking / leaking / bypassing the mechanism.

Uriel-238 (profile) says:

Re: Re: Counterpart to Key Escrow

Well, the problem, as per usual, is that when you define some people as important and some as not, then only the important people retain their rights.

Case in point: The US legal system.

This is the reason some of us are hesitant about gun regulation, especially when police (id est, important people) are not subject to the same regulation.

John Strosnider (profile) says:

No Nerd Harder Needed

Congress doesn’t need to get anyone to nerd harder and make a backdoor, they can just use the one tool they already have at their disposal: legislation.

They just need to pass a law that requires everything to be stored twice with one copy encrypted with your private key and a separate identical copy with the government’s public key. Then, they’ll be able to read whatever they want.
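
What the commenter is (satirically) describing amounts to mandatory multi-recipient encryption. A minimal sketch of that idea, assuming envelope encryption with the third-party cryptography package and with every name invented for the example: the same data key is wrapped once for the owner and once for the mandated second recipient, so either private key can read the message:

```python
# Toy illustration of the "encrypt everything twice" idea: wrap the same
# data key for the owner AND for a mandated second recipient. Purely
# illustrative; this obviously ignores the question of who guards gov_key.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

owner_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
gov_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Encrypt the message once with a fresh AES key...
data_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"my private message", None)

# ...then wrap that AES key separately for each recipient.
wrapped_for_owner = owner_key.public_key().encrypt(data_key, oaep)
wrapped_for_gov = gov_key.public_key().encrypt(data_key, oaep)

# Either private key now suffices to read the message.
for holder, wrapped in [(owner_key, wrapped_for_owner), (gov_key, wrapped_for_gov)]:
    recovered = holder.decrypt(wrapped, oaep)
    assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == b"my private message"
```

Which is exactly why it works as a punchline: the second wrapped key is a standing copy of everything, protected only by whoever holds gov_key.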

Just pass a law, congress! After all, that’s how you solved the war on drugs, eliminated terrorism and stopped all mass shootings!

David (profile) says:

How much will this cost?

First, the unbreakable lock that does not exist and the iterative nature of that type of development.

Second, the physical vault at Apple, or Samsung. With special thoughts to Chinese phone devs.

Third, since the lock doesn’t exist but the will to force crap down everyone’s throat is high, who is going to replace all the phones that *will* get bricked when the non-existent lock leaks the key?

Pretty much as much money as you can throw at it. Because this is serious rathole territory.

Anonymous Coward says:

The scale...

I don’t think I have ever really thought about the whole scale before… I saw all the problems, some of them really huge and some smaller, and those alone would be impossible to solve.
I knew that if such a system was implemented, those problems would ALL need to be solved perfectly… but that is not enough. They need to solve these problems AND they need to do it in such a fashion that it cannot be broken for the next 5-10 years, no matter the developments of technology. I suppose they could make an update function, which opens up another backdoor, but they would then run into possible hardware or software limitations (like how old phones and OS’es are easy to break into).
To me it just made it go from a dizzying amount of seemingly unsolvable problems to a vast galaxy of impossible problems.

AnonCow says:

Let’s be clear, this backdoor would be for consumer encryption only. Mainly websites and devices.

And that is what makes any claims about the ability to secure a backdoor completely pointless.

It’s like buying an airplane from someone who wouldn’t fly it themselves. “Oh, it will be great for you. Me? Not a chance I would go up in that bucket.”
