Why Section 230 'Reform' Effectively Means Section 230 Repeal

from the catalog-of-bad-ideas dept

Some lawmakers are candid about their desire to repeal Section 230 entirely. Others, however, express more of an interest in trying to split the baby: "reform" it in some way to somehow magically fix all the problems with the Internet, without doing away with the whole thing, and therefore the whole Internet as well. This post explores several of the ways they propose to change the statute, ostensibly without outright repealing it.

And several of the reasons why each proposed change might as well be an outright repeal, given each one’s practical effect.

But before getting into the specifics about why each type of change is bad, it is important to recognize the big reason why just about every proposal to change Section 230, even just a little bit, undermines it to the point of uselessness: because if you have to litigate whether Section 230 applies to you, you might as well not have it on the books in the first place. Which is why there’s really no such thing as a small change, because if your change in any way puts that protection in doubt, it has the same debilitating effect on online platform services as an actual repeal would have.

This is a key point we keep coming back to, including in suggesting that Section 230 operates more as a rule of civil procedure than as any sort of affirmative subsidy (as it is often mistakenly accused of being). Section 230 does not do much that the First Amendment would not itself do to protect platforms. But the crippling expense of having to assert one’s First Amendment rights in court, potentially at an unimaginable scale given all the user-generated content Internet platforms facilitate, means that this First Amendment protection is functionally illusory without a mechanism to get platforms out of litigation early and cheaply. It is the job of Section 230 to make sure they can, and that they won’t have to worry about being bled dry in legal costs defending themselves even where, legally, they have a defense.

Without Section 230 their only choice would be to not engage in the activity that Section 230 explicitly encourages: intermediating third party content, and moderating it. If they don’t moderate it then their services may become a cesspool, but if the choice they face is either to moderate, or to potentially be bankrupted in litigation (or even, as in the case of FOSTA, potentially prosecuted), then they won’t moderate. And as for intermediating content, if they can get into legal trouble for allowing the wrong content, then they will either host less user-generated content, or not be in the business of hosting any user content at all. Because if they don’t make these choices, they set themselves up to be crushed by litigation.

Which is why it is not even the issue of ultimate liability that makes lawsuits such an existential threat to an Internet platform. It’s just as bad if the lawsuit that crushes them is over whether they were entitled to the statutory liability protection needed to avoid the lawsuit entirely. And we know lawsuits can have that annihilating effect when platforms are forced to litigate these questions. One conspicuous example is Veoh Networks, a video-hosting service that today should still be a competitor to YouTube. But it isn’t a competitor, because it is no longer a going concern. It was obliterated by the costs of defending its entitlement to assert the more conditional DMCA safe harbor defense, even though it won! The Ninth Circuit found the platform should have been protected. But by then it was too late; the company had been run out of business, and YouTube lost a competitor that, today, the marketplace still misses.

It would therefore be foolhardy and antithetical to lawmakers’ professed interest in having a diverse ecosystem of Internet services were they to do anything to make Section 230 similarly conditional, thereby risking even further market consolidation than we already have. But that’s the terrible future that all these proposals tempt.

More specifically, here’s why each type of proposal is so infirm:

Liability carve-outs. One way lawmakers propose to change Section 230 is to deny its protection to specific forms of liability that may arise in user content. A variety of these liability carve-outs have been proposed, and all require further scrutiny. For instance, one carve-out popular with lawmakers would make Section 230 useless against claims of liability for posts that allegedly violate anti-discrimination laws. But while at first glance such a carve-out may seem innocuous, we know that it’s not. One reason is that people eager to discriminate have shown themselves keen to try to force platforms to help them do it, including by claiming that anti-discrimination laws serve to protect their own efforts to discriminate. So far they have largely been unable to conscript platforms into enabling their hate, but if Section 230 no longer protects platforms from these forms of liability, then racists will finally be able to succeed by exploiting that gap.

These carve-outs also run the risk of making it harder for people who have been discriminated against to find a place to speak out about it, since they will force platforms to be less willing to offer space to speech they might find themselves forced to defend, because even if the speech were defensible, just having to answer for it can be ruinous for the platform. We know that platforms will feel forced to turn away all sorts of worthy and lawful speech if that’s what they need to do to protect themselves, because we’ve seen this dynamic play out as a result of the few carve-outs Section 230 has had from the start. For example, if the thing wrong with the user expression was that it implicated an intellectual property right, then Section 230 didn’t protect the platform from liability in their users’ content. Now, it turns out that platforms have some liability protection via the DMCA, but this protection is weaker and more conditional than Section 230, which is why we see all the Swiss cheese online, with videos and other content so often removed (even in cases when they were not actually infringing) because taking it down is the only way platforms can avoid trouble and not run the risk of going the way of Veoh Networks themselves.

Such an outcome is not good for encouraging free expression online, which was a main driver behind passing Section 230 originally. Nor is it even good for the people these carve-outs were ostensibly intended to help, as we saw with FOSTA, a liability carve-out added more recently. Instead of protecting people from sexual exploitation, FOSTA led to platforms taking away their platform access, which drove them into the streets, where they got hurt or killed. And, of course, it also led to other perfectly lawful content disappearing from the Internet, like online dating and massage therapy ads, since FOSTA had made it impossibly risky for the platforms to continue to facilitate them.

It’s already a big problem that there are even just these liability carve-outs. If Section 230 were to be changed in any way, it should be changed to remove them. But in any case, we certainly shouldn’t be adding any more, if Section 230 is still to maintain any utility in protecting the platforms we need to facilitate online user expression.

Transactional speech carve-outs. As described above, one way lawmakers are proposing to change Section 230 is to carve out certain types of liability that might attach to user-generated content. Another way is to try to carve out certain types of user expression itself. And one specific type of user expression in lawmakers’ crosshairs (and also some courts’) is transactional speech.

The problem with this invented exception to Section 230 is that transactional speech is still speech. “I have a home to rent” is speech, regardless of whether it appears on a specialized platform that only hosts such offers, or more general purpose platforms like Craigslist or even Twitter where such posts are just some of the kinds of user expression enabled.

Lawmakers seem to be getting befuddled by the fact that some of the more specialized platforms may earn their money through a share of any consummated transaction their user expression might lead to, as if this form of monetization were somehow meaningfully distinct from any other monetization model, or somehow waived their First Amendment right to do what basically amounts to moderating speech to the point where it is the only type of user content they allow. And it is this apparent befuddlement that has led lawmakers to attempt to tie Section 230 protection to certain monetization models, and go so far as to eliminate it for certain ones.

Even if these proposals were carefully drafted, they would still only end up chilling e-commerce by forcing platforms to use less-viable monetization models. But what’s worse is that the current proposals are not being carefully drafted, and so we end up seeing bills end up threatening the Section 230 protection of any platform with any sort of profit model. Which, naturally, they all need to have in some way. After all, even non-profit platforms need some sort of income stream to keep the lights on, but proposals like these threaten to make it all but impossible to have the money needed for any platform to operate.

Mandatory transparency report demands. As we’ve discussed before, it’s good for platforms to try to be candid about their moderation decisions and especially about what pressures forced them to make these decisions, like subpoenas and takedown demands, because it helps highlight when these instruments are being abused. Such reports are therefore a good thing to encourage.

But encouragement is one thing; requiring them is another. Yet that’s what certain proposals try to do, by conditioning Section 230 protection on the publication of these reports. And they are all a problem. Making transparency reports mandatory is an unconstitutional form of compelled speech. Platforms have the First Amendment right to be arbitrary in their moderation practices. We may prefer them to make more reasoned and principled decisions, but it is their right not to. And they can’t enjoy that right if they are forced to explain every decision they’ve made. Even if they wanted to, it may be impossible, because content moderation happens at scale, which inherently means it will never be perfect, and it also may be ill-advised to be fully transparent, because it teaches bad actors how to game their systems.

Obviously a platform could still refuse to produce the reports as these bills would prescribe. But if that decision risks the statutory protection the platform depends on to survive, then it is not really much of a decision. It finds itself compelled to speak in the way that the government requires, which is not constitutional. And it also would end up impinging on that freedom to moderate, which both the First Amendment and Section 230 itself protect.

Mandatory moderation demands. But it isn’t just transparency in moderation decisions that lawmakers want. Some legislators are running straight into the heart of the First Amendment and demanding to dictate how platforms do their moderation, by conditioning Section 230 protection on platforms making these decisions the way the government insists.

These proposals tend to come in two political flavors. While they are generally utterly irreconcilable (it would be impossible for any platform to satisfy both at the same time), they each boil down to the same unconstitutional demand.

Some of these proposals reflect legislative outrage at platforms for some of the moderation decisions they’ve made. Usually they condemn platforms for having removed certain speech or even banned certain speakers, regardless of how poor those speakers’ behavior or how harmful the things they said. This condemnation leads lawmakers who favor these speakers and their speech to want to take away the platforms’ right to make these sorts of moderation decisions by, again, conditioning Section 230 on their continuing to leave these speakers and speech up on their systems. The goal of these proposals is to create a situation where it is impossible for platforms to exercise their First Amendment discretion to moderate and possibly remove this content, lest they lose the protection they depend on to exist. Which is not only unconstitutional compulsion, but also ultimately voids the part of Section 230 that expressly protects that discretion, since it’s discretion that platforms can no longer exercise.

On the flip side, instead of conditioning Section 230 on not removing speakers or speech, other lawmakers would like to condition Section 230 on requiring platforms to kick off certain speakers and speech (sometimes even the same ones the other proposals are trying to keep up). Which is just as bad as the other set of proposals, for all the same reasons. Platforms have the constitutional right to make these moderation choices however they choose, and the government does not have the right, per the First Amendment, to force them to make them in any particular way. But if their critical Section 230 protection can be taken away if they don’t moderate however the sitting political power of the moment demands, then that right has been infringed and Section 230 rendered a nullity.

Algorithmic display carve-outs. Algorithmic display has become a target for many lawmakers eager to take a run at Section 230. But as with every other proposed reform, changing Section 230 so that it no longer applies to platforms using algorithmic display would end up obliterating the statute for just about everyone. And it’s not clear that lawmakers proposing these sorts of changes realize this inevitable impact.

And part of the problem seems to be that they don’t really understand what an algorithm is, or how commonly they are used. They seem to regard it as something nefarious, but there’s nothing about an algorithm that inherently is. The reality is that nearly every platform uses software in some way to handle the display of user-provided content, and algorithms are just the programming logic coded into the software giving it the instructions for how to display that content. Moreover, these instructions can even be as simple as telling the software to display the content chronologically, alphabetically, or some other relevant way the platform has decided to render content, which the First Amendment protects. After all, a bookstore can decide to shelve books however it wants, including in whatever order or with whatever prominence it wants. What these algorithms do is implement these sorts of shelving decisions, just as applied to the online content a platform displays.
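To make concrete just how mundane a display "algorithm" can be, here is a minimal, hypothetical sketch (all names invented for illustration): the entire logic of a reverse-chronological feed, the kind of "algorithmic display" these bills would sweep in, can be a single sort call.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime

def reverse_chronological(posts):
    # The whole "algorithm": instructions telling the software to
    # display user content newest-first, i.e. a shelving decision.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

feed = reverse_chronological([
    Post("alice", "first post", datetime(2021, 5, 1)),
    Post("bob", "a reply", datetime(2021, 5, 3)),
    Post("carol", "hello", datetime(2021, 5, 2)),
])
# feed now lists bob's post first, then carol's, then alice's
```

A carve-out for "algorithmic display" written broadly enough to reach recommendation engines would reach this three-line sort just as readily.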

If algorithms were to end up banned, by making the Section 230 protection platforms need to host user-generated content contingent on not using them, it would make it impossible for platforms to render any of that content at all. They either couldn’t do it technically, if they abided by the rule in order to keep their Section 230 protection, or couldn’t do it legally, if that protection were withheld because of how they displayed content. Such a rule would also represent a fairly significant change to Section 230 itself by gutting the protection for moderation decisions, since those decisions are often implemented by an algorithm. In any case, conditioning Section 230 on not using algorithms is not a small change but one that would radically upend the statutory protection and all the online services it enables.

Terms of Service carve-outs. One idea (which is, oddly, backed by Facebook, even though it needs Section 230 to remain robust in order to defeat litigation like this) is that Section 230 protection should be contingent on platforms upholding their terms of service. As with these other proposals, this one is also a bad idea.

First of all, it negates the utility of Section 230 protection by making its applicability the subject of litigation. In other words, instead of being protected from litigation, platforms will now have to litigate whether they are protected from litigation, which means they aren’t really protected at all.

It also fails to understand what terms of service are for. Platforms have them in order to limit their liability exposure. There’s no way that they are going to write them in a way that has the effect of increasing their liability exposure.

The way they are generally written now is to put potentially wayward users on notice that if they don’t act consistently with these terms of service, the service may be denied them. They aren’t written to be affirmative promises to do anything, because they can’t be: content moderation at scale is impossible to do perfectly, so it would be foolish for platforms to obligate themselves to do the impossible. But that’s what changing Section 230 in this way would do: create this obligation if platforms are to retain their needed protection.

The pipe dream some seem to have, that if only platforms did more moderation in accordance with their terms of service as currently written everything would be perfect and wonderful, is hopelessly naïve. Nothing about how the Internet works is nearly that simple. It is fine to want platforms to do as much as they can to meet the aspirational goals they’ve articulated in their terms of service. But changing Section 230 in this way won’t lead them to. Instead it will make it legally unsafe for platforms to even articulate any such aspirations, and thus less likely to meet any of them. Which means that regulators won’t get more of what they seek with this sort of proposal, but less.

Pre-emption elimination. One of the key clauses that makes Section 230 useful is its pre-emption provision. This is the provision that tells states that they cannot rejigger their own state laws in ways that would interfere with the operation of Section 230. The reason it is so important is because it gives the platforms the certainty they need to be able to benefit from the statute’s protection. For it to be useful they need to know that it applies to them and that states have no ability to mess with it.

Unfortunately we are already seeing increasing problems with state and local jurisdictions attempting to ignore this pre-emption provision, and courts even sometimes letting them. But on top of that there are proposals in Congress to deliberately undermine it. In fact, with FOSTA, it already has been undermined, with individual state governments now able to impose liability directly on platforms for their user activity, no matter how arbitrarily.

We see with the moderation bills an illustration of what is wrong with states getting to mess with Section 230 and make its protection suddenly conditional, and therefore effectively useless. Given our current political polarization, the problem should be obvious: how is any platform going to reconcile the moderation demands of a Red State with the moderation demands of a Blue State? What is an inherently interstate Internet platform to do? Whose rules should they follow? What happens to them if they don’t?

Congress put in the pre-emption provision because it knew that platforms could not possibly comply with all the myriad rules and regulations that every state, county, city, town, and locality might develop to impose liability on platforms. So it told them all to butt out. It’s a mistake to now gut that provision if Section 230 is going to still have any value in making it safe for platforms to continue to do their job enabling the Internet.



Comments on “Why Section 230 'Reform' Effectively Means Section 230 Repeal”

James Burkhardt (profile) says:

Cathy, loving the breakdown here.

In the "Transactional speech carve-outs" section, you end with:

But what’s worse is that the current proposals are not being carefully drafted, and so we end up seeing bills end up threatening the Section 230 protection of any platform with any sort of profit model. Which, naturally, they all need to have in some way. After all, even non-profit platforms need some sort of income stream to keep the lights on, but proposals like these threaten to make it all but impossible to have the money needed for any platform to operate.

The use of the highlighted "profit" is misleading. The word to use here is "revenue." This helps remind people that profit is different from revenue, and that a non-profit doesn’t have $0 revenue; it just is not intended to seek revenue in excess of expenses. I like the work as a whole, but that took me out hard as I was reading, as I tried to parse what you were actually trying to say.

James Burkhardt (profile) says:

Re: Re: Re:

I know. My issue wasn’t the word non-profit, it was the word profit in the sentence; I screwed up my markdown and did not bold that one.

I was not admonishing you for the use of "non-revenue platforms"; a non-profit platform is the correct term. I was admonishing the use of "profit model", when the word to use is revenue. Not everyone has a profit model; everyone that takes in money has a revenue model.


Koby (profile) says:

Sign Of Growth

But the crippling expense of having to assert one’s First Amendment rights in court, and potentially at an unimaginable scale given all the user-generated content Internet platforms facilitate

Numerous other industries have had to go through this same process. The scale doesn’t matter. Everyone has a vehicle in their driveway. Everyone has a credit card in their wallet. Companies faced the prospect of class action lawsuits from millions of customers. In many cases, the industries helped shape the laws that would govern their product. Others fought serial litigants in court to establish precedent. But it wasn’t easy. However, it did result in a more predictable product, one with which customers are more satisfied. Section 230 could use a lemon law.

James Burkhardt (profile) says:

Re: Sign Of Growth

Vehicle manufacturers are not liable for the use a car is put to. Nor are they required to sell cars to anyone who walks on the lot. They are only responsible for failures caused by their own actions. Section 230 replicates that level of liability.

Credit card banks are not liable for the misuse of the credit card, only their own malfeasance. Section 230 replicates this level of liability.

Thank you for highlighting that Section 230 does not provide special immunity.

Anonymous Coward says:

Re: Sign Of Growth

Hi Koby,

Due to repeated missed assignments you will fail this class unless you start doing extra credit assignments. Starting with TWO peer reviewed studies on the so called "Ferguson Effect" and an essay about how the First Amendment applies only to the government and not private individuals.

PaulT (profile) says:

Re: Sign Of Growth

"Everyone has a vehicle in their driveway"

…which they have to go through a process to register and legally own before they can have it there.

"Everyone has a credit card in their wallet"

…which they have to apply for and be approved for before they can have it in their wallet.

"Companies faced the prospect of class action lawsuits from millions of customers."

…which your dumb ass tries to insist increases exponentially beyond their control because your Klan buddies can’t accept some people as being equal human beings.

Anonymous Coward says:

Re: Sign Of Growth

Here is your daily reminder that you have no fucking clue how section 230 works.

Remember that time that you thought Facebook could use section 230 to dismiss a lawsuit against Facebook’s own speech?

I do:

Instead, they will seek a dismissal based on grounds that their speech did not reach the level of actual malice, or perhaps 230.

You really do suck at this, just saying…

Anonymous Coward says:

if there is something that can be done that means restricting access to the internet for ordinary people, while at the same time handing to certain industries the right to prevent ordinary people from having that access (unless they pay, of course) it is going to happen. what makes this so scary is that those who are pushing for 230 to be ‘reformed’ are politicians who are all in the pay clutches of the industries that want to take this internet control, but no one of any consequence can see it or they dont want to see it or dont give a fuck anyway! but once the net access has been lost to us, once it has been put under the control of these industries (and they are the ENTERTAINMENT INDUSTRIES, pure and simple) it wont ever come back!

Anonymous Coward says:

Re: Re:

Go and read about the events leading to the reformation, where entrenched powers, the Church and aristocracy tried to control what was printed, and eventually failed, although it took several wars of persecution to break that power.

The invention of the printing press is the nearest historical event to the Invention of the Internet. It enabled one to many communication, and radio and television are just faster ways of implementing one to many communications, where those who control the presses and studios decide what get widespread distribution. The internet is revolutionary in that it enables many to many communications, and a step change in communication ability like the step change from letters to the printed book.

That One Guy (profile) says:

'I'm not cutting down the tree, I'm just removing all the roots'

I have no doubt that for a good many politicians gunning for 230 the fact that reform is the functional equivalent of repeal is seen as a feature, not a bug, as it allows them to effectively kill the law without having to admit or defend that they’ve trying to do so.

ECA (profile) says:

So

Take a simple Short law/regulation.
Backdoor it, compromise it, Rip it apart and what do you get?

Pages and pages of BS to sort thru that only a lawyer MIGHT be able to Fathom.
But can we take the 1st amendment and Back it up. That NO ONE is responsible for what another person or company SAYS OR DOES.
Allot of this tends to be 1 simple fact. REAL NAMES and your personal info or where you live, so the corps can find you. And if they find you dont own anything, that they can SUE someone that has money.

The worst part of all this is WHO is responsible and WHO pays, on the internet. NOT for all the other corps out there that SCREW us every day.

freelunch says:

fortunately

in this instance, there is no political consensus to pass any specific reform, since the near consensus that Section 230 creates "a problem" hides fundamental disagreements about what the problem might be, with such common gripes as "too much disinformation" and "they are censoring discriminatorily" calling for changes in opposite directions.

Thank you for this insightful article, Ms. Gellis.


Vermont IP Lawyer (profile) says:

Mea Culpa

Several months ago, I posted a comment to a different article in which (here comes the mea culpa) I suggested that the community of people who read and post to Techdirt were extremely qualified to respond to Section 230 criticisms with possible improvements to Sec. 230. My post was more or less uniformly condemned by this community (sometimes not in the politest terms). Many of the comments on my post suggested that I must hate free speech and/or Sec. 230 and/or have a political agenda. That was not and is not the case–I may not be as much of a 1st Amendment "absolutist" as some of those who post here but I lean strongly in that direction.

(For example, I disagree with this assertion in the very first comment on Cathy’s post: "everybody who wants to change or eliminate 230 does not support free speech, but is rather seeking the means to force the Internet to reflect their political views, and only their political views." More accurate if it changed "everybody" to "most.")

So, with that introduction, let me say that I REALLY like Cathy’s explanation of the defects in a wide variety of proposals for amendments to Sec. 230. I agree 100% with the key point that Sec. 230 does not provide a new substantive right but is, rather, a critical civil procedure optimization of what the 1st Amendment would provide for defendants with deep enough pockets.

In an ideal universe, where everyone with a view on this domain, understood Cathy’s point, and was operating in good faith, maybe we could agree on an improved Sec. 230. But, regrettably, it is clear that in the current real world, any proposed amendment to Sec. 230 will really be designed to further a political agenda and degrade a key constitutional right and, therefore, worthy of condemnation.

Vermont IP Lawyer (profile) says:

Re: Re: Mea Culpa

Comment slightly missing my point. If I had some brilliant idea for an improvement, I’d say what it was. In my earlier comment, I was just wondering whether this community might come up with some alternatives to "don’t try to fix it; leave it alone." Repeating myself, in an ideal world, we could all have polite debate about what that might be. But, in the real world, as convincingly explained by Cathy, the winner is "don’t try to fix it; leave it alone."

That One Guy (profile) says:

Re: Re: Re: Mea Culpa

‘Ideal world’ or not the premise still seems to be based on the idea that there’s something that needs to be fixed and similar to Nasch I’ve yet to see that argument be presented in any convincing or even persuasive way.

You could (and I have) argue that 230 shouldn’t be needed because the legal system should recognize that, just like it would be absurd to blame someone who sold a car if the one they sold it to got drunk and hit someone, it’s equally absurd to blame the platform if a user misuses it. But that’s a different argument than arguing whether it needs to be ‘fixed’ or ‘improved’ in the legal landscape we do have, and in that landscape it seems to work just fine.

Scary Devil Monastery (profile) says:

Re: Mea Culpa

"But, regrettably, it is clear that in the current real world, any proposed amendment to Sec. 230 will really be designed to further a political agenda and degrade a key constitutional right and, therefore, worthy of condemnation."

This, in essence, is the important bit. Once you’ve compromised on a principle everything else becomes a matter of scale – which will always tilt in the due direction of vested interests who have a lot of money and/or political power riding on the outcome.

In short, we can’t have nice things because the Powers That Be consist of too many shady grifters. A natural outcome of an election system always aimed at individuals rather than parties, and a first-past-the-post system where the winner takes all and politics are naturally skewed towards extremes.

230 works well enough because it is simple. You literally can’t change a single word without it breaking completely.

Anonymous Coward says:

Put simply, if you need to go to court to show that Section 230 protects moderation choices or the freedom to block or remove users who break the rules, then many small websites will shut down or remove all ability to comment or make a post, e.g. minority voices will be silenced and only big services like Facebook will be left for ordinary people to use for political discussions.
The whole point of Section 230 is to stop expensive legal cases made by trolls or extremist users; also, some users will bring random legal cases in the hope of closing down the website or service.

PaulT (profile) says:

Re: Re:

"Sec 230 will be changed, it is just a matter of time."

Then it’s a matter of whether those changes do something to make things better, or a hell of a lot worse. People are generally open to the former, but the suggestions as to how to actually improve a rule that essentially says "prosecute the person who did a thing rather than the most convenient bystander" are thin on the ground.
