Beware Of Facebook CEOs Bearing Section 230 Reform Proposals

from the good-for-facebook,-not-good-for-the-world dept

As you may know, tomorrow Congress is having yet another hearing with the CEOs of Google, Facebook, and Twitter, in which various grandstanding politicians will seek to rake Mark Zuckerberg, Jack Dorsey, and Sundar Pichai over the coals regarding things that those grandstanding politicians think Facebook, Twitter, and Google “got wrong” in their moderation practices. Some of the politicians will argue that these sites left up too much content, while others will argue they took down too much — and either way they will demand to know “why” individual content moderation decisions were made differently than they, the grandstanding politicians, wanted them to be made. We’ve already highlighted one approach that the CEOs could take in their testimony, though that is unlikely to actually happen. This whole dog and pony show seems all about no one being able to recognize one simple fact: that it’s literally impossible to have a perfectly moderated platform at the scale of humankind.

That said, one thing to note about these hearings is that each time, Facebook’s CEO Mark Zuckerberg inches closer to pushing Facebook’s vision for rethinking internet regulations around Section 230. Facebook, somewhat famously, was the company that caved on FOSTA, and bit by bit, Facebook has effectively led the charge in undermining Section 230 (even as so many very wrong people keep insisting we need to change 230 to “punish” Facebook). Changing 230 would not punish Facebook: the company is now perhaps the leading voice for changing 230, because it knows that it can survive without it. Others? Not so much. Last February, Zuckerberg made it clear that Facebook was on board with the plan to undermine 230. Last fall, during another of these Congressional hearings, he supported 230 reforms even more emphatically.

And, for tomorrow’s hearing, he’s driving the knife further into 230’s back, outlining a plan to cut away at the law even more. The relevant bit from his testimony is here:

One area that I hope Congress will take on is thoughtful reform of Section 230 of the Communications Decency Act.

Over the past quarter-century, Section 230 has created the conditions for the Internet to thrive, for platforms to empower billions of people to express themselves online, and for the United States to become a global leader in innovation. The principles of Section 230 are as relevant today as they were in 1996, but the Internet has changed dramatically. I believe that Section 230 would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing — sometimes for contradictory reasons — that the law is doing more harm than good.

Although they may have very different reasons for wanting reform, people of all political persuasions want to know that companies are taking responsibility for combatting unlawful content and activity on their platforms. And they want to know that when platforms remove harmful content, they are doing so fairly and transparently.

We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection — that would be impractical for platforms with billions of posts per day — but they should be required to have adequate systems in place to address unlawful content.

Definitions of an adequate system could be proportionate to platform size and set by a third party. That body should work to ensure that the practices are fair and clear for companies to understand and implement, and that best practices don’t include unrelated issues like encryption or privacy changes that deserve a full debate in their own right.

In addition to concerns about unlawful content, Congress should act to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal. While this approach would not provide a clear answer to where to draw the line on difficult questions of harmful content, it would improve trust in and accountability of the systems and address concerns about the opacity of process and decision-making within companies.

As reform ideas go, this is certainly less ridiculous and braindead than nearly every bill introduced so far. It attempts to deal with the largest concerns that most people have — what happens when illegal, or even “lawful but awful,” activity is happening on websites and those websites have “no incentive” to do anything about it (or, worse, incentive to leave it up). It also responds to some of the concerns about a lack of transparency. Finally, to some extent it makes a nod at the idea that the largest companies can handle some of this burden, while other companies cannot — and it makes it clear that it does not support anything that would weaken encryption.

But that doesn’t mean it’s a good idea. In some ways, this is the flip side of the discussion Mark Zuckerberg had many years ago about how “open” Facebook should be regarding third-party apps built on the back of Facebook’s social graph. In a now-infamous email, Mark told someone that one particular plan “may be good for the world, but it’s not good for us.” I’d argue that this 230 reform plan Zuckerberg lays out “may be good for Facebook, but not good for the world.”

But it takes some thought, nuance, and a few predictions about how this would actually play out to understand why.

First, let’s go back to the simple question of what problem we’re actually trying to solve. Based on the framing of the panel — and of Zuckerberg’s testimony — it certainly sounds like there’s a huge problem of companies having no incentive to clean up the garbage on the internet. We’ve certainly heard many people claim that, but it’s just not true. It would only be true if the only incentives in the world were the laws of the land you’re in, and that has never been the case. Websites do a ton of moderation/trust & safety work not because of what legal structure is in place, but because (1) it’s good for business, and (2) very few people want to be enabling cesspools of hate and garbage.

If you don’t clean up garbage on your website, your users get mad and go away. Or, in other cases, your advertisers go away. There are plenty of market incentives to make companies take charge. And of course, not every website is great at it, but that’s always been a market opportunity — and lots of new sites and services pop up to create “friendlier” places on the internet in an attempt to deal with those kinds of failures. And, indeed, lots of companies have to keep changing and iterating in their moderation practices to deal with the fact that the world keeps changing.

Indeed, if you read through the rest of Zuckerberg’s testimony, it’s one example after another of things the company has already done to clean up messes on the platform. And each one describes devoting huge resources, in money, technology, and people, to combating some form of disinformation or other problematic content. Four separate times, Zuckerberg describes programs Facebook has created to deal with these kinds of things as “industry-leading.” But those programs are incredibly costly. He notes that Facebook now has 35,000 people working in “safety and security,” more than triple the 10,000 people it had in that role five years ago.

So, these proposals to create a “best practices” framework, judged by some third party, in which you only get to keep your 230 protections if you meet those best practices, won’t change anything for Facebook. Facebook will argue that its practices are the best practices. That’s effectively what Zuckerberg is saying in this testimony. But that will harm everyone else who can’t match that. Most companies aren’t going to be able to do this, for example:

Four years ago, we developed automated techniques to detect content related to terrorist organizations such as ISIS, al Qaeda, and their affiliates. We’ve since expanded these techniques to detect and remove content related to other terrorist and hate groups. We are now able to detect and review text embedded in images and videos, and we’ve built media-matching technology to find content that’s identical or near-identical to photos, videos, text, and audio that we’ve already removed. Our work on hate groups focused initially on those that posed the greatest threat of violence at the time; we’ve now expanded this to detect more groups tied to different hate-based and violent extremist ideologies. In addition to building new tools, we’ve also adapted strategies from our counterterrorism work, such as leveraging off-platform signals to identify dangerous content on Facebook and implementing procedures to audit the accuracy of our AI’s decisions over time.

And, yes, he talks about making those rules “proportionate to platform size,” but there’s a whole lot of trickiness in making that work in practice. Size of what, exactly? Userbase? Revenue? How do you determine the metric, and where do you set the limits? As we wrote recently in describing our “test suite” of internet companies for any new internet regulation, there are so many different types of companies, dealing with so many different markets, that it wouldn’t make any sense to apply a single set of rules or best practices across all of them, because each one is very, very different. How do you apply the same “best practices” to a site like Wikipedia — where the users themselves do all the moderation — and to a site like Notion, where people set up their own database/project management workspaces, some of which may be shared with others? Or how do you write best practices that work for fan fiction communities and also apply to something like Cameo?

And even the “size” part can be problematic. In practice, it creates all sorts of wacky incentives. The classic example is France, where stringent labor laws kick in only once a company reaches 50 employees. So, in practice, there are a huge number of French companies with exactly 49 employees. If you create thresholds, you get weird incentives: companies will limit their own growth in unnatural ways just to avoid the burden, or, if they’re going to face the burden anyway, make a bunch of awkward decisions in figuring out how to “comply.”

And the end result is just going to be a lot of awkwardness and silly, wasteful lawsuits arguing over whether companies somehow fail to meet “best practices.” At worst, you end up with an incredible level of homogenization: platforms will feel the need to adopt content moderation policies identical to ones that have already been adjudicated compliant. It may also create market opportunities for extractive third-party “compliance” companies that promise to run your content moderation exactly the way Facebook does, since Facebook’s practices will, of course, be deemed “industry-leading.”

The politics of this obviously make sense for Facebook. It’s not difficult to understand how Zuckerberg gets to this point. Congress is putting tremendous pressure on him and continually attacking the company’s perceived (and, certainly, sometimes real) failings. So, for him, the framing is clear: set up some rules to deal with the fake problem that so many insist is real (that there is “no incentive” for companies to do anything about disinformation and other garbage), knowing full well that (1) Facebook’s own practices will likely define “best practices” or (2) Facebook will have enough political clout to make sure that any third-party body that determines these “best practices” is thoroughly captured, so that Facebook skates by. But all those other platforms? Good luck. It will create a huge mess as everyone tries to sort out what “tier” they’re in and what they have to do to avoid legal liability — when they’re all already trying all sorts of different approaches to deal with disinformation online.

Indeed, one final problem with this “solution” is that you don’t deal with disinformation by homogenization. Disinformation and disinformation practices continually evolve and change over time. The amazing and wonderful thing that we’re seeing in the space right now is that tons of companies are trying very different approaches to dealing with it, and learning from those different approaches. That experimentation and variety is how everyone learns and adapts and gets to better results in the long run, rather than saying that a single “best practices” setup will work. Indeed, zeroing in on a single best practices approach, if anything, could make disinformation worse by helping those with bad intent figure out how to best game the system. The bad actors can adapt, while this approach could tie the hands of those trying to fight back.

Indeed, that is the real brilliance of Section 230’s structure: it recognizes that the combination of market forces (users and advertisers getting upset about garbage on websites) and the freedom to experiment with a wide variety of approaches is the best way to fight back against the garbage, by letting each website figure out what works best for its own community.

As I started writing this piece, Sundar Pichai’s testimony for tomorrow was also released, and it makes the key point that 230, as it stands, is the best way to deal with misinformation and extremism online. In many ways, Pichai’s testimony is similar to Zuckerberg’s: it details all the different (often expensive and resource-intensive) steps Google has taken to fight disinformation. But when it gets to the part about 230, Pichai’s stance is the polar opposite of Zuckerberg’s. Pichai notes that Google was able to do all of these things because of 230, and that changing the law would put many of these efforts at risk:

These are just some of the tangible steps we’ve taken to support high quality journalism and protect our users online, while preserving people’s right to express themselves freely. Our ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act.

Section 230 is foundational to the open web: it allows platforms and websites, big and small, across the entire internet, to responsibly manage content to keep users safe and promote access to information and free expression. Without Section 230, platforms would either over-filter content or not be able to filter content at all. In the fight against misinformation, Section 230 allows companies to take decisive action on harmful misinformation and keep up with bad actors who work hard to circumvent their policies.

Thanks to Section 230, consumers and businesses of all kinds benefit from unprecedented access to information and a vibrant digital economy. Today, more people have the opportunity to create content, start a business online, and have a voice than ever before. At the same time, it is clear that there is so much more work to be done to address harmful content and behavior, both online and offline.

Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability. We are, however, concerned that many recent proposals to change Section 230 — including calls to repeal it altogether — would not serve that objective well. In fact, they would have unintended consequences — harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.

We might better achieve our shared objectives by focusing on ensuring transparent, fair, and effective processes for addressing harmful content and behavior. Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time. With this in mind, we are committed not only to doing our part on our services, but also to improving transparency across our industry.

That’s standing up for the law that helped enable the open internet, not tossing it under the bus because it’s politically convenient. It won’t make politicians happy. But it’s the right thing to say — because it’s true.

Companies: facebook, google


Comments on “Beware Of Facebook CEOs Bearing Section 230 Reform Proposals”

63 Comments
Anonymous Coward says:

Facebook and section 230

So, Facebook thinks they can keep on running without section 230… well, let’s see them go ahead and accept liability for user posts starting now… … I’m sure they can figure out a way to shift liability from the user to themselves with some legalese… and if they can do that for a few years, then let’s chat about section 230 reform…

This comment has been deemed insightful by the community.
Jojo (profile) says:

This is so stupid and naive

Congress: “These companies are too large and need to be checked.”

Also Congress: “Let’s hear their ideas of how things should be run.”

But in all seriousness, Congress cannot be this naive. What Zuckerberg is proposing basically ensures that the status quo on the internet is solidified. He knows that this method is so expensive that it keeps out any potential rival. That’s like asking Richard Liebowitz to write a proposal for fair copyright reform, or asking a convicted serial killer to reform prisons. Take these ideas and concepts with a grain of salt. If Congress follows through on Zuckerberg’s idea, Google, Facebook and Twitter could become all that’s left of the open internet.

Anonymous Coward says:

Matt, why do I get the feeling this is all meant to force more folks into the Facebook pipeline? Like think of it this way, the supposed reforms Mark Zuckerberg proposes means your small self-hosted content providers (hobby forums, blogs, etc) would have to either hire an organization like Facebook to police their content or move to those platforms wholesale or worse: give up hosting their own content altogether. It smacks of the further acceleration of the walled garden trend that Facebook and Google enjoy. Frankly, if it comes down to that I’ll just personally host my content on my home server whether or not my ISP approves of it. In this case, it’s just my crappy MUD projects but still it’s mine and I’m sure some granny who loves hosting her own knitting forum and blog would feel the same.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Who's "Matt"?

Frankly, if it comes down to that I’ll just personally host my content on my home server whether or not my ISP approves of it.

Yeah, but GOOGLE doesn’t have to index it if it doesn’t meet their ideological test (by AI), so you have zero chance of it ever being noticed. Even any links to it that you manually make can be automatically suppressed. — That last may not happen for your little projects, but can if you’re putting out "dangerous" ideas or "hate speech", all as defined by unaccountable mega-corporations.

This comment has been flagged by the community.

migi says:

Re: Re:

I was about to post something similar. I think the end goal is for Facebook to start selling automated comment moderation as a service, probably via some sort of WordPress-like plug-in.
By increasing the liability for sites without automated moderation, they practically force sites to buy automated moderation, which Facebook conveniently starts selling, so the law generates a huge locked-in market for them.

This comment has been flagged by the community.

bhull242 (profile) says:

Re: innocuous leader -- M's pieces usually seemed locked down

I have no idea where you’re getting that from. Most of my comments appear immediately, and even the ones that do get held for moderation (which then do have to be individually okayed) usually don’t stay there for long. Also, if your accusations were true, why would he let this one through?

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: innocuous leader -- M's pieces usually seemed locked dow

That would be because you’re a normal sane person and your reaction to a held post is to shrug your shoulders and go "ok, I’ll check back later when it’s visible and I get a reply", and not "damn you! I’ll now make another 15 posts whining about it and try using Tor to bypass the spam filter!".

This comment has been flagged by the community.

try a different screen name too says:

Manic sets up perfect as enemy of the merely good...

it’s literally impossible to have a perfectly moderated platform at the scale of humankind.

… in order to argue for doing nothing, thereby allowing the current giants to continue gaining power. — M always trots out the "baby — bath water" line for same purpose. Everyone else wants reform.

This comment has been flagged by the community.

Anonymous Coward says:

YOU WISH THE DISSENTERS TO GO AWAY!

If you don’t clean up garbage on your website, your users get mad and go away.

You DO doublethink if can’t grasp that applies here. Maybe you just don’t admit that I gave you good advice and now clearly proven right: all the reasonable LEFT Techdirt.

That’s WHY you allow the "garbage" from your fanboys, and other tactics like the apparent lockdown I’m at present finding.

This comment has been flagged by the community.

Anonymous Coward says:

SO do nothing that disturb your favorite corps?

Manic sets up perfect as enemy of the merely good…

it’s literally impossible to have a perfectly moderated platform at the scale of humankind.

… in order to argue for doing nothing, thereby allowing the current giants to continue gaining power. — Maz always trots out the "baby — bath water" line for same purpose. Everyone else wants reform.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:

It’s more than a little entertaining seeing someone who professes such hate for ‘Big Tech’ dancing to their tunes and playing right into their hands, arguing in favor of actions that, far from being problems for those companies, would be the greatest gift possible to them: killing off any competition, both now and in the future, that might otherwise have forced them to change for the better.

Scary Devil Monastery (profile) says:

Re: Re: Re:2 Re:

"…since his whole raison d’être is to take dishonest cheap-shots at TD for an imaginary slight 10 years ago."

Oh, let’s be fair. Someone implying he’s a particularly toxic brand of stupid motherfucker is certainly a "slight" in old Baghdad Bob’s eyes.

The rest of us looking at his usual drivel may think it’s a compliment relative to the quality of what he usually posts around here, but he himself was obviously cruelly harmed and crippled by the implication.

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: SO do nothing that disturb your favorite corps?

… in order to argue for doing nothing, thereby allowing the current giants to continue gaining power. — Maz

When did Mike Masnick argue that Facebook, Twitter, Google, et al. should "do nothing" because moderating well at their scale is impossible? Your quote doesn’t say that, so try again.

This comment has been flagged by the community.

Anonymous Coward says:

But not if they allow "conservatives"!

The amazing and wonderful thing that we’re seeing in the space right now is that tons of companies are trying very different approaches to dealing with it, and learning from those different approaches.

Oh, really? Yet you take shots at Parler and Gab, jeer that they too find the same problems. — Above you’ve found the word "homogenization" to wedge in, but it’s silly railing because people are the same in general ways, same problems will arise, and so same solutions will mostly work.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

The potshots at Gab and Parler occur because those two sites made it a point of acting like they were bastions of “uncensored free speech”. Then they found out they would still need to moderate some content for the sake of avoiding legal headaches (civil and possibly criminal) and keeping users around. At that point, they stopped being about “uncensored free speech” and started being about “keeping the ship afloat so we can maybe take on new passengers”.

Also: Yes, certain generalized solutions for moderation will work across the board because they apply to specific situations that look the same on every platform. But on Gab and Parler, those solutions have to factor in another element: a userbase that doesn’t want many of those solutions applied to them, per the “uncensored free speech” promises of Gab and Parler.

Were Gab or Parler to ever suspend or ban someone because of what we would colloquially call “hate speech” — someone using the n-word, for example — the userbase would revolt over a fellow user being “censored” for using what is 100% legally protected speech. In that instance, Gab/Parler would have to decide whether upholding the suspension or ban is worth the trouble that could be avoided by lifting it and apologizing for the “censorship”.

But to even be able to do any of that without risking a lawsuit of some kind, both Gab and Parler need 47 U.S.C. § 230 to remain intact. Without it, both services would be open to far more legal liability than they are right now — especially in the criminal cases involving the Capitol insurrection. So which one do you want, Brainy: a world where Gab and Parler can legally moderate however they wish, or a world where Gab and Parler almost have to overmoderate because they can’t afford to fend off lawsuits with the same general ease as “Big Tech”?

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:

"The potshots at Gab and Parler occur because those two sites made it a point of acting like they were bastions of “uncensored free speech”. Then they found out they would still need to moderate some content for the sake of avoiding legal headaches (civil and possibly criminal) and keeping users around"

Not really. They marketed themselves as "free speech", but targeted the type of right-wing moron who thinks that someone arguing back at them is a violation of free speech. They then moderated based on political viewpoint, because those snowflakes need their safe spaces. Post-insurrection they might be changing their policies, but before that they were censoring based on political viewpoint and letting the more objectionable and dangerous speech flow freely.

But, it is true that should they not have section 230 protections, they will be bigger targets than Facebook – because their audience is more likely to post objectionable content. At least Facebook still has hundreds of millions of users who don’t get involved in political speech at all. Parler’s main draw is political speech, nobody’s going on there to share pet and baby pictures with family they can’t see during the pandemic.

As I’ve often said – I wish this fool could get a taste of what he’s asking for, because it’s not what he thinks he’s asking for.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

'A very fine point about the negatives of doors Mr. Fox...'

Congress: Facebook and Google have too much power and use their constitutional rights in ways we don’t like!

Also Congress: Facebook and Google, what can be done to ensure that you not only keep your power but cement it such that there will never be a viable alternative to you?

Having Facebook involved in not only trying to gut 230 but also providing guidance on how to do it is rather like having the fox lead the discussion on whether or not chicken coops need to exist, and, if they do, how they should be built. The ‘conflict of interest’ here should be glaringly obvious to anyone, so much so that I can only assume the grandstanding idiots in congress are either blinded by their hatred of tech or are working with those companies to kill off competition even as they pretend to rake them over the coals.

Either way they’re dancing to the tune of the very same companies they are supposedly trying to ‘rein in’, and it sure would be nice if more people called them out on that.

This comment has been flagged by the community.

Anonymous Coward says:

WHEW! Worst ever blocking, then suddenly none.

Has to be admin action, getting tired of me filling up the queue of supposedly "Moderated" that "staff" will look at.

For a site that claims to do no moderation, I sure see that lie often.

But, there it is, kids, what I wanted to say, a bit splattered, which is fine with me. Maz runs the site, complain to him.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re: Usually you have to pay good bribe money for that outcome

Oh, it would hurt them some, but they’ve got the resources to deal with the resulting lawsuits and increased moderation costs, whereas their competitors, current and future, would not, and would therefore go under either immediately or in time as the bills add up. A field where there not only aren’t competitors but where it’s effectively impossible for any to spring up is something companies usually have to spend a good chunk of money bribing politicians to get, rather than having it handed to them by idiots.

David Powell says:

So if we lose section 230, could we not then put our websites...

In another country, so we can avoid the new section 230? What I am hearing from everyone is that if section 230 is gone, it’s over; we cannot put our web pages overseas to avoid the problems with the new section 230. I heard people suggest internet blackouts like with SOPA and PIPA, or using trade deals.

Let me ask this question:

What happens if section 230 goes? With all the advancements we’ve made, isn’t there a loophole small web pages can use to avoid the worst problems? Hosting a webpage in, let’s say, Africa instead of America? Or do we kiss the Techdirt comment section goodbye if Mark Warner’s Section 230 bill becomes law?

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

What happens if section 230 goes?

With the exception of the companies/services large enough to fend off the legal headaches that would come with the repeal of 230, every interactive web service would have to do one of three things for the sake of avoiding legal liability for user-generated content:

  1. Overmoderate — hold all posts back until they can be cleared as legally “okay” and delete anything already on the service that would land it in legal jeopardy
  2. Undermoderate — go 4chan and let everything but the illegal shit stay up while moderating nothing but the illegal shit
  3. Shut down — close off submissions from third parties in any and every way possible, which could mean the shutdown of the service itself

You will find no in-between for these options. If a service becomes legally liable for third-party content, it will have to choose one of those three outcomes. Yes, that includes sites like Gab and Parler as much as it includes Techdirt.

If you want the Internet — or the American side of it, at any rate — to continue operating as normal, contact your lawmakers at both the state and federal levels about it. Tell them to leave Section 230 alone. Tell people you know to do the same. Support any effort to show what a lack of 230 could do to the Internet (e.g., another Internet Blackout). This only ends if Congress finds out that it’s fucking around with the wrong law — and that such efforts will be both political and commercial poison.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:

Option 4 – the US internet becomes a separate and less relevant part of the overall internet, since the rest of the world can survive without it and we already have the protections afforded by section 230 baked into our laws without it having to be specifically spelled out. Sites like Facebook can just segregate their services and operate according to the different laws, while sites that depend on US traffic will have to completely change business models or die, while Europe and Asia become much more attractive to new startups than Silicon Valley.

This comment has been deemed insightful by the community.
Blake C. Stacey (profile) says:

This bit at the end of the Axios story linked in the post caught my eye:

Smaller tech companies and online sites will balk at any Section 230 changes, even if considered narrow. The biggest companies have the greatest ability to respond and adapt to legislation.

This manages to be absolutely true and yet gallingly phrased. It still puts the emphasis on "tech companies", rather than people. The mindset at work is that the Internet is a medium for commerce, not communication.

Jojo (profile) says:

Re: Re: Re: Re:

I’m going to be honest, I agree with both of you. Yes, these horrid bills have a chance of passing; however, it’s still low, mainly because so far (emphasis on so far) the bills that have been introduced haven’t gained much steam in terms of cosponsors, and also because Congress is infamously slow and very few bills are ever given a pathway to law.

By my estimate, there’s a 10% chance that one of these bills passes in the coming months (if not years). But remember: it’s not 0%.

Anonymous Coward says:

Hilarious

According to an article on Politico, the VERY SAME PEOPLE who want the kind of scheme Zuckerberg is proposing are lambasting it. Even Richard Blumenthal, who co-sponsored the EARN IT Act.

https://www.politico.com/news/2021/03/24/facebook-proposed-internet-rules-477868

They are lambasting it just because he said it. They’re like children.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Hilarious

The whole idea of section 230 "reform" is that some people on the right consider themselves victims who shouldn’t be held liable for the things they say. They’ve invented this idea of persecution by "big tech" (who they interchangeably refer to as "leftist", "socialist" or even "communist", proving they don’t know what those words mean) in order to try and avoid responsibility for their own actions.

Therefore the fix for their invented problem must come from some hero of the right wing, who will step in and slay the Big Tech demons and restore justice and light to their dark "our actions may have consequences" hellscape. It screws up the narrative when one of the "enemy" agrees with them, even though in reality Zuckerberg has been their ally for a long time.
