Former FCC Boss Tom Wheeler Continues To Misunderstand And Misrepresent Section 230 And The Challenges Of Content Moderation

from the disappointing dept

Ajit Pai is not the only FCC chair who misunderstands Section 230. His predecessor, Tom Wheeler, continues to get it totally wrong as well. A year ago, we highlighted Wheeler’s complete confusion over Section 230, in a piece that blamed the law for all sorts of things… that had nothing at all to do with Section 230. I was told by some people that they had talked to Wheeler and explained the mistakes in his original piece, but it appears the corrections did not stick.

This week he published another bizarre and misguided attack on Section 230 that gets a bunch of basic stuff absolutely wrong. What’s weird is that in that earlier article, Wheeler insisted that social media websites do no moderation because of 230. But in this one, he notes that 230 allowed them to close down the accounts of Donald Trump and some other insurrectionists — yet he’s upset that it came too late.

These actions are better late than never. But the proverbial horse has left the barn. These editorial and business judgements do, however, demonstrate how companies have ample ability to act conscientiously to protect the responsible use of their platforms.

Right. Except that… the reason they have “ample ability” is that they know they can’t be sued over those choices, thanks to Section 230 and the 1st Amendment. Wheeler’s real complaint here is that these private companies didn’t act as fast as he wanted in pulling down 1st Amendment protected speech. Then he misrepresents how Section 230 itself works:

Subsection (2) of Section 230 provides that a platform shall not be liable for, “Any action voluntarily taken in good faith to restrict access to or availability of material that any provider or user considers to be… excessively violent, harassing, or otherwise objectionable…” In other words, editorial decisions by social media companies are protected, as long as they are undertaken in good faith.

This is… only partially accurate, and very misleading. First of all, editorial decisions by companies are protected by the 1st Amendment. Second, subsection (2) almost never comes into play, and the vast, vast majority of Section 230 cases around moderation say that it’s subsection (c)(1), not (c)(2), that gives companies immunity from lawsuits over moderation. Assuming that it’s (c)(2) alone leads you into dangerously misleading territory. Even worse, (c)(2) has two subsections as well, and when Wheeler says that it applies “as long as they are undertaken in good faith” he ignores that (c)(2)(B) has no such good faith requirement.

Of course, in the very next paragraph, he admits that (c)(1) is what grants the companies immunity, so I’m not even sure why he brings up (c)(2) and the good faith line. That’s almost never an issue in Section 230 cases. But the crux of his complaint is that he seems to think it’s obvious that social media should have banned Trump and Trump cultists earlier — and he invokes the classic “nerd harder” line:

Dealing with Donald Trump is a targeted problem that the companies just addressed decisively. The social media companies assert, however, that they have no way to meaningfully police the information flowing on their platform. It is hard to believe that the brilliant minds that produced the algorithms and artificial intelligence that powers those platforms are incapable of finding better outcomes from that which they have created. It is not technological incapacity that has kept them from exercising the responsibility we expect of all other media, it is the lack of will and desire for large-scale profits. The companies’ business model is built around holding a user’s attention so that they may display more paying messages. Delivering what the user wants to see, the more outrageous the better, holds that attention and rings the cash register.

This is a commonly stated view, but it tends to reveal a near total ignorance of how these decisions are made. These companies have large trust and safety teams, staffed with thoughtful professionals who work through a wide variety of trade-offs and challenges in making these decisions. While Wheeler is over here saying that it’s obvious that the problem is they waited too long and didn’t nerd harder to remove these people earlier, you have plenty of others out there screaming that this proves the companies are too powerful, and they should be barred from banning him.

Anyone who thinks it’s a simple business model issue has never been involved in any of these discussions. It’s not. There are a ton of factors involved, including what happens if you make this move and there’s a legal backlash? Or what happens if you’re driving all the cultists into underground sites where we no longer know what they’re planning? There are lots of questions, and demanding that these large companies, with a variety of competing interests, do it to your standard is the height of privilege. It’s impossible to do moderation “right.” Because there is no “right.” There is just a broad spectrum of wrong.

It’s fine to say that companies can do better. It’s fine to suggest ways to make better decisions. But too many pundits and commentators act as if there’s some “correct” decision and any result that differs from that cannot possibly be right. And, even worse, they blame Section 230 for that — when the reality is that Section 230 is what enables the companies to explore different solutions, as both Twitter and Facebook have done for years.

Wheeler’s “solution” for reforming Section 230 is also ivory tower academic nonsense that seems wholly disconnected from the reality of how content moderation works within these companies.

Social media companies are media, not technology

Mark Zuckerberg testified to Congress, “I consider us to be a technology company because the primary thing we do is have engineers who write code and build product and services for other people.” That software code, however, makes editorial decisions about which information to choose to route to which people. That is a media decision. Social media companies make money by selling access to its users just like ABC, CNN, or The New York Times.

Even though he says this is an idea for reform… it’s just a statement? And a meaningless one at that. It doesn’t matter if they’re media or technology. They’re a mixture of both and something new. Trying to lump them into old buckets doesn’t help and doesn’t take us anywhere useful. And, honestly, if your goal here is to reform Section 230, declaring these companies media companies doesn’t help, because media companies and their editorial decisions are wholly protected by the 1st Amendment.

There are well established behavioral standards for media companies

The debate should be over whether and how those standards change because of user generated content. The absolute absence of liability afforded by Section 230 has kept that debate from occurring.

Um. No. Again, these are not the same as traditional media companies. They have some similarities and some differences. Section 230 doesn’t change anything. And if Tom Wheeler honestly thinks that there hasn’t been a debate about behavioral standards on content moderation, then he honestly shouldn’t be commenting on this. There has been an active discussion and debate on this stuff for years. The fact that he’s ignorant of it doesn’t mean it doesn’t happen. Indeed, the very fact that he doesn’t know about the debate that has gone on among trust and safety professionals and the executives at these companies going back many, many years suggests that Tom Wheeler should perhaps take some time to learn what’s really going on before declaring from on high what he thinks is and is not happening.

But the key point here is that the standards of traditional media companies don’t work well for social media because of the very differences in social media. A regular media company has standards because it needs to review a very, very limited amount of content each day, on the order of dozens of stories. A social media company often has millions or billions of pieces of content every day (or in some cases every hour). The unwillingness to comprehend the difference in scale suggests someone who has not thought these issues through.

Technology must be a part of the solution

When the companies hire thousands of human reviewers it is more PR than protection. Asking humans to inspect the data constantly generated by algorithms is like watching a tsunami through a straw. The amazing power of computers created this situation, the amazing power of computers needs to be part of the solution.

I mean… duh? Is there anyone who doesn’t think technology is a part of the solution? Every single company with user generated content, even tiny ones like us, makes use of technology to help moderate. And there are a bunch of companies out there building more and more solutions (some of them very cool!). I’m confused, though, how this matters to the Section 230 debate. Changing Section 230 will not change the fact that companies use technology to help them moderate. It won’t suddenly create more technology to help companies moderate. This whole point makes it sound like Tom Wheeler never bothered to actually speak to an expert on how content moderation works — which, you know, is kind of astounding when he then positions himself to give advice on how to force companies to moderate.
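
To make the point concrete, here is a minimal, hypothetical sketch (in Python) of the kind of first-pass triage filter even a tiny site might run before a human ever looks at a comment. To be clear, this is nobody’s actual system: the term list and link threshold are invented purely for illustration.

import re

# Hypothetical term list and link threshold, invented purely for illustration.
SUSPECT_TERMS = {"free money", "crypto giveaway", "miracle cure"}
MAX_LINKS = 3

def needs_human_review(comment):
    """Return True if the comment should be held for a human moderator."""
    text = comment.lower()
    if any(term in text for term in SUSPECT_TERMS):
        return True
    # A pile of links is a classic spam signal.
    return len(re.findall(r"https?://", text)) > MAX_LINKS

print(needs_human_review("Great post, thanks!"))  # False
print(needs_human_review("free money here: http://a http://b http://c http://d"))  # True

Even a toy like this makes the real point: the technology only triages, and the hard judgment calls still land on humans, which is why “use more tech” is not, by itself, a policy proposal.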

It is time to quit acting in secret

When algorithms make decisions about which incoming content to select and to whom it is sent, the machines are making a protected editorial decision. Unlike the editorial decisions of traditional media whose editorial decisions are publicly announced in print or on screen and uniformly seen by everyone, the platforms’ determinations are secret: neither publicly announced nor uniformly available. The algorithmic editorial decision is only accidentally discoverable as to the source of the information and even that it is being distributed. Requiring the platforms to provide an open API (application programming interface) to their inflow and outflow, with appropriate privacy protections, would not interfere with editorial decision-making. It would, however, allow third parties to build their own algorithms so that, like other media, the results of the editorial process are seen by all.

So, yes, some of this I agree with. I mean, I wrote a whole damn paper on trying to move away from proprietary social media platforms to a world built on protocols. But, the rest of this is… again, suggestive of someone who has little knowledge or awareness of how moderation works.

First, I don’t see how Wheeler’s analogy with media even makes sense here. There are tons of editorial decisions that the public will never, ever know about. How can he argue that they’re “publicly announced”? The only information that news media makes public is what they finally decide to publish or air. But… that’s nowhere near the entirety of editorial decision making. We don’t see what stories never make it. We don’t see how stories are edited. We don’t see what important facts or quotes are snipped out. We don’t see the debates over headlines. We have no idea why one story gets page 1, top-of-the-page treatment, while some other story gets buried on A17. The idea that media editorial is somehow more public than social media moderation choices is… weird?

Indeed, in many ways, social media companies are way more transparent than traditional media companies. They even have transparency reports that have details about content removals and other information. I’ve yet to see a mainstream media operation do that about their editorial practices.

Finally, demanding "transparency" is another one of those solutions that occurs to people who have never done content moderation. I recently wrote about the importance of transparency, but the dangers of mandated transparency. I won’t rehash that all over again, but the debate is not nearly as simple as Wheeler makes it out to be. A few quick points, though: transparency reports have already been abused by some governments, letting them celebrate and push for ever greater censorship of criticism of the government. We should be concerned about that. On top of that, transparency around moderation can be extremely costly, and again creates a massive burden for smaller players.

But perhaps one of the biggest issues with the kind of transparency that Wheeler is asking for is that it assumes good faith on the part of users. I’ve pointed out a few times that we’ve had our comment moderation system in place for over a decade, and in that time the only people who have ever demanded “more transparency” into how it works are those looking to game the system. Transparency is often demanded by the worst of your users, who want to “litigate” every aspect of why their content was removed or why they were banned. They want to search for loopholes or accuse you of unfair treatment. In other words, despite Wheeler’s whole focus being on encouraging more moderation of voices he believes are harmful, forced transparency is likely to cut down on that, as it gives those moderated more “outs” or limits the willingness of companies to moderate “edge” cases.

The final paragraph of Wheeler’s piece is so egregious, and so designed to make a 1st Amendment lawyer’s head explode, that I’m going to go over it sentence by sentence.

Expecting social media companies to exercise responsibility over their practices is not a First Amendment issue.

Uh… expecting social media companies to exercise responsibility over their practices absolutely is a 1st Amendment issue. The 1st Amendment has long been held both to include a prohibition on compelled speech and to protect a right of association (or non-association). That is, these companies have a 1st Amendment right to moderate as they see fit, and to not be compelled to host speech, or be forced to associate with those they don’t want to associate with. That’s why many of the complaints are really 1st Amendment issues, not Section 230 issues.

Relatedly, it feels like part of the problem with Wheeler’s piece is that he’s bought into the myth that with Section 230 there are no incentives to moderate at all. That’s clearly false, given how much moderation we’ve seen. The false thinking is driven by the belief that the only incentive to moderate is the law. That’s ridiculous. The health of your platform is dependent on moderation. Keeping your users happy, and not having your site turn into a garbage dump of spam, harassment and hate, is a very strong motivator for moderation. Advertisers are another motivation, since they don’t want their ads appearing next to bigotry and hatred. The focus on the law as the main lever here is just wrong.

It is not government control or choice over the flow of information.

No, but changing Section 230… would do that. It would force companies to change how they moderate. This is a reason why Section 230 is so important. It gives companies (and users!) a freedom to experiment.

It is rather the responsible exercise of free speech.

Which… all of these companies already do. So what’s the point here?

Long ago it was determined that the lie that shouted “FIRE!” in a crowded theater was not free speech. We must now determine what is the equivalent of “FIRE!” in the crowded digital theater.

Long time Techdirt readers will already be screaming about this. This claim is not just wrong, it’s very, very ignorant about the 1st Amendment. The “falsely shouting fire in a crowded theater” line was a throwaway line in an opinion by Justice Holmes that was actually about jailing someone for handing out anti-war pamphlets. It was never actually the standard for 1st Amendment jurisprudence, and was effectively overturned in later cases, meaning it is not an accurate statement of the law.

Tom Wheeler is very smart and thoughtful on so many things that it perplexes me that he jumps into this area without bothering to understand the first thing about Section 230, the 1st Amendment, or content moderation. There are experts on all three that he could talk to. But even more ridiculous: even assuming everything he says is accurate, what actual policy proposal does he make in this piece? Tech companies should use tech in their moderation efforts? That seems like the only actionable point.

There are lots of bad Section 230/content moderation takes out there, and I can’t respond to them all. But this is the former chair of the FCC, and when he speaks, people pay attention. And it’s extremely disappointing that he would jump into this space headfirst with so many factual errors and mistaken assumptions. It’s doubly troubling that this is the second time (at least!) that he’s done this. I hope that someone at Brookings, or someone close to him, suggests he speak to some actual experts before speaking on this subject again.


Comments on “Former FCC Boss Tom Wheeler Continues To Misunderstand And Misrepresent Section 230 And The Challenges Of Content Moderation”

This comment has been deemed insightful by the community.
Anonymous Coward says:

subsection

section 1
      subsection (a)
            paragraph (1)
                  subparagraph (A)
                        clause (i)

 

“Statutory Structure and Legislative Drafting Conventions: A Primer for Judges”, by M. Douglass Bellis, FJC (archived copy), 2008

Anonymous Coward says:

This is really weird and sad. Of all the "well-intentioned" people to lob bombs into things they don’t understand because they don’t like something else, Wheeler is one of the surprising ones. And that piece is literally arglebargle. What the hell?

If it were someone else I’d be more inclined to think this was another attempt at context modulation for other purposes in the guise of an argument about content moderation.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Nice, managed a 'bogus 230 arguments' BINGO

‘Nerd harder’, ‘it’s not a first amendment issue’, ‘traditional media can do it why can’t social media?’… I swear, it’s like someone handed him a list of the talking points that are trotted out by those attacking 230 and told him to squeeze as many in as possible.

Getting this wrong once is understandable, everyone can screw up. Twice is stretching things, especially if people knowledgeable on the subject had talked to him after the first time to explain why he was wrong, but if he pops up to parrot the same debunked points a third time I honestly cannot see any explanation other than deliberate dishonesty and axe-grinding. However it goes in the future, though, for the moment it sounds like he could really do with TD’s article laying out the basics of 230 and why the arguments against it are flawed.

This comment has been flagged by the community.

Koby (profile) says:

He knows

so I’m not even sure why he brings up (c)(2) and the good faith line. That’s almost never an issue in Section 230 cases.

Because tech company decisions lately have not been in good faith, they have been based on politics. It turns out, he got that part correct.

‘traditional media can do it why can’t social media?’.

It sounds like he understands in his heart that there is a difference between a platform and a publisher.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: He knows

they have been based on politics.

So ban political parties because their decisions are also based on politics. /s

When people become so proselytizing that there is no reasoning with them, the only way for more reasonable people to have a discussion is to push the proselytizers out of the conversation.

That One Guy (profile) says:

Re: Re: He knows

Along with that question I’d be curious to know if they apply that standard equally, because if political motivation means you’re not acting in good faith then it would seem they just condemned the people agitating against 230 to score cheap points with the gullible, along with a whole slew of other people and politicians.

Anonymous Coward says:

Section 230 of the …what?

Public Law 104-104

SECTION 1. SHORT TITLE; REFERENCES.

     (a) Short Title.–This Act may be cited as the “Telecommunications Act of 1996”.

 . . .

SEC. 509. ONLINE FAMILY EMPOWERMENT.

     Title II of the Communications Act of 1934 (47 U.S.C. 201 et seq.) is amended by adding at the end the following new section:

          SEC. 230. <<NOTE: 47 USC 230.>> . . .

(Emphasis.)

From the Wheeler article

Section 230 of the 1996 Communications Act.

We went over this at Techdirt just ten days ago: Section 230 of… what?. But Tom Wheeler was in charge of the agency whose chief governing statute is the “Communications Act of 1934, as amended by the Telecommunications Act of 1996”.

Anonymous Coward says:

Re: Re: Section 230 of the ... what?

If only we had a common shorthand label to call it…

47 USC § 609

This chapter may be cited as the “Communications Act of 1934.”

 


Editorial Notes: This chapter, referred to in text, was in the original “this Act”, meaning act June 19, 1934, ch. 652, 48 Stat. 1064, known as the Communications Act of 1934…

(Supplemental hint: Click on the “Notes” tab on the LII page — the notes often contain helpful material! It used to be possible to link to that tab directly, but that capability seems to have gone away somewhere over the years now. So click.)

Anyhow, in short, leave off the “as amended by…” and just call it the “Communications Act of 1934”.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re:

I’d start a lot more basic than that and ask them to simply define what 230 is, why it was created and what it actually does, as I suspect that for a great many anti-230 people they either have no clue what it actually says and does, or they do know and they’d be forced to lie in order to defend their previous claims and position.

Anonymous Coward says:

Re: Re: Re:

… to simply define what 230 is

If you don’t know the proper names of things, then good luck looking through the United States Code for “230” — let alone searching deeper into the public record for things like S. Rept. 104-230 or H. Rept. 104-458. Sincerely, “Good luck”.

(Well, to test that, before posting, I did just try Google’s, “I’m feeling lucky”. You kids have it so easy these days… or maybe my result is just lucky.)

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:2 Re:

Really? After all the discussion for years…

Let me blather you a story then… I remember once telling people, ‘I cannot predict how the jury is going to go’ — or words to that effect.

I was thinking, back at that time, here I’ve spent years intensely focused on this (one particular) case. I know so much more about this case than was presented at trial, that I cannot really put myself into the mind of a juror looking through the limited window that they saw the case framed in. Just can’t set aside all the additional facts I know that weren’t given to them, but still color my thinking.

Anyhow, we did win. But that’s not the point of this story.

Jumping to the present, we know that a large percentage of readers just won’t click on the hyperlinks. (It gets worse with links to PDFs or YouTube.) It gets a little bit better if the link has good anchor text, and surrounding explanation. But it still can be pretty dismal.

On the flip side, a minuscule number will do truly heroic research from just the itsy-bitsiest scrap of info — coding up perl (or these days, python) to search through all the titles for a particular section, say. After all, there are only a little over 50 titles in the US Code, so the results might be fit to finish off with an eyeball scan.

In between, there’s a group who are moderately interested, but some of them may be new.
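
(For the moderately interested, here is a minimal Python sketch of the kind of search script described above. It assumes the US Code titles have been saved as plain-text files in a local “uscode” directory; that layout is invented purely for illustration.)

import pathlib
import re

def find_section(number, directory="uscode"):
    """Scan title01.txt, title02.txt, ... for lines that open a section heading."""
    # Matches heading styles like "Sec. 230.", "SEC. 230.", or "§ 230".
    pattern = re.compile(rf"^\s*(?:Sec\.|SEC\.|§)\s*{re.escape(number)}\b")
    hits = []
    for path in sorted(pathlib.Path(directory).glob("title*.txt")):
        for line in path.read_text(errors="ignore").splitlines():
            if pattern.match(line):
                hits.append((path.name, line.strip()))
    return hits

for name, line in find_section("230"):
    print(name, line)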

Rocky says:

Re: Re: Re:3 Re:

If your argument is that a large percentage of readers can’t be bothered to click a link to educate themselves, how will it help them if you write 47 U.S.C. § 230, a provision of the Communications Decency Act, or just section 230?

And there is no need for heroic searches; most search engines will return relevant information in the top 5 results if you search for section 230 or cda 230.

The real problem is that there are readers that have already made their mind up about what Section 230 is, and no amount of links or information will change their minds. Those who can think for themselves and are interested in the subject will look up information.

Anonymous Coward says:

Re: Re: Re:4 Re:

[H]ow will it help them if you write… the Communications Decency Act …?

Going to post an answer to this question, even though it may not be exactly the question you thought you were asking. 😉

US House OLRC: Popular Name Tool: letter C

The Popular Name Tool enables you to search or browse the United States Code Table of Acts Cited by Popular Name. . . .

Communications Decency Act of 1996
      Pub. L. 104-104, title V, Feb. 8, 1996, 110 Stat. 133
      Short title, see 47 U.S.C. 609 note

PaulT (profile) says:

Re: Re: Re:3 Re:

"we know that a large percentage of readers just won’t click on the hyperlinks"

Which is an issue with the people who refused to look at the data provided, not the people who provided the data.

"After all, there are only a little over 50 titles in the US Code"

How many come up when searching for CDA 230, the specific one being discussed?

"In betweeen, there’s a group who are moderately interested, but some of them may be new."

If you’re jumping into a discussion that’s several years old, it’s up to you to either catch up or ask what the relevant data is. Not to randomly post links that might be what everyone else is talking about, and mock them for not having repeated the same information in the article you stumbled across 5 years later.

nasch (profile) says:

Re: Re: Re:3 Re:

Jumping to the present, we know that a large percentage of readers just won’t click on the hyperlinks.

Yeah because the surrounding context is a bunch of WELL ACKSHUALLY without any indication that there is anything to see other than boring pedantry. Why would anyone click?

If you don’t know the proper names of things, then good luck looking through the United States Code for “230”

It’s really not as hard as you seem to think. I searched for "CDA section 230" and the very first link gave the full text of the statute.

PaulT (profile) says:

Re: Re: Re:4 Re:

"It’s really not as hard as you seem to think. I searched for "CDA section 230" and the very first link gave the full text of the statute."

Yeah, before I commented I did a quick test search, and the full text came up with "CDA 230" in the first google result.

I’m not sure what he’s doing to make it so difficult for him to find the text, but it has to be deliberate.

Anonymous Coward says:

Re: Re: Re:6 Re:

Maybe he has a physical copy of the entire Federal code…

Got rid of the U.S.C.A. bound paper volumes and supplements two moves ago. It was a coast-to-coast move. That was the same move that saw all the 5-1/4 in floppies go.

Last move was also coast-to-coast, back the other direction. Got rid of almost all the 3-1/2 in floppies on that one. Still have one USB floppy reader — unused, I’d expect it to remain functional longer than any of that media will.

This laptop still has a CD/DVD reader (burner, in fact.)

Anonymous Coward says:

Re: Re: Re:8 Re:

[S]o what is it that makes you think section 230 is hard to find …?

You haven’t seen the unpacked boxes stacked up. I think I just located the box with the CD-ROMs. That whole box would be better off going straight to the dump, unopened — if I open it, I’m afraid there’s stuff in there that requires physical destruction before discarding.

 

[W]hy are you so interested in making sure people know what it "really" is?

Don’t confuse my motivations with Blake’s motivations. I don’t know why Blake wrote his post, or why he took the tone he did.

Do read “I’m feeling lucky”, along with the DC Circuit’s “No Dice”. Folks should understand the various potentially-operative context(s) here. Especially with § 230 perhaps on the table for amendment in the new Congress.

The last word is not likely to remain Brand X (2005).

bhull242 (profile) says:

Re: Re: Re:9 Re:

You haven’t seen the unpacked boxes stacked up. I think I just located the box with the CD-ROMs. That whole box would be better off going straight to the dump, unopened — if I open it, I’m afraid there’s stuff in there that requires physical destruction before discarding.

Okay, I’m a bit lost on this. Are you a former government employee or something? Or do those CDs require destruction because they contain private information? Either way, if you’re worried that you might have to physically destroy the CDs if you unbox them, you should destroy them before tossing them anyways. You do not want to risk someone else opening the box to find sensitive information.

Regardless, a simple internet search will still solve the problem of finding the text of CDA §230 and its “shortform” name (which is a bit of a misnomer).

Anonymous Coward says:

Re: Re: Re:10 Re:

[D]o those CDs require destruction…

Almost certainly… well, I don’t know that right now — my memory’s getting flaky these past few years, and the only thing I absolutely know for sure is what I wrote on the outside of the box a few years ago when I packed them up.

Ok. They require physical destruction. Damn.

Have you ever taken apart a dead hard drive with a screwdriver and tin-snips? Those are even worse than CDs, so I guess it isn’t that bad.

bhull242 (profile) says:

Re: Re: Re:11 Re:

Okay, this is a bit of a tangent, but yeah, you should probably get on that as soon as reasonably possible. If they contain sensitive information, you shouldn’t risk it getting out.

I had to do the same thing a while back, and yes, it’s definitely not nearly as bad as dismantling a dead hard drive, an experience I’ve thankfully only had to go through once so far. Still, if only someone invented a shredder for CDs.

PaulT (profile) says:

Re: Re: Re:9 Re:

"You haven’t seen the unpacked boxes stacked up. I think I just located the box with the CD-ROMs."

So, you’re saying that your entire effort to locate the law depends on an old archive of it from an unspecified date, with no access to any changes made since that date, an archive that you don’t think you’d dare to open even if you can locate the relevant section?

That explains why you’re having so much difficulty locating it compared to people who can link to a current version of the exact law after a few seconds of searching, but it doesn’t make your argument have any worth in the real world.

nasch (profile) says:

Re: Re: Re:9 Re:

Don’t confuse my motivations with Blake’s motivations. I don’t know why Blake wrote his post, or why he took the tone he did.

I don’t know who Blake is. You have the same gravatar icon as the anonymous commenter I was addressing, which means (I think) the same IP address. So I assume you’re the same person.

Anonymous Coward says:

Re: Re: Re:10 Re:

I don’t know who Blake is.

Professor Blake Reid at Colorado Law, the author of the original “Section 230 of… what?”. Mike just referred to him as Blake in his comment earlier. It’s easy to lose context.

I assume you’re the same person.

I am definitely not Professor Blake Reid. Just trust me on that one.

nasch (profile) says:

Re: Re: Re:11 Re:

I am definitely not Professor Blake Reid. Just trust me on that one.

I never thought you might be. You seemed to think I was confusing you for Blake, and I was assuring you that I was not, never thought you might be Blake, and that I was in fact addressing my replies to the correct person. That is, it was your words I was replying to, not his.

Anonymous Coward says:

This is a commonly stated view, but it tends to reveal a near total ignorance of how these decisions are made. These companies have large trust and safety teams, staffed with thoughtful professionals who work through a wide variety of trade-offs and challenges in making these decisions. While Wheeler is over here saying that it’s obvious that the problem is they waited too long and didn’t nerd harder to remove these people earlier, you have plenty of others out there screaming that this proves the companies are too powerful, and they should be barred from banning him.

If they were staffed with ‘thoughtful professionals’, then a fuckton of the conspiracy theorists, instigators, and nazis who sling threats, lies, and hate around would’ve been banned a lot sooner. It’s obvious they waited too long. These corps have been treating Trump, the Q cultists, and more like their golden geese. The "plenty of others that scream that the companies are too powerful and they should be barred from banning him" are mostly right-wing bigots and assholes who would’ve been banned ages ago if social media corps like Twitter actually enforced their ToS equally. It’s a fact that they don’t because a lot of Republican lawmakers would get kicked off too.

Anyone who thinks it’s a simple business model issue has never been involved in any of these discussions. It’s not. There are a ton of factors involved, including what happens if you make this move and there’s a legal backlash?

I thought that Section 230 was supposed to protect from the worst of the legal backlash?

Or what happens if you’re driving all the cultists into underground sites where we no longer know what they’re planning?

Deplatforming works, though. TechCrunch had an article about this a good while back. Some of the most toxic subreddits on Reddit were banned and removed and I’m sure a lot of people on those subreddits were banned as well. That had an effect of making other subreddits, as well as former users of those toxic subreddits, less toxic.

I’m also tired of the "What about making sure we know what they’re planning?" scenario. I’m gonna take a quote from a commenter on a recent Ars Technica article:

For the umpteen-thousandth time: deplatforming these people is not supposed to make them vanish. It is supposed to prevent them from being able to recruit people in plain sight within the public sphere. The fewer platforms that allow them and the fewer places that they get to push their conspiracies, their bile, and their hate, the less opportunities that they have to suck more gullible people into their violent, dangerous bullshit.

Deplatforming is about ensuring that radicalization can’t happen at the scale that it has been with Facebook giving a free pass.

There are lots of questions, and demanding that these large companies, with a variety of competing interests, do it to your standard is the height of privilege.

The ‘competing interests’ in this current instance are 1) People who don’t want bigots, nazis and fascist conspiracy theorists to be able to thrive right out in the open and are questioning why social media corps have allowed them to do so for so long, and 2) The bigots, nazis, and fascist conspiracy theorists and their enablers such as the one in the White House, the ones in Congress, and the pundits cheering them on. I feel like framing this as if both of these ‘competing interests’ are legitimate, and that it’s the ‘height of privilege’ to demand better, is disingenuous.

It’s fine to say that companies can do better. It’s fine to suggest ways to make better decisions. But too many pundits and commentators act as if there’s some "correct" decision and any result that differs from that cannot possibly be right.

I am quite sure that when the pundits and commentators take social media corps to task for not banning nazis and allowing them to recruit and fester and grow their groups and do nothing about it when they find info about their plans to inflict terror and violence on people, that said taking to task is as close to "correct" as you can get.

And, even worse, they blame Section 230 for that — when the reality is that Section 230 is what enables the companies to explore different solutions, as both Twitter and Facebook have done for years.

They’ve explored different solutions, but they do nothing with them. As linked farther up, Twitter has the tech available to drop the hammer on white supremacists. It has the tech available to make their site a better place. But it doesn’t use it. Because that would mean less engagement, less ads, and less money for them. The only thing they do is leap in and do major bannings when the controversy is at its largest and the cost of keeping the controversy-generator around outweighs the money that the controversy-generator brings in. Section 230 gives them the freedom to act, but they choose not to because not acting is more profitable for them.

Wheeler’s "solution" for reforming Section 230 is also ivory tower academic nonsense that seems wholly disconnected from the reality of how content moderation works within these companies.

The idea that the Internet as a whole can eventually be reformed into a much better place through "Protocols Not Platforms" where decentralized protocols rule the day is also Ivory Tower academic nonsense. "Protocols Not Platforms" also depends on the idea that enough people can nerd harder to figure out a solution to the many issues that it would bring along with it.

Finally, demanding "transparency" is another one of those solutions that occurs to people who have never done content moderation. I recently wrote about the importance of transparency, but the dangers of mandated transparency.

Steve Bannon wasn’t banned for calling for Fauci to be beheaded. I would like to know the nitty-gritty processes by which Facebook came to that conclusion. The shitty double-standards moderation decisions that corps like Facebook and Twitter make are why some regulation and mandated transparency are needed.

I’ve pointed out a few times that we’ve had our comment moderation system in place for over a decade, and in that time the only people who have ever demanded "more transparency" into how it works are those looking to game the system. Transparency is often demanded by the worst of your users, who want to "litigate" every aspect of why their content was removed or why they were banned. They want to search for loopholes or accuse you of unfair treatment.

Then you can ban the people who try to litigate and game the system to their own ends under the rationale that "This user was trying to game the system to their own ends." Problem solved. And if the treatment is indeed unfair in a way that lets assholes get away with things while others get banned, then maybe mandated transparency that lets people see that is a good thing?

That is, these companies have a 1st Amendment right to moderate as they see fit, and to not be compelled to host speech, or be forced to associate with those they don’t want to associate with. That’s why many of the complaints are really 1st Amendment issues, not Section 230 issues.

But when their 1st Amendment right to moderate as they see fit leads to them, for profit, letting Q cultists and fascists have at it in ways that seek to end the 1st Amendment rights (and many other rights) of others, and they almost succeeded on the 6th, maybe something’s gotta give and we need to talk about what that ‘something’ is.

Relatedly, it feels like part of the problem with Wheeler’s piece is that he’s bought into the myth that with Section 230 there are no incentives to moderate at all. That’s clearly false, given how much moderation we’ve seen. The false thinking is driven by the belief that the only incentive to moderate is the law. That’s ridiculous. The health of your platform is dependent on moderation. Keeping your users happy, and not having your site turn into a garbage dump of spam, harassment and hate, is a very strong motivator for moderation. Advertisers are another motivation, since they don’t want their ads appearing next to bigotry and hatred. The focus on the law as the main lever here is just wrong.

Everything about this paragraph. By "how much moderation we’ve seen", do you mean over the time that these platforms have existed, or as of this recent moment? Because in both cases, the amount of moderation is insufficient. It’s insufficient over the time that these platforms have existed in that, as discussed above, they let shit fester. And it’s insufficient in regards to this recent moment post-January 6th in that it’s too little too late, done with the intent of saving face.

Twitter, Facebook, and more have faced next to no actual repercussions for failing to moderate sufficiently. The global scale at which these platforms operate means that they don’t have to worry about their sites turning into garbage dumps of spam, harassment and hate. Whether users leave because they don’t want to deal with it anymore, or because the site let itself be used by hateful users as a propaganda generator for a genocide of people in Myanmar and those users are now dead, the sites make enough money in a day that they don’t usually have to care about the day-to-day churn of death threats and shit. The only thing they care about is saving face when that shit piles up to where it can’t be ignored, like what happened on the 6th.

Facebook lets the same sorts of fascist nutjobs that stormed the Capitol on the 6th advertise without restriction. The idea that advertisers are another significant motivation that can get them to do better is hilarious.

The incentive at play here for the platforms is to do as little moderation as possible that they can get away with, save face when big controversies blow up by banning people who’ve had free reign thanks to them doing as little moderation as possible, make false promises to do better, and then continue the charade.

History ain’t gonna be kind to the likes of Dorsey, Zuckerberg, Sandberg, et al. who’ve championed surveillance capitalism and climbed to great riches using the mountain of bodies it’s created. It also won’t be kind to the capitalist cheerleaders who constantly go to bat for the idea that the magical Free Market and Marketplace Of Ideas will eventually win out.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re:

If they were staffed with ‘thoughtful professionals’, then a fuckton of the conspiracy theorists, instigators, and nazis who sling threats, lies, and hate around would’ve been banned a lot sooner. It’s obvious they waited too long.

Obvious to you. But meanwhile there are a ton of people, including world leaders, media moguls, and others insisting they should not have done this. And that’s the point. What’s obvious to you is not "correct." It’s dumb, short sighted and evidence of your particular world view and privilege. There are massive consequences for these decisions and while I think the companies were correct, to argue that it’s "obvious" how this should have been done earlier is to reveal that you’re ignorant of the realities.

I thought that Section 230 was supposed to protect from the worst of the legal backlash?

The legal backlash I meant was the removal of 230.

Deplatforming works, though.

Deplatforming works in some cases, and some contexts. There are some examples of it working. There are also examples of it not working. It’s wrong, silly, and ignorant to argue it always works in all contexts.

The ‘competing interests’ in this current instance are 1) People who don’t want bigots, nazis and fascist conspiracy theorists to be able to thrive right out in the open and are questioning why social media corps have allowed them to do so for so long, and 2) The bigots, nazis, and fascist conspiracy theorists and their enablers such as the one in the White House, the ones in Congress, and the pundits cheering them on. I feel like framing this as if both of these ‘competing interests’ are legitimate, and that it’s the ‘height of privilege’ to demand better, is disingenuous.

I’m not though. You’re putting disingenuous words in my mouth. I’m not saying "good people on both sides." I’m saying there are legitimate arguments for why a social media site didn’t want to go this far, and it’s not "obvious" that those reasons are bad.

I am quite sure that when the pundits and commentators take social media corps to task for not banning nazis and allowing them to recruit and fester and grow their groups and do nothing about it when they find info about their plans to inflict terror and violence on people, that said taking to task is as close to "correct" as you can get.

Your argument makes sense if it’s obvious who is and who is not a Nazi. The problem with assholes like you is the insistence that because YOU’RE sure who the Nazi is, everyone else is too. IT’S NOT THAT SIMPLE.

And fucking this up creates significant damage. We’ve SEEN THAT ELSEWHERE. Efforts to ban "terrorists" in the Middle East have cut off aid workers and cut off those documenting war crimes. You think it’s easy? Then you’re too ignorant to comment and should shut up.

They’ve explored different solutions, but they do nothing with them.

Bullshit. You don’t know what you’re talking about. There have been constant changes and experiments. Twitter alone has tried out a variety of different ideas and approaches. To say they did nothing is just wrong.

The idea that the Internet as a whole can eventually be reformed into a much better place through "Protocols Not Platforms" where decentralized protocols rule the day is also Ivory Tower academic nonsense. "Protocols Not Platforms" also depends on the idea that enough people can nerd harder to figure out a solution to the many issues that it would bring along with it.

There’s a big difference though. Protocols not Platforms is a suggestion for people building the technology — not a policy proposal for the government. That’s the issue. And if it’s "academic" why do so many people keep reaching out to me to show me the solutions they’re actually building?

Steve Bannon wasn’t banned for calling for Fauci to be beheaded. I would like to know the nitty-gritty processes by which Facebook came to that conclusion. The shitty double-standards moderation decisions that corps like Facebook and Twitter make are why some regulation and mandated transparency are needed.

He was banned from Twitter, not from Facebook. Different sites have different rules. It’s not "double standards" it’s their own rules and their own contexts. You can disagree with the decision, but your willingness to immediately impute bad motives is the problem. It’s wrong.

But when their 1st Amendment right to moderate as they see fit leads to them, for profit, letting Q cultists and fascists have at it in ways that seek to end the 1st Amendment rights (and many other rights) of others, and they almost succeeded on the 6th, maybe something’s gotta give and we need to talk about what that ‘something’ is.

Do you really think that if Facebook had been more aggressive what happened on the 6th wouldn’t have happened? You’re not that gullible, are you?

The rest of your comment is completely fantasyland disconnected from reality. It’s the kind of talk that someone who has never worked in this space, has no experience, and thinks that these things are easy. It’s the kind of thing that someone who has never had to grasp the trade offs and consequences of decisions makes. In short: it’s ignorant claptrap.

Anonymous Coward says:

Re: Re: Re:

Do you really think that if Facebook had been more aggressive what happened on the 6th wouldn’t have happened?

Yes. Yes I do. If Facebook had been more aggressive in rooting out conspiracy theorists and white supremacists over the years and deplatforming them, this wouldn’t have happened. If there were fewer public places that these fuckers could congregate and radicalize & recruit regular people, this wouldn’t have happened.

Are you going to treat what happened on January 6th as if it was inevitable? Like a price we have to pay for having the First Amendment, the way gun-lovers treat school shootings as the price we have to pay for the 2nd? Because it honestly seems that way.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: Re:

"Are you going to treat what happened on January 6th as if it was inevitable?"

The main inspiration for the event was the President of the United States talking for weeks about how the election was a fraud and that the election was being stolen from the people who voted for him. While recruitment for stuff like Q Anon might have been lower if social media platforms had taken down their speech, they were still getting their propaganda from Fox, OANN, Infowars, etc. Anyone dumb enough to have voted for Trump twice was probably dumb enough to believe them without social media influence.

So, I would say there was a certain amount of inevitability that a bunch of desperate morons who have been trained for years to hate Pelosi and Biden would have gathered to hear Trump, Giuliani and Alex Jones preach to them. Whether that would have been less of a powder keg if Trump had been forced to whine outside of Facebook is anyone’s guess, but it’s not like he and his sycophants wouldn’t have been heard.

Mike Masnick (profile) says:

Re: Re: Re: Re:

Are you going to treat what happened on January 6th as if it was inevitable? Like a price we have to pay for having the First Amendment, the way gun-lovers treat school shootings as the price we have to pay for the 2nd? Because it honestly seems that way.

No. My argument is that the President of the US and Fox News are way more to blame than idiots on Facebook. Without Facebook, you still have those two other, more powerful driving forces.

Anonymous Coward says:

Re: Re: Re: Re:

What are you smoking if you think that the lack of Facebook would mean extremists wouldn’t try to radicalize and couldn’t succeed by any other means?

Assuming that rights are responsible for bad actors is beyond brain-damaged as a notion. We have seen how those power grabs go before – they claim it is to stop X, then subsequently fail at it, and yet they not only keep their ill-gotten gains but insist they need just a little bit more.

This comment has been flagged by the community.

Anonymous Coward says:

… letting Q cultists and fascists have at it in ways that seek to end the 1st Amendment rights (and many other rights) of others, and they almost succeeded on the 6th, maybe something’s gotta give and we need to talk about what that ‘something’ is.

Here’s a paper bag. Breathe into it a few minutes, you’ll feel better.

No, they didn’t "almost succeed on the 6th" in bringing down the government. They were a far cry from succeeding at that. On the other hand, they were close to causing harm to elected representatives, and definitely did succeed at disrupting proceedings. But it was never going to be more than a temporary disruption. And given that "rights" generally apply to government action against citizens, again: the rioters were not close to violating rights. Laws, certainly. Rights, not so much.

But there’s something else from that quote that we must discuss. You sitting down? Good. Being an asshole isn’t illegal. Being ignorant isn’t illegal. Believing in conspiracy theories, or believing "fascism is good" … is not illegal. Nor is talking about them. What IS illegal are things like conspiring to break laws, defaming someone and so on.

Does Facebook deserve public censure for letting fascist nutjobs advertise without restriction? Sure thing. Should the government require Facebook to restrict them? There’s a whole pile of free speech precedent that says that would be very, very bad indeed.

maybe something’s gotta give and we need to talk about what that ‘something’ is

We always need to talk about free speech. About the big things, and about the little things. And not just when there is a crisis, but all the time. But in the vast majority of cases, the thing that’s had to give is not the right to say something. And that’s good.

Stephen T. Stone (profile) says:

Re:

it was never going to be more than a temporary disruption

Some of the rioters were chanting for the hanging of the Vice President. Some of them had pipebombs, firearms, and/or plastic cuffs. The difference between relative safety and genuine life-threatening peril for lawmakers in the Capitol was slim enough that the Vice President was, by some accounts, about a minute away from being in the line of sight of those terrorists.

And I doubt the murder of the Vice President/the Speaker of the House/other Congresspeople would’ve been as “temporary” a distraction as you seem to think it would’ve been.

PaulT (profile) says:

Re: Re:

"But it was never going to be more than a temporary disruption"

You might need to update your talking points as new confirmed information comes out:

https://www.newsweek.com/capitol-assassinate-qanon-shaman-jake-angeli-1561872

"Capitol Rioters Intent ‘Was to Capture and Assassinate Elected Officials,’ Prosecutors Say"

Anonymous Coward says:

Re: Re: Re:

It is still a temporary disruption from a high level standpoint – electoral systems are designed for frequent replacement anyway. It isn’t like Franco, where you kill a few of his heirs and the crypto-liberal (in the secretly-not-a-totalitarian sense) king left in line transitions to representative government.

It is horrible of course, and would leave a scar, but it wouldn’t be a permanent disruption like, say, Alaska and Washington state being annexed by Canada and that becoming the status quo.

This comment has been flagged by the community.

Anonymous Coward says:

What Section 230 WAS is not relevant. Facebook is being blamed by law enforcement for enabling all these crimes. If the victims could file civil suits (and they likely will), with 230 intact, then that would work. Same for the single-publication rule so people can eliminate data that "refreshes."

Anonymous content should be subject to a notice-and-takedown provision when the host or search engine cannot identify the original author, etc.

The US is the only country that has Section 230. The rest of the world and its internet do fine without it.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

Why should all anonymous posts be left up only so long as nobody on earth objects to them and files a notice? I assure you, in such an anarcho-syndicalist regime your own posts’ half-lives would be measured in seconds.

The United States is the only country that has been able to foster projects like Google Search, Yelp, Wikipedia, the Open Directory — name another country that has produced anything of similar significance. Twitter and Facebook are red herrings — they are rich enough to survive loss of legal protection, and trivial enough to be done in your choice of anarcho-syndicalist regimes.

Techdirt forums, and numerous other valuable little specialty forums, would have to be closed because the Real People, the Little People, the non-megacorporations couldn’t afford the legal fees required to prove their innocence. If you hate Techdirt so much, that’s fine — just leave. I was an Open Directory editor for years; some of my anonymous posts at Groklaw were promoted to articles. In my search for medical professionals I’ve found invaluable information on sites that SOME doctors would much prefer wasn’t known. All these things are what make the internet useful to me.

I’ve left Twitter, because I don’t care what they moderate or what they promote. Life is better without it. Facebook, ditto. Red herrings, both of them. It’s the VALUABLE sites that I care about — honest Yelp reviews, more-often-than-not informed Wikipedia articles, lively conversations with informed people on highly technical subjects.

I’ve helped moderate specialty forums, so I can say from experience: you know nothing. You obviously have no experience. You will never learn anything until you shut up and start listening to people who have been-there-done-that.

Techdirt is a good place to start doing that. BECAUSE there are informed technical people in the conversation, Techdirt is NOT a good place to gather a following of ignoramuses by making nonsensical asseverations with the utter confidence of the utterly clueless.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re:

Anonymous content should be subject to a notice-and-takedown provision when the host or search engine cannot identify the original author, etc.

Said the AC…

The US is the only country that has Section 230. The rest of the world and its internet do fine without it.

Borrowing from someone else’s comment on this, how many other countries have legal precedent that if a site moderates some content they could be held liable for everything? Because that’s why 230 exists and if other countries don’t have that incredibly stupid view of liability then they don’t need 230.

Anonymous Coward says:

Re: Re:

Funny you keep begging for Section 230 to die so you can kill off anonymous comments you dislike, Jhon Smith – especially considering that half your time signed in as horse with no name or Whatever or MyNameHere was spent bitching and griping that the mods were delaying and hiding your posts… despite people responding to you anyway.

How’s Paul Hansmeier’s quest to honeypot troll from prison coming along? Did you have to fluff a lot of cocks to get someone to represent him? Or were you too busy weeping salty tears for the demise of a copyright giant?

Darkness Of Course (profile) says:

First off is Facebook's claim

Facebook is not, nor have they ever been, a tech company. They have a collection of spaghetti code that Zucker urged them to break on a regular basis.

I would suggest that Mr. Wheeler spend a week in the moderators’ box. Just watch for at least a day or two.

Then, he can code up a few lines to moderate the minuscule slice of content that he saw.

As if.
