Now It's Harvard Business Review Getting Section 230 Very, Very Wrong

from the c'mon-guys dept

It would be nice if we could have just a single week in which some major “respected” publication did the slightest bit of fact checking on its wacky articles about Section 230. It turns out that’s not happening this week. Harvard Business Review has now posted an article saying It’s Time to Update Section 230, written by two professors — Michael Smith of Carnegie Mellon and Marshall Van Alstyne of Boston University. For what it’s worth, I’ve actually been impressed with the work and research of both of these professors in the past. Even though Smith runs an MPAA-funded program that publishes studies about the internet and piracy, his work has usually been careful and thorough. Van Alstyne, for his part, has published some great work on problems with intellectual property, and kindly came and spoke at an event we helped to run.

Unfortunately, this piece for HBR does not do either Smith or Van Alstyne any favors — mainly because it gets so much wrong. It starts out, like so many of these pieces, with some mythmaking: that Section 230 was passed out of “naive” techno-optimism. This is simply wrong, even if it sounds like a good story. The piece does (at least) highlight some of the good that social media has created (the Arab Spring, #MeToo, #BlackLivesMatter, and the ice bucket challenge). But then, of course, it pivots to all the “bad” stuff on the internet, and says that “Section 230 didn’t anticipate” how to deal with that.

So, let’s cut in and point out this is wrong. Section 230’s authors have made it abundantly clear over and over again that they absolutely did anticipate this very question. Indeed, the very history of Section 230 is the history of web platforms trying to figure out how to deal with the ever-changing, ever-evolving challenge of “bad” stuff online. And the way that 230 does that is by allowing websites to constantly experiment, innovate, and adapt without fear of liability. Without that, you create a much worse situation — one in which any “false” move by the website could lead to liability and ridiculously costly litigation. Section 230 has enabled a wide variety of experiments and innovations in content moderation to figure out how to keep platforms functioning for users, advertisers, and more. But, this article ignores all that and pretends otherwise. That’s doing a total disservice to readers, and presenting a false narrative.

The article goes through a basic recap of how Section 230 works — and concludes:

These provisions are good — except for the parts that are bad.

Amusingly, that argument applies to lots of content moderation questions as well. Keep all the stuff that’s good, except for the parts that are bad. And it’s that very point that highlights why Section 230 is actually so important. Figuring out what’s “good” and what’s “bad” is inherently subjective, and part of the genius of Section 230 is that it allows companies to experiment with different alternatives in figuring out how best to deal with things for their own community, rather than trying to comply with some impossible standard.

They then admit that there are other, non-legal, incentives that have helped keep websites moderating in a reasonable way, though they imply that this doesn’t work any more (they don’t explain why or how):

When you grant platforms complete legal immunity for the content that their users post, you also reduce their incentives to proactively remove content causing social harm. Back in 1996, that didn’t seem to matter much: Even if social media platforms had minimal legal incentives to police their platform from harmful content, it seemed logical that they would do so out of economic self-interest, to protect their valuable brands.

Either way, from there the article goes completely off the rails in ways that are kind of embarrassing for two widely known professors. For example, the following statement is entirely unsupported and disconnected from reality. Hilariously, it is the very “misinformation” that these two professors seem so upset about.

We’ve also learned that platforms don’t have strong enough incentives to protect their brands by policing their platforms. Indeed, we’ve discovered that providing socially harmful content can be economically valuable to platform owners while posing relatively little economic harm to their public image or brand name.

I know that this is out there in the air as part of the common narrative, but it’s bullshit. Pretty much every company of any size lives in fear of stories of “bad” content getting through on their platform, and causing some real world harm. It’s why companies have invested so much in hiring thousands of moderators, and trying to find any kind of technological solution that will help in combination with the ever growing ranks of human moderators (many of whom end up being traumatized by having to view so much “bad” content). The idea that Facebook’s business isn’t harmed by its failures on this front or that the “socially harmful content” is “valuable” to Facebook is simply not supported by reality. There are huge teams of people within Facebook pushing back against that entire narrative. Facebook also didn’t set up the massive (and massively expensive) Oversight Board out of the goodness of its heart.

What Smith and Van Alstyne apparently fail to consider is that this is not a problem of Facebook lacking the right incentives. It’s a problem of it being impossible to do this well at scale, no matter what incentives are in place — combined with the fact that many of the “problems” they’re upset about are actually societal problems that governments blame on social media to hide their own failings in fixing education, social safety nets, criminal justice reform, healthcare, and more.

This paragraph just kills me:

Today there is a growing consensus that we need to update Section 230. Facebook’s Mark Zuckerberg even told Congress that it “may make sense for there to be liability for some of the content,” and that Facebook “would benefit from clearer guidance from elected officials.” Elected officials, on both sides of the aisle, seem to agree: As a candidate, Joe Biden told the New York Times that Section 230 should be “revoked, immediately,” and Senator Lindsey Graham (R-SC) has said, “Section 230 as it exists today has got to give.” In an interview with NPR, former Congressman Christopher Cox (R-CA), a co-author of Section 230, called for rewriting Section 230, because “the original purpose of this law was to help clean up the Internet, not to facilitate people doing bad things.”

First off, Facebook is embracing reforms to Section 230 because it can deal with them and it knows the upstart competitors it faces cannot. This is not a reason to support 230 reform. It’s a reason to be very, very worried about it. And yes, there is bipartisan anger at 230, but they leave out that it’s for the exact opposite reasons. Democrats are mad that social media doesn’t take down more constitutionally protected speech. Republicans are mad that websites are removing constitutionally protected conspiracy theories and nonsense. The paragraph in HBR implies, incorrectly, that there’s some agreement.

As for the Cox quote, incredibly, it was taken from an interview a few years ago, in which Cox appeared to have a single reform suggestion: clarifying that the definition of an Information Content Provider covers companies that are actively involved in unlawful activity done by users. And, notably (again, skipped over by Smith and Van Alstyne), that interview occurred just after FOSTA was passed by Congress — and it’s now widely recognized that FOSTA has been a complete disaster for the internet, and has put tons of people in harm’s way. That seems kinda relevant if we’re talking about how to update the law again.

But Smith and Van Alstyne don’t even mention it!

Instead, they fall back on tired, wrong, or debunked arguments.

Legal scholars have put forward a variety of proposals, almost all of which adopt a carrot-and-stick approach, by tying a platform’s safe-harbor protections to its use of reasonable content-moderation policies. A representative example appeared in 2017, in a Fordham Law Review article by Danielle Citron and Benjamin Wittes, who argued that Section 230 should be revised with the following (highlighted) changes: “No provider or user of an interactive computer service that takes reasonable steps to address known unlawful uses of its services that create serious harm to others shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.”

Of course, as we’ve explained, this is a solution that only a law professor who has never had to run an actual website could love. The problems with the “takes reasonable steps” argument are myriad. For one, it would mean that websites would constantly need to go to court to defend their content moderation practices — a costly and ridiculous experience, especially when you have to defend it to people who don’t understand the intricacies and trade-offs of content moderation. I saw this first hand just a couple months ago, in watching a print-on-demand website lose a court fight, because the plaintiff insisted that any mistake in its content moderation practices proved its efforts weren’t “reasonable.”

At best such a setup would mean that all content moderation would become standardized, following exactly whatever plan was chosen by the first few companies to win such lawsuits. You’d wipe out pretty much any attempt at creating new, better, more innovative content moderation solutions, because the only way you could do that is if you were willing to spend a million dollars defending it in court. And that would mean that the biggest companies (once again) would control everything. Facebook could likely win such a case, screwing over tons of competitors, and then everyone else would have to adopt Facebook’s model (hell, I wouldn’t put it past Facebook to offer to “rent” its content moderation system out to others) in such a world. The rich get richer. The powerful get more powerful. And everyone else gets screwed.

The duty-of-care standard is a good one, and the courts are moving toward it by holding social media platforms responsible for how their sites are designed and implemented. Following any reasonable duty-of-care standard, Facebook should have known it needed to take stronger steps against user-generated content advocating the violent overthrow of the government.

This is also garbage and taken entirely out of context. It doesn’t mention just how much content there is to moderate. Facebook has billions of users, posting tons of stuff online every day. This supposes that Facebook can automatically determine what counts as “content advocating the violent overthrow of the government.” But it does nothing whatsoever to help define what that content actually looks like, or how to find it, or how to explain those rules to every content moderator around the globe in a manner in which they’ll treat content in a fair and equitable way. It doesn’t take into account context. Is it “advocating the violent overthrow of the government” when someone tells a joke hoping President Trump dies? Is it failing a duty of care standard for someone to suggest that… an authoritarian dictatorship should be overthrown? There are so many variables and so many issues here that tossing out the claim that a duty of care was obviously breached by allowing “content advocating the violent overthrow of a government” just shows how ridiculously naive and ignorant both Smith and Van Alstyne are about the actual issues, trade-offs, and challenges of content moderation.

They then try to address these kinds of arguments by setting up a very misleading strawman to knock down:

Not everybody believes in the need for reform. Some defenders of Section 230 argue that as currently written it enables innovation, because startups and other small businesses might not have sufficient resources to protect their sites with the same level of care that, say, Google can. But the duty-of-care standard would address this concern, because what is considered “reasonable” protection for a billion-dollar corporation will naturally be very different from what is considered reasonable for a small startup.

Yeah, but you only find that out after you’re dead, spending a million dollars defending it in court.

And then… things go from just bad and uninformed to actively spreading misinformation:

Another critique of Section 230 reform is that it will stifle free speech. But that’s simply not true: All of the duty-of-care proposals on the table today address content that is not protected by the First Amendment. There are no First Amendment protections for speech that induces harm (yelling “fire” in a crowded theater), encourages illegal activity (advocating for the violent overthrow of the government), or that propagates certain types of obscenity (child sex-abuse material).

Yes, that’s right. They trotted out the “fire in a crowded theater” trope, which is already wrong, and then they applied it incorrectly. It’s flat out wrong to say that there is no 1st Amendment protection for speech that induces harm. Much of that content is absolutely protected under the 1st Amendment. The actual exceptions to the 1st Amendment in this area (which, you know, maybe someone at HBR should have looked up) are for “incitement to imminent lawless action” and “fighting words,” both of which are very, very, very narrowly defined.

As for child sex-abuse material, that’s got nothing to do with Section 230. CSAM content already violates federal criminal law and Section 230 has always exempted federal criminal law.

In other words, this paragraph is straight up misinformation. The very kind of misinformation that Smith and Van Alstyne seem to think websites should be liable for hosting.

Technology firms should embrace this change. As social and commercial interaction increasingly move online, social-media platforms’ low incentives to curb harm are reducing public trust, making it harder for society to benefit from these services, and harder for legitimate online businesses to profit from providing them.

This is, again, totally ignorant. They have embraced this change, because the incentives already exist. It’s why every major website has a “trust & safety” department that hires tons of people and does everything they can to properly moderate their websites. Because getting it wrong leads to tons of criticism from users, from the media, and from politicians — not to mention advertisers and customers.

Most legitimate platforms have little to fear from a restoration of the duty of care.

So long as you can afford the time, resources, and attention required to handle a massive trial to determine if you met the “duty of care.” So long as you can do that. And, I mean, it’s not like we don’t have examples of how this plays out in other arenas. I already talked about what I saw in court this summer in the trademark field (not covered by Section 230). And we have similar examples of what happens in the copyright space as well (not covered by Section 230). Perhaps Smith and Van Alstyne should go talk to the CEO of Veoh… oh wait, they can’t, because the company is dead, even though it won its lawsuit on this very issue a decade ago.

A duty of care standard only makes sense if you have no clue how any of this works in practice. It’s an academic solution that has no connection to reality.

Most online businesses also act responsibly, and so long as they exercise a reasonable duty of care, they are unlikely to face a risk of litigation.

I mean, this is just completely disconnected from reality as we’ve seen. That trial I witnessed in June is one of multiple cases brought by the same law firm against online marketplace providers, more or less trying to set up a business suing companies for failing to moderate trademark-related content to some arbitrary standard.

What good actors have to gain is a clearer delineation between their services and those of bad actors.

They already have that.

A duty of care standard will only hold accountable those who fail to meet the duty.

Except for all the companies it kills in litigation.

This article is embarrassingly bad. HBR, at the very least, should never have allowed the blatantly false information about how the 1st Amendment works, though all that really serves to do is discredit both Smith and Van Alstyne.

I don’t understand what leads otherwise reasonable people who clearly have zero experience with the complexities of social media content moderation to assume they’ve found the magic solution. There isn’t a magic solution. And your solution will make things worse. Pretty much all of them do.



Comments on “Now It's Harvard Business Review Getting Section 230 Very, Very Wrong”

53 Comments
This comment has been deemed insightful by the community.
That One Guy (profile) says:

One day we shall get to two digits...

Well, time to reset the ‘Days since supposed legally experienced person makes wildly incorrect and/or dishonest statements about 230’ timer back to zero, I see.

I’ve said it before and I’ll say it again, if 230 really was this terrible law causing all this harm then you’d think it would be easy to present an honest argument against it, and yet to date none have appeared even when people who really should know better decide to jump on the ‘let’s attack 230’ bandwagon.

This comment has been flagged by the community.

Anonymous Coward says:

Remember, Section 230 has given websites a vast chance to innovate and the freedom to properly moderate. This is why sites like KiwiFarms don’t exist anymore, where the site’s owner supports and participates in targeted harassment campaigns that its users start, which have led to their victims committing suicide.

Oh, wait…

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:

KiwiFarms has been around and doing shit that’s not protected under Section 230 for years. They’re a well-known site with a blood-stained reputation that precedes them. They’ve faced zero tangible consequences. This points to something about our current legal framework being vastly broken.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Automatic Gatekeeping

Anyone who wants to proclaim that FaceTwitGramApp need to "do more" must go through Content Moderator training and spend two full weeks on the front line.

Then and only then, will I listen to you blather about how obvious it is what they should be doing.

Anonymous Coward says:

It would be interesting to see how many of these expert professors misreading the law are tenured (you know, the thing that prevents you from being fired for dragging your profession and institution into the mud) vs. non-tenured academics… One allows you to have your incorrect opinions bought off and never questioned by your employer; the other is an honest job where you’re accountable for your statements.

This comment has been flagged by the community.

Koby (profile) says:

Additional Appreciation

As for child sex-abuse material, that’s got nothing to do with Section 230. CSAM content already violates federal criminal law and Section 230 has always exempted federal criminal law.

I appreciate you pointing out that CSAM is not an opinion, but is a criminal activity, and isn’t even something protected by Section 230 or the 1st amendment.

-Getting censored proves that your opinion is the strongest.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

Getting censored proves that your opinion is the strongest.

Does this maxim apply to Critical Race Theory, Pride flags, and any other speech or expression conservatives have sought to ban over the years? Or does it only apply to conservative views (you know the ones)?

That One Guy (profile) says:

Re: Re: Re:2 Re:

I do so love how, even after people have shoved their face in how stupid their little throwaway line is and how it’s led to them supporting terrorist organizations (among other things), they still trot it out. I guess Koby can be lumped in with Woody as someone who just loves being publicly humiliated.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:4 Re:

Good thing none of us are the government then, since the 1A expressly forbids the government from censoring your vile opinions.

Good to know that Critical Race Theory is still one of the strongest opinions, though. Your FBI handler should get a commendation for forcing you to use that tagline.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:4

death threats from terrorist organizations are also not political opinions

Expressions of their political ideologies are, though. And when those are censored, the logic of your little pissant maxim says their opinions immediately become the strongest.

Why do you support terrorist ideologies, Koby?

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:4 By all means keep digging

Cribbing from Stephen here… And if that’s the only thing ISIS posted you might have a point, but it isn’t, so you don’t, leaving you right back to cheering on ISIS, critical race theory, homosexual and trans rights (and, on the other side of the aisle, bigots of all stripes, though I doubt you have a problem with them), and a whole slew of other things.

-Repeatedly lying about being ‘censored’ because people keep showing you the door of their private property proves that you’re not just a person no-one wants to be around but a dishonest one who refuses to own their own words and deeds and instead blames others.

This comment has been deemed insightful by the community.
James Burkhardt (profile) says:

Re: Additional Appreciation

I appreciate you pointing out that CSAM is not an opinion, but is a criminal activity, and isn’t even something protected by Section 230 or the 1st amendment.

You say that as if Techdirt hasn’t made the point repeatedly that federal crimes committed by the owner of a website are not protected by Section 230, and therefore Section 230 provides no protection against crimes committed by the owner of a website. What was your point?

-Getting censored proves that your opinion is the strongest.

By your definition of censor, people love to censor the opinion that pedophilia is fine. Guess that is the best opinion?

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Keepy lying to yourself if that's what it takes

-Getting censored proves that your opinion is the strongest.

-Repeatedly lying about being ‘censored’ because people keep showing you the door of their private property proves that you’re not just a person no-one wants to be around but a dishonest one who refuses to own their own words and deeds and instead blames others.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Keepy lying to yourself if that's what it takes

I guess repeatedly being censored, shit on, and abused for centuries makes Black and First Nations peoples the best, strongest peoples ever, in the American context at least.

Perhaps they should be in charge of this hemisphere.

That One Guy (profile) says:

Re: Re: Re: Keepy lying to yourself if that's what it takes

Yeah, it’s nice of Koby to, when not supporting ISIS, make clear that they wholeheartedly support and believe in the superiority of various minority groups and/or non-white races.

Here I’d been thinking that they were cheering on the scum of the internet, the trolls and bigots of all flavors, when it turns out that in fact they were/are huge fans of the superiority of non-heterosexuality and non-white races and were just too shy to say so out loud.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Additional Appreciation

Getting censored proves that your opinion is the strongest.

Why do you refuse to tell us what opinions are being censored? I constantly ask you to tell us what conservative opinions are being moderated on social media, but you REFUSE to answer.

Basically, that tells me one of two things:

You are full of shit and are getting paid to troll,

OR – since you refuse to answer,

The alternative, and more likely scenario, is that you will not admit that you are a Nazi, racist, homophobic, bigoted, xenophobic asshole and are constantly pissed that people keep kicking your ass out of their social media platforms and you feel that you have the strongest opinions that are being censored.

So what is it Koby, tell us what conservative opinions are being censored, or admit that you are a Nazi racist asshole who is into kiddie porn.

ECA (profile) says:

Corp vs corp

Lawyers are supposed to help those that wish to pursue legal things.
This is More as a corp vs corp thing in the long run.
But for some strange reason our Gov. Thinks(or is being paid) to fight this battle.

Its strange that after giving Corps Human rights(it didnt happen that long ago), we stopped regulating them. We are letting them run wild.
Who is trying to rescind a law that All corps already have? LLC is proof of that. Where even the Owners and CEO of a corp are not responsible. But there are ways to TAKE a corp from another. You can Sue the other corp and force the owners to quit. Blackmail by court.

The internet forums and Chats have been asked to Curtail Hate speech and a few other things, By many of the governments. But the Sites also try to protect themselves from Legal disputes with the OTHER CORPS. Kim Dotcom(?) got in the middle of all of this and found out the hard way, DONT DEAL with the corps.
With all of this, and Fosta, ‘its for the Children’, and the Key Name calling (communism and socialism) just to CLOUD what is happening.
It comes down to the Old rich want Some of what the NEW rich have. The Bill collectors want what has been Built by others, only to have more bills to collect, and NEVER develop anything else.

This comment has been deemed funny by the community.
Derek Kerton (profile) says:

Try Harder...Charles Harder

"Yeah, but you only find that out after you’re dead, spending a million dollars defending it in court."

Oh, come on, Mike. Quit being so dramatic. What do you know about a small company facing death because of some frivolous lawsuit trying to stifle the website’s right to free speech by ruining it with legal costs and distraction?

Could never happen. The Law and the Courts are perfect, and could never be abused in such a way.

Try Harder. I’m Gawking at your Hulking Hoagie hyperlinks in Teal. This is like getting a Shiv (a prison knife) SLAPPed across your genuine articles.

This comment has been deemed insightful by the community.
Anonymous Coward says:

This sounds like the political arguments about poverty: the poor could just work harder; they’re simply not being incentivized to work.

It’s an argument made by people who have never experienced it, who don’t think the government should be helping out, and who believe (no matter what the existing evidence says) that all attempts have made the problem worse.

This comment has been deemed insightful by the community.
Anonymous Coward says:

"These provisions are good — except for the parts that are bad."

"Amusingly, that argument applies to lots of content moderation questions as well."

Amusingly, that argument applies to bloody everything.

When you grant platforms complete legal immunity for the content that their users post,

Yeah, about that: No, that’s the First Amendment.

We’ve also learned that platforms don’t have strong enough incentives to protect their brands by policing their platforms. Indeed, we’ve discovered that providing socially harmful content can be economically valuable to platform owners while posing relatively little economic harm to their public image or brand name.

Lol, wait till you get the misinformation tag applied to your ranting. Maybe you should be shadowbanned or suspended?

What Smith and Van Alstyne apparently fail to consider is that the world is full of people who are the same as they’ve always been, and they use these internet communications platforms.

So, who’s at fault for failing to moderate reality for the last 10 ky or so?

Darkness Of Course (profile) says:

Fire in the theater

The Atlantic has a reasonable one here

https://www.theatlantic.com/national/archive/2012/11/its-time-to-stop-using-the-fire-in-a-crowded-theater-quote/264449/

Which references a Popehat discourse on why the phrase is not only wrong regarding 1st Amendment rights, but definitely wrong, as Holmes was all about censoring, not freedom of speech.

https://www.popehat.com/2012/09/19/three-generations-of-a-hackneyed-apologia-for-censorship-are-enough/

Marshall Van Alstyne says:

A solution with a better critique

Greetings Mike, I’m a fan of your writings so when they include a critique, I pay attention. Thanks also for acknowledging our prior work and also even prior praise (https://www.techdirt.com/articles/20090219/0248373834.shtml).

None of your criticisms, however, address the fundamental question: how do you hold a platform accountable for misinformation that it amplifies? The problem with S230 is that by providing (almost) absolute immunity to being an accessory to a crime, it “accessorizes” a lot more crime. The infodemic of antivaxx misinformation is a case in point. Platforms don’t produce this content but they have given it reach and influence and they have monetized the engagement that has attended it.

Paraphrasing your conclusion, you mostly assert the downside of changing S230 outweighs the upside. Still, you don’t assert that there’s no problem.

As a tech (or econ or legal) designer, we should always ask the question “can we do better?” Is there a superior design that accomplishes these mutually conflicting goals?

So let me poke a hole in one of your best arguments, that it’s “impossible to do this well at scale”. We agree that checking every single message just isn’t feasible. But, that doesn’t mean no better design exists. Let me propose one:

If we recognize the “infodemic” as a pollution problem, then we take statistical samples just like we sample factory air for the presence of sulphur dioxide or water for the presence of DDT. We don’t measure every cubic centimeter of effluent as that’s just not practical. A doctor doesn’t check your cholesterol by checking all your blood, she/he takes a sample.

The beauty here is that by a property of the central limit theorem from statistics, we can be extremely confident how much pollution afflicts a given platform. Do we want 90% confidence? 95% confidence? 99% confidence? We just take bigger samples to be sure. Even if folks disagree on the falseness or harm of a specific claim, people will agree on average. One study found 95% agreement among fact checking organizations (https://science.sciencemag.org/content/359/6380/1146). In fact, in computer science, it’s possible to create highly accurate assessments with much lower agreement among deciders than this.
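The sampling-and-confidence idea above is easy to make concrete. Here is a toy sketch in Python: the 2% "pollution" rate, the post data, and the function names are all hypothetical illustrations, not anything any platform actually reports.

```python
import math
import random

def required_sample_size(margin, z=1.96):
    """Worst-case (p = 0.5) sample size needed to estimate a proportion
    to within +/- margin, at the confidence level implied by z."""
    return math.ceil((z / margin) ** 2 * 0.25)

def estimate_pollution(posts, n, seed=0):
    """Estimate the fraction of 'polluting' posts (labeled 1) from a
    simple random sample, with a normal-approximation 95% interval."""
    rng = random.Random(seed)
    sample = rng.sample(posts, n)
    p_hat = sum(sample) / n
    half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, (max(0.0, p_hat - half), min(1.0, p_hat + half))

# Hypothetical platform: a million posts, ~2% of them truly "polluting".
rng = random.Random(42)
posts = [1 if rng.random() < 0.02 else 0 for _ in range(1_000_000)]

# Roughly 9,600 labeled posts buys a +/- 1 percentage point margin at
# 95% confidence, regardless of platform size -- the central limit
# theorem at work.
n = required_sample_size(margin=0.01)
p_hat, ci = estimate_pollution(posts, n)
print(f"sampled {n} posts: estimate {p_hat:.3f}, 95% CI {ci}")
```

The statistics are the easy part; the contested step, as the article above argues, is the labeling itself: deciding which posts count as "pollution" in the first place.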

Under a modified S230, with a duty of care, we just hold platforms accountable for pollution levels above a reasonable threshold. Facebook, for example, already reports such things as incidence of cancer misinformation on its platform (https://www.facebook.com/AMJPublicHealth/posts/3316836535095688). Now, we just hold them publicly accountable. This isn’t impossible at all — we just need to connect the existing dots.

We’ve tried to think carefully about such issues and avoid polemics. I have a partial working paper “Platforms, Free Speech & The Problem of Fake News” with more nuance (https://www.dropbox.com/s/ypphlhw43efnslj/Platforms%2C%20Free%20Speech%20%26%20the%20Problem%20of%20Fake%20News%20v0.3%20-%20dist.pdf?dl=0). Honestly, I have not shared it widely yet outside friends and family as there is much more to be done but this hue and cry prompts me to disclose it earlier than I’d planned. Your further thoughts are welcome and invited.

To succeed, a good critique needs to convince us that (a) no problem exists and (b) no better design exists. Respectfully, the above critiques fall short on both counts.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re:

What you’re having a problem with is the First Amendment, not 230. 230 does not ‘allow’ moderation or allow a platform to host ‘misinformation’; the First Amendment does. All 230 ultimately does is make it so that platforms can afford to exercise that right and not be sued into the ground because people don’t like how they’re using it.

Don’t like people saying misinformation? Then go after them, not the platform hosting them, and let a judge explain to you why you can’t do that.

This comment has been deemed insightful by the community.
Toom1275 (profile) says:

Re: A solution with a better critique

To succeed, a good critique needs to convince us that (a) no problem exists and (b) no better design exists. Respectfully, the above critiques fall short on both counts.

To be convinced by Mike’s critique, you need to (a) understand the subject well enough to comprehend the rebuttal you’ve been given, and (b) be acting in good faith. With all due respect, you clearly fall short on both counts.

Mike Masnick (profile) says:

Re: A solution with a better critique

None of your criticisms, however, address the fundamental question: how do you hold a platform accountable for misinformation that it amplifies?

I’ve addressed that numerous times. The problem with YOUR piece is that it assumes, totally incorrectly, that the only way to hold a platform accountable is… by law. It’s not. Users migrating away from garbage dumps and advertisers refusing to advertise next to conspiracy theories have been shown to be much more effective in pressuring companies to curb their behavior.

Even more to the point, holding a platform accountable for misinformation is a recipe for disaster. How do you define misinformation? How do you define it in a way that doesn’t make mistakes? How do you deal with the information that is inevitably not caught? All you’re doing is creating a massive liability minefield. End result? LESS EXPERIMENTATION, LESS INNOVATION, and LESS ABILITY TO ADAPT TO BAD ACTORS. Why would you want to do that?!?

The infodemic of antivaxx misinformation is a case in point. Platforms don’t produce this content but they have given it reach and influence and they have monetized the engagement that has attended it.

Fox News, OAN, and Newsmax have given just as much air to those things. Indeed, Yochai Benkler’s research shows that the info doesn’t go viral on Facebook until after it airs on cable news.

And YOU haven’t answered the more pressing question: what is illegal about antivax misinfo? We agree that it’s problematic. But (contrary to what you claim) it’s pretty much all constitutionally protected speech. There is no underlying cause of action. "Facebook shouldn’t share this" is not a legal argument.

If we recognize the “infodemic” as a pollution problem, then we take statistical samples just like we sample factory air for the presence of sulphur dioxide or water for the presence of DDT. We don’t measure every cubic centimeter of effluent as that’s just not practical. A doctor doesn’t check your cholesterol by checking all your blood, she/he takes a sample.

Marshall, this all sounds neat and sciency, but PROTECTED SPEECH IS NOT POLLUTION. And that’s where your entire argument breaks down. You can’t ignore the fact that we’re talking about speech.

Under a modified S230, with a duty of care, we just hold platforms accountable for pollution levels above a reasonable threshold.

Constitutionally protected speech cannot violate the law just because there’s “too much” of it above a certain level. That fundamentally sinks your entire argument. You really ought to have spoken to at least someone who understands the 1st Amendment.

To succeed, a good critique needs to convince us that (a) no problem exists and (b) no better design exists. Respectfully, the above critiques fall short on both counts.

You leave out that YOUR suggestion is easily proven to be (c) a MUCH worse design with SIGNIFICANT downsides you ignore or don’t understand. I made that argument, and I stand by it because it’s correct. If it were only (a) and (b) as you lay out, then you’ve done a classic "we must do something, this is something, we will do it," ignoring that your solution will make things significantly worse (as I DID show).

I agree that there are problems, but you’re barking up the wrong tree for a solution.
