Further Thoughts On Moderation v. Discretion v. Censorship
from the playing-semantics dept
Welcome back to Techdirt's favorite faux game show, Playing Semantics! This week, we're diving back into the semantics of moderation, discretion, and censorship. As a reminder, this bit is what we were arguing about last time:
Moderation is a platform operator saying "we don't do that here." Discretion is you saying "I won't do that there." Censorship is someone saying "you can't do that anywhere" before or after threats of either violence or government intervention.
Now, if we're all caught up, let's get back into the game!
A Few Nits to Pick
In my prior column, I overlooked a couple of things that I shouldn't have. I'll go over them here to help everyone get on the same page as me.
- anywhere — In re: "you can't do that anywhere", this refers to the confines of a given authority or government. It also refers to the Internet in general. Censors work to suppress speech where it matters the most (e.g., within a given country). Such censors often carry the authority necessary to censor (e.g., they work in the government).
- violence — "Violence" refers to physical violence. I hope I don't have to explain how someone threatening to harm a journalist is a form of censorship.
- government — This refers to any branch of any level of government within a given country. And anyone who uses the legal system in an attempt to suppress speech becomes a censor as well. (That person need not be an agent of the government, either.)
From here on out, I'll be addressing specific comments — some of which I replied to, some of which I didn't.
One such comment brought up the idea of a headmaster as a censor. Lexico defines "headmaster" as "(especially in private schools) the man in charge of a school." We can assume a headmaster is the highest authority of the school.
In a reply to that comment, I said the following:
If the headmaster is a government employee, they're a censor. If they're the head of a private institution, they're a "censor" in a merely colloquial sense. The privately owned and operated Liberty University (henceforth Liberty U), for example, has engaged in what I'd normally call "moderation" vis-à-vis its campus newspaper — which, despite it being a frankly immoral and unethical decision, Liberty U has every right to do as a private institution. (Frankly, I'd be tempted to call such people censors outright, but that would kinda go against my whole bit.)
But the example I used gave me pause to reconsider. Jerry Falwell Jr. (the "headmaster" of Liberty U) and free speech have often come to metaphorical blows. I noted this through a link to an article from the blog Friendly Atheist. The article has a quote from a former editor for Liberty U's school newspaper, who describes how Falwell's regime ran the paper:
[W]e encountered an "oversight" system — read: a censorship regime — that required us to send every story to Falwell's assistant for review. Any administrator or professor who appeared in an article had editing authority over any part of the article; they added and deleted whatever they wanted.
That raises the important question: Is that censorship or editorial discretion?
After reading the Washington Post article from which that quote comes, I would refer to this as censorship. I'll get into the why of that soon enough. But suffice it to say, "editorial discretion" doesn't often involve editors threatening writers with lawsuits or violence.
But though I call that censorship, some people might call it "moderation" or "editorial discretion". Falwell is, after all, exercising his right of association on his private property. What makes that "censorship" are the at-least-veiled threats against "dissenters".
Censorship Via Threats
Speaking of threats! Another comment took issue with how I defined censorship:
Why should it be "censorship" to threaten someone with a small financial loss (enforced by a court), but not to kick them off the platform they use to make the bulk of their income (independent of the government)? Is "you can speak on some other platform" fundamentally less offensive than "you can speak from another country", or is that merely a side-effect of the difficulty of physical movement?
To answer this as briefly as I can: A person can find a new platform with relative ease and little-to-no cost. No one can say the same for finding their way out of a lawsuit.
But that raises another important question: Does any kind of threat of personal or financial ruin count as censorship?
As I said above, the Liberty U example counts as censorship. As for the why? The following quotes from that WaPo article should help explain:
Student journalists must now sign a nondisclosure agreement that forbids them from talking publicly about "editorial or managerial direction, oversight decisions or information designated as privileged or confidential." … Faculty, staff and students on the Lynchburg, Va., campus have learned that it's a sin to challenge the sacrosanct status of the school or its leaders, who mete out punishments for dissenting opinions (from stripping people of their positions to banning them from the school).
School leaders don't have the power of government to back their decisions. But they can still use their power and authority to coerce other people into silence. ("Stop writing stories like this or I'll kick you out of this school, and then what will you do?") Even if someone can move to another platform and speak, a looming threat could stop them from wanting to do that.
And the threat need not be one of financial or personal ruin. Someone who holds a journalist at knife point and says "shut up about the president or else" is a censor. The violent person doesn't need government power; their knife and the fear it can cause are all they need.
Money and Speech
A comment I made about companies such as Mastercard and Visa elicited a reply that pointed out how they, too, are complicit in censorship:
I cited Visa and Mastercard specifically because they are at the top of the chain and it's effectively impossible to create a competitor. If they say something's not allowed it isn't unless you want to lose funding. Paypal has been notoriously bad about banning people for innocuous speech over the years, but there are other downstream providers that aren't Paypal (although if all of them throw someone off, it still erases the speech). I am of the opinion that high-level banks should be held to neutrality standards like ISPs should due to their position of power. Competitors would be preferable, but the lack of either is frightening.
They make a good point. Companies like Visa can legally refuse to do business with, say, an adult film studio. So can banks. This becomes censorship when all such companies cut off access to their services. An artist who creates and sells adult art can end up in a bad place if PayPal cuts the artist off from online payments.
As the comment said, creating a competitor to these services is nigh impossible. Get booted from Twitter and you can open a Mastodon account, for instance; get booted from PayPal and you're fucked. That Sword of Damocles–esque threat of financial ruin could be (and often is) enough to keep some artists from creating adult works.
It's-A Me, Censorship!
Ah, Nintendo and its overzealous need to have a "family-friendly" reputation. Whatever would we do without it~?
Remember when Nintendo of America removed, or otherwise didn't allow, objectionable material in its video games until Mortal Kombat came about and there were Congressional hearings and then the ESRB was formed?
Would you call what Nintendo did censorship or moderation? There's an argument for moderation in that it was only within their purview and only on their video game systems, but there's also an argument for censorship in that once the video games went outside of the bounds set by Nintendo of America, they were subpoenaed by the Government with threats of punishment. The ESRB made their censorship/moderations policies moot, but it's an interesting question. What do you think, Stephen?
This example leads to another good question: Do Nintendo, Sony, etc. engage in censorship when they ask a publisher to remove "problematic" material?
Nintendo can allow or deny any game a spot on the Switch library for any reason. If the company had wanted to deny the publication of Mortal Kombat 11 because of the excessive violence, it could've done so without question. To say otherwise would upend the law. But when Nintendo asks publishers to edit out certain content? I'd call that a mix of "editorial discretion" and "moderation".
Nintendo has the right to have its systems associated with specific speech. Any publisher that wants an association with Nintendo must play by Nintendo's rules. Enforcing a "right to publication" would be akin to the government compelling speech. We shouldn't want the law to compel Nintendo into allowing (or refusing!) the publication of Doom Eternal on the Switch. That way lies madness.
Oh, and the ESRB didn't give Nintendo the "right" to allow a blood-filled Mortal Kombat II on the SNES. Nintendo already had that right. Besides, Mortal Kombat II came out on home consoles one week before the official launch of the ESRB. (The first game to receive the "M" rating was the Sega 32X release of DOOM.) The company allowed blood to stay because the Genesis version of the first game — which had a "blood" code — sold better.
That's All, Folks!
And thus ends another episode of Playing Semantics! I'd like to thank everyone at home for playing, and if you have any questions or comments, please offer them below. So until next time(?), remember:
Moderation is a platform/service owner or operator saying "we don't do that here." Personal discretion is an individual telling themselves "I won't do that here." Editorial discretion is an editor saying "we won't print that here," either to themselves or to a writer. Censorship is someone saying "you won't do that anywhere" alongside threats or actions meant to suppress speech.
(untitled comment)
A government forcing Google to subsidize journalism by way of a link tax will not solve that problem.
(untitled comment)
That’s never stopped greedy motherfuckers before.
(untitled comment)
Then we can laugh at two evils trying to cancel one another out. Win-freakin’-win, baby!
(untitled comment)
Congress should also convict the now-former 45th president of inciting an insurrection, to make sure the next lawmaker who would want to repeat the Redcap Riot won’t even think to try; raise the federal minimum wage to something that would guarantee a living wage for the average American; abolish the filibuster so lawmakers can actually vote on bills; secure and expand voting rights for all Americans, to prevent state GOP lawmakers from restricting those rights and making them harder to exercise; and generally act in the best interests of both American democracy and the American people.
It should do all those things (and more). But sadly, it won’t. Which is why I have no hopes for 230 surviving as it is today.
(untitled comment)
I’m not exactly what you’d call a “fan” of Republicans and their Dear Leader. But good lord, dude, you sound like a bad parody of “leftists” written by a right-wing whackjob who thinks everyone one step left of center talks like that.
(untitled comment)
I’d bet on something along the lines of “Twitter owes me an audience”.
(untitled comment)
No wonder people aren’t renewing their memberships there.
(untitled comment)
He tried to make one with Parler. Turns out, his history of making deals that benefit only him apparently (and finally) bit him on his flabby orange ass.
(untitled comment)
I can think of a few, but they all end the same way, and saying what that ending is would put me on FBI watchlists.
(untitled comment)
He wants as large an audience as possible for his inanity. Gab could never give him anything close to what he had when he was on Twitter.
(untitled comment)
“Twitter couldn’t afford to handle lawsuits when it was a startup, which is why we need to kill 230 now: To make sure Twitter can’t be replaced by a new startup!” That’s you. That’s you right now.
That…that’s what 230 does. It helps make sure liability for third-party speech goes where it belongs: on the shoulders of the third-party user.
Per the on-the-Congressional-record words of Chris Cox, who helped craft 47 U.S.C. § 230: “[O]ur amendment will do two basic things: First, it will protect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet, let us say, who takes steps to screen indecency and offensive material for their customers. It will protect them from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem. Second, it will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the Internet because frankly the Internet has grown up to be what it is without that kind of help from the Government.”
Well…yeah. You don’t have the right to make a soapbox out of private property that you don’t own. Play by the rules or get kicked out — it’s your choice.
Yes, we do. Like I’ve said before: Nobody is entitled to use Twitter, Facebook, etc. If you don’t like those sites — or you’re banned from those sites — you can find another one to use. The government has no business in ensuring you a spot on those services.
We have such places. They’re called “actual public property”. Requiring any privately owned service such as Twitter to host speech it doesn’t want to host is communism. The reasoning for that requirement is irrelevant.
I’ve explained this approach before. You obviously didn’t get it, so I’ll explain it again.
I use those extreme examples because they’re examples of the extreme speech that a good number of services already prohibit. (Twitter once suspended me for saying an anti-queer slur in the context of mocking someone else’s view of queer people.) To say Twitter must become a “public square” and allow all speech “or else” is to say Twitter must host that extreme speech.
Whenever I ask a question that mentions this speech, it is intentionally provocative. Talking about generalized “bad speech” is one thing. Drilling “bad speech” down to a more specific example (e.g., racial slurs) makes people confront the actual speech and, by proxy, defend the forcing of that speech upon others. That you can’t deal with such a confrontation isn’t my fault. I’m not the one running from the question.
People turning social media services into 4chan clones is not happening en masse right now, and I’m not cheerleading it regardless of whether it is.
Parler isn’t a “major network”. And last I checked, it’s back up and running again.
Why do you think I am a supporter of keeping 230 as-is? I know that if reform/repeal passes, Facebook, Twitter, Google, etc. will be the only games left in town.
Keeping 230 in place isn’t about “sides”. (Don’t you think I’d be supporting Democrats and their efforts to reform 230 if it were?) It’s about ensuring that websites aren’t forced to host speech they don’t want to host and aren’t sued to oblivion for their moderation efforts. It’s about making sure “the next Twitter” can become the next Twitter instead of a flash-in-the-pan service that dies only because it couldn’t handle a lawsuit that 230 should’ve stopped.
I’ll leave this here and hope you get the point without my having to explain it to you like you’re a five-year-old: http://leftycartoons.com/2018/08/01/i-have-been-silenced/
(untitled comment)
You got anything other than fearmongering that can be expanded to any communication technology more complex than, and able to communicate faster and wider than, pen-and-paper letters? Because I’ve heard this song before and it hasn’t gotten any better.
(untitled comment)
The lawyers representing Smartmatic likely understand it. If this suit ever gets to discovery, the Smartmatic team will probably push hard for discovery of the people they’re suing — at which point you’ll likely see settlements happen. If you think Smartmatic actually has something to hide, imagine what the defendants would want to keep out of court, especially given some of their close connections to the most thoroughly corrupt president in modern history (and their roles in pushing that asinine “stop the steal” lie).
(untitled comment)
Here’s the issue with that idea: You’re saying IAPs and search engines are “spread[ing] defamation” as if the operators of those services are knowingly, willingly, and actively spreading defamatory content. Unless you can prove that they’re doing that shit themselves, they shouldn’t be on the hook for “distributor liability” — no matter how much you or anyone else wants both revenge and an easy target for it.
The rest of the world doesn’t have a Section 230. They still have to deal with defamation cases involving the Internet. But those countries don’t need a 230 equivalent because they already have laws and traditions in place that preclude the need for one.
And as for the assertion that a lack of 230 would prevent Guy Babcock from being defamed? You’re sort of right…because without 230, U.S.-based services of any size likely wouldn’t host any third-party speech at all to avoid any and all legal liability for it.
Half-right. People who want 230 gone want it gone mostly because they want to either sue someone into the ground or force their speech onto a platform that told them “we don’t allow that here”. The “kill 230” position is, with rare exception, about one thing: legalized vengeance.
If he gets a court order to that effect, good. But we shouldn’t put shortcuts in jurisprudence, place potentially innocent people on the hook, and allow people to seek life- and service-destroying vengeance via the courts because someone thinks they were defamed (regardless of whether they actually were defamed). That way lies madness.
(untitled comment)
That sounds like a Trump rally.
(untitled comment)
I’m sure you’re likely aware of this, so please don’t take this comment as a potshot at you. It’s more a general grammar lesson for anyone and everyone who isn’t sure how to pluralize “attorney general”.
Think of the phrase “mother-in-law”. Assuming you had to pluralize that, you wouldn’t say “mother-in-laws”. That sounds weird no matter what. In that same vein, “attorney generals” sounds weird if you’re referring to someone with the job of Attorney General instead of military generals who happen to be attorneys.
In “attorney general”, “general” tells us the kind of attorney we’re talking about. Thus, you pluralize the main part rather than the descriptive part — which is how we get “attorneys general” (and, per the paragraph above, “mothers-in-law”).
And FYI: Since “pounder” is the head noun in “quarter pounder”, “quarter pounders” is the correct pluralization. Except in Europe, I think. 😁
[TheMoreYouKnow.mp4]
(untitled comment)
230 didn’t harm Guy Babcock. An asshole with a computer, Internet access, and way too much free time did that. Hold that person liable instead of destroying a perfectly good law because you want to file SLAPP suits without 230 preëmptively shutting them down.
(untitled comment)
Except for, y’know, people who didn’t make defamatory statements. They’re entitled to that immunity. Or would you like to kill that legal principle, too?
(untitled comment)
And you’ve already lost me.
First, you’ll have to define “single-publication rule” with an Internet context in mind. Is it a “secondary” publication if a user retweets a defamatory tweet — and if so, should everyone who retweeted that tweet be on the same hook for defamation as the person who wrote the initial tweet? In some cases, that could be thousands of people.
Should “bots” (i.e., automated accounts) that retweet defamatory tweets put their creators on the hook? Should the owners/operators of a search engine that has no way of knowing without being told (ostensibly by a court of law) whether a given bit of speech is defamatory be on the hook if their search engine automatically indexes that speech?
For what reason, other than revenge and greed, should literally anyone other than the person(s) who made the defamatory statements be held legally liable for making those statements?
A decent idea, but I still can’t think of a good reason to do that.
This sounds like “small claims court, but for defamation”, which leads me to believe this is a bad idea.
Most sites generally don’t ignore court rulings that say “this speech is defamatory, take it down plz”.
That runs into the “libel tourism” problem, especially if more plaintiff-friendly laws end up governing multi-district defamation suits. Man, are you full of shitty ideas designed for petty revenge.
I can’t even put into words how ridiculous this idea is.
Again: a decent idea, but also loaded with possible issues, especially when you bring public data into the mix. How can it be “doxxing” if the data is available to anyone and it isn’t being kept secret(-ish)? For what reason should pointing out publicly available information about someone be made illegal? Would it count as “doxxing” if someone posts the publicly available contact information of a business that did something heinous (e.g., deny service to gay people because they’re gay) with the intent of letting people contact that business for the sake of civil protest? I’m sure other questions could be asked by people far smarter than I am, but those should make for a good start.
And if we had all of the above, we’d also have a court system clogged with baseless defamation cases filed to silence people/services by way of a legal war of attrition (which could be won without the other side firing a single shot). Hell, threats of a lawsuit/legal action would be enough to make people take down speech even if it isn’t defamatory. And all so you can get some measure of petty revenge against someone who probably doesn’t even know you exist outside of this comments section.
I know this is rich coming from me, but goddamn, son — get a fucking life.
(untitled comment)
Reading comprehension is not your strong suit. The article notes that a concerted effort from both sides of the political aisle — albeit led by the left wing — did everything they could to prevent not a certain outcome, but “an election so calamitous that no result could be discerned at all, a failure of the central act of democratic self-governance that has been a hallmark of America since its founding”.
If anything, the “secret bipartisan campaign” talked about in the article was about preventing the election from being rigged or fucked with, especially by “an autocratically inclined President”. The word “rigged” is never once used in the article to suggest the election was rigged. Nothing in the article suggests the election was rigged. Maybe if you read the actual article instead of the one you wanted it to be, you wouldn’t have fucked up this badly.