Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Social Media Upstart Parler Struggles To Moderate Pornography (2020)

from the not-so-easy-to-be-clean dept

Summary: Upstart social network Parler (which is currently offline, but attempting to come back) has received plenty of attention for trying to take on Twitter — mainly focusing on attracting many of the users who have been removed from Twitter or who are frustrated by how Twitter’s content moderation policies are applied. The site may only boast a fraction of the users that the social media giants have, but its influence can’t be denied.

Parler promised to be the free speech playground Twitter never was. It claimed it would never “censor” speech that hadn’t been found illegal by the nation’s courts. When complaints about alleged bias against conservatives became mainstream news (and the subject of legislation), Parler began to gain traction.

But the company soon realized that moderating content (or not doing so) wasn’t as easy as it hoped it would be. The problems began with Parler’s own description of its moderation philosophy, which cited authorities with no control over its content: the FCC, and the Supreme Court, whose First Amendment rulings govern what the government may regulate regarding speech, not what private websites may host.

Once it became clear Parler was becoming the destination for users banned from other platforms, Parler began to tighten up its moderation efforts, resulting in some backlash from users. CEO John Matze issued a statement, hoping to clarify Parler’s moderation decisions.

Here are the very few basic rules we need you to follow on Parler. If these are not to your liking, we apologize, but we will enforce:

– When you disagree with someone, posting pictures of your fecal matter in the comment section WILL NOT BE TOLERATED
– Your Username cannot be obscene like “CumDumpster”
– No pornography. Doesn’t matter who, what, where,

Parler’s hardline stance on certain content appeared to be more extreme than that of the platforms (Twitter especially) that Parler’s early adopters decried as too restrictive. In addition to banning content allowed by other platforms, Parler claimed it would pull the plug on the sharing of porn, even though it had no Supreme Court/FCC precedent justifying this act.

Parler appears to be unable — at least at this point — to moderate pornographic content. Despite its clarification of its content limitations, Parler does not appear to have the expertise or the manpower to dedicate to removing porn from its service.

A report by the Houston Chronicle (which builds on reporting by the Washington Post) notes that Parler has rolled back some of its anti-porn policies. But it still wishes to be seen as a cleaner version of Twitter — one that caters to “conservative” users who feel other platforms engage in too much moderation.

According to this report, Parler outsources its anti-porn efforts to volunteers who wade through user reports to find content forbidden by the site’s policies. Despite its desires to limit the spread of pornography, Parler has become a destination for porn seekers.

The Post’s review found that searches for sexually explicit terms surfaced extensive troves of graphic content, including videos of sex acts that began playing automatically without any label or warning. Terms such as #porn, #naked and #sex each had hundreds or thousands of posts on Parler, many of them graphic. Some pornographic images and videos had been delivered to the feeds of users tens of thousands of times on the platform, according to totals listed on the Parler posts.
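The Post’s findings above hint at why volunteer moderators wading through reports fall so far behind. As a purely hypothetical sketch (nothing here reflects Parler’s actual systems; the tags and posts are invented for illustration), the most naive automated first pass is a hashtag blocklist, which catches only posters who helpfully label their own content:

```python
# Hypothetical sketch of a naive hashtag-blocklist filter, the sort of
# first pass a small volunteer moderation effort might start with.
# It flags posts carrying a blocked tag for human review, and misses
# any post whose author simply omits or misspells the tag.
import re

BLOCKED_TAGS = {"porn", "naked", "sex"}  # invented blocklist for illustration

def extract_tags(text: str) -> set[str]:
    """Return the lowercase hashtags found in a post."""
    return {m.lower() for m in re.findall(r"#(\w+)", text)}

def needs_review(text: str) -> bool:
    """Flag a post for the review queue if it uses any blocked tag."""
    return bool(extract_tags(text) & BLOCKED_TAGS)

posts = [
    "check out my #porn collection",       # flagged: blocked tag present
    "family photos from the lake #vacation",  # passes
    "same content, no tag at all",         # passes: the filter's blind spot
]
flagged = [p for p in posts if needs_review(p)]
```

The blind spot in the last example is the whole problem: content-based moderation at scale requires far more than keyword matching, which is part of why incumbent platforms employ large paid trust and safety teams.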

Parler continues to struggle with the tension of upholding its interpretation of the First Amendment and ensuring its site isn’t overrun by content it would rather not host.

Decisions to be made by Parler:

  • Does forbidding porn make Parler more attractive to undecided users?
  • Do moderation efforts targeting content allowed on other platforms undermine Parler’s assertions that it’s a “free speech” alternative to Big Tech “censorship”?
  • Can Parler maintain a solid user base when its moderation decisions conflict with its stated goals?

Questions and policy implications to consider:

  • Does limiting content removal to unprotected speech attract unsavory core users?
  • Is it possible to limit moderation to illegal content without driving users away?
  • Does promising very little moderation of pornography create risks that the platform will also be filled with content that violates the law, including child sexual abuse material?

Resolution: Parler’s Chief Operating Officer responded to these stories after they were published by insisting that its hands-off approach to pornography made sense, but also claiming that he did not want pornographic “spam.”

After this story was published online, Parler Chief Operating Officer Jeffrey Wernick, who had not responded to repeated pre-publication requests seeking comment on the proliferation of pornography on the site, said he had little knowledge regarding the extent or nature of the nudity or sexual images that appeared on his site but would investigate the issue.

“I don’t look for that content, so why should I know it exists?” Wernick said, but he added that some types of behavior would present a problem for Parler. “We don’t want to be spammed with pornographic content.”

Given how Parler’s stance on content moderation of pornographic material has already changed significantly in the short time the site has been around, it is likely to continue to evolve.

Originally posted to the Trust & Safety Foundation website.

Companies: parler, twitter


Comments on “Content Moderation Case Study: Social Media Upstart Parler Struggles To Moderate Pornography (2020)”

14 Comments
DB (profile) says:

This story starts out by sliding in a false premise — that Parler wouldn’t be censoring speech.

Parler quite clearly banned accounts that were left-leaning, sometimes after only a single post. It was run as a right-wing echo chamber, a case study in how you can get volunteer moderators to continuously reinforce a move to radical extremes. Perhaps that is even the default behavior.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

Parler’s CEO all but guaranteed the service wouldn’t censor anyone:

We’re a community town square, an open town square, with no censorship[.] … If you can say it on the street[s] of New York, you can say it on Parler.

Either he was lying through his teeth when he said that or he had to change his mind on the subject when confronted with reality. But he made the initial statement regardless. Saying he did isn’t a “false premise”.

Scary Devil Monastery (profile) says:

Re: Re: Re:

"Either he was lying through his teeth when he said that or he had to change his mind on the subject when confronted with reality."

The second more or less requires him to be staggeringly inept at his purported job since the need for moderation isn’t exactly new or unknown. Particularly so given the audience demographic Parler was aiming for.

So we can take our pick on whether Parler’s CEO is inept or outright evil, but the idea that he suddenly discovered factual reality and had to change his mind is actually the less plausible option.

Stephen T. Stone (profile) says:

Re: Re: Re: In re: ineptitude and the Internet

“it’s really impressive how the internet has been a wholly mainstream communications channel for at least 15 years now and there are still people in positions of serious influence who fundamentally Don’t Get It”

https://twitter.com/MaxKriegerVG/status/1354460445864833024

Anonymous Coward says:

Re: Re: Re: Re:

I mean, Parler’s network security has been described as "This is like a Computer Science 101 bad homework assignment, the kind of stuff that you would do when you’re first learning how web servers work." I’m not so sure that "completely, laughably inept" is out of the question.

https://thenewstack.io/how-parlers-data-was-harvested/

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Caught with their pants down- no not that way

“I don’t look for that content, so why should I know it exists?” Wernick said, but he added that some types of behavior would present a problem for Parler. “We don’t want to be spammed with pornographic content.”

Oh I dunno, maybe because no less than Parler’s CEO made clear that that content wasn’t allowed?

Here are the very few basic rules we need you to follow on Parler. If these are not to your liking, we apologize, but we will enforce:

No pornography. Doesn’t matter who, what, where,

Terms such as #porn, #naked and #sex each had hundreds or thousands of posts on Parler, many of them graphic. Some pornographic images and videos had been delivered to the feeds of users tens of thousands of times on the platform, according to totals listed on the Parler posts.

Either your platform allowed porn in which case you should probably clue your boss in on that, or it does not and it was your gorram job to look for that sort of stuff to remove it.

With such sloppy moderation it’s pretty clear that they had no gorram clue what they were getting themselves into, and were left scrambling to play catch-up when they could be bothered to act at all. Given their response to porn, something explicitly called out as prohibited, Amazon’s claims that they didn’t take moderation seriously enough just become all the more believable.

Scary Devil Monastery (profile) says:

Re: Caught with their pants down- no not that way

"Either your platform allowed porn in which case you should probably clue your boss in on that, or it does not and it was your gorram job to look for that sort of stuff to remove it."

Come, come, be fair. All of Parler’s staff more or less have to be affiliated with the party for personal responsibility. He simply took the same kind of personal responsibility republicans are known – even infamous – for…by sticking his head in the sand and pretending the problem was all a liberal hoax.

Anonymous Coward says:

Parler would’ve been interesting had it actually been the promised "town square" where everyone has free speech. That’s something the incumbents in the social media space don’t offer.

Forbidding porn (or shitpics) wasn’t so much about the porn itself. It was losing that distinguishing feature. Adding the industry standard "stuff we don’t like" -list reduced it to a pointless Twitter copycat that at best could only ever be a rebound option for users and content banned from Twitter itself.

crade (profile) says:

To be more fair to Parler, its major issues stem from becoming popular so fast. Twitter and Facebook initially had lofty ideals too, but the reality of how "horrible" scales on the internet wasn’t shoved down their throats all in one month.

"Here are the very few basic rules we need you to follow on Parler"
These are just completely arbitrary examples, not much in the way of rules. He’s basically saying their decisions are arbitrary, based on gut feelings — and they’re examples of exactly the sort of stuff Twitter’s rules were designed to handle…

How does parler even verify whether the fecal matter in question belonged to the poster?

Anonymous Coward says:

Porn is illegal in Belize.

Since their servers are now in Belize, they have to abide by the laws of Belize.

That could be why they are struggling with that.

Still, being in Belize is a wise choice, because they no longer have to follow any United States laws.

The United States has no jurisdiction over a server in Belize.
