Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs that result. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Decentralized Social Media Platform Mastodon Deals With An Influx Of Gab Users (2019)

from the decentralized-content-moderation-challenges dept

Summary: Formed as a more decentralized alternative to Twitter that allowed users to more directly moderate the content they wanted to see, Mastodon has experienced slow, but steady, growth since its inception in 2016.

Unlike other social media networks, Mastodon is built on open-source software and each “instance” (server node) of the network is operated by users. These separate “instances” can be connected with others via Mastodon’s interlinked “fediverse.” Or they can remain independent, creating a completely siloed version of Mastodon that has no connection with the service’s larger “fediverse.”

This puts a lot of power in the hands of the individuals who operate each instance: they can set their own rules, moderate content directly, and keep anything the “instance” and its users find undesirable off their servers. But the larger “fediverse,” with its combined user base, poses moderation problems that can’t be handled as easily as those arising on independent “instances.” Because federated instances interact with one another, unwanted content can appear on servers that are trying to steer clear of it. (A minimal sketch of how this per-instance control works follows below.)
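To make the mechanics concrete, here is a minimal sketch, in Python and with hypothetical names (it is not Mastodon’s actual code), of how a single instance might filter incoming federated posts against a locally maintained domain blocklist:

```python
# Minimal sketch of per-instance federation filtering.
# Hypothetical structure -- not Mastodon's actual implementation.
from dataclasses import dataclass, field
from urllib.parse import urlparse


@dataclass
class Instance:
    domain: str
    blocked_domains: set[str] = field(default_factory=set)

    def block_domain(self, domain: str) -> None:
        """Admin decision: refuse all future content from this domain."""
        self.blocked_domains.add(domain.lower())

    def accept_remote_post(self, author_uri: str, content: str) -> bool:
        """Accept a federated post only if its origin server isn't blocked."""
        origin = (urlparse(author_uri).hostname or "").lower()
        if origin in self.blocked_domains:
            return False  # drop content from blocked instances
        # ... the instance's own local moderation rules would run here ...
        return True


# Example: an admin preemptively blocks a problematic instance.
server = Instance(domain="example.social")
server.block_domain("gab.example")  # hypothetical domain
print(server.accept_remote_post("https://gab.example/users/someone/1", "..."))  # False
```

The point of the sketch is that every moderation decision lives on the individual server: there is no central switch an upstream project could flip for the whole network.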

That’s where Gab — another Twitter alternative — enters the picture. Gab has purposely courted users banned from other social media services. Consequently, the platform has developed a reputation for being a haven for hate speech, racists, and bigots of all varieties. This toxic collection of content/users led to both Apple and Google banning Gab’s app from their app stores.

Faced with these app bans, Gab began looking for options and decided to create its own Mastodon instance. With Gab’s server now technically available to everyone in the Mastodon “fediverse,” any instance not explicitly blocking Gab’s “instance” could see Gab content surface for its own users, and Gab’s users could direct content at theirs. Running on Mastodon also allowed Gab to use the many existing Mastodon apps, sidestepping the app bans handed down by Google and Apple.

Decisions to be made by Mastodon:

  • Should Gab (and its users) be banned from setting up “instances,” given that they likely violate the Mastodon Server Covenant?

  • Is it possible to moderate content across a large number of independent nodes?

  • Is this even an issue for Mastodon itself to deal with, given that the individuals running different servers can decide for themselves whether or not to allow federation with the Gab instance?

  • Given the open-source and federated nature of Mastodon, is there any reasonable way to stop Gab from using it?

Questions and policy implications to consider:

  • Will moderation efforts targeting the “fediverse” undercut the independence granted to “instance” owners?

  • Do attempts to attract more users create moderation friction when new arrivals post the kind of content Mastodon was designed to avoid?

  • If Mastodon continues to scale, will it always face challenges as certain instances are created to appeal to audiences that the rest of the “fediverse” is trying to avoid?

  • Can a federated system, in which unique instances choose not to federate with another instance, such as Gab, work as a form of “moderation-by-exclusion”?

Resolution: Mastodon’s founder, Eugen Rochko, refused to create a blanket ban on Gab, leaving it up to individual “instances” to decide whether or not to interact with the interlopers. As he explained to The Verge, a blanket ban would be almost impossible, given the decentralized nature of the service.

On the other hand, most “fediverse” members would be unlikely to have to deal with Gab or its users, considering the content contained in Gab’s “instance” routinely violates the Mastodon “covenant.” Violating these rules prevents instances from being listed by Mastodon itself, lowering the chances of other “instance” owners inadvertently adding toxic content and users to their server nodes. And Rochko himself encouraged users to preemptively block Gab’s “instance,” resulting in ever fewer users being affected by Gab’s attempted invasion of the Mastodon fediverse.
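The covenant mechanism can be pictured as two separate layers: the public server directory, which only lists covenant-compliant instances, and the per-admin domain blocks that actually sever federation. A rough sketch of that distinction, again using hypothetical names rather than Mastodon’s real directory code:

```python
# Rough sketch of covenant-based delisting plus per-admin domain blocks.
# Hypothetical data and names -- not the real Mastodon server directory.

COVENANT_COMPLIANT = {
    "mastodon.example": True,
    "gab.example": False,  # hypothetical non-compliant instance
}


def directory_listing() -> list[str]:
    """Only covenant-compliant instances appear in the public directory."""
    return [domain for domain, ok in COVENANT_COMPLIANT.items() if ok]


def federate_with(local_blocks: set[str], remote_domain: str) -> bool:
    """Delisting doesn't sever federation; that still takes a local block."""
    return remote_domain not in local_blocks


print(directory_listing())                            # ['mastodon.example']
print(federate_with({"gab.example"}, "gab.example"))  # False: admin blocked it
print(federate_with(set(), "gab.example"))            # True: delisted but not blocked
```

Delisting reduces discoverability, but as the sketch shows, a non-compliant instance can still federate with any server whose admin has not blocked it, which is why Rochko’s call for preemptive blocks mattered.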

But running a decentralized system creates an entirely new set of moderation issues, ones that have turned Mastodon itself into a moderation target. Roughly a year after the Gab “invasion,” Google threatened to pull Mastodon-based apps from its store for promoting hate speech, after users tried to get around the Play Store ban by creating apps that pointed to Mastodon “instances” filled with hateful content. Google ultimately decided to leave Mastodon-based apps up, but appears ready to pull the trigger on a ban in the future.

Originally posted to the Trust & Safety Foundation website.

Companies: gab, mastodon


Comments on “Content Moderation Case Study: Decentralized Social Media Platform Mastodon Deals With An Influx Of Gab Users (2019)”

Stephen T. Stone (profile) says:

As a Masto user who was around when the Gab transition went down: Yeah, the reaction from a large number of instances was “fuck all that then” and a near-immediate block on Gab (and Gab-based instances). Gargamel—sorry, Gargron didn’t need to institute a block on Gab; the Fediverse largely did that job on its own.

A similar “job” continues to this day with the #fediblock hashtag, which alerts users across the Fediverse to other users and instances that violate what are considered the general norms of said Fediverse. Since the Fediverse is largely queer-friendly and (at least in spirit, if not in effect) anti-racist, you can imagine what kind of assholes get mentioned in #fediblock posts.

Andrew Pam (profile) says:

Also Diaspora*

Similar issues have arisen with Diaspora* pods, also part of the Fediverse.

Plus there have been individual toxic users who have repeatedly signed up for new accounts on multiple different pods, several per day, in order to get around moderation. The most toxic individual frequently fakes the identity of people he dislikes when signing up at new pods. This makes open registration problematic and raises difficult questions about cross-pod reputation management.

christenson says:

An impossibility Lemma

I always thought the Gabs of the world would end up in something like Mastodon — they want to support generally obnoxious, racist content, so they are going to find many businesses and software suppliers refusing to do business with them, and they will have to go to those that allow anonymous, unpaid relationships. That’s open source! (And a common carrier ISP result if you start talking about individuals not being able to rent hosting space and therefore working off of their individual internet connections)

As to blocking policy, well, that has to be largely up to the other, independent mastodon instances — the difficulty arises, of course, when a very considerate friend on a favorite website or mastodon instance says "Look what horrible things they are doing on the Gab mastodon instance! (instructions on how to find it)", or perhaps more indirectly, "The Gab Mastodon instance is doing horrible things…check out Techdirt!(link)"

Context is everything!

Stephen T. Stone (profile) says:

Re:

the difficulty arises, of course, when a very considerate friend on a favorite website or mastodon instance says "Look what horrible things they are doing on the Gab mastodon instance! (instructions on how to find it)", or perhaps more indirectly, "The Gab Mastodon instance is doing horrible things…check out Techdirt!(link)"

Wordfilters and the Content Warning system built into Mastodon largely alleviate such issues. Posts using the #fediblock hashtag will call out instances by their domain name, but such posts rarely offer a direct link to those instances, and offending instances are never called out without context.

Ninja (profile) says:

Ultimately the target should be the awful individuals congregating on Gab’s instance and the ones running the infrastructure behind it. When somebody uses a gun to commit crimes, do we go after the gun manufacturer or the reseller? Unless they violate some law in selling the weapon to the hypothetical criminal, they shouldn’t be targeted. Same with Mastodon. It cannot be responsible for how people use tools that are completely open source and easily replicated, but the host for Gab’s instance can be targeted if it knowingly hosts such speech.
