Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs their decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Spam "Hacks" in Among Us (2020)

from the very-sus dept

Summary: From August to October of 2020, as the COVID-19 pandemic had no end in sight and plenty of people were still stuck at home, on lockdown, unable to gather with others, the video game Among Us became incredibly popular as a kind of party game when there were no parties. The game had already been out for a while, but for unclear reasons it became the go-to game during the pandemic. It was so popular that the company behind it, InnerSloth, cancelled its plans for a sequel, promising instead to focus on fixing up the existing game and dealing with some of the bugs that were popping up from such widespread usage.

Among the bugs that InnerSloth had to deal with was the ability to hack the game with various apps and tools that gave players abilities they shouldn't have.

This came to a head in late October of 2020, when the game was apparently overrun by spam promoting a YouTuber named “Eris Loris.” Some of the spam had political messaging, but all of it told people to subscribe to that user’s YouTube account. Sometimes it came with vaguely worded threats of hacking if you didn’t subscribe. Other times it just told people to subscribe.

While this attack was variously described as a “hack” and as the work of a “spammer,” it appears to have been a combination of the two. The end result was spam that made it impossible for players to keep playing, but it was carried out via a hack that filled games with bots designed to spread the message. The person who goes by the name Eris Loris told the website Kotaku that he did it because he thought it was funny:

“I was curious to see what would happen, and personally I found it funny,” Loris told Kotaku in a DM. “The anger and hatred is the part that makes it funny. If you care about a game and are willing to go and spam dislike some random dude on the internet because you can’t play it for 3 minutes, it’s stupid.” — “Eris Loris” to Kotaku reporter Nathan Grayson

InnerSloth admitted that it was aware of the problem and asked players to “bare with us” [sic] and only play private games or with players they knew and trusted until updates were made to the server. A developer for the game separately warned users that he was rolling out changes using a “faster method than I’ve done before” and, as such, that things might break.
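
InnerSloth has not published the details of its server-side fix, so the following is only a sketch of the general kind of mitigation a game server might apply in this situation: per-player rate limiting and duplicate-message filtering before a chat message is broadcast to a lobby. All names and thresholds here are hypothetical, not InnerSloth's actual code.

```python
# Hypothetical sketch of server-side chat spam mitigation for a game lobby.
# Nothing here is InnerSloth's actual code; names and thresholds are illustrative.
import time
from collections import defaultdict, deque
from typing import Optional

MAX_MESSAGES_PER_WINDOW = 5   # messages allowed per player per sliding window
WINDOW_SECONDS = 10.0         # sliding-window length in seconds
MAX_REPEATS = 2               # identical messages allowed before muting

class ChatSpamFilter:
    def __init__(self) -> None:
        self.recent = defaultdict(deque)   # player_id -> timestamps of recent messages
        self.history = defaultdict(list)   # player_id -> last few message bodies
        self.muted = set()                 # player_ids that have been silenced

    def allow(self, player_id: str, text: str, now: Optional[float] = None) -> bool:
        """Return True if the message may be broadcast to the lobby."""
        now = time.time() if now is None else now
        if player_id in self.muted:
            return False

        # Sliding-window rate limit: drop timestamps older than the window.
        window = self.recent[player_id]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_MESSAGES_PER_WINDOW:
            self.muted.add(player_id)
            return False
        window.append(now)

        # Duplicate filter: mute anyone repeating the same text over and over.
        history = self.history[player_id]
        history.append(text)
        if history.count(text) > MAX_REPEATS:
            self.muted.add(player_id)
            return False
        del history[:-10]   # keep only the most recent 10 messages
        return True
```

A real deployment would also have to deal with the hacked clients and bot accounts driving the flood, not just ordinary chatty players, which is presumably part of why the developer warned that rolling out changes with a "faster method" might break things.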

Company Considerations:

  • How much effort should be put towards preventative measures to try to block spamming, even before an app or service becomes wildly popular?
  • At what point does spamming become serious enough that it is critical to change the code of a game, perhaps even using “faster” and less reliable methods than would normally be used to combat it?
  • How do you balance resource allocation between engineers improving the product and adding new features, and engineers fighting back against malicious actors?

Issue Considerations:

  • When something becomes popular, there are always those with nefarious intentions who want to take advantage of that popularity. Should companies proactively prepare for the unintended consequences of success? What can companies put in place to anticipate the actions of bad actors?
  • Spammers and hackers often go hand in hand with popular games and platforms. What are the other risks (beyond just losing players and customers) if companies allow bad actors to remain on the platform, or are slow to remove them?
  • Many developers leave platforms somewhat open to encourage third parties to build additional tools and services that make a game or service more useful. How does a developer weigh the trade-off between keeping a system open to promote innovation and the risk of someone abusing that openness?

Resolution: The rapid updates Among Us developers made to the Among Us servers appeared to do the trick, and the Eris Loris spam quickly diminished soon after. There were some questions about whether or not there would be legal consequences for whoever was behind the attacks, but to date, nothing has happened.

A number of Among Us hacks still exist, and some people have attempted to follow in the footsteps of Eris Loris, including someone going by the name Sire Soril (Eris Loris backwards), but none appear to have had much success, suggesting that InnerSloth's initial fix was largely effective at limiting the kinds of attacks that overwhelmed the system in October of 2020.

Originally posted to the Trust & Safety Foundation website.

Companies: innersloth


Comments on “Content Moderation Case Study: Spam "Hacks" in Among Us (2020)”

4 Comments
Anonymous Coward says:

The rapid updates Among Us developers made to the Among Us servers appeared to do the trick, and the Eris Loris spam quickly diminished soon after.

How do you account for control? How do you know it didn’t just reach its peak organically, or that other factors outside of the problem area weren’t the cause of the drop (like, say, changes at YouTube, or maybe reaching the desired sub count)?

christenson says:

Re: Reading carefully.... "**appeared** to do the trick"!

For Techdirt, a site with a delightful post called "lies, damned lies, and audience metrics", almost the entire readership knows that "appeared to" means only "happened shortly after" and not "was actually caused by", and that choice of wording was quite deliberate.

It’s gonna take some footwork to gather evidence for causation, and some really clever tricks or a natural experiment to finally prove it beyond reasonable doubt.

PaulT (profile) says:

Re: Re:

I’m sure that only the developers have access to that information, and they would be able to see from their logs how many attempts were happening and how many were blocked by their activity. They will know, even if they’re not about to directly share all that information or admit that they didn’t really do anything if that were the case.

From an outside point of view, you can only look at the timing correlation and the fact that future attempts seem to have been mitigated to conclude that what they did at least had some effect.

TheDumberHalf says:

This is QA's job

Back when I worked in QA 20 years ago, we tested for spam, hacks, penetration and the like. The QA on games today is non-existent compared to those times. Many games ship with zero QA on payroll and crowdsource testing to alpha testers – for free. Many games are vulnerable to spam nowadays. It’s sad, but that’s the reality of indie games.
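
To make that point concrete, here is a minimal, purely illustrative spam-flood test, reusing the hypothetical ChatSpamFilter sketched earlier in this post (nothing here is a real Among Us or InnerSloth test):

```python
# Hypothetical QA-style regression test: simulate a bot flooding a lobby with
# identical spam and check that it gets muted quickly. Purely illustrative.
def test_spam_flood_is_muted():
    f = ChatSpamFilter()
    delivered = 0
    for i in range(50):
        if f.allow("bot-1", "SUBSCRIBE TO ERIS LORIS", now=1000.0 + i * 0.1):
            delivered += 1
    # Only a couple of messages should get through before the bot is muted.
    assert delivered <= MAX_REPEATS

test_spam_flood_is_muted()
```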
