Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs that result from those decisions. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Acts To Remove Accounts For Violating The Terms Of Service By Buying/Selling Engagement (March 2018)

from the fake-followers dept

Summary: After an investigation by BuzzFeed uncovered several accounts trafficking in paid access to “decks” — Tweetdeck accounts from which buyers could mass-retweet their own tweets to make them go “viral” — Twitter acted to shut down the abusive accounts.

Most of the accounts were run by teens who used Twitter-owned Tweetdeck to provide mass exposure to tweets for paying customers. Until Twitter acted, users who saw their tweets go viral under other users’ names tried to police the problem themselves by naming the paid accounts and putting them on blocklists.

Twitter’s Rules expressly forbid users from “artificially inflating account interactions.” But most accounts were apparently removed under Twitter’s anti-spam policy — one it beefed up after BuzzFeed published its investigation. The biggest change was the removal of the ability to simultaneously retweet tweets from several different accounts, rendering these “decks” built by “Tweetdeckers” mostly useless. Tweetdeckers responded by taking a manual approach to faux virality, sending direct messages requesting mutual retweets of posted content.
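To make the mechanics of that change more concrete, here is a minimal sketch, in Python, of the kind of heuristic a platform might use to spot the pattern described above: many distinct accounts retweeting the same tweet within seconds of one another. The account names, tweet IDs, and thresholds are invented for illustration; this is not a description of Twitter's actual anti-spam system.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative retweet events: (account_id, tweet_id, timestamp).
# All names, IDs, and thresholds below are made up for this sketch.
RETWEETS = [
    ("deck_account_1", "tweet_42", datetime(2018, 3, 1, 12, 0, 1)),
    ("deck_account_2", "tweet_42", datetime(2018, 3, 1, 12, 0, 2)),
    ("deck_account_3", "tweet_42", datetime(2018, 3, 1, 12, 0, 3)),
    ("organic_user_9", "tweet_99", datetime(2018, 3, 1, 14, 30, 0)),
]

WINDOW = timedelta(seconds=10)   # assumed definition of "simultaneous"
MIN_ACCOUNTS = 3                 # assumed cluster size worth flagging


def flag_coordinated_retweets(events):
    """Flag tweets retweeted by several distinct accounts within a short
    window -- the pattern that mass 'deck' retweeting produces."""
    by_tweet = defaultdict(list)
    for account, tweet, ts in events:
        by_tweet[tweet].append((ts, account))

    flagged = {}
    for tweet, hits in by_tweet.items():
        hits.sort()
        # Look for a dense cluster of distinct accounts inside the window.
        for start_ts, _ in hits:
            cluster = {a for ts, a in hits if start_ts <= ts <= start_ts + WINDOW}
            if len(cluster) >= MIN_ACCOUNTS:
                flagged[tweet] = sorted(cluster)
                break
    return flagged


if __name__ == "__main__":
    for tweet, accounts in flag_coordinated_retweets(RETWEETS).items():
        print(f"{tweet}: possible coordinated amplification by {accounts}")
```

A real system would presumably weigh many more signals than this, but the window-and-cluster idea illustrates why removing simultaneous multi-account retweeting undercut the “decks” so effectively.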

Unlike other corrective actions taken by Twitter in response to mass abuse, this cleanup process appears to have resulted in almost no collateral damage. Some users complained their follower counts had dropped, but this was likely the result of near-simultaneous moderation efforts targeting bot accounts.

Decisions to be made by Twitter:

  • Do additional moderation efforts — AI or otherwise — need to be deployed to detect abuse of Twitter Rules?
  • How often do these efforts mistakenly target legitimately “viral” content?
  • Will altering Tweetdeck features harm users who aren’t engaged in the buying and selling of “engagement”?
  • Will power users or those seeking to abuse the rules move to other third-party offerings to avoid moderation efforts?
  • Is there any way to neutralize “retweet for retweet” requests in direct messages without raising concerns about user privacy?

Questions and policy implications to consider:

  • Does targeting spam more aggressively risk alienating advertisers who rely on repetitive/scheduled posts and active user engagement?
  • Does spam (in whatever form — including the manufactured virality seen above) still provide some value for Twitter as a company, considering it relies on active users and engagement to secure funding and/or sell ad space to companies?
  • Do viral posts still add value for Twitter users, even if the source of the virality is illegitimate?
  • Will increased moderation of spam reduce user engagement during events where advertising efforts and user engagement are routinely expected to increase (elections, sporting events, etc.)?  

Resolution: Twitter moved quickly to disable and delete accounts linked to the marketing of user engagement. It chose to use its anti-spam rules as justification for account removals, even though users were allegedly engaged in other violations of the terms of service. The buying and selling of Twitter followers — along with retweets and likes — continues to be a problem, but Twitter clearly has a toolset in place that is effective against the behavior seen here. Because the cleanup relied on the spam rules, the alterations to Tweetdeck — a favorite of Twitter power users — appear to have done minimal damage to legitimate users who enjoy the advantages of this expanded product.



Comments on “Content Moderation Case Study: Twitter Acts To Remove Accounts For Violating The Terms Of Service By Buying/Selling Engagement (March 2018)”

catsmoke (profile) says:

Why pay to re-tweet?

I’m unable to divine why Person A would pay money to Person B so that Person B would perform the service of taking a tweet written by Person C (who has no previous relationship with Person A, and whose tweets would not seem to have any applicability to the goals of Person A) and sharing that tweet with a large audience.

If Person C had written a tweet that had said “Send money to my bank account #1234567” and Person A happened to have access to that bank account, then I could see Person A wanting to broadcast that tweet as widely as possible. But it’s unlikely such a situation would arise.

Is the explanation that the tweet is some re-usable cash-grab instrument? I can imagine something such as “If you like this tweet, then send $50 to bank account #8901234” but there is no tweet that is such an effective tool, it cannot be imitated. So why would there be a need for Person A to steal a tweet from Person C, for dissemination by Person B?

It doesn’t seem to make sense.
