Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs their decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Studies: Twitter Clarifies Hacked Material Policy After Hunter Biden Controversy (2020)

from the clarification-needed dept

Summary: Three weeks before the presidential election, the New York Post published an article that supposedly detailed meetings Hunter Biden (son of presidential candidate Joe Biden) had with a Ukrainian energy firm several months before the then-Vice President allegedly pressured Ukrainian government officials to fire a prosecutor investigating the company.

The “smoking gun” — albeit one of very dubious provenance — provided ammo for Biden opponents, who saw this as evidence of Biden family corruption. The news quickly spread across Twitter. But shortly after the news broke, Twitter began removing links to the article.

Backlash ensued. Conservatives claimed this was more evidence of Twitter’s pro-Biden bias. Others went so far as to assert this was Twitter interfering in an election. The reality of the situation was far more mundane.

As Twitter clarified — largely to no avail — it was simply enforcing its rules on hacked materials. To protect victims of hacking, Twitter forbids the distribution of information derived from hacking, malicious or otherwise. The policy was first put in place in March 2019, but it took an election-season event to draw national attention to it.

The policy was updated after the Hunter Biden story broke, but its substance remained largely unchanged. The updated policy explained in greater detail why Twitter takes down links to hacked material, as well as the exceptions it makes to this rule.

Despite many people seeing this policy in action for the first time, the response was nothing new. Twitter had enforced the policy four months earlier, deleting tweets and suspending accounts that linked to information obtained from law enforcement agencies by the Anonymous hacker collective and published by transparency activists Distributed Denial of Secrets. The only major difference was that the earlier incident involved acknowledged hackers and had nothing to do with a very contentious presidential race.

Decisions to be made by Twitter:

  • Does the across-the-board blocking of hacked material prevent access to information of public interest?
  • Does relying on the input of Twitter users to locate and moderate allegedly hacked materials allow users to bury information they’d rather not see made public?
  • Is this a problem Twitter has handled inadequately in the past? If so, does enforcement of this policy effectively deter hackers from publishing private information that could be damaging to victims? 

Questions and policy implications to consider:

  • Given the often-heated debates involving releases of information derived from hacking, does leaving decisions to Twitter moderators allow the platform to decide what is or isn’t newsworthy?
  • Is the relative “power” (for lack of a better term) of the hacking victim (government agencies vs. private individuals) factored into Twitter’s moderation decisions? 
  • Does any vetting of the hacked content occur before moderation decisions are made to see if released material actually contains violations of policy?

Resolution: The expanded version of Twitter’s rules on hacked material remains in force. The additions made to the policy in response to questions about its takedown of the Post article more clearly state what is and isn’t allowed on the platform. The expanded rules presumably also make it easier for moderators to make informed decisions, rather than simply removing any information that appears to be the result of hacking.

Originally posted to the Trust & Safety Foundation website.

Companies: twitter


Comments on “Content Moderation Case Studies: Twitter Clarifies Hacked Material Policy After Hunter Biden Controversy (2020)”

This comment has been deemed insightful by the community.
Baron von Robber says:

"a Ukrainian energy firm several months before the then-Vice President allegedly pressured Ukraine government officials to fire a prosecutor investigating the company."

The comical part is that the then-VP pressured the Ukrainian government to fire the prosecutor because he was NOT investigating the company. The prosecutor was corrupt and was refusing to investigate corruption by the company’s CEO.

https://theintercept.com/2019/09/25/i-wrote-about-the-bidens-and-ukraine-years-ago-then-the-right-wing-spin-machine-turned-the-story-upside-down/

Anonymous Coward says:

I can see where Twitter would want to have nothing to do with "hacked" (sic) materials regardless of newsworthiness. There are other places to get news.

If, on the other hand, Twitter chose to point to multiple sources for analysis of hacked, leaked, or otherwise supposedly illicitly obtained information, that’s fine too.

In this particular case, while I think Hunter Biden is a giant douchenozzle, I don’t particularly trust the provenance of the materials. Even casual hacker and security groups generally do a better job with provenance and chain-of-custody issues than this "guy with a shop" and his sudden big-time backers.

Bobvious says:

Preying on gullibility

It would be easy to craft a "Stop The STEAL!!" link in various forms, including the (potential) type that Twitter blocked here, send it to the gullible, and carry out exactly the kind of malicious act mentioned in the warning. Unfortunately, people who WANT to believe this is true are particularly likely to fall for it.

This comment has been flagged by the community.

Anonymous Coward says:

Techdirt just loves to cover for Biden

Sad really, that Techdirt publishes articles from writers who never read prior articles from Techdirt, written by Mike Masnick himself, about all the corruption Biden was involved with when serving as VP for Obama.

Funny how Twitter came up with the "hacked materials" policy just to cover up Biden’s son’s corruption, recovered by a tech repair shop that turned the evidence over to media and law enforcement. Even funnier how Hunter went on national TV and lied with the statements "I have no idea if those documents are real" and "they could have been hacked by the Russians".

Anyone dumb enough to believe Hunter or Biden about anything involving investigations into his corrupt family is a moron. But I guess the author of this article is a moron.

Mike Masnick (profile) says:

Re: Techdirt just loves to cover for Biden

"Sad really, that Techdirt publishes articles from writers who never read prior articles from Techdirt, written by Mike Masnick himself, about all the corruption Biden was involved with when serving as VP for Obama."

We’ve covered Biden problems. And we’ve covered other problems. This story has nothing to do with any of that. Most non-crazy people can understand that Twitter’s policies are unrelated to any opinions we have of the Bidens.

"Funny how Twitter came up with the ‘hacked materials’ policy just to cover up Biden’s son’s corruption, recovered by a tech repair shop that turned the evidence over to media and law enforcement."

Except, AS THIS ARTICLE ITSELF POINTS OUT, they did not do so. They had the hacked materials policy in place before that, and we had criticized them for it when they used it to take down DDoSecrets.

Sad, really, that Techdirt commenters can’t comprehend basic concepts.
