Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Freezes Accounts Trying To Fact Check Misinformation (2020)

from the misinformation-vs-fact-checking dept

Summary: President Trump appeared on Fox News’ “Fox & Friends” and made comments about the COVID-19 pandemic that many experts considered misinformation. One quote that particularly stood out was: “If you look at children, children are almost — and I would almost say definitely — but almost immune from this disease. They don’t have a problem. They just don’t have a problem.” This is false. While it has been shown that children are less likely to get seriously ill or die from the disease, that is very different from being “immune.”

In response, both Twitter and Facebook decided to remove clips of the video, including those posted by the Trump campaign. Given both platforms’ aggressive policies regarding COVID-19 disinformation (and the criticism both have received for being too slow to act), this was not all that surprising. For additional context, just a week and a half earlier there was tremendous controversy over a decision to remove a video of doctors giving speeches in front of the Supreme Court that also presented misleading information regarding COVID-19. While the major platforms all blocked that video, they received criticism from both sides: some argued the video should not have been taken down, while others argued it took the platforms too long to take it down.

Thus it was not surprising that Facebook and Twitter reacted quickly to this video, even though the statements were made by the President of the United States. However, more controversy arose because, in taking down those video clips, Twitter also temporarily froze the accounts of reporters, such as Aaron Rupar, who were fact-checking the claims, and activists, like Bobby Lewis, who were highlighting the absurdity of the clip.

Decisions to be made by Twitter:

  • How aggressively should content moderation rules be applied to statements from the President of the United States?
  • How important is it to remove potentially harmful information regarding health and immunity to a disease like COVID-19?
  • Is it better to have such videos taken down too quickly, or too slowly?
  • How do you determine who is fact-checking or debunking a video and who is spreading the misinformation?
  • How do you handle situations where different people are sharing the same video for divergent purposes (some to spread misinformation, some to debunk it)?

Questions and policy implications to consider:

  • Should the President’s speech receive special consideration?
  • The same content can be used by different users for different reasons. Should content moderation take into account how the content is being used?
  • Counterspeech can often be useful in responding to disinformation. What role is there in content moderation to promote or allow counterspeech?

Resolution: The Twitter accounts that were temporarily suspended removed the “offending” tweets in order to continue tweeting. So far there is no indication of any change in policy from Twitter, which has focused on removing all such videos, regardless of the context of the tweets surrounding them.



Comments on “Content Moderation Case Study: Twitter Freezes Accounts Trying To Fact Check Misinformation (2020)”

Anonymous Coward says:

It’s kind of garbage moderation, but if one is interested in countering misinformation, there isn’t necessarily a good reason to present the misinformation in full.

For the public record, and newsworthiness, etc., yes, absolutely all that misinformation should be preserved somewhere, but maybe Twitter isn’t the place.

I don’t know, but I feel as if it is bad moderation (if maybe unavoidable), as I said above. But there is also that inconvenient phenomenon where people remember and believe the bullshit, and not the debunking.

Anonymous Coward says:

Re: Re:

But there is also that inconvenient phenomenon where people remember and believe the bullshit, and not the debunking.

Content moderation is something that the person receiving the content is supposed to do, not the source or distributor.

If you believe that you or your dependent(s) shouldn’t consume such content, that is your decision to make for yourself or your dependents. You should not be able to force that decision onto others who choose to consume it willingly.

As for idiots believing anything they are told, that’s an issue that plagued humanity long before the internet existed and will continue to do so long after it is gone. The tried and true solution to that problem is better education, and there is no substitute. Unfortunately, there are many people out there who have a vested interest in making sure education never reaches the masses, in addition to people willing to go along with said schemes.

This comment has been deemed insightful by the community.
Rocky says:

Re: Re: Re:

Content moderation is something that the person receiving the content is supposed to do, not the source or distributor.

That’s an extremely simplistic take on the problem of being inundated with content that is pure lies, spam, hate speech, off-topic, or graphic in nature.

If you believe that you or your dependent(s) shouldn’t consume such content, that is your decision to make for yourself or your dependents. You should not be able to force that decision onto others who choose to consume it willingly.

Which is the point of a platform moderating things: people frequenting that platform have an expectation of not seeing certain types of content. Those who want that kind of content will gravitate towards platforms that have it. You wouldn’t expect the big social media platforms to carry hardcore porn, right?

It all comes down to what the platform deems fitting content for its target audience. If you don’t like its choices, you have the choice not to visit, since nobody is forcing you to use it.

Anonymous Coward says:

Re: Re: Re:

Content moderation is something that the person receiving the content is supposed to do, not the source or distributor.

Too bad for you, the First Amendment exists, and sites can moderate as they damn well please, whether we choose to critique it or not.

If you believe that you or your dependent(s) shouldn’t consume such content, that is your decision to make for yourself or your dependents. You should not be able to force that decision onto others who choose to consume it willingly.

But, you know, you should force someone else to host your speech, right?

As for idiots believing anything they are told, that’s an issue that has plagued humanity long before the internet existed

Well, those "idiots" are most of humanity, and the internet is irrelevant except so far as these services hosting speech at their discretion are internet companies.

The tried and true solution to that problem is better education and there is no substitute.

No bloody argument with this.

Anonymous Coward says:

Twitter, like any other non-government platform, can do whatever the hell they like. This article reads like a complaint that Twitter isn’t moderating the way we want them to, while other nearby articles deride people who argue Twitter et al. aren’t moderating the way they want them to. Damned if they do, damned if they don’t.

Mike Masnick (profile) says:

Re: Re:

Twitter, like any other non-government platform, can do whatever the hell they like.

Yes. Have we ever argued otherwise?

This article reads like a complaint that Twitter isn’t moderating the way we want them to

What? The case studies are written, deliberately, in neutral language, laying out the issues and highlighting the questions facing the websites/services in question. They make no statement about, nor even hint at, a "proper" way to moderate.

Damned if they do, damned if they don’t.

That’s kind of the point of this series. To show that content moderation choices are much harder than most people think they are.

Funny that you seem to have not read it that way. Maybe try reading it again more slowly?

Anonymous Coward says:

Re: Re: Re:

The point is that, despite already well establishing that Twitter et al. are free to do whatever they like on their platforms, we’re still discussing what they should be doing and how they should be doing it. While the article above doesn’t explicitly state a position, the inclusion of a list of "decisions" Twitter should make implies that Twitter is not "doing it right" and still has work to do.

They are and they don’t.

We should all stop talking about what these companies should do with their property and focus on getting everyone else to stop using that self-same discussion as an inappropriate wedge to get what they want out of 230. Section 230 and the epidemic of dumb surrounding it are what all of this boils down to. With so many irrelevant, pointless peripheral talking points, it’s too easy to lose sight of the real, actual, solvable problems.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re: Re:

The point is that, despite already well establishing that Twitter et al. are free to do whatever they like on their platforms, we’re still discussing what they should be doing and how they should be doing it.

Because thinking through and understanding the challenges of content moderation are important. If you just say "they can do whatever they want and no one should ever discuss it," that’s not particularly useful, is it?

While the article above doesn’t explicitly state a position, the inclusion of a list of "decisions" Twitter should make implies that Twitter is not "doing it right" and still has work to do.

Um. It’s a fucking case study. Every case study we do includes those questions. Not to influence Twitter, but to get everyone to put themselves in the position of the website in question and think through how they would handle the situation.

https://www.techdirt.com/blog/contentmoderation/

It’s not to say that Twitter needs to do something different. It’s to get everyone else to recognize that these are difficult decisions. The whole point is to show there is no "right" answer, not to say that Twitter isn’t doing it right.

I really think you should maybe slow down and reread.

We should all stop talking about what these companies should do with their property and focus on getting everyone else to stop using that self-same discussion as an inappropriate wedge to get what they want out of 230.

Uh, no. We should absolutely be talking about the challenges of content moderation, because otherwise everyone pushing to change 230 thinks that it’s easy. The point of this series is to show that every option has tradeoffs and questions and issues.


This comment has been deemed insightful by the community.
JoeCool (profile) says:

Republican mantra

One quote that particularly stood out was: "If you look at children, children are almost — and I would almost say definitely — but almost immune from this disease. They don’t have a problem. They just don’t have a problem."

There’s a Republitard running ads here in NC where he slams the NC governor for not making kids go back to school by claiming the CDC has stated that kids do not catch and cannot spread the coronavirus. Listening to his stupid and harmful lies on the radio every day driving to work frankly makes me sick.
