Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs that result. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Studies: Misleading Information From Official Sources (2020)

from the protest-edition dept

Summary: With news breaking so rapidly, it's possible that even major newspapers or official sources may get information wrong. Social media sites, like Twitter, need to determine how to deal with "news" tweets that later turn out to be misleading — even when coming from major news organizations, citing official government organizations.

With widespread protests around the United States calling attention to police brutality and police activity disproportionately targeting the black community, the NY Post tweeted a link to an article discussing an internal communication by the NY Police Department (NYPD) warning of "concrete disguised as ice cream cups" that were supposedly found at some of the protests, with the clear implication being that this was a way to disguise items that could be used for violence or property destruction.

The article was criticized widely by people who pointed out that the items in fact appear to be part of a standard process for testing concrete mixtures, with the details of each mixture written on the side of the containers. Since these were found at a construction site, it seems likely that the NYPD's "alert" was, at best, misleading.

In response to continuing criticism, the NY Post made a very minor edit to the story, noting only that the markings on the cups make them "resemble concrete sample tests commonly used on construction sites." However, the story and its title remained unchanged and the NY Post retweeted it a day later — leading some to question why the NY Post was publishing misinformation, even if it was accurately reporting the content of an internal police memo.

Questions for Twitter:

  • Should it flag potentially misleading tweets when published by major media publications, such as the NY Post?
  • Should it matter if the information originated at an official government source, such as the NYPD?
  • How much investigation should be done to determine the accuracy (or not) of the internal police report? How should the NY Post's framing of the story reflect this investigation?
  • Does it matter that the NY Post retweeted the story a day after the details were credibly called into question?

Questions and policy implications to consider:

  • Do different publications require different standards of review?
  • Does it matter if underlying information is coming from a governmental organization?
  • If a media report accurately reports on the content of an underlying report that is erroneous or misleading, does that make the report itself misleading?
  • How much does wider context (protests, accusations of violence, etc.) need to be considered when making determinations regarding moderation?

Resolution: To date, Twitter has left the tweets up, and the NY Post article remains online with only the very minor edit that was added a few hours after the article received widespread criticism. The NY Post tweets have not received any fact check or other moderation to date. There are, however, many replies and quote tweets calling out what people feel to be misleading aspects of the story (as well as plenty from people taking the content of the story at face value, and worrying about how the items might be used for violence).



Comments on “Content Moderation Case Studies: Misleading Information From Official Sources (2020)”


This comment has been flagged by the community.

Anonymous Coward says:

You got this one wrong

The testing of concrete that this is 'alleged' to be is called a cylinder break test. Look it up if you don't believe me. The real test uses a metered cylinder (i.e., a consistent size and shape for each cylinder) to cure the cement for a specified amount of time prior to breaking. Typically more than one cylinder is taken from each batch and broken at x days and x+x days; I usually see 30, 60, and 90 days being common. The test measures the compressive break strength of the concrete to see if it is equal to or greater than the concrete that was specified by the engineer/sold by the concrete plant. Bottom line: it is a scientific test with controls. For this article to say that the examples shown in the photo are the result of concrete testing is irresponsible at best and definitely shows that no journalistic effort went into the story. To continue their asinine assumption and then paint Twitter with the same broad stroke makes them look even more foolish.

Samuel Abram (profile) says:

The New York Post

The New York Post is basically the Fox Newspaper. I'd not trust anything that comes out of their filthy pages (which are literally filthy, as the ink can rub off on one's fingers and clothes; I've had it happen to me many times).

So just because it came from a newspaper doesn't make it reliable. The NY Post is not reliable (neither is the Daily Mail, for that matter).

John85851 (profile) says:

Lazy journalism

I think the issue goes back to lazy journalism and it goes something like this:
The Onion publishes a satire story.
A Chinese newspaper picks up the story and thinks it’s real.
The Huffington Post reports that the Chinese newspaper is reporting a story.
The Washington Post reports that the Huffington Post is running a story based on a Chinese story.

So, where does the fact-checking come into it? The Washington Post relied on the Huffington Post to verify the facts, and the Huffington Post assumed the Chinese newspaper did its own fact-checking.
Yet none of these outlets did their own research to see if the story was true and accurate.

Then this issue gets worse when there are people planting obvious misinformation that the media thinks is correct because of its "truthiness": you know, it must be true because it sounds like it should be true.
