Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Using Fact Checkers To Create A Misogynist Meme (2019)

from the content-moderation-inception dept

Summary: Many social media sites employ fact checkers to block, or at least flag, information determined to be false or misleading. However, the fact-checking process can itself create content moderation challenges.

Alan Kyle, a privacy and policy analyst, noticed this in late 2019 when he came across a picture on Instagram, posted by a meme account called “memealpyro,” showing what appeared to be a great white shark leaping majestically out of the ocean. When he spotted the image, it had been blurred, with a notice that it had been deemed “false information” after being “reviewed by independent fact checkers.” When he clicked through to unblur the image, a small line of text next to it read “women are funny.” Beneath that was the fact-checking flag: “See why fact checkers say this is false.”

To anyone coming across this image with this fact check, the implication is that the fact check applies to the statement, leading to the ridiculous, misogynistic conclusion that women are not funny, and that an independent fact-checking organization had to flag a meme suggesting otherwise.

As Kyle discusses, however, this appeared to be a deliberate attempt to exploit fact checkers' review of one part of the content in order to create the misogynistic meme. Others had been sharing the same image (which was computer-generated, not an actual photo) while claiming it was National Geographic's “Picture of the Year.” This belief was so widespread that National Geographic had to debunk the claim (though it did so by releasing other, quite real, images of sharks to appease those looking for cool shark images).

The issue, then, was that fact checkers had been trained to debunk uses of the photo on the assumption that it was being posted with the false claim that it was National Geographic's “Picture of the Year,” and Instagram's system didn't seem to anticipate that other, different claims might be attached to the same image. When Kyle clicked through to see the explanation, it addressed only the “Picture of the Year” claim (which this post never made), and (obviously) not the statement about women.

Kyle's hypothesis is that Instagram's algorithms were trained to flag the picture as false and then possibly send the flagged image to a human reviewer, who may simply have missed that the text accompanying this post was unrelated to the claim the fact check addressed.
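The failure mode Kyle hypothesizes can be sketched in a few lines: an image fingerprint is matched against a database of debunked images, and the “false information” label follows the image no matter what caption accompanies it. Everything below is illustrative, not Instagram's actual system; a real pipeline would use a perceptual hash that matches near-duplicates rather than an exact byte hash.

```python
# Hypothetical sketch of how a hash-based fact-check label can follow an
# image regardless of its caption. All names are illustrative.

import hashlib

# The platform stores only the image fingerprint and the debunked claim;
# nothing ties the label to the caption that was actually reviewed.
fact_check_db = {}

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash (e.g., pHash); a real system would
    # match visually similar images, not exact bytes.
    return hashlib.sha256(image_bytes).hexdigest()

def register_fact_check(image_bytes: bytes, debunked_claim: str) -> None:
    fact_check_db[fingerprint(image_bytes)] = debunked_claim

def moderate(image_bytes: bytes, caption: str):
    """Flag any post reusing a debunked image, ignoring its caption."""
    checked_claim = fact_check_db.get(fingerprint(image_bytes))
    if checked_claim is None:
        return None  # image unknown: no flag
    # Failure mode illustrated: the label is applied even when the
    # caption has nothing to do with the claim that was fact-checked.
    return f"False information (fact check: {checked_claim!r})"

shark = b"...fake shark image bytes..."
register_fact_check(shark, "National Geographic Picture of the Year")

# Original hoax post: the flag is appropriate.
moderate(shark, "NatGeo's Picture of the Year!")
# Troll's repost: same image, unrelated caption, still flagged, so the
# label now appears to fact-check "women are funny".
moderate(shark, "women are funny")
```

The sketch shows why a human reviewer is the only backstop here: nothing in the matching step even sees the caption, so catching the mismatch depends entirely on the reviewer noticing it.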

Decisions to be made by Instagram:

  • If a caption and a picture must be combined to constitute false information, how should Instagram's fact checkers handle cases where the two are separated?
  • How should fact checkers handle mixed media content, in which text and graphics or video may be deliberately unrelated?
  • Should automated tools be used to flag viral false information in a way that might be gamed?
  • How much human review should be applied to algorithmically flagged “false” information?

Questions and policy implications to consider:

  • When there is an automated fact checking flagging algorithm, how will users with malicious intent try to game the system, as in the above example?
  • Is fact checking the right approach to “meme'd” information that is misleading, but not in a meaningful way?
  • Would requiring fact checking across social media lead to more “gaming” of the system, as in the case above?

Resolution: As Kyle himself concludes, situations like this are somewhat inevitable, as the setup of content moderation works against those trying to accurately deal with content such as the piece described above:

There are many factors working against the moderator making the right decision. Facebook (Instagram's parent company) outsources several thousand workers to sift through flagged content, much of it horrific. Workers, who moderate hundreds of posts per day, have little time to decide a post's fate in light of frequently changing internal policies. On top of that, much of these outsourced workers are based in places like the Philippines and India, where they are less aware of the cultural context of what they are moderating.

The Instagram moderator may not have understood that it's the image of the shark in connection to the claim that it won a NatGeo award that deserves the false information label.

The challenges of content moderation at scale are well documented, and this shark tale joins countless others in a sea of content moderation mishaps. Indeed, this case study reflects Instagram?s own challenged content moderation model: to move fast and moderate things. Even if it means moderating the wrong things.

Companies: instagram


Comments on “Content Moderation Case Study: Using Fact Checkers To Create A Misogynist Meme (2019)”

23 Comments

This comment has been flagged by the community.

OldMugwump (profile) says:

Fact-checker baiting

This is a great example of somebody successfully baiting "fact-checkers". I’m tempted to sympathize with the troll who posted it.

The larger problem is that "fact checkers" can’t really check most facts in any meaningful, objective way.

Just to use this image as an example, how is "women are funny" even a "fact" that can be true or false? Obviously some women are more "funny" than others (even for the many different meanings of the word "funny"). I don’t think it’s possible to rule such a statement as clearly true or false in the first place, nor should anyone try.

Are women, as a class, "funny"? Whatever answer you give, it’s an opinion, not a fact. And an opinion on an awfully vague and ill-defined question.

I’ve little patience with "fact checking" in general, except perhaps in the original context of internal checking within a publication, before printing a story. Most statements aren’t clearly even "facts" in the first place.

This comment has been deemed insightful by the community.
Thad (profile) says:

Re: Fact-checker baiting

…you…get that the point of this story is that the fact-checkers were not intentionally evaluating whether or not the statement "women are funny" was factually true or false, right?

They were evaluating whether or not the image of the shark was National Geographic’s Picture of the Year. Which, notwithstanding your handwringing, is a statement which can be objectively evaluated as true or false. It’s false. That’s not an opinion, it’s a fact. The fact-checkers checked it; they evaluated it as false. The reason they evaluated it as false is that it is false.

The only person in this story trying to evaluate "women are funny" as a factual statement is the troll who submitted the altered image. Who checks notes you just said you sympathize with.

This comment has been flagged by the community.

Lobelia 'Lobster' Sterling says:

Re: Re: Fact-checker baiting -- NO, it's Techdirt pretense!

…you…get that the point of this story is that

The actual purpose of the story HERE is to defecate all over the place so that "moderation" looks tough, while in practice, Masnick’s position is a flat and unqualified RIGHT to arbitrarily censor:

"And, I think it’s fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone."

https://www.techdirt.com/articles/20170825/01300738081/nazis-internet-policing-content-free-speech.shtml

You’re short-sighted and WRONG as usual, "Thad, the Ant-Slayer".

Scary Devil Monastery (profile) says:

Re: Re: Re: Fact-checker baiting -- NO, it's Techdirt pretense!

"The actual purpose of the story HERE is to defecate all over the place…"

I don’t think the Copia Institute needs your help to shit all over a forum. God knows, you’ve been seriously incontinent around here for many years now.

So…did you have anything actually relevant to the OP you wanted to whine about or is it just more venting your spleen because Geigner called you a nasty name nine years ago and you’ve spent every waking moment since to demonstrate the truth of his statement?

This comment has been flagged by the community.

Lobelia 'Lobster' Sterling says:

Techdirt examples abstruse edge cases to build cred, while...

… Masnick’s actual bottom line position is that corporations have total arbitrary control:

"And, I think it’s fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone."

https://www.techdirt.com/articles/20170825/01300738081/nazis-internet-policing-content-free-speech.shtml

This comment has been flagged by the community.

Lobelia 'Lobster' Sterling says:

Re: Techdirt examples abstruse edge cases to build cred, while..

New readers, if any: don’t be fooled by Masnick’s "it’s so hard to do the right thing" diversions such as this (paid for by Silicon Valley corporate "support" of his laughable "think tank").

The block quote is one of the few times that he’s been honest. Masnick believes corporations should have TOTAL ARBITRARY CONTROL without regard to The Public interest.

Scary Devil Monastery (profile) says:

Re: Re: Techdirt examples abstruse edge cases to build cred, whi

"Masnick believes corporations should have TOTAL ARBITRARY CONTROL without regard to The Public interest."

Obviously corporations should have control over who they allow on their own property. That you personally believe the legal concept of property should be abolished in favor of "The People" is, rather, the more deranged idea.

But hey, if you actually want to make that happen then here’s how;

1) Assemble a political party.
2) Sell 51% of the voters on the communist manifesto you keep taking your ideas from.
3) Win all the elections and rewrite the constitutional amendments preventing the government from nationalizing any sufficiently popular property.

Because the only thing you accomplish here is to make people occasionally laugh at your incomprehensible hysterics. The sum of your thousands of hours of labor remains that people flag your comment after beating you over the head with the latest sack of garbage you spilled over the forum.

That’s fucking sad, Baghdad Bob, and if you weren’t such a malicious mentally disabled person we’d all be inclined to show you some sympathy.

Scary Devil Monastery (profile) says:

Re: Re: Re:

"For what reason should the government have the right to make any interactive web service host all legally protected speech, even if the owners/operators of that service don’t want to host certain kinds of speech?"

If Baghdad Bob – or Koby, for that matter – had any honesty at all they’d just provide the answer. The argument is clearly outlined in both The Communist Manifesto and Mao’s little red book.

Admitting they’re quoting outright communist philosophy doesn’t fit their narrative, of course.

NoName says:

Misogynistic, really??

Branding this meme as misogynistic is an exaggeration. Sure, it’s fashionable to interpret any situation in a positive light for women, and a negative light for men, whenever possible (hence the only groans on last Saturday’s SNL were when Dave Chappelle dared to make a joke at the expense of women).

If the person who posted the meme had misogynistic intent, it would have read, "women are smart" or "women are strong" or something of that nature. But "women are funny" can be interpreted itself either as a positive or a negative statement about women. For instance, it could be perceived as meaning "women are strange," or "women can be good comedians."

And even if it was intended to be interpreted (in light of the fact-checking) as, "It is false that women can be good comedians," that hardly rises to the level of misogyny, which is defined as ‘hatred or mistrust of women.’

If you disagree with me on that, I presume, in that case, that you’d agree that the majority of gender-related articles in the media these days are misandristic.

John Pettitt (profile) says:

It's the label that's wrong

The image is not false, it’s manipulated. The issue here is that the "false" fact-checking label is appropriate for fact checks on text, e.g. "Trump won the election," but not for images. If they had labeled it "This image is not a real photograph," then the problem of implying that the text on the image is false would not arise.

Can’t stay to chat longer, I have to go and photoshop some political comments onto shark images …

This comment has been flagged by the community.

Anonymous Coward says:

Ha ha, this is awesome.

These types of trolls are doing the yeoman’s work of triggering Leftist sissies. Since Techdirt-type anti-Americans have decided to go all in on doing ‘fact checks’ to ensure no degenerate gets their fee-fees hurt, I’m glad to see trolls like these keep making Thought Police continue, over and over again, to publicly and embarrassingly step on their dicks.

See also: ‘Islam is Right about Women’; the OK hand signal; clovergender; etc. It’s damn beautiful. Leftists just can’t help themselves.

Techdirt-type Leftists, please keep humiliating yourselves! It brings Americans such joy.
