Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved in making them. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Facebook's AI Continues To Struggle With Identifying Nudity (2020)

from the ai-is-not-the-answer dept

Summary: Since its inception, Facebook has attempted to be more “family-friendly” than other social media services. Its hardline stance on nudity, however, has often proved problematic, as its AI (and its human moderators) have flagged accounts for harmless images and/or failed to consider context when removing images or locking accounts.

The latest example of Facebook’s AI failing to properly moderate nudity involves garden vegetables. A seed business in Newfoundland, Canada, was notified that its image of onions had been removed for violating the terms of service. The picture apparently set off the auto-moderation system, which flagged the image for containing “products with overtly sexual positioning.” A follow-up message noted that the picture of a handful of onions in a wicker basket was “sexually suggestive.”

Facebook’s nudity policy has been inconsistent since its inception. Male breasts are treated differently than female breasts, resulting in some questionable decisions by the platform. Its policy has also caused problems for definitively non-sexual content, like photos and other content posted by breastfeeding groups and breast cancer awareness videos. In this case, the round shape and flesh tones of the onions appear to have tricked the AI into thinking garden vegetables were overtly sexual content, showing the AI still has a lot to learn about human anatomy and sexual positioning.

Decisions to be made by Facebook:

  • Should more automated nudity/sexual content decisions be backstopped by human moderators?
  • Is the possibility of over-blocking worth the reduction in labor costs?
  • Is over-blocking preferable to under-blocking when it comes to moderating content?
  • Is Facebook large enough to comfortably absorb any damage to its reputation or user goodwill when its moderation decisions affect content that doesn’t actually violate its policies?
  • Is it even possible for a platform of Facebook’s size to accurately moderate content and/or provide better options for challenging content removals?
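The over-blocking/under-blocking tradeoff raised above is often framed as a thresholding problem: how confident must an automated classifier be before a post is removed without a human looking at it? The following is a minimal, hypothetical sketch of that idea. The function name, thresholds, and score bands are illustrative assumptions, not a description of Facebook's actual system.

```python
# Hypothetical sketch of confidence-band routing for an automated
# nudity classifier. Scores in [0.0, 1.0] are assumed to come from an
# upstream image model; the thresholds here are illustrative only.

def route_decision(score: float,
                   remove_threshold: float = 0.95,
                   review_threshold: float = 0.60) -> str:
    """Map a classifier confidence score to a moderation action."""
    if score >= remove_threshold:
        return "auto-remove"    # very confident: act without a human
    if score >= review_threshold:
        return "human-review"   # ambiguous (e.g. flesh-toned onions):
                                # backstop with a human moderator
    return "allow"              # low confidence: leave the post up


print(route_decision(0.97))  # auto-remove
print(route_decision(0.72))  # human-review
print(route_decision(0.10))  # allow
```

Lowering `remove_threshold` reduces reviewer workload but increases over-blocking of harmless content like the onion photo; raising it does the opposite, at higher labor cost. That is the tradeoff the questions above are circling.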

Questions and policy implications to consider:

  • Is handling nudity in accordance with the United States’ historically Puritanical views really the best way to moderate content submitted by users all over the world?
  • Would it be more useful to users if content were hidden — but not deleted — when it appears to violate Facebook’s terms of service, allowing posters and readers to access the content if they choose to after being notified of its potential violation?
  • Would a more transparent appeals process allow for quicker reversals of incorrect moderation decisions?

Resolution: The seed company’s ad was reinstated shortly after Facebook moderators were informed of the mistake. A statement from Facebook raised at least one more question, as its spokesperson did not clarify exactly what the AI thought the onions were, leaving users to speculate about what she meant, as well as how the AI would react to future posts it mistook for, “well, you know.”

“We use automated technology to keep nudity off our apps,” wrote Meg Sinclair, Facebook Canada’s head of communications. “But sometimes it doesn’t know a walla walla onion from a, well, you know. We restored the ad and are sorry for the business’ trouble.”

Originally posted at the Trust & Safety Foundation website.

Companies: facebook


Comments on “Content Moderation Case Study: Facebook's AI Continues To Struggle With Identifying Nudity (2020)”

John85851 (profile) says:

I'm an adult, I should see nudity if I want

Here’s an idea: how about if Facebook treats people like adults and has a "nudity" checkbox when they sign in: check yes if you don’t mind seeing nude images, check no to not see them. Then only show nude images to people who checked the box.
Then there’s no need for AI or automated moderation: if someone reports a nude image and they checked "yes", then Facebook rejects the report because the user opted-in.

Just imagine the kinds of groups that could form if they allowed nudity! And more groups mean more users on the site, which means more user engagement, which means higher ad rates, and so on.
Heck, Facebook could even mine people’s data just by seeing which groups with nudity they join (which they probably do already).
And continuing with this argument, how much money is Facebook leaving on the table by not allowing nudity and adult groups?

DonutAtwork.com (profile) says:

I second this

FB should keep doing its best to make this a pleasant social network for users of any age. Right now it largely leaves it to users to report content and then takes action accordingly. The current AI’s image recognition is actually quite capable and accurate; it’s really up to Facebook to put it to use.

On top of that, FB should be more stringent about which businesses it allows to buy FB Ads. It’s crazy to see so many scam ads on FB looking to pick up victims on the world’s largest network. Perhaps while we wait for FB’s solutions, fellow FB users, please report nudity or scam-looking ads whenever you see them. Thank you!
