Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Google's Ad Policies Inadvertently Block Religious Organizations From Advertising On YouTube (2019)

from the unacceptable-content? dept

Summary: Google’s ad service offers purchasers access to millions of users, including those viewing videos on YouTube. But its policies — meant to prevent abuse, fraud, harassment, or targeting of certain demographics — sometimes appear to prevent legitimate organizations from doing something as simple as informing others of their existence.

Chad Robichaux, the founder of Christian veterans support nonprofit Mighty Oaks, wanted to reach out to veterans who might need his services. But his attempt to purchase YouTube ads was rejected by Google’s ad service for a seemingly strange reason.

According to a screenshot posted by Robichaux to Twitter, Google forbade the use of “Christian” as a keyword. To Robichaux (and many responders to his tweet), this was evidence of Big Tech’s bias against Christians and conservatives.

But the real reason for this block was far less censorial or nefarious, if no more explicable. According to YouTube (which reached out directly to Robichaux), the aim isn’t to keep Christians from advertising, but to prevent advertisers from targeting users on the basis of their religion. Unfortunately, Google’s policy doesn’t make that clear, instead stating that ads cannot contain “religious basis” content if the purchaser is engaging in personalized advertising.
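
Google’s actual enforcement pipeline isn’t public, but the behavior Robichaux ran into is consistent with a blunt keyword blocklist applied to every personalized campaign. As a purely hypothetical sketch (the BLOCKED_IDENTITY_KEYWORDS set, Ad class, and check_ad function are illustrative assumptions, not Google’s API), a filter like this rejects an ad that merely mentions a religion just as readily as one that targets users by religion:

```python
# Hypothetical sketch of a blunt keyword-based policy check.
# None of these names come from Google's actual Ads API; they are
# illustrative assumptions only.

from dataclasses import dataclass

# A flat blocklist like this cannot tell "targeting users by religion"
# apart from "an ad that happens to mention a religion."
BLOCKED_IDENTITY_KEYWORDS = {"christian", "muslim", "jewish", "hindu", "buddhist"}

@dataclass
class Ad:
    text: str
    keywords: list        # keywords the buyer bids on
    personalized: bool    # True if the campaign uses personalized targeting

def check_ad(ad: Ad) -> tuple[bool, str]:
    """Reject any personalized campaign whose keywords hit the blocklist."""
    if ad.personalized:
        hits = {k for k in ad.keywords if k.lower() in BLOCKED_IDENTITY_KEYWORDS}
        if hits:
            return False, f"blocked keywords in personalized campaign: {sorted(hits)}"
    return True, "approved"

# The nonprofit's ad is rejected even though "Christian" here describes
# who the advertiser is, not who is being targeted.
ok, reason = check_ad(Ad(
    text="Mighty Oaks: support for veterans",
    keywords=["Christian", "veterans", "nonprofit"],
    personalized=True,
))
print(ok, reason)  # False blocked keywords in personalized campaign: ['Christian']
```

A filter this coarse cannot see intent, which is exactly the gap between what the policy means and what the policy appears to do.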

Decisions to be made by Google:

  • Does blocking certain keywords make some ads impossible to place, no matter what audience is targeted or where the content may appear?

  • Is it ok for advertisers to target these groups if the users have already self-identified as being members of these groups? Would it be ok if users could explicitly opt in to being targeted in this way?

  • Would clarifying or simplifying the rules help avoid accidental blocking and further misunderstandings?

  • Should advertisers be given more guidance on how to craft ads/seek users to prevent violations?

Questions and policy implications to consider:

  • Does Google’s control of a majority of the advertising market lower the quality of assistance ad buyers receive, given the limited options available to them elsewhere?

  • Does increasing the number of keyword restrictions result in fewer successful ad placements and lower ad sales?

  • Does “protecting” users from personalized ads using certain keywords result in users seeing more irrelevant ads?

Resolution: The confusion was (somewhat) cleared up by YouTube’s direct contact with the concerned ad buyer. But broader confusion remains, since the policies guiding ad purchasing and ad construction are far from straightforward. The allegations of bias were off-base: it was simply Google enforcing its own policies, which would have made it equally impossible to use any other religion as a keyword.

Originally posted on the Trust & Safety Foundation website.

Companies: google, youtube


Comments on “Content Moderation Case Study: Google's Ad Policies Inadvertently Block Religious Organizations From Advertising On YouTube (2019)”


This comment has been flagged by the community.

Andrew Allen says:

There is nothing inadvertent about it!

I worked for a private Christian school in Texas. We paid for Google accounts for everyone. Both Google and PayPal flat out shut down anything we tried to do that was public facing, saying that Christian schools were illegal.

There is nothing inadvertent about Big Tech’s discrimination against Christians.

Anonymous Coward says:

It is an interesting demonstration of unintended consequences. There would be a shitshow if religious targeting were allowed for less benign purposes, starting with housing advertisements beyond roommate scale (freedom of association lets you say "male/female/Mormon roommates only," but it is still a bad look to advertise it openly).
