Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions entail. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Google Refuses To Honor Questionable Requests For Removal Of 'Defamatory' Content (2019)

from the reputation-management-through-lying dept

Summary: Google has long been responsive to court orders demanding the removal of content, provided they’re justified. Google has fought back against dubious orders originating from “right to be forgotten” demands from outside the US, and has met no small amount of DMCA abuse head-on. But, generally speaking, Google will do what’s asked if there’s a legal basis for the asking.

But not everyone approaching Google acts in good faith. First, there are any number of bad actors hoping to game the system to juice their Google search rankings.

And, beyond that, there are any number of shady “reputation management” firms willing to defraud courts to obtain orders demanding Google remove content that reflects poorly on their clients.

For a couple of years, these bad actors managed to make some search engine optimization (SEO) inroads by fraudulently obtaining court orders demanding the removal of content. The worst of these companies didn’t even bother to approach courts: they forged court orders and sent them to Google to get negative listings removed from search results.

This scheme opportunistically preyed on two things: Google’s apparent inability to police its billions of search results and the court system’s inability to vet every defamation claim thoroughly.

But the system prevailed, albeit not the one operated by the US government or Google. Those targeted by bogus takedown demands fought back, digging into court dockets and the people behind the bogus requests. Armed with this information, private parties approached the courts and Google and asked that illicitly removed content be reinstated.

Decisions to be made by Google:

  • Should Google act as an intercessor on behalf of website operators, or should it just act as a “dumb” pipe that passes no judgment on content removal requests?
  • Does manual vetting of court orders open Google up to additional litigation?
  • Does pushing back against seemingly questionable court orders allow Google to operate more freely in the future?

Questions and policy implications to consider:

  • Given the impossibility of policing content delivered by search results, is it wrong to assume good faith on the part of entities requesting content removal?
  • Is it possible to operate at Google’s scale without revamping policies to reflect the collateral damage it can’t possibly hope to mitigate?
  • If Google immunizes itself by granting itself more discretion on disputed content, does it open itself up to more direct regulation by the US government? Does it encourage users to find other sources for content hosting?

Resolution: Google chose to take more direct action on apparently bogus court orders fraudulently obtained or outright forged by reputation management firms. In response to multiple (private) investigations of underhanded actions taken by those in the reputation management field, it also cracked down on efforts to remove content that may have been negative, but not defamatory. Direct moderation by human moderators appears to have had a positive effect on search results. Since this outburst back in 2016, shadier operators have steered clear of manipulating search results with bogus court orders.

Originally posted to the Trust & Safety Foundation website.

Companies: google


Comments on “Content Moderation Case Study: Google Refuses To Honor Questionable Requests For Removal Of 'Defamatory' Content (2019)”


This comment has been flagged by the community.

Anonymous Coward says:

Re: a one-liner lead-in again

WHAT? YOU say "in good faith" doesn’t matter!

But not everyone approaching Google acts in good faith.

In quoting the law arguing with ME, you simply DELETED the "in good faith" requirement!

https://www.techdirt.com/articles/20190201/00025041506/us-newspapers-now-salivating-over-bringing-google-snippet-tax-stateside.shtml#c530

You clearly intended to change the meaning of statute so that I wouldn’t point up the "in good faith" phrase. That was OUTRIGHT FALSIFYING.

And in a reply you deca-down to say "in good faith" wasn’t even to be considered!

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: a one-liner lead-in again

Why do you always misrepresent what we say?

The issue of "good faith" here is entirely different from good faith in the 230 context. As I have explained directly to you multiple times, in the 230 context "good faith" is an issue only in cases involving 230(c)(2). And there are precious few such cases out there.

The use of good faith here is not about any particular law.

So when I say that "good faith" rarely matters in 230 cases, that’s because it’s a factual statement. You are ignoring the conditional here: I’m talking about cases involving Section 230.

The case study here is not about 230 at all. So… maybe stop being such a disingenuous asshole.

And, while you’re at it, STOP SPAMMING.

That One Guy (profile) says:

Re: Re: Re:2

Bans aren’t really a viable option on a platform that allows anonymous commenting, with no account needed to post. Though no longer letting any of their comments caught by the spam filter through would have a similar result, I suppose, and if there’s one thing they actually are good at, it’s triggering the spam filter.

That One Guy (profile) says:

Re: Re: Re:4 Re:

Able to post without creating an account, and only obvious spam removed, with it left up to the community to hide posts they find problematic behind a single mouse click? Yeah, I’m not sure I’ve yet come across a more permissive platform. Sadly, however, some people have such an overwhelming sense of self-entitlement that unless they have everything, they act as though they have nothing; unless they are allowed to say anything they wish with no consequence, then it’s no different from being unable to speak at all.

Christenson says:

Re: Re: Re:4 Re:

Just to play persnickety devil’s advocate, you sure 4chan/8chan/8kun (or pastebin) isn’t/wasn’t more permissive???

I also remember a small site where you could change your handle every time you posted if you felt like it.

I think the persnickety correct words are "the most permissive policy of any major, usable internet site."

Anonymous Coward says:

Re: Re: Re: a one-liner lead-in again

Mike, you should just start replying with silliness. Rationality will get you nowhere with an irrational person who appears to be aroused by responses from you!

Just make nonsense posts: "I love Google… Google should be subject to no laws and be allowed to murder kittens / orphans / war heroes live on pay-per-view if they want. Everyone should have to pay 51% of their income directly to Google" etc… I’d love to see the stupid’s response to those…

This comment has been flagged by the community.

Anonymous Coward says:

Re: a one-liner lead-in again

Here you recently downplayed "good faith" in a headline:

Therefore, the examination of "good faith" almost never comes up. Separately, there’s a fairly strong argument that courts determining whether or not a moderation decision was done in "good faith" would represent courts interfering with an editorial decision making process

https://www.techdirt.com/articles/20210324/23440246490/appeals-court-actually-explores-good-faith-issue-section-230-case-spoiler-alert-it-still-protects-moderation-choices.shtml

This comment has been flagged by the community.

Anonymous Coward says:

Re: a one-liner lead-in again

When it serves your purpose, you flip to requiring good faith.

is it wrong to assume good faith on behalf of entities requesting content removal?

You don’t require "good faith" from the mere hosts! You assert:

"And, I think it’s fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone."

This comment has been flagged by the community.

Anonymous Coward says:

Re: a one-liner lead-in again

Now, again, I CANNOT tell whether I found a hole in your alleged "spam filter" or an Admin clicked to lower it. You won’t say. But you’re censoring me out of sight unless I persist, and you NEVER let my comments out of the alleged "moderation", which mine never need, being always on-topic and civil.

Christenson says:

Re: Re: a one-liner lead-in again

Sorry, a few dozen other Techdirt users and I think you are a troll… take a minute to do as the Romans do and talk some sense intelligently, and we might stop flagging your crap.

The commentariat backs up the intelligently lazy admin, who is allowing you to open your mouth and remove all doubt that you are a fool.

Techdirt, a link to some evidence for "But, generally speaking, Google will do what’s asked if there’s a legal basis for the asking" would not hurt the discussion — though of course, Techdirt is not a law review! lol

This comment has been flagged by the community.

Anonymous Coward says:

THIS is interestingly Freudian:

Google has long been responsive to court orders demanding the removal of content, if they’re justified.

A) ELSE they’re invaded by armed men and people go to jail! — Or is that not possible in YOUR corporatist mind?

B) THEN you set up GOOGLE as the arbiter, not even adding the word "considers" or "regards" or "thinks", just flatly stating IF THEY’RE JUSTIFIED in the globalist corporation’s almighty view.

Sheesh. What a megalomaniac corporatist, beyond all reckoning by ordinary people.

Anonymous Coward says:

Sad that Google has to deal with that crap.

Too bad the legal systems worldwide don’t seem capable of handling the correct answer.

You don’t want to see ABC being returned as a search result? Then have the website hosting ABC take it down. We are not and never will be "The Internet". We merely index what we see there, and if ABC doesn’t exist, it can’t come up as a search result.

This comment has been deemed insightful by the community.
Tanner Andrews (profile) says:

Too bad the legal systems worldwide don’t seem capable of handling the correct answer.

Too bad the legal system in the U.S., where Google are to be found, is unable to provide a usable result. A usable result would be perjury convictions for a substantial portion of the persons swearing falsely under oath to support a DMCA takedown notice.

More declaratory judgment actions to have material reinstated, brought against such perjurers, might also have some deterrent effect. But that is costly and uncertain, and in any event the statute is fairly toothless as to false takedown notices.

I do not recall hearing of any such action also seeking a fee shift under the copyright statutes. If that were done, it might ultimately be less costly for the person whose material was wrongfully taken down. Even clever use of the offer-of-judgment rules might make it somewhat less costly.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

Most of what you said is correct. But the reason I specified worldwide instead of US is that the problem is worldwide. The DMCA is US law; "right to be forgotten" isn’t. And in both cases, the proper solution is to remove the "offending" material from the server actually hosting it.

Scary Devil Monastery (profile) says:

Re: Re:

"I do not recall hearing of any such action also seeking a fee shift under the copyright statutes. If that were done, it might ultimately be less costly for the person whose material was wrongfully taken down."

There’s been extensive debate around that. The way I understand it, the DMCA was explicitly written in such a way that it does not penalize a wrongful accusation and takedown if it’s made in good faith, but does open the door to a slam-dunk litigation case if a takedown isn’t performed when one is merited.

Disproving "good faith" for the accuser is almost impossible without an actual written confession that the copyright troll in question knows they’re just spamming takedown notices at random.

The OP does not primarily discuss the very much looser restrictions around copyright claims, but rather the ruleset around defamation. A different burden of proof applies there, which is how Google can actually muster a meaningful defense.

Tanner Andrews (profile) says:

Re: Re: Re:

it does not penalize a wrongful accusation and takedown if it’s made in good faith

Good faith is patently optional. You can tell that because many of the takedown notices are automated, which means that good faith never had a chance to enter into it.

Takedown notices for material which does not belong to the giver of notice (e.g. takedown for original material posted by author) are similarly necessarily devoid of good faith.

There might be a question of good faith where fair use is involved. The guy claiming immunity for sending the takedown notice is going to have to demonstrate at least a modicum of consideration if he wants to rely on the defense.

Ultimately, however, good faith is unnecessary because the statute is essentially toothless as to bogus takedown notices. A robot can just spew such notices with no fear of consequences.
