YouTube Filters At It Again: Pokemon YouTubers Have Accounts Nuked Over Child Porn Discussions That Weren't Occurring

from the filter-fail dept

It’s clear at this point that the automated filtering and flagging done by YouTube simply isn’t good. Whatever legitimacy the platform might hope to gain by touting its successes is surely diminished by the repeated cases of YouTube flagging non-infringing videos as infringing, and by the fact that the whole setup has been successfully repurposed by blackmailers who hold accounts hostage through copyright strikes.

While most of these failures center around ContentID’s inability to discern actual intellectual property infringement and its avenues for abuse, YouTube’s algorithms can’t even suss out more grave occurrences, such as child exploitation. This became apparent recently when multiple Pokemon streamers had their accounts nuked due to discussions about child pornography that never occurred.

A trio of popular Pokemon YouTubers were among the accounts wrongly banned by Google over the weekend for being involved in “activity that sexualises minors”.

As the BBC report, Mystic7, Trainer Tips and Marksman all found their accounts removed not long after uploading footage of themselves playing Pokemon GO.

It’s believed the error occurred thanks to their videos’ continued use of the term “CP”, which in Pokemon GO refers to “Combat Points”, but which YouTube’s algorithm assumed meant “Child Pornography”.

That’s pretty stupid, and it certainly seems like banning someone’s entire Google account based on the use of an acronym ought to have come with a review from an actual human being. That human would have immediately understood the context of the use of “CP” in a way the automated system apparently could not. And, to be clear, this wasn’t just a YouTube ban. It was the elimination of each streamer’s entire Google account, email and all.

Now, once the backlash ensued, Google got them their accounts back, but that simply isn’t good enough. As there is more and more pressure to ramp up automated policing of the internet, at some point, everyone pushing for those solutions needs to realize that the technology just isn’t any good.

Companies: youtube


Comments on “YouTube Filters At It Again: Pokemon YouTubers Have Accounts Nuked Over Child Porn Discussions That Weren't Occurring”

91 Comments
Anonymous Coward says:

Re: Re:

You joke about that, but this is a thing that has really happened. Back in the day, when AOL and CompuServe ruled the consumer Internet, one of them (I don’t remember which) decided to implement vulgarity filters in their chatrooms, banning a bunch of obscene and naughty words, including “breast” and other, more sophomoric euphemisms for it.

This made it extremely awkward for medical professionals attempting to use online services to discuss breast cancer research!

Anonymous Coward says:

Re: Re: Re:

It was worse than that: child molestation survivor support chats were being banned and filtered out. And this was just over their names, not, say, context-free analysis of raw accounts of their experiences of being raped. The latter would still be bad, but it would be more understandable as “inappropriate”, even though it has an important therapeutic role.

These were the very children they were claiming to protect with their censorship.

Anonymous Coward says:

Re: Re:

"CP" is already a way to avoid a direct reference to "child pornography", so you’ll have to stay ahead of the filter—it’s the euphemism treadmill, at Internet speed. Let’s hope Google doesn’t look at past messages in these decisions; otherwise trolls will have all kinds of fun targeting people, by causing Google’s AI to think messages written long ago refer to banned topics.

Anonymous Coward says:

Getting the accounts back is fine. Nothing says that this has to be done by bots. That’s what Google chose to do, and if it runs afoul of something, well, that’s a problem with their business model that needs to be fixed.

YouTube has a big problem with pedophiles latching onto videos of young children in the comments section (like this should surprise anyone).

This comes down to the internet versus copyright, and which one should trump the other.

Anonymous Coward says:

Re: Re: Re: Re:

He’s indirectly attacking Article 11/13, SOPA, etc. or any system of automatic filtering.

That’s some serious projection there. But even if true, so? He’s not wrong.

If automatic filtering doesn’t work, Google built its business wrong and some other business will get it right, either with better bots or humans.

Then you don’t understand how automatic filtering works, or how bad it is at understanding context. All the filter cares about is "does this content contain the tags/IDs/keywords I’m programmed to look for and block?". If no, it ignores the content; if yes, it takes it down. But as seen here, it was programmed to look for the acronym "CP" without understanding that there are many other contexts in which that acronym can be used that in no way relate to the content it is looking for.

And you’re never going to fix that, because computers and software as a whole are equally bad at context. They are good at concrete facts that can be asserted as unambiguously true or false. They don’t do "true but unrelated" or "false but still counts".
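To make that concrete, here is a minimal sketch of the kind of naive keyword flagging being described (purely illustrative Python; YouTube’s actual systems are not public, so every detail here is an assumption):

# Illustrative sketch of naive keyword flagging; not YouTube's real code.
BANNED_KEYWORDS = {"cp"}  # intended to catch "child pornography"

def is_flagged(text: str) -> bool:
    """Flag text if any token matches a banned keyword, ignoring all context."""
    tokens = text.lower().replace(".", " ").replace(",", " ").split()
    return any(token in BANNED_KEYWORDS for token in tokens)

# A Pokemon GO video gets flagged exactly as actual abuse would:
print(is_flagged("Insane 4000 CP Dragonite caught on stream"))  # True
print(is_flagged("My CP is maxed out after this raid"))         # True

The function has no notion of what "CP" means in either sentence; the only question it can ask is whether the token appears.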

PaulT (profile) says:

Re: Re: Re:2 Re:

https://en.wikipedia.org/wiki/Scunthorpe_problem

Exactly, they will never be as good as a human at understanding meaning, yet humans can never be as quick as a computer. There will always be mistakes, and if YouTube are given the choice between losing Pokemon fans through silly over-censoring mistakes and losing advertisers for not censoring enough, they’ll favour the people actually paying them. This should surprise nobody, but if you need to blame someone, blame the legislators who seem to think that magic wands can catch these things, and the dinosaurs who hired them.
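For the link-averse: the Scunthorpe problem is what happens when the matching is done on substrings rather than whole words. A sketch, again illustrative rather than any real product’s code:

# Substring matching of the kind behind the classic Scunthorpe problem.
PROFANITY = ["cunt", "penis"]

def blocked(name: str) -> bool:
    lowered = name.lower()
    return any(word in lowered for word in PROFANITY)

print(blocked("Scunthorpe"))  # True -- an English town gets censored
print(blocked("Penistone"))   # True -- so does another one

Whole-word matching fixes these two and then fails on "CP" instead; there is no keyword-level trick that recovers meaning.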

Anonymous Coward says:

Re: Re:

Aside from the life-destroying accusation and sudden deprivation of Google services, people generally are unable to appeal decisions like these unless there is a lot of backlash. This is in part due to Google’s enormous size – they are unable to scale manual inspections of videos like these.

Rekrul says:

Re: Re: Re:

Aside from the life-destroying accusation and sudden deprivation of Google services, people generally are unable to appeal decisions like these unless there is a lot of backlash. This is in part due to Google’s enormous size – they are unable to scale manual inspections of videos like these.

Yeah, after all, it’s not as if a company that makes multibillion-dollar profits off the backs of users could be expected to actually hire people to provide customer service to those people!

What’s next? Will people expect to be able to call a company like Comcast, AT&T or VISA and actually speak to a human being? The ridiculousness of such an idea boggles the mind…

Anonymous Coward says:

Re: Re: Re:3 Re:

They can monitor their own content just fine. It’s the terabytes and petabytes and more of user-generated content that is impossible to monitor with any reasonable number of human beings. More content is being uploaded to Youtube in a single minute than a single human being could watch in two weeks.
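A quick back-of-the-envelope check, assuming the commonly reported figure of roughly 500 hours of video uploaded per minute (the real number is Google’s to know):

# Staffing estimate; the upload rate is an assumed, publicly reported figure.
upload_hours_per_minute = 500
minutes_per_week = 60 * 24 * 7

uploaded_hours_per_week = upload_hours_per_minute * minutes_per_week
reviewer_hours_per_week = 8 * 5  # one reviewer watching 8 hours/day, 5 days/week

reviewers_needed = uploaded_hours_per_week / reviewer_hours_per_week
print(f"{uploaded_hours_per_week:,} hours uploaded per week")  # 5,040,000
print(f"{reviewers_needed:,.0f} reviewers just to watch it")   # 126,000

That is well over a hundred thousand people doing nothing but watching, before a single judgment call gets made; the "75,000+" figure quoted elsewhere in this thread is, if anything, conservative.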

Especially in the US, the law says platforms aren’t responsible for the actions or content of their users. If you don’t like it, well, too bad for you.

Rekrul says:

Re: Re: Re:2 Re:

Just how cheap do you think hiring 75,000+ people to do nothing but watch new Youtube uploads 8/5 would be?

I never suggested that they should hire people to watch every video that gets uploaded. I suggested that they should hire people to provide customer service so that when their system incorrectly flags a video, or someone files a false copyright strike against someone’s account, they can actually contact a live human and get them to look into the situation.

Anonymous Coward says:

Re: Re: Re:3 Re:

So, there is this video of someone playing a song and the person looking into it has not heard the song before. How are they meant to determine whether the uploader or the company making the claim is the copyright holder?

How in general is a low paid employee meant to determine who the actual copyright holder is, or where the boundaries of fair use are?

PaulT (profile) says:

Re: Re: Re:3 Re:

Which, at the rate the **AAs are sending takedowns, will still require a lot of people to be hired, especially as they would then presumably be required to take into account things like fair use and the identity of the uploader rather than just the content of the video. Each investigation would therefore take far, far longer than the automated one, since if they had a human do the same cursory inspection, they would be more liable for legal comeback than if they just claim an error in an algorithm.

I know what you’re saying, but it’s the volume that’s the problem, and introducing humans into the mix will make things worse, not better.

Rekrul says:

Re: Re: Re:4 Re:

Relying on automation to run your company with virtually no human oversight is a crappy way to do business. They want the benefits of running a huge platform full of user generated content, but they can’t be bothered to make sure that their platform doesn’t screw over the users.

Would you use a vending machine if you knew there was a chance it would just take your money and not give you what you requested, and if that happened, there was nobody to contact to get your money back?

What if some piece of construction equipment accidentally created a huge pothole in the road, right at the end of your driveway and there was no way to contact the city and get it fixed?

What if the post office decided that you were some type of scammer and removed your address from their system so that you would no longer receive your mail and there was no appeal process?

Anonymous Coward says:

Re: Re: Re:5 Re:

Relying on automation to run your company with virtually no human oversight

Asserts facts not in evidence. There is absolutely human oversight of these things, as noted by many news articles and reports. The problem is not oversight or no oversight; the problem is that the automation is just bad at what it’s being asked to do. And this is true of all computers and software. They are being asked to determine context, and they just can’t do that effectively.

is a crappy way to do business.

No, it’s a new way to do business. A comparable analogy would be car factories where cars are built mostly by automated robots. The process has human oversight, but mostly just to make sure the robots don’t run wild. Finished cars are spot-checked to ensure quality, but that’s it. It’s not a bad way to do business, just different and somewhat new. Most businesses today wouldn’t be in business if they didn’t rely on some sort of automation, be it software applications, robotics, or otherwise.

They want the benefits of running a huge platform full of user generated content, but they can’t be bothered to make sure that their platform doesn’t screw over the users.

Well, human oversight is technically what screwed over the users in this case. Someone made a change and it impacted innocent users. Technically, the system was working fine before that change.

Would you use a vending machine if you knew there was a chance it would just take your money and not give you what you requested, and if that happened, there was nobody to contact to get your money back?

Not even remotely similar to what’s going on here. And there is someone to contact in this case to get resolution, as evidenced by the fact that they all got resolution and got their stuff back.

What if some piece of construction equipment accidentally created a huge pothole in the road, right at the end of your driveway and there was no way to contact the city and get it fixed?

Again, there was someone to contact, they did, and stuff got fixed. The fact that the people they contacted weren’t willing to listen until there was a big backlash is a completely separate issue.

What if the post office decided that you were some type of scammer and removed your address from their system so that you would no longer receive your mail and there was no appeal process?

Again, same as above. Your analogies are not the same and not relevant.

Rekrul says:

Re: Re: Re:2 Re:

Arent these free services? If so, I dont see why they should have to invest to monitor every tom dick and harry.

They are free for people to use, but they are how Google makes money. They want people to use their services, but when something goes wrong and one or more of those users gets screwed, they don’t want to be bothered trying to fix the problem.

They want their money making service to run on autopilot so that they don’t actually have to make any effort to deal with the people who are helping them make money.

Anonymous Coward says:

Re: Re:

Nothing says that this has to be done by bots.

Nothing, of course… except for sheer practicality and issues of scale.

Viacom, for one, insisted that videos they uploaded themselves infringed on their own copyright.

If even the original copyright holder can’t tell whether something uploaded violates their own copyright then what fucking chance does another human have?

Anonymous Anonymous Coward (profile) says:

Re: Re: Re:

Except those videos did not violate Viacom’s copyright, since Viacom was the one uploading them. The problem was that whatever said it was a violation (was it Content ID?) did not have all the facts, including the fact that the rights holder was the uploader.

Of course, if there was a database of all copyrighted material linked to the actual rights holders, and the uploaders were identified as the rights holders then this wouldn’t be an issue. So, where are those databases and the crosschecking software that denotes A=A?
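As a sketch of the crosscheck being asked for (entirely hypothetical Python; the point of the question is that no such registry exists):

# Hypothetical rights registry; every entry and name here is invented.
RIGHTS_DB = {
    "fingerprint:abc123": "viacom",
}

def claim_is_valid(fingerprint: str, claimant: str, uploader: str) -> bool:
    owner = RIGHTS_DB.get(fingerprint)
    if owner is None:
        return False  # work not registered, so nothing can be verified
    if owner == uploader:
        return False  # rights holder uploaded it themselves (the Viacom case)
    return owner == claimant

Even granting a perfect version of that lookup, it says nothing about fair use, which is where the reply below comes in.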

That One Guy (profile) says:

Re: Re: Re: Re:

Of course, if there was a database of all copyrighted material linked to the actual rights holders, and the uploaders were identified as the rights holders then this wouldn’t be an issue.

So long as you ignore that pesky ‘fair use’ bit, sure. Even if there was such a database, that accurately listed every copyrighted work, who owns it, who has a license to use it and how, you’d still be stuck with the problem of fair use providing false positives as people who didn’t own a work and didn’t have a license to use it would still be able to legally do so.

That One Guy (profile) says:

Re: Re: Re:3 Re:

Oh, no question, being able to at least make sure that only the owner[1] could make claims would cut down on a lot of bogus or outright fraudulent claims, assuming the system was accurate (false positives are of course a given, though it wouldn’t take much to beat the current systems). That would drastically lower the number of cases which even reach the ‘now, is it fair use?’ point. So on those grounds I could certainly see your suggestion being a good idea, if perhaps not exactly viable at the moment, given the sheer scope of works you’d be talking about and the… less than ideal filter systems we currently have.

[1]Or those legally under contract to make claims on their behalf, which would, ideally, not shield the original owner from liability should the second party screw up, to motivate accuracy in who they hire rather than just ‘who can file the most claims?’

PaulT (profile) says:

Re: Re: Re: Re:

"Except those videos did not violate Viacom’s copyright, since they were the ones uploading those videos. The problem was that whatever said (was it Content ID) i"

Lying, or just wrong, yet again?

Those videos had nothing to do with Content ID, they were named in the lawsuit by Viacom, and then removed when it was noticed that they had actually uploaded them.

"So, where are those databases and the crosschecking software that denotes A=A?"

It’s not possible for that to exist, unless you wish to go back to the days when people had to register each copyright they wished to hold rather than it being automatically assigned upon creation. Companies like Google would love that, because it would mean they are not held to any standard beyond "is this a match in the database", rather than the woolly shit they have to deal with now.

Anonymous Coward says:

Re: Re: Re:4 Re:

It wasn’t porn, and it wasn’t individuals who were sent a notice by me. It was a large, mass-piracy website which had changed the title, author’s name, and cover art so that people didn’t even realize it was my work.

I’m not a copyright troll, just a supporter of strong enforcement so that I don’t have to send DMCA notices in the first place.

Anonymous Coward says:

Re: Re: Re:2 Re:

No one is calling for that, moron. Nice strawman, though.

Objectionable content is a small minority of "all content on YouTube," and human review of material that gets flagged is possible.

Let the algorithms run and try to find things that are bad. Nothing wrong with that. What is wrong is letting an algorithm that lacks the competence to make the final judgment actually make that final judgment. Pass that along to a moderator and problems like these go away.
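As a sketch of what that division of labour might look like (a hypothetical design; nothing here reflects how YouTube actually works internally):

# Hypothetical flag-then-review pipeline; names and thresholds are invented.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Flag:
    video_id: str
    reason: str
    score: float  # classifier confidence in [0, 1]

review_queue: Queue = Queue()

def algorithm_pass(video_id: str, score: float, reason: str) -> None:
    """The bot may nominate videos for review, but never acts on them itself."""
    if score > 0.7:  # arbitrary, invented threshold
        review_queue.put(Flag(video_id, reason, score))

def human_decision(flag: Flag, violates_tos: bool) -> str:
    """Only the human reviewer's judgment results in an actual takedown."""
    return "removed" if violates_tos else "cleared"

# Usage: the bot flags a Pokemon GO video, a human clears the false positive.
algorithm_pass("video123", score=0.92, reason="keyword: CP")
print(human_decision(review_queue.get(), violates_tos=False))  # cleared

The whole point is the asymmetry: the algorithm is allowed to be trigger-happy precisely because it cannot pull the trigger.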

Stephen T. Stone (profile) says:

Re: Re: Re:5 Re:

Question: How can someone tell the difference between an “uncoördinated” report bombing (where people are reporting a video independently of each other because it breaks the TOS) and a “coördinated” report bombing (where people are reporting a video as part of a pre-arranged campaign to attack the person who posted the video)?

Anonymous Coward says:

Re: Re: Re:3 Re:

Objectionable content is a small minority of "all content on YouTube,"

Not according to the RIAA. And given that many copyright holders would rather fair use did not exist, under the assumption that fair use is not a defense against getting flagged, the amount of content considered objectionable under rightsholder definitions would increase immensely.

human review of material that gets flagged is possible

See above. Rightsholders regularly complain that the algorithms let content slip through the cracks anyway.

What is wrong is letting an algorithm that lacks the competence to make the final judgment actually make that final judgment

This is exactly what rightsholders have been demanding: an algorithm that doesn’t require human review to wipe target videos off the site. It’s called "notice and staydown", you’ve probably heard of it.

Pass that along to a moderator and problems like these go away.

Here’s the thing: such a system already exists! If it didn’t, HBO’s own site would have been nuked from orbit after HBO flagged it as a pirate site.

Eventually Google is going to get tired of putting up with, and burning resources trying to program around, the stupidity of copyright enforcement.

Anonymous Coward says:

So what sort of idiot set up these systems without realizing that acronyms can have multiple meanings, particularly in specialized jargon?

I work in the medical industry, and it’s not uncommon to have a discussion in which the rather unlikely combination of "D&D" and "MAGA" comes up, and neither of those means what an outsider might think they mean.

Max (profile) says:

It would certainly complete this article to mention that the whole hullabaloo started with a certain YouTuber publishing an appropriately enraged video about how YouTube keeps serving up seemingly innocent videos of kids, conveniently annotated (in the comments) with timestamps to semi-erotic moments, accidentally showing something kinda-sorta relevant to a paedophile. Allegedly you get a lot of these suggested once you “train” the YouTube algorithm, starting with the “right” search term and repeatedly following the “right” suggested material. The ensuing nuclear blast on anything remotely resembling the term “CP”, in any sense, is merely collateral damage…

Anonymous Anonymous Coward (profile) says:

Re: Re:

Isn’t the problem really that Content ID is not learning from its mistakes? One would think that each time Content ID makes an error, someone, or something (machine learning), would make some correction to the Content ID algorithm, which would lead one to believe that it would be getting better.

It isn’t, so there is no someone or something making corrections to the Content ID algorithm. Shame on Google. Then again, those corrections might make Content ID worse. Still, shame on Google. That each and every request for review isn’t sent to some third party with no vested interest to determine, at least initially (there could and should be follow-up proceedings in courts of law), whether something nefarious has taken place is yet another shame on Google.

PaulT (profile) says:

Re: Re: Re: Re:

Exactly. Anyone who thinks this is an easy task doesn’t understand the problem. Anyone who thinks that ContentID should or even can be perfect doesn’t understand how much of a mess copyright law is, and how much is covered by context that Google doesn’t have direct access to or cannot fully determine (for example, fair use which is both a defence to standard copyright claims and highly subjective in its application).

If it were merely a case that there’s a list of protected works and Google had to match against that list, this would not be a problem. But, it’s massively more complex than that, and people would rather blame Google than admit the entire copyright system is massively flawed at a fundamental level.

That One Guy (profile) says:

'This time'

Now, once the backlash ensued, Google got them their accounts back, but that simply isn’t good enough.

Especially when you consider that:

A) Pokemon is kinda popular, with lots of people talking about it.

and

B) Talking about Pokemon has a good chance of mentioning ‘CP’, it being part of the game.

These three got their accounts back after the matter went public, how many people were given the axe, and will be given the axe, who won’t be so lucky?

Anonymous Coward says:

If these youtubers...

If these youtubers just realised that discussing obvious codewords like cheese pizza is naturally gonna result in youtube sending gun-toting maniacs round to their place to free the kids from their nonexistent basement as part of its new automated fostaID system, they’d not get into this kind of trouble!

Rog S. says:

It’s corporate feminism 101

Empowering women and girls isn’t just a motto… it has real-world activity associated with it.

Women aren’t generally as hot and bothered by this as men are, and it is primarily women raising and encouraging these girls in the videos, as well as encouraging such entrepreneurial endeavors.

It’s men (and shitloads of neocons and gender lesbians) who view these videos and find them steamy, sexual, and offensive.

It’s amazing that in the year 2019, the mere sight of young girls doing what young girls do can still make (men, neocons, and gender lesbians) apoplectic, calling for censorship.

This guy Matt Watson is nearly Busting a Nut of Fury over it, he was so disturbed by viewing these girls:

https://m.youtube.com/watch?v=O13G5A5w5P0

Even more hilarious that we tolerate men ranting and raving over being demonetized by corporations who on one hand “empower women and girls,” and on the other hand… never mind…

I wonder what Freud would have to say about transference and projection in these absurdist internet dramas?

Oh, yeah, that’s right: internet influence operations and mass mind control are a “conspiracy theory.”

SA Church Lady says:

A primer in monetized feminism

Re: Empowering women and girls isn’t just a motto… it has real-world activity associated with it.

In feminist theory / Barthes-styled deconstruction, the focus is on the issue of “the gaze” of the viewer.

Male gaze v female gaze are purportedly different, with the male gaze being deviant and prurient, and the female gaze always pure and wholesome.

Even a cursory peek at feminist literature reveals that neither is true.

