Once Again: Content Moderation Often Mistakes Reporting On Bad Behavior With Celebrating Bad Behavior

from the and-it-will-always-do-so dept

On Monday, the Twitter account Right Wing Watch — which is famous for highlighting some of the nuttier nonsense said by Republicans — announced that its YouTube account had been permanently banned.

As you can see, that ban was initially put in place over a claim that the videos violated YouTube’s Community Guidelines. RWW appealed, and was told that YouTube had “decided to keep your account suspended” even after the appeal.

This sort of thing happens all the time, of course. For over a decade, we’ve highlighted how demands that social media take down “terrorist” content have resulted in companies shutting down accounts that tracked evidence of war crimes. Because the very same videos that might serve as terrorist propaganda can also serve as an archive and evidence of war crimes.

In short, context matters, and that context goes way beyond the content of a video.

And this seems to be the same sort of case. Lots of people (including, somewhat ironically, the Right Wing Watch account itself) have been demanding that social media websites be more aggressive in moderating the accounts of conspiracy theorists and propagandists peddling nonsense about elections and the pandemic and the like. But, in highlighting the examples of extremists promoting that nonsense, RWW is showing the same content itself.

Not surprisingly, after this story started going viral, YouTube said it had been a mistake and reinstated the account:

“Right Wing Watch’s YouTube channel was mistakenly suspended, but upon further review, has now been reinstated,” a YouTube spokesperson told The Daily Beast on Monday afternoon. The social-media site also suggested that the issue was a mistake due to high volume of content and that they attempted to move quickly to undo the ban.

Right Wing Watch also confirmed that YouTube informed the site on Monday afternoon that their channel was back online.

“We are glad that by reinstating our account, YouTube recognizes our position that there is a world of difference between reporting on offensive activities and committing them,” Right Wing Watch director Adele Stan said in a statement after the reinstatement. “Without the ability to accurately portray dangerous behavior, meaningful journalism and public education about that behavior would cease to exist.”

And, indeed, it is true that there is a world of difference, but the important point is that it’s not easy to tell that difference when you’re a content moderation reviewer just looking at the content. Reviewers won’t have the context, and it’s almost impossible to get them the proper context in an easy-to-understand manner. Someone not familiar with the RWW account is not going to understand what it’s doing without understanding the much wider context in which that account operates.

And, this is just one of many, many, many reasons why content moderation at scale is impossible to do well.

Companies: youtube


Comments on “Once Again: Content Moderation Often Mistakes Reporting On Bad Behavior With Celebrating Bad Behavior”

64 Comments
That One Guy says:

Re: Re: This makes me laugh...

That seems to be one of the newer ‘conservative’ arguments, that fact-checking someone is no different than stopping them from talking entirely.

Gotta say, as arguments go it’s a strange one. I mean, why would a person/group be so against fact-checking what they say? It’s not like the truth has anything to fear from someone verifying it or anything…

Anonymous says:

Re: Re: Re: Re: This makes me laugh...

They were actively calling for speakers to be deplatformed.

Koby: We conservatives all need to gather our 2-A arms and ammo, attack the capitol, and kill all the libtards.

RWW: Here’s conservative Koby actively encouraging a violent attack on our capitol, maybe <insert social media company here> should suspend his account.

Can you see the difference in what RWW is doing by calling for others to be "censored" vs actually being "censored?"

If you can not see the difference, then maybe, just maybe, the problem is with you.

sumgai says:

Re: Re: Re: Re: Re: This makes me laugh...

If you can not see the difference, then maybe, just maybe, the problem is with you.

That’s pretty large of you, giving him that undeserved benefit of the doubt.

In point of fact, Koby has always been capable of rational thought; he’s just being obstinate for his own personal shits and grins. Personally, I consider him to be our token Demosthenes, going up against a collective Locke. And failing at it as we should expect. But he keeps coming back, so either he has a damnably thick skin, an even thicker skull, or he’s really a high-bandwidth individual on par with any other malicious genius you might encounter in this wide, wide world. (You can pick any two, but not all three.)

sumgai says:

Re: Re: Re: Re: Re: Re: Re: This makes me laugh...

According to my readings, Locke was the mild-mannered one who countered the "more strident" speech of Demosthenes. Both were very well reasoned, but Demosthenes seemed to go for the "shout ’em down" approach.

Koby is hoping to keep hammering at us, and that we’ll eventually give up out of sheer frustration. Were I him, I wouldn’t take much of that to the bank. 😉

Scary Devil Monastery says:

Re: Re: Re: Re: Re: Re: Re: Re: This makes me laugh...

"Koby is hoping to keep hammering at us, and that we’ll eventually give up out of sheer frustration."

Well, if he thinks he’ll succeed where Baghdad Bob failed after ten+ years then he’s welcome to try. The only thing he really succeeds at is giving us all advance warning of the bullshit peddled by the alt-right.

Scary Devil Monastery says:

Re: This makes me laugh...

"Censors get censored."

Fact-checking is today, in the eyes of the alt-right, "censorship", apparently.

When that’s what you lead with, whatever else you had to say can safely be ignored as nonsense.

I guess you’re letting your own side off the hook because their "fact-checking" isn’t based on actual facts?

Anonymous says:

Once again, "But for the bad publicity being generated" RWW would not have been reinstated. They even lampshaded it: "We get our stuff from these other accounts. Ban us, ban them too."

Sure, content moderation at scale is difficult. But when the court of public appeal serves as the Supreme Court of Moderation, it means that it wasn’t just the moderators that failed, but the appeals judges fell down on their job too.

Anonymous says:

Re:

But when the court of public appeal serves as the Supreme Court of Moderation, it means that it wasn’t just the moderators that failed, but the appeals judges fell down on their job too.

Welp, you can always go create your own fucking community, with its own fucking standards, and post all the fucking bullshit you want. There’s nothing stopping you guys apart from wanting to force everyone else to listen.

Go where people actually want to hear what you have to say. You really should consider that some folks don’t find that what you nuts are peddling is worth watching.

I don’t go to a Trump rally to spread my word to those simple minded rubes. Why don’t you self-entitled jerks reciprocate?

Koby says:

Unqualified to Moderate

Similar to the problem of expressing sarcasm in written internet forums, it appears more and more that platforms are struggling with the "context" problem. Someone merely retelling a story of what someone else did, in an effort to bring others up to speed, is viewed as offering support and approval of the original event, and perhaps even equated with performing the original event. At least in the eyes of the platform moderation system.

Combined with the explosion of content, it seems to me that no one outside of a sub-community is in a position to understand the context, short of hiring full-time employees specifically assigned to gadfly particular subgroups. A daunting task that automated systems will probably never be able to understand.

Anonymous says:

Re: Unqualified to Moderate

Combined with the explosion of content, it seems to me that no one outside of a sub-community is in a position to understand the context, short of hiring full-time employees specifically assigned to gadfly particular subgroups

What you mean is that only people who buy into your bullshit should be allowed to decide whether it stays up or down.

Listen Koby, I’m guessing no one told you, or the whiny fucks like you, that life just isn’t fair. There’s no such thing as ‘fair and balanced’ with respect to facts, and I see no reason to entertain or give equal time to the functional equivalent of ‘flat-earthers’ when I know goddamn well that the fucking planet is round.

ECA says:

Translation, interpretation

Understanding what is said versus what is meant, and by whom it is said.
It’s like a liar telling the truth and no one believing them.
Then the person we subscribe to for the truth lies.
The hard thing to see is when something created gets built over many times.
Like burying the truth: little things on top eventually hide it and make things harder and harder to see.
How many specks on a window until you can’t see out of it?

Anonymous says:

Easy Ban

Come on Mike, that was an easy one.

Considering there is this major "anti-conservative" bias in moderation on social media, and the fact that their account had the words "Right Wing" in its title, it was a no-brainer that they should be "censored."

It’s like you haven’t been following along this whole time.

amirite?

Samuel Abram says:

Re: Easy Ban

Conservative: I have been censored for my conservative views
Me: Holy shit! You were censored for wanting lower taxes?
Con: LOL no... no, not those views
Me: So... deregulation?
Con: Haha no not those views either
Me: Which views, exactly?
Con: Oh, you know the ones

(All credit to Twitter user @ndrew_lawrence.)

ECA says:

Re: Easy Ban

Well, let’s see.
Blame the president? Wrong person; go look up your state and federal representatives FIRST.
Have you told them your opinion? Did they ever reply?
Did you add reasoning and facts to your comments?

Deregulation means?
8-hour work days, GONE.
Overtime, GONE.
Social Security, GONE.
OSHA, GONE.
Minimum wage, GONE.
These are all social constructs built by the Fed. Want to get rid of a few? GO FOR IT, IN YOUR STATE FIRST. Watch all the manufacturing come over and treat you like 1920s workers: 16-hour days, 7 days per week. AND OSHA can’t help you.

Scary Devil Monastery says:

Re: Easy Ban

"Considering there is this major "anti-bigot" bias in moderation on social media…"

Fixed That For You.

I guess we’ll just have to remind the shitwit brigade of the "alt-right" that being a nazi, racist or bigot doesn’t mean you’re a "conservative". It just means you’re an asshole not welcome in any major social groups.

But hey, there’s an easier and more appropriate answer to you people getting thrown out of social platforms than insisting the government should remove the property owner’s ability to evict you for being an asshole.
Just stop being an asshole. It really is that simple.

And if that’s beyond you then you just need to understand that the people you’re trying to be an asshole to aren’t obligated to host you.

That One Guy says:

Re:

As the article itself notes, there’s just a bit of a difference between someone posting content with the goal of presenting it as a good thing and someone posting it to point out why it’s very much not a good thing.

If someone posts a video talking about the health benefits of drinking a refreshing glass of bleach every morning, and I post a clip from that video with a comment telling people to absolutely not do that, the video may be the same, but the context means the intended message is anything but.

Anonymous says:

Re: Re:

But as the article also notes, the moderation algorithms can’t parse that context, so without human review the choice is between banning it all or not.

Or I suppose they could insert a warning to absolutely not do that, if that’s what would make all the difference. Like Twitter used to do to Trump’s lies.

Stephen T. Stone says:

It needs to be exposed in the context of why it’s dangerous. Someone saying “gays are abominations” in and of itself isn’t dangerous per se, but the actions that belief can justify are often harmful to gay people. Unless you’re on the side of people who want the right to harm gay people without consequence or remorse, exposing that speech and explaining its potential for harm is a goddamn good thing to do.

PaulT says:

Re:

Ah, so the current tactic is to pretend that context doesn’t matter?

You should talk to the History Channel right now, since obviously all their documentaries about World War 2 are promoting Nazism since they include all those shots of Hitler. Then go for all the news channels, since obviously when they were showing footage of 9/11 they were promoting terrorism, by your standard.

Scary Devil Monastery says:

Re:

"Bit of a contradiction they have going on there. They demand the deletion of this content, because they say it’s dangerous, yet post copies of it themselves to expose it."

No more so than that history books can describe world war 2 in detail without necessarily calling for another Holocaust. If you see this as a contradiction then I’m compelled to point out the handle of a 12-lb sledge hammer sticking out of the shattered remains of your sense of logic.

Rekrul says:

"Right Wing Watch’s YouTube channel was banned by the AI that now runs Google. Said AI also denied their appeal. We were perfectly fine with this and we really don’t care what the AI does as long as we don’t have to actually do any work. However, once the story went viral, we poked one of our interns and told them to go fix this before we get any more bad publicity. Rest assured that we will make zero changes because of this and it’s certain to happen again in the future because we can’t be bothered to pay humans to actually get involved in such things, unless it’s making us look bad on a national level."

Anonymous says:

Re:

and it’s certain to happen again in the future because we can’t be bothered to pay humans to actually get involved in such things,

And that tells me that you do not understand the problems of scale when moderating a site that has a significant part of the human race posting on it. The examples that make the news are easy to decide, once you have some associated context. However, as part of thousands of decisions made per hour, without context, getting it right becomes much more difficult, and needs algorithms to deal with the problem.

Insisting on human review of appeals is the same as insisting on human moderation, and calling for smaller sites is not a solution, because the same decisions have to be made, and they would require even more effort because of uneven time spreads and volume variations on every site.

Rekrul says:

Re: Re:

And that tells me that you do not understand the problems of scale when moderating a site that has a significant part of the human race posting on it.

I understand that effectively managing and moderating a huge business requires significant investment, and I also understand that Google doesn’t even want to try.

Insisting on human review of appeals is the same as insisting on human moderation

Google won’t even take the bare minimum steps to improve their process.

A couple years ago, out of the blue, I received an email saying that I had been banned from posting comments on YouTube due to violating their community guidelines regarding spam/advertising. I have NEVER posted any kind of ad, nor have I ever posted spam. What triggered this? I have no idea, because they didn’t bother to reference which comments they were referring to. The only thing I can figure is that the night before, I had posted comments on about 10-15 videos from the same channel. I had just stumbled across the channel (retro games), watched a bunch of the videos and posted my nostalgic thoughts on some of them. Nothing controversial or inflammatory, just talking about old games from the 80s. All the comments were unique and on-topic for the video they were posted to.

How hard would it be for Google to add a feature where when a complaint is lodged or the spam filter triggered, links to the offending comments are logged and sent to the user if official action is taken? It would be all automated. The notification email would say "You have been banned because of these comments…" and then you would know exactly what they had a problem with. But they can’t be bothered to do that.
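The notification flow described above, logging which comments triggered an enforcement action and including links to them in the ban email, really would be simple to automate. Here is a minimal sketch of that idea; every name in it (`ModerationAction`, `build_notification`, the example URLs) is invented for illustration and does not correspond to any real YouTube or Google API:

```python
# Hypothetical sketch: when automated moderation acts on an account,
# record which comments triggered the action and list links to them
# in the notification email, so the user knows what was flagged.
from dataclasses import dataclass, field

@dataclass
class ModerationAction:
    user_id: str
    reason: str                              # e.g. "spam/advertising"
    comment_urls: list = field(default_factory=list)

def build_notification(action: ModerationAction) -> str:
    """Compose the ban email, citing the comments that triggered it."""
    lines = [
        f"Your account was suspended for violating our {action.reason} policy.",
        "The following comments were flagged:",
    ]
    lines += [f"  - {url}" for url in action.comment_urls]
    lines.append("You may appeal this decision.")
    return "\n".join(lines)

action = ModerationAction(
    user_id="example_user",
    reason="spam/advertising",
    comment_urls=[
        "https://example.com/comment/123",
        "https://example.com/comment/456",
    ],
)
print(build_notification(action))
```

The point of the sketch is that the extra cost is just carrying the already-known comment IDs through to the email template; no human review is required for this step.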

In my case, I filed an appeal denying that I had posted any spam, and naturally it was denied. I posted on the help forum and someone there said that they couldn’t make any promises, but that they would "ping" a Google employee about this. A week later, I got an email saying that upon closer inspection, they decided that I did not violate their community standards and my account was reinstated. No apology, no explanation of what triggered it, nothing.

Another thing, when this happened, I went to my channel’s dashboard and it said I had NO strikes for either copyright (I’ve never posted videos publicly) or violating community guidelines.

So for some mysterious reason I got permanently banned even though I had done nothing wrong and I had no strikes of any kind, my appeal was denied and the ban would have been permanent if someone hadn’t agreed to bring it to Google’s attention.

If Google wants the benefits of running hugely popular sites, they should be willing to invest the time and money into making sure that they’re managed effectively, rather than just letting AI make all the decisions and only getting involved when there’s negative publicity.

Anonymous says:

Re: Re: Re:

I understand that effectively managing and moderating a huge business requires significant investment, and I also understand that Google doesn’t even want to try.

How many people, with a common approach to moderation, would be required to moderate all conversations in all bars and cafes in the U.S.? That is near the scale of the problem of moderating YouTube.

The bit of YouTube that you can look at is a cup of sand from the beach of sand that is YouTube. While you look at a grain of sand, the YouTube moderation effort has a truckload of sand to look at.

Rekrul says:

Re: Re: Re: Re:

How many people, with a common approach to moderation, would be required to moderate all conversations in all bars and cafes in the U.S.? That is near the scale of the problem of moderating YouTube.

Now imagine if all bars and cafes were completely automated so that when you had a legitimate problem, there was nobody you could complain to.

Say you get food poisoning at a cafe from bad seafood. You fill out the automated complaint form demanding compensation for your time and suffering, and it comes back: "After careful review we have determined that your food was fine. Denied." And that’s the end of your options, unless you can get national media attention and make the company look bad.

When you run a business that interacts with the public, you have to actually deal with the public on occasion.

PaulT says:

Re: Re: Re: Re: Re:

That’s not really a good comparison. First of all, food poisoning is way less common than copyright-related complaints, and it really is in the venue’s best interests to deal with. It’s a very serious issue that could potentially lead to serious illness or even death. If your venue gets a single one of these cases, you may well need to look into your practices (and the government will look into them if you fail to take proper precautions), because it’s a fundamentally damaging thing to your business. If a restaurant gets a reputation for people getting sick when they eat there, it won’t have many customers.

On the other hand, copyright violations are not something a social media platform is concerned about on a business level. YouTube is bombarded with huge numbers of complaints every day, some fake, some not, and it’s impossible to hire enough people to deal with them to any degree of accuracy, so they have to automate it. People will come to YouTube whether or not complaints are handled accurately, and nobody’s placed in danger except for a few poor souls who have decided to base their entire business around that single service supplier and are having problems with complaints.

They could be better at dealing with complaints that need more than an automated process to properly examine, but there’s absolutely no basis for comparing them to a single-venue restaurant.

sumgai says:

Once Again: Content Moderation Often Mistakes Reporting On Bad Behavior With Celebrating Bad Behavior

Ouch! That made my head explode. I’d’ve rather the headline read:

Once Again: Content Moderation Often Mistakenly Equates Reporting On Bad Behavior With Celebrating Bad Behavior

Much less time to parse and understand the latter iteration, doncha think?

Anonymous says:

The social-media site also suggested that the issue was a mistake due to high volume of content and that they attempted to move quickly to undo the ban.

Well, someone slipped up and exposed one of the variables measured by the banning/review algorithms. Whoever said that will probably lose their job or be relegated to a non-public position. One of the greatest sins in content moderation is to let anyone see behind the curtain.

techflaws says:

And, indeed, it is true that there is a world of difference, but the important point is that it’s not easy to tell that difference when you’re a content moderation reviewer just looking at the content.

But we can certainly rest assured the moderators who banned RWW also banned each and all referenced media from that channel. Right?

