Facebook's Post-Insurrection Purge Catches A Bunch Of Left Wing Accounts In Its AI Net

from the fine-deleted-accounts-on-both-sides dept

Facebook has often been accused of having an anti-conservative bias. But its efforts to clean up its platform following the January 6th attack on the Capitol building indicate it just has an ongoing (and probably unsolvable) moderation problem.

Shortly after the DC riot, Facebook announced it would be removing content containing certain slogans (like “stop the steal”) as an “emergency measure” to stop the spread of misinformation that might encourage similar election-based attacks on other government buildings.

It’s not clear what other switches were flipped during this post-riot moderation effort, but it appears groups diametrically opposed to Trump and his views were swept up in the purge.

Facebook said it had mistakenly removed a number of far-left political accounts, citing an “automation error”, triggering uproar from socialists who accused the social media platform of censorship.

Last week, the social media company took down a cluster of groups, pages and individuals involved in socialist politics without explanation. These included the Socialist Workers party and its student wing, the Socialist Worker Student Society in the UK, as well as the International Youth and Students for Social Equality chapter at the University of Michigan and the page of Genevieve Leigh, national secretary of the IYSSE in the US.

Moderation is tough even when it’s only hundreds of posts. Facebook is struggling to stay on top of billions of daily posts while also answering to dozens of governments and thousands of competing concerns about content. Moderation without automation simply isn’t an option at that scale. And that’s how things like this happen.

Granted, it’s a less-than-satisfying explanation for what went wrong. It doesn’t give anyone any assurance it won’t happen again. And it’s pretty much guaranteed to happen again, because it’s already happened before: activists associated with the Socialist Workers Party saw their accounts suspended and content deleted during another Facebook moderation effort in early December 2020.

Facebook has disabled the accounts of over 45 left wing activists and 15 Facebook pages in Britain. The individuals and pages are all socialist and left wing political activists and organisations who campaign against racism and climate change, and in solidarity with Palestine.

Facebook has given no reason for disabling the accounts, and has not given any genuine way of appealing what has happened.

The SWP was left to guess why these accounts and pages were targeted. One theory is that Facebook’s moderation was purging the site of pro-Palestinian content, which is sometimes linked to bigotry or terrorist activity. Or it could be that the AI was wary of any political posts dealing with sensitive subjects and began nuking content somewhat indiscriminately.

Or it could be part of a purge that began last August, when Facebook expanded its internal list of “Dangerous Individuals and Organizations.” Anything viewed by AI as “promoting violence” was fair game, even if context might have shown some targeted posts and groups were actually decrying violence and outing “dangerous” individuals/organizations. During that enforcement effort, Facebook took down left-wing pages, including some attempting to out white supremacists and neo-Nazis.

This probably was an automation error. And the automation will continue to improve. But if the automation isn’t backstopped by human moderators and usable options for challenging content removals, things like this will continue to happen, and on an ever-larger scale.



Comments on “Facebook's Post-Insurrection Purge Catches A Bunch Of Left Wing Accounts In Its AI Net”

Ninja (profile) says:

If moderation is basically impossible without collateral damage, the absence of it has shown its own huge problems over the last four years (and longer, for those who were paying attention). Damned if you moderate, damned if you don’t. And I’m not even talking about Facebook here; it’s society as a whole that is facing this dilemma. I’m inclined to say things have been worse without moderation. At least it is clear to me that lies and attacks on public health (anti-vax rhetoric, for instance) need to be contained as much as possible. What isn’t clear is how you do that without heavy casualties.

One option would be to err on the side of caution, hitting quite a bit of legitimate speech in the process, while providing reliable means of requesting human review. Thousands of bots won’t be able to request human review or interact with human reviewers, at least not now. This could also serve as some sort of "karma" system where an account that is punished by mistake becomes less likely to be flagged in future incidents: it becomes more and more trusted.

This idea has its flaws and I’m sure it could be improved. But my point is: yes, moderation at scale is impossible, but it needs to be done and will incur collateral damage. What can we do to lessen those problems?
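For what it’s worth, here is a minimal sketch of what that "karma" idea might look like in code. Everything in it (the trust score, the thresholds, the numbers) is hypothetical, just to make the mechanism concrete:

```python
# Hypothetical sketch of the "karma" idea above: accounts whose removals
# are overturned on appeal accumulate trust, and higher trust raises the
# confidence an automated flag needs before content is actually pulled.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    trust: float = 0.0  # grows each time an appeal succeeds

BASE_THRESHOLD = 0.80  # classifier confidence needed to remove content
TRUST_DISCOUNT = 0.05  # each successful appeal makes removal harder

def removal_threshold(account: Account) -> float:
    # Trusted accounts require near-certainty before automated removal.
    return min(0.99, BASE_THRESHOLD + account.trust * TRUST_DISCOUNT)

def should_remove(account: Account, classifier_confidence: float) -> bool:
    return classifier_confidence >= removal_threshold(account)

def record_successful_appeal(account: Account) -> None:
    # A human reviewer overturned the automated flag: the account was
    # punished by mistake, so make future false positives less likely.
    account.trust += 1.0

# Example: an account with two overturned takedowns survives a flag
# that would have removed a brand-new account's post.
acct = Account("swp_page")
record_successful_appeal(acct)
record_successful_appeal(acct)
print(should_remove(acct, 0.85))  # False: threshold is now 0.90
```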

Upstream (profile) says:

Re: Re:

I think the best solution is to basically avoid the "moderation at scale" problem entirely and go with the "protocols, not platforms" idea that Mike Masnick has been promoting for quite some time. The issue then becomes: how is this best accomplished? It would be a difficult prospect in the best of situations, but even more so in the face of entrenched interests like Facebook, Twitter, et al., who have made it quite clear that they will stop at nothing to maintain their market positions.

Anonymous Coward says:

Re: Re: Re:2 Re:

The point would be that the protocol doesn’t moderate – the user does. You can look at email for an example: there are a ton of companies that offer anti-spam services (Proofpoint, for one), and of course Outlook and Gmail use their own built-in filters as well.

Layers of moderation that the user chooses and controls, built on top of an open protocol.
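To make that concrete, here is a rough sketch of the idea: the protocol delivers everything, and moderation is just a stack of filters the user picks. The layer names here (spam_service, keyword_mute) are made up for illustration:

```python
# A minimal sketch of "layers of moderation the user chooses": each layer
# is just a predicate the user stacks on top of an open protocol's raw
# feed. These layers are hypothetical stand-ins, not a real service's API.
from typing import Callable, Iterable

Post = dict  # e.g. {"author": ..., "text": ...}
Filter = Callable[[Post], bool]  # True = keep the post

def spam_service(post: Post) -> bool:
    # Stand-in for a third-party anti-spam provider, Proofpoint-style.
    return "viagra" not in post["text"].lower()

def keyword_mute(*words: str) -> Filter:
    # A user-defined mute list, like an email client rule.
    def layer(post: Post) -> bool:
        text = post["text"].lower()
        return not any(w in text for w in words)
    return layer

def apply_layers(feed: Iterable[Post], layers: list[Filter]) -> list[Post]:
    # A post survives only if every layer the user chose keeps it.
    return [p for p in feed if all(layer(p) for layer in layers)]

# The user, not the protocol, decides which layers run and in what order.
my_layers = [spam_service, keyword_mute("crypto giveaway")]
feed = [{"author": "a", "text": "Cheap viagra!!"},
        {"author": "b", "text": "Meeting notes attached"}]
print(apply_layers(feed, my_layers))  # only the meeting-notes post survives
```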

Anonymous Coward says:

Re: Re: Re:3 Re:

I sent an email to you, but you didn’t reply.

Did it get caught in your spam filter? Did it get automatically deleted?

Did your spam filter trigger because I used the word "Palestine" or "hate speech"? Because I included a link to a 5G COVID conspiracy page, or perhaps because of the one to the insurrection news aggregation site?

… The user can choose layers of (automated) moderation and be the "human backup" to the moderation instead of, e.g., Twitter or Facebook. But you’ve merely taken on all the problems the service had without any of the benefits… and you may well have arranged things so that you never see the moderation errors, and thus cannot correct them.

Stephen T. Stone (profile) says:

Re: Re: Re:

I also think protocols, but even then, how do you moderate?

Mastodon manages it well enough. In addition to moderation controls on a given instance, said instance can also choose whether it will federate with any other instances. (And those other instances can, in turn, choose to federate with the initial instance.) In this way, an instance known for what the broader Fediverse considers “bad behavior” (e.g., supporting fascists, not moderating racist speech) can be “defederated” from the “behaved” instances. Those defederated instances will still exist, sure. But nobody will have to interact with them unless they choose to. (Assuming the instance you’re on didn’t fully silence the “bad” instance, anyway.) And that’s without getting into the per-post privacy and “sensitive content” settings.

No moderation is perfect. Even the “behaved” Masto instances get it wrong every now and again. That said: Masto moderation works well enough that it can be a starting point for discussion of new ideas.
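For readers unfamiliar with Mastodon, a toy sketch of the federate/silence/suspend policy logic described above might help. This is not Mastodon’s actual code, just the general shape of the idea:

```python
# A toy sketch of the defederation idea: an instance keeps its own lists
# of silenced and suspended peers and consults them before showing
# content. The class and method names here are hypothetical.
from enum import Enum

class Policy(Enum):
    FEDERATE = "federate"  # full interaction (the default)
    SILENCE = "silence"    # hidden from public timelines, still followable
    SUSPEND = "suspend"    # no interaction at all ("defederated")

class Instance:
    def __init__(self, domain: str):
        self.domain = domain
        self.policies: dict[str, Policy] = {}  # peer domain -> policy

    def set_policy(self, peer: str, policy: Policy) -> None:
        self.policies[peer] = policy

    def accepts_post_from(self, peer: str) -> bool:
        return self.policies.get(peer, Policy.FEDERATE) is not Policy.SUSPEND

    def shows_on_public_timeline(self, peer: str) -> bool:
        return self.policies.get(peer, Policy.FEDERATE) is Policy.FEDERATE

home = Instance("example.social")
home.set_policy("badly-behaved.example", Policy.SUSPEND)
home.set_policy("noisy.example", Policy.SILENCE)

print(home.accepts_post_from("badly-behaved.example"))  # False: defederated
print(home.shows_on_public_timeline("noisy.example"))   # False: silenced
print(home.accepts_post_from("friendly.example"))       # True: default
```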

Anonymous Coward says:

Re: Re:

One idea: provide a way to ask for review. The lack of any avenue for appeal is frequently the bigger problem.

On the other end, people need to get used to things being taken down, at least until they appeal. It would help if providers were more open about "hey, automation (and people) can screw up" and provided a method for appeal other than "we got hammered about this in enough places on the internet, maybe even in the news, that we actually noticed."
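A sketch of what that appeal path could look like, with every name and detail hypothetical: an automated takedown leaves a record, and an appeal guarantees the record reaches a human queue rather than a void:

```python
# Hypothetical sketch of the appeal path asked for above: every
# automated takedown gets a record, and an appeal routes it to a
# human-review queue instead of disappearing.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Takedown:
    post_id: str
    reason: str  # what the automation flagged
    appealed: bool = False
    overturned: bool = False

@dataclass
class ModerationSystem:
    takedowns: dict = field(default_factory=dict)
    review_queue: deque = field(default_factory=deque)

    def automated_takedown(self, post_id: str, reason: str) -> None:
        self.takedowns[post_id] = Takedown(post_id, reason)

    def appeal(self, post_id: str) -> None:
        td = self.takedowns[post_id]
        td.appealed = True
        self.review_queue.append(td)  # guaranteed human eyes, eventually

    def human_review(self) -> None:
        td = self.review_queue.popleft()
        # A human checks context the classifier lacked; for this toy
        # example we pretend every appealed flag was a false positive.
        td.overturned = True

mod = ModerationSystem()
mod.automated_takedown("post-1", "dangerous org")
mod.appeal("post-1")
mod.human_review()
print(mod.takedowns["post-1"].overturned)  # True: restored after review
```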

Anonymous Coward says:

Mindless algorithms are reasonably good at keeping the blue pill ads and Nigerian princes out of our email inboxes. They can just barely keep teenagers from typing the F-word in videogame chat…sometimes. For anything requiring consideration of context and nuance, they’re garbage.

It’s simply not possible to filter out all the bad speech from the world (even if people would agree on what speech is bad to begin with, which they won’t) without indiscriminately nuking every mention of a controversial topic.
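The comment’s point can be made concrete with a few lines of hypothetical filter code: a context-blind keyword match can’t tell promotion from condemnation, so it nukes every mention of the topic, including the article’s own "stop the steal" example:

```python
# The comment's point, made concrete: a context-blind keyword filter
# cannot tell promotion from condemnation -- both mention the same words.
BANNED = {"stop the steal"}

def keyword_filter(text: str) -> bool:
    """Return True if the post should be removed."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BANNED)

promotion = "Join us tomorrow. Stop the Steal!"
condemnation = 'Debunking the "Stop the Steal" lie, point by point.'

print(keyword_filter(promotion))     # True: correctly removed
print(keyword_filter(condemnation))  # True: false positive -- the filter
                                     # nukes every mention of the topic
```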

Anonymous Coward says:

Re: Re:

All programming problems depend on what you’re trying to define. It’s easy to define "bad speech" as "what we don’t want to see," as with spam filters – go to any startup board and you’ll see salty spammers complaining about Google blocking their "business emails." I suspect Facebook and Twitter would be more pleasant to use if you could train them like a spam filter: see a post, decide you don’t like it, flag it. That would by definition lead to a massive echo chamber that only tells you what you want to hear.

That nobody can define "bad speech" even remotely clearly, and yet there is so much agreement that some universal standard should exist, should fill you with suspicion – it implies the whole exercise is complete bullshit, telling people whatever flatters them.
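As a sketch of the "train it like a spam filter" idea (and of why it becomes an echo chamber), here is a hypothetical per-user filter that hides posts resembling ones the user has flagged:

```python
# Hypothetical sketch of "train it like a spam filter": a per-user word
# tally from posts the user flagged, used to hide similar posts. This is
# exactly the echo-chamber machine the comment warns about.
from collections import Counter

class PersonalFilter:
    def __init__(self):
        self.disliked = Counter()  # word counts from flagged posts
        self.seen = Counter()      # word counts from all posts

    def observe(self, text: str, flagged: bool) -> None:
        words = text.lower().split()
        self.seen.update(words)
        if flagged:
            self.disliked.update(words)

    def hide(self, text: str, threshold: float = 0.5) -> bool:
        words = text.lower().split()
        # Average per-word rate at which the user flags posts like this.
        scores = [self.disliked[w] / self.seen[w] for w in words if self.seen[w]]
        return bool(scores) and sum(scores) / len(scores) > threshold

f = PersonalFilter()
f.observe("politics politics argument", flagged=True)
f.observe("cat pictures weekend", flagged=False)
print(f.hide("politics argument again"))  # True: resembles flagged posts
print(f.hide("more cat pictures"))        # False: resembles kept posts
```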
