Militias Still Recruiting On Facebook Demonstrates The Impossibility Of Content Moderation At Scale

from the people-will-always-find-a-way dept

Yesterday, in a (deliberately, I assume) well-timed release, the Tech Transparency Project (TTP) released a report entitled Facebook's Militia Mess, detailing how tons of "militia groups" are still organizing on the platform (first spotted via a report on Buzzfeed). You may recall that, just days after the insurrection at the Capitol, Facebook COO Sheryl Sandberg made the extremely disingenuous claim that only Facebook had the smarts to stop these groups, and that most of the organizing of the Capitol insurrection must have happened elsewhere. Multiple reports debunked that claim, and this new one takes it even further, showing that (1) these groups are still organizing on Facebook, and (2) Facebook's recommendation algorithm is still pushing people toward them:

TTP identified 201 Facebook militia pages and 13 groups that were active on the platform as of March 18. These included "DFW Beacon Unit" in Dallas-Fort Worth, Texas, which describes itself as a "legitimate militia" and posted March 21 about a training session; "Central Kentucky Freedom Fighters," whose Facebook page posts near-daily content about government infringing on people's rights; and the "New River Militia" in North Carolina, which posted about the need to "wake up the other lions" two days after the Capitol riot.

Strikingly, about 70% (140) of the Facebook pages identified by TTP had "militia" in their name. That's a hard-to-miss affiliation, especially for a company that says its artificial intelligence systems are successfully detecting and removing policy violations like hate speech and terrorist content.

In addition, the TTP investigation found 31 militia-related profiles, which display their militia sympathies through their names, logos, patches, posts, or recruiting efforts. In more than half the cases (20), the profiles had the word "militia" in their name.

And, this stuff certainly doesn’t look great:

Facebook is not just missing militia content. It's also, in some cases, creating it.

About 17 percent of the militia pages identified by TTP (34) were actually auto-generated by Facebook, most of them with the word "militia" in their names. This has been a recurring issue with Facebook. A TTP investigation in May 2020 found that Facebook had auto-generated business pages for white supremacist groups.

Auto-generated pages are not managed by an administrator, but they can still play a role in amplifying extremist views. For example, if a Facebook user "likes" one of these pages, the page gets added to the "about" section of the user's profile, giving it more visibility. This can also serve as a signal to potential recruiters about pro-militia sympathies.

Meanwhile, Facebook's recommendation algorithm is pushing users who "like" militia pages toward other militia content.

When TTP "liked" the page for "Wo Co Militia," Facebook recommended a page called "Arkansas Intelligent citizen," which features a large Three Percenter logo as the page header. (The "history" section in the page transparency shows that it was previously named "3%ERS - Arkansas.")

Of course, this appears to stand in strong contrast with what Facebook itself is claiming. In Mark Zuckerberg's testimony today before Congress on dealing with disinformation, he again suggests that Facebook has an "industry-leading" approach to dealing with this kind of content:

We remove Groups that represent QAnon, even if they contain no violent content. And we do not allow militarized social movements, such as militias or groups that support and organize violent acts amid protests, to have a presence on our platform. In addition, last year we temporarily stopped recommending US civic or political Groups, and earlier this year we announced that policy would be kept in place and expanded globally. We've instituted a recommendation waiting period for new Groups so that our systems can monitor the quality of the content in the Group before determining whether the Group should be recommended to people. And we limit the number of Group invites a person can send in a single day, which can help reduce the spread of harmful content from violating Groups.

We also take action to prevent people who repeatedly violate our Community Standards from creating new Groups. Our recidivism policy stops the administrators of a previously removed Group from creating another Group similar to the one removed, and an administrator or moderator who has had Groups taken down for policy violations cannot create any new Groups for a period of time. Posts from members who have violated any Community Standards in a Group must be approved by an administrator or moderator for 30 days following the violation. If administrators or moderators repeatedly approve posts that violate our Community Standards, we'll remove the Group.

Our enforcement effort in Groups demonstrates our commitment to keeping content that violates these policies off the platform. In September, we shared that over the previous year we removed about 1.5 million pieces of content in Groups for violating our policies on organized hate, 91 percent of which we found proactively. We also removed about 12 million pieces of content in Groups for violating our policies on hate speech, 87 percent of which we found proactively. When it comes to Groups themselves, we will remove an entire Group if it repeatedly breaks our rules or if it was set up with the intent to violate our standards. We took down more than one million Groups for violating our policies in that same time period.

So, on the one hand, you have a report finding these kinds of groups still on the site, despite apparently being banned. And, on the other hand, you have Facebook talking about all of the proactive measures it's taken to deal with these groups. Both of them are telling the truth, and that is exactly what highlights the impossibility of doing content moderation well at this scale.

First, note the scale of the issue. Zuckerberg notes that Facebook has removed more than one million groups. The TTP found 13 militia groups and 201 militia pages. At Facebook's scale, some things that should be removed are always going to slip through. Some might argue that if the TTP could find these pages, then clearly Facebook could as well. But that raises two separate issues. First, what exactly are they looking for? There are so many things that could violate policies that I'm sure Facebook trust & safety folks are constantly running searches like these, but just because they don't run the exact same searches as the TTP, it doesn't mean they're not looking for this stuff. Indeed, one could argue that finding just 13 such groups is pretty good.
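
To put rough numbers on that point (treating Zuckerberg's one-million figure and TTP's militia count as comparable categories, which is itself a generous simplification), here's a quick back-of-the-envelope sketch:

    # Back-of-the-envelope ratio using the two figures cited above; this is an
    # illustration of scale, not an audit of Facebook's actual enforcement.
    removed_groups = 1_000_000   # "more than one million Groups" removed, per Zuckerberg
    missed_groups = 13           # militia groups TTP found still active

    residual_rate = missed_groups / (removed_groups + missed_groups)
    print(f"Share of known-bad groups still up: {residual_rate:.4%}")  # roughly 0.0013%

Even if the true miss rate were ten or a hundred times that naive ratio, it would still be a tiny fraction of what was caught.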

On top of that, what exactly is the policy violation? Facebook says that it bans militia groups "that support and organize violent acts amid protests." But that doesn't mean every group that refers to itself as a "militia" is going to violate those policies; you can easily see how many might not. And assuming these groups recognize how Facebook has been cracking down, it's quite likely that many will simply try to "hide" behind other language to make themselves more difficult for Facebook to find. Indeed, the TTP report points to one example of a "militia" group saying it needs to change the name of the group; in that example, it was actually local law enforcement that suggested changing the name.

So, there's always going to be some element of cat-and-mouse in these kinds of things, and some level of subjectivity in determining whether a group is actually violating Facebook's policies or not. It's easy to play a "gotcha" game and find groups like this, but that's because, at scale, it's impossible to be correct 100% of the time. Indeed, it's also quite likely that these enforcement efforts over-blocked in some cases and took down groups that they should not have. Any effort at content moderation, especially at scale, is going to run into both Type I (false positive) and Type II (false negative) mistakes. Finding and highlighting just a few of those mistakes doesn't mean that the company is failing overall, though it may provide some suggestions on how and where the company can improve.
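
To see why both error types are unavoidable at this scale, here's a minimal sketch; the accuracy figures and volumes are made-up numbers chosen only to illustrate the arithmetic, not Facebook's actual metrics:

    # Type I / Type II error counts at scale. All numbers are hypothetical.
    items_per_day = 1_000_000_000       # assumed daily volume of posts/pages/groups reviewed
    violating_fraction = 0.001          # assume 0.1% of items actually violate policy
    false_positive_rate = 0.001         # assume 0.1% of clean items get wrongly flagged
    false_negative_rate = 0.05          # assume 5% of violating items get missed

    violating = items_per_day * violating_fraction
    clean = items_per_day - violating

    type_1 = clean * false_positive_rate        # good content wrongly removed
    type_2 = violating * false_negative_rate    # bad content left up

    print(f"False positives per day: {type_1:,.0f}")   # ~999,000
    print(f"False negatives per day: {type_2:,.0f}")   # ~50,000

Even with 99.9% accuracy on non-violating content, that works out to roughly a million wrongful takedowns a day plus tens of thousands of misses, and every one of those misses is a potential "gotcha" headline.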

Companies: facebook


Comments on “Militias Still Recruiting On Facebook Demonstrates The Impossibility Of Content Moderation At Scale”

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Hold them to their own standards

On the one hand, it's perfectly reasonable that at Facebook's size there will be some things that slip through (though the recommendations are less defensible), so on its own that wouldn't really be a reason to chastise them if they seemed to be working to address the issue. At the same time, if they're trying to present themselves as this amazing social media site that should be treated as the ideal, claiming that only Facebook has the resources to deal with the problems facing social media and therefore should be able to direct how those efforts go, it's entirely reasonable to call them out on failures like this, pointing out that even they can screw up and might not be as perfect as they'd like people to think.

If they want to make the argument that only they are capable of handling a social media platform correctly and responsibly, then it's entirely fair to hold up examples of their failure to do so, to show that even they can botch things up and maybe shouldn't be allowed to set or help craft the rules for everyone else.

This comment has been flagged by the community.

Koby (profile) says:

Hidden In Plain Sight

it’s quite likely that many will simply try to "hide" behind other language to make it more difficult for Facebook to find

For a while now, many creators on other platforms have been producing content, unrelated to militias or race, that uses code words and lingo. It usually works to avoid demonetization or censorship. I bet it's actually helping these communities to grow, much to the chagrin of those who want to limit their reach. They get to be an edgy underground rebel, instead of a conformist.

This comment has been deemed insightful by the community.
That Anonymous Coward (profile) says:

Is this like the FBI thinking that the bad guys are required to wear black hats, so they only look at people wearing black hats?

The 3 percenter logo should be easy for a computer to recognize.
Having the word militia in the name should be easy to recognize.

One would think that having the computer output a list of groups to take a peek at wouldn’t take very long.

Am I alone in getting the feeling that the claim of "we removed 12 million groups" is on par with the FBI parading one of their entrapped mentally challenged radicals in front of the media?

After 9/11, people accepted a lot of stupid things, and around the time anyone decided to try and push back, there were always reports about how they stopped a major terrorism event but couldn't reveal any details.

One wonders how many of the 12 million groups were created by single persons just to be cool but never really attracted a following.

Cause we’ve NEVER seen people online do stupid shit for the lulz..

Or someone deciding they are going to be Nancy Drew & the Hardy Boys creating a honeypot to lure in crackpots to turn in.

Couch warriors with delusions of grandeur LARPing online.

Zucks has a deficit in trust when it comes to the things he says vs. what's actually happening on his platform.

Despite what people believe, until we all have the Elon chip in our brains, detecting badthink won't be 100%, & I am confused once again by people who think that anything can be made 100%.

We are not 100% safe from terrorists, despite all the stuff.
We are not 100% safe from drunk drivers, despite all the stuff.
We are not 100% safe from contaminated food, etc etc etc…

At some point people need to return to reality.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re:

"Having the word militia in the name should be easy to recognize."

If that's your only criterion, it would also generate a LOT of false positives.

"After 911 people accepted a lot of stupid things"

Yes they did – the PATRIOT act, 2 unjust wars that killed hundreds of thousands of civilians, restrictions on rights never seen before on US soil, the invention of a new "security" force that never catches any actual criminals or explosive devices but makes life hell for travellers daily up to and including actions that would be considered rape if done by anyone else…

I'm not sure if that's what you meant, but I also don't see how doing that stuff online as well would make you any safer. Especially when you've already admitted that a lot of innocent people would get burned along the way by your standards.

That Anonymous Coward (profile) says:

Re: Re: Re:

"I also don’t see how doing that stuff online as well would make you any safer."

I was attacking the insane premise that anything can be 100%.
People expect that a child will never see a boob online & demand the platforms make it so, then scream when their kid sees the boob they searched for.

If you want 100% the only options are the Elon brain chip or disabling user content online and even then… a tit might slide by.

People cheer on the idea that it can be 100% because it's just so easy, when they have no concept of how hard it actually is to do unless you hire half the planet to monitor the other half. Leaders pretend it is possible, demand it be done, & then unleash hell when the impossible isn't delivered to them by the end of the week. The masses then get a soundbite about how the platform supports the bad thing, rather than that they've spent millions & have entire departments devoted to trying to reduce it, because there is no way to stop it without stopping the entire world.

Rocky says:

Re: Re: Re: Re:

Covering 85-95% is quite easy (relatively speaking), but anything above that increases the difficulty exponentially. Considering the amount of content being generated online, even if 95% of "less desirable" content is filtered out, that still means millions of posts slipping by, which means it's quite easy to find edge cases "proving" that social media X "does a poor job" while ignoring everything you didn't see.
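
A quick worked example of that arithmetic (the daily volume and prevalence figures are assumptions for illustration, not Facebook statistics):

    # Illustration of the 95% point above; volume and prevalence are assumed.
    daily_posts = 500_000_000        # assumed posts per day on a Facebook-scale platform
    undesirable_fraction = 0.002     # assume 0.2% of posts are "less desirable"
    filter_effectiveness = 0.95      # 95% of that content caught, per the comment above

    slipping_through = daily_posts * undesirable_fraction * (1 - filter_effectiveness)
    print(f"{slipping_through:,.0f} posts slip through per day")  # 50,000/day, ~1.5M/month

Fifty thousand misses a day adds up to about a million and a half a month, so edge cases will never be hard to find.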

Anonymous Coward says:

Impossible for Mike Masnick not to use the word impossible on topic of content moderation

Human life is not a mathematical theorem. We don’t care if we can’t achieve 100% success rate. Sometimes >50% is enough.

We want to achieve the greatest good with the least harm. When it comes to online speech, harm relates to real world consequences; bad things happening to people, with physical violence being the worst case. With this criteria, we can look at engagement, and the audience for that speech.

If some guy is ranting against vaccination but only engaging with a few, then the degree of harm is low. But it’s a completely different story if there is a coordinated group, say a self-declared vaccine safety organization, with tens of thousands of followers, that is spreading lies about vaccines and telling people not to get vaccinations. I’d say bring down the ban hammer.

Impossible is not even the right choice of word when talking about content moderation. Get that arrow out of your butt.

Impossible does not mean ineffective or futile.

Sure, it’s a cat-and-mouse, whack-a-mole game; so what? It’s a constant struggle for truth to win out over lies. A lie makes it half way around the world while truth is still lacing up its shoes.

Anyways, I’d say delete Facebook. I’m for an open internet (what content moderation would be like in that context).

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

Impossible does not mean ineffective or futile.

But it does mean “impossible”. In the discussion around moderation, lawmakers and political pundits seem to think there is a one-size-fits-all, always-solves-everything solution — that one specific approach can automagically sort “bad” content from “good” and such. That solution doesn’t exist, nor can it exist.

And besides, moderation approaches that work well in smaller communities don’t scale well to larger communities. What might work for, say, a Discord server with a hundred people or so won’t work for Twitter — at least not in the sense that Twitter’s algorithms and bot-driven moderation can understand contexts and nuance that would be easy to grasp in a smaller community. (For example: Twitter repeatedly suspended a bot dedicated to reposting Donald Trump’s tweets verbatim while giving Trump a free pass on those same tweets.)

Moderating small communities is a pain in the ass; moderating larger ones, even moreso. So what makes you think Twitter can do a far better job at moderation than someone running, say, an imageboard with a couple hundred regular users at most?

Anonymous Coward says:

Re: Re: Re:

We can at least start at a baseline of “bad” content that’s “Holocaust denial”, “white supremacy”, and “nazism” and asking for a baseline level of moderation that consists of “use the basic Goddamn search bar and enter in keywords or the names of hate groups”. The TTP did basic shit and found groups and pages that had evaded Facebook moderation for fucking years.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

We can at least start at a baseline of “bad” content

Facebook deciding what makes for "bad content" would inevitably piss off at least some group of powerful people, at which point Facebook would start carving out "exceptions" like the ones they kept giving Donald Trump until a couple of months ago.

Anonymous Coward says:

Re: Re: Re:2 Re:

Facebook already moderates and decides what makes for "bad content" on its platform. "They'll just piss some powerful people off and stop," as an excuse for leaving the status quo as is, for Facebook to keep fucking lying about how much "better" it does and only getting off its ass for real when controversy strikes (and then lying about their platform's involvement), is a really shitty excuse.

In what feels like ages ago, Mike asked “Would you like to see a better Facebook or a dead Facebook?” and the opinions came back overwhelmingly for Facebook to die. If Facebook is such a pathetic gaggle of cowards who bend over backwards to appease fascists and other pricks in ways that make us all worse off, then that just helps prove that a “better” Facebook is impossible and that the corporation needs to do us all a favor and fucking die.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re: Re:

We can at least start at a baseline of “bad” content that’s “Holocaust denial”, “white supremacy”, and “nazism” and asking for a baseline level of moderation that consists of “use the basic Goddamn search bar and enter in keywords or the names of hate groups”. The TTP did basic shit and found groups and pages that had evaded Facebook moderation for fucking years.

And it appears that Facebook did catch way over 99% of that. And tons of other stuff as well. You ask why they can't look for those words like TTP did. Well, I'm sure the list of stuff that FB moderators ARE looking for is already MASSIVE, and the results then need to be reviewed. TTP looked for just one thing, found a few sites, and didn't bother to do a full analysis of them. FB doesn't have that luxury. It needs to be looking for EVERYTHING ALL THE TIME and reviewing to make sure it ACTUALLY violates its policies.

Anyone who says "why didn’t they just do this search" is an idiot who has no clue what they’re talking about and shouldn’t be taken seriously because you have not even the first clue about how much is actually happening.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: Re:

"We can at least start at a baseline of “bad” content that’s “Holocaust denial”, “white supremacy”, and “nazism” "

OK. Now, Facebook have a history of overmoderating such things and have mistakenly flagged anti-Nazi, anti-white supremacist accounts. This is what’s meant by impossible – it’s not possible to moderate something as huge as Facebook and neither miss something nor get false positives. That’s the point.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re:

"Impossible for Mike Masnick not to use the word impossible on topic of content moderation"

The problem with stating the truth is that there are only so many words you can use to describe it.

"We want to achieve the greatest good with the least harm."

So… there will be harm. In other words, even by your own admission, it’s impossible not to harm.

"Anyways, I’d say delete Facebook"

Good for you. Now, how does that magically make moderation at scale on the many thousands of competing sites that people would go to instead not impossible?

This comment has been deemed insightful by the community.
Anonymous Coward says:

What is "correct" in content moderation is ultimately a matter of opinion and everyone has a different one. Before anyone could make 100% correct decisions, people would first have to agree on what the correct decisions are. This will never happen.

Facebook can, in their own opinion, be doing a perfect job. Others will inevitably disagree.

Ken Shear says:

Oh please! To the extent that this is an argument, there will be mistakes (false positives and negatives) in moderation, well, excuse me. It’s not just with moderation there will be mistakes — there are always mistakes in every human activity. And it’s not only "at scale" that there will be mistakes. Mistakes may be more obvious at scale, especially where (like on Facebook) much more effort is put into sharing information than into moderation. But, moderation at small scale also requires judgment calls, and mistakes go with that territory. Anyone who’s ever tried to moderate comments on even a very small website knows how hard moderation can be, even not at scale.

Also, please! That moderation is imperfect does not make it impossible. You say, moderation at scale is impossible, but you mean, moderation at scale can't be perfect. Well, obviously. Then, you hold up Facebook's clearly inadequate efforts at moderation as the best that anyone can do. And, yes, Facebook has gradually been dragged into putting some more serious resources into moderation by the intense bad publicity they've received for failing to address hate speech and incitement to violence on their platform. But no, Facebook has not put the kind of resources they could on this problem. This is a company that collects tens of billions in profits every year, and has some of the best technical talent in the world at its disposal. How much has been devoted to preventing use of the platform to incite violence? They say they're doing all they can, but oh, please, they can't do better than a simple word search when they've built the most sophisticated pattern matching systems to promote user engagement and to help advertisers find targets for products, services, and yes, hate speech and political disinformation.

Voltaire said long ago, "the perfect is the enemy of the better" (well it was French of course, but it translates quite straightforwardly into English). That’s what social media should be held accountable for. Better meaning, in this context, effort commensurate with the deep harm social media platforms are permitting, such as incitement of violence, hate speech, rampant falsehoods that undermine public health.

Better moderation is possible. It’s not impossible, even if there are gonna be mistakes.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

You say, moderation at scale is impossible, but you mean, moderation at scale can’t be perfect.

Tell that to lawmakers and political pundits who believe otherwise. When they get the message, the message won’t need repeating.

you hold up Facebook’s clearly inadequate efforts at moderation as the best that anyone can do

It is the best that Facebook can do, given its size. Other similarly large services can’t/don’t fare much better.

Better moderation is possible.

No one has ever said otherwise. But that would require throwing far more money and man-hours at the problem. At some point, that will be costlier than the problem such an approach means to solve…even for a company like Facebook.

Anonymous Coward says:

Re: Re: Re:

How much money and man-hours would it cost to have moderators just go to Facebook’s front page, search “Militia” or “nazi” or “aryan”, then follow those results to pages that have people spewing white supremacist fascist bullshit, and then nuking those pages? Is this really the best that Facebook can do? Because it looks like Facebook delivers below the bare minimum except when a controversy strikes and they move fast to ban or delete whatever, cover their asses, and say “we need to do better” for the googolplexth time.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

How much money and man-hours would it cost to have moderators just go to Facebook’s front page, search “Militia” or “nazi” or “aryan”, then follow those results to pages that have people spewing white supremacist fascist bullshit, and then nuking those pages?

Less than Facebook might want you to believe, but more than you might think.

And assume for a moment that “militia” brings up groups/people that aren’t spouting bigotry and fascist propaganda and anti-government sentiments (on Facebook, at any rate). Should Facebook ban such accounts only because they have the word “militia” in the display name?

Anonymous Coward says:

Re: Re: Re:2 Re:

And assume for a moment that “militia” brings up groups/people that aren’t spouting bigotry and fascist propaganda and anti-government sentiments (on Facebook, at any rate). Should Facebook ban such accounts only because they have the word “militia” in the display name?

No, because like I said, they’d be going to the actual pages themselves, and then nuking the pages that “have people spewing white supremacist fascist bullshit”. False positives would eventually happen where bans/deletions have to get appealed, yes, but that wouldn’t invalidate the progress that’d be made by Facebook doing the bare fucking minimum that it should be doing and should’ve been doing for ages now.

Stephen T. Stone (profile) says:

Re: Re: Re:3

False positives would eventually happen

And that would run counter to the “perfect moderation” that lawmakers want from Facebook, so…yeah…

(That also wouldn’t begin to address the right-wing media firestorm of Facebook going after “militia” pages, which would inevitably make Facebook bend over backwards even further to please conservatives.)

Anonymous Coward says:

Re: Re: Re:4 Re:

Which U.S. lawmakers have said they want "perfect moderation," and how many of them are there? No seriously, which members of the U.S. Congress say they want "perfect moderation"; can you give examples, or are these lawmakers just a rhetorical fiction constructed to support your arguments?

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re: Re: Re:

How much money and man-hours would it cost to have moderators just go to Facebook’s front page, search “Militia” or “nazi” or “aryan”, then follow those results to pages that have people spewing white supremacist fascist bullshit, and then nuking those pages?

You say that as if that’s the only thing they need to do. And that’s why you have no fucking clue the scale of what’s happening. There are probably 10,000 different searches they need to do in 125 different languages, and then they have to examine each result to make sure it actually violates a policy. And they have to do that every fucking day.
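
A rough back-of-the-envelope (the hits-per-search number here is purely an assumed figure) shows what that adds up to:

    # The scale described above: many searches, many languages, every day.
    searches = 10_000          # distinct searches, per the estimate above
    languages = 125            # languages, per the estimate above
    hits_per_search = 20       # assumed results per search needing human review

    daily_reviews = searches * languages * hits_per_search
    print(f"{daily_reviews:,} items to review, every day")  # 25,000,000

Compare that to the handful of one-off searches TTP ran for its report.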

And if they miss a few, idiots like you will step in and say "how many man-hours would it take to do this search," because you're an ignorant fool who has no clue about the scale of how this works.

I’m no fan of Facebook, but these attacks demonstrate pure ignorance and stupidity from people who have no clue.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: Re:

"How much money and man-hours would it cost to have moderators just go to Facebook’s front page, search “Militia” or “nazi” or “aryan”"

Many thousands of man hours per week. Which is why they use some level of automation, and mistakes are made by bots that cannot understand things like context and nuance, something which is also impossible by humans at that sort of scale.

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: Re: Re: Re:

"How much money and man-hours would it cost to have moderators just go to Facebook’s front page, search “Militia” or “nazi” or “aryan”, then follow those results to pages that have people spewing white supremacist fascist bullshit, and then nuking those pages?"

Assume 5 minutes per page, just for those three keywords, to locate and peruse a page for moderation and see that it isn't sarcasm or some citizen watchdog journalist quoting the latest out of the white supremacy bunker. Assume literally millions of results you have to go through. Facebook isn't going to hire full-time moderators which outnumber the rest of their staff by a few orders of magnitude.
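
As a rough order-of-magnitude check (the page count below is an assumption, just to show the scale of the staffing problem):

    # Order-of-magnitude staffing estimate for the scenario above; the page count is assumed.
    pages_to_review = 2_000_000      # assumed hits for just a few keywords
    minutes_per_page = 5             # review time assumed above
    minutes_per_week = 40 * 60       # one full-time moderator week

    moderator_weeks = pages_to_review * minutes_per_page / minutes_per_week
    print(f"{moderator_weeks:,.0f} moderator-weeks for a single pass")  # ~4,167

That is roughly 80 full-time moderators working for a year, for a single pass over a few keywords, before you even get to everything else below.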

Now add to that: child abuse, religious extremism in various flavors – from US evangelical doomsday cults quoting the text of Revelation and advocating Armageddon, to fundamentalist Saudi nationals screaming about the necessity of the hijab, to crackpot suicide cults – etc, etc... Moderation at scale is an unending hydra, always sprouting ten times as many heads as you can find moderators.

And those moderators all have to be on the same page, so add vast educational programs on teaching them what they can allow or not, so you don’t have to rely on some guy just not seeing anything wrong with phenomenon X and leaving those pages up while busting the accounts of people who, say, might be pro or con whistleblowers, BLM, LGBTQ, specific religions, etc…

Anyone who claims moderation is possible at scale doesn’t understand the concept of scale, and should be shown a beach and told to tally every individual grain of sand given very simple rules so they can get a grasp of how numbers work.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:2 Re:

"Facebook isn’t going to hire full-time moderators which outnumber the rest of their staff by a few orders of magnitude"

Also, they do already hire entire companies whose job it is to moderate content. Every time you read a story about something that was missed, or a story about something that was moderated by mistake, that’s after hundreds or even thousands of people were already hired to do that job. Throwing tens of thousands more people at the problem isn’t going to do much else other than generate more false positives and make the stuff that’s missed even more newsworthy.

Then, of course, as you rightly note, human beings don’t all tend to be on the same page. There’s stories of ex-employees who deliberately overmoderated left-leaning content and let right-wing propaganda fly through. Even without considering the obvious problems of cultural and social differences between moderators in different parts of the world, or even within individual US communities, you can’t ignore the fact that some people just won’t be doing the job properly either by omission or deliberate sabotage. All so that they can be told that they’re not hiring enough people the next time someone finds an obscure page that got missed.

Scary Devil Monastery (profile) says:

Re: Re: Re:3 Re:

"Every time you read a story about something that was missed, or a story about something that was moderated by mistake, that’s after hundreds or even thousands of people were already hired to do that job."

Yeah, and then some Very Stable Genius toddles along and states, in full confidence, that it’s "not a big thing".

I used to assume, as a young and idealistic DBA, that people not understanding data was not an issue- I mean, that was my job.

A few years down that road I was instead leaning toward the idea that, as soon as someone cracked their mouth open and showed they didn't understand the concept of numbers, the expedient way to go about it would simply be to beat that person to death and save everyone involved the trouble their Dunning-Kruger would bring.

Everyone with knowledge on how data is processed will say that moderation at scale is impossible. And yet we keep seeing village idiots and yokels claim the contrary with nothing but their dick in hand to back that assertion up.

It makes me tired. And note that so far I haven't even mentioned that, morality being relative, any ten moderators will be moderating ten different ways…

"…or even within individual US communities, you can’t ignore the fact that some people just won’t be doing the job properly either by omission or deliberate sabotage."

Yeah, and what really bugs me here is that if you were to ask any of the "of course moderation is possible" brigade if they’d trust any random ten people in their own community to moderate their media flow they’d scream in panic at the idea of the damyankee liberal a block down being the one judging their posts…

kshear says:

Re: Re: Re:

It is the best that Facebook can do, given its size. Other similarly large services can’t/don’t fare much better.

Just excuses for FB. What they're doing now is totally not the best they could do, given their tech capabilities and financial resources. Are they really using their best tech resources on this problem? Hardly – those resources go to increasing user engagement (including users who spread hate) and improving ad effectiveness. Growth (aka user engagement) and profitability have been the no. 1 and no. 2 goals of FB since it started. Moderation, content standards and legal compliance lag far behind, though of course enough resources are devoted to those things so people can say, well, they're doing the best they can.

Take the white supremacist groups that FB claims it can’t identify, even while it’s matching these groups to users who are ready to engage with the supremacist content. It does this matching by using the data in a very sophisticated way that demonstrates it can indeed identify white supremacist content for purposes of user engagement. The problem isn’t whether FB could moderate this more effectively, rather, moderation’s just a lower priority for FB.

But yes, let’s agree the problem is making moderation better not making it perfect, so we should expect FB to make good progress improving moderation constantly, not just when it gets bad publicity for its failures in this area. How about, FB provide regular audits of its moderation efforts, and report how much it’s spending and what tech resources it’s applying to this problem. We’re talking about hate speech advocating violence, and that stuff can cost people’s lives.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re: Re:

But yes, let’s agree the problem is making moderation better not making it perfect, so we should expect FB to make good progress improving moderation constantly, not just when it gets bad publicity for its failures in this area. How about, FB provide regular audits of its moderation efforts, and report how much it’s spending and what tech resources it’s applying to this problem. We’re talking about hate speech advocating violence, and that stuff can cost people’s lives.

The Techdirt regulars don't actually care. It's an endless circlejerk about why content moderation at scale is "impossible" and if you actually come up with a good point Mike will call you an "idiot" or "ignorant" or "clueless" while saying "he's no fan of Facebook" and simultaneously taking whatever Facebook says about how they're doing the best they can at face value, as well as advocating for "Protocols Not Platforms," which is the only solution he agrees with because he's the one who created it. White cishet Gen X-er tech bros in their 40s leaping to the defense of Facebook and every other habitually lying tech corp are fucking pathetic.

Scary Devil Monastery (profile) says:

Re: Re: Re:2 Re:

" It’s an endless circlejerk about why content moderation at scale is "impossible" and if you actually come up with a good point…"

That’s a novel way of describing factual reality and validated assertion.

Of course Mike calls you an idiot when your "idea" has been disproven fifty times on this forum alone and been proven impractical or impossible a few thousand times in real life.

"White cishet Gen X-er tech bros in their 40s.."

Yeah. The experts. The ones who actually know what they’re talking about.

But go ahead. Prove your assertions. Better yet, make a single claim which isn’t either impractical, impossible, or outright infantile. Or go count the grains on the beach to judge their merits based on any ruleset you like.

The rest of us who in many cases have hands-on experience with moderation and mass data processing, will eagerly await your nobel prize-winning new math algorithm which – since it’ll be a genuine AI – will open all kinds of new vistas for everyone.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:3 Re:

Yeah. The experts. The ones who actually know what they’re talking about.

I’d wager that people who’ve never had to face death threats for the color of their skin, their sexual orientation and/or gender identity, or what country they came from while being heavily embedded in Silicon Valley and Valley-centric academia are shitty experts to have at the forefront in the face of many of the issues that’ve been coming down the pipe which span not just the country but the globe as well.

If you disagree, feel free to start up a project where you ask people to fill out a survey about what they think about my comment, then promptly ignore that survey and do what you actually wanted to do: play a scenario-building card game with rules that y’all made up with your friends about why I’m wrong and write a book based on the scenarios that y’all come up with. It’s what the "experts" like Mike & Co. did when they made Working Futures, so why not use it here?

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re:

Oh please! To the extent that this is an argument, there will be mistakes (false positives and negatives) in moderation, well, excuse me. It’s not just with moderation there will be mistakes — there are always mistakes in every human activity. And it’s not only "at scale" that there will be mistakes. Mistakes may be more obvious at scale, especially where (like on Facebook) much more effort is put into sharing information than into moderation. But, moderation at small scale also requires judgment calls, and mistakes go with that territory. Anyone who’s ever tried to moderate comments on even a very small website knows how hard moderation can be, even not at scale.

You are repeating my point, so not sure why the "oh please"

Also, please! That moderation is imperfect does not make it impossible. You say, moderation at scale is impossible, but you mean, moderation at scale can’t be perfect. Well, obviously. Then, you hold up Facebook’s clearly inadequate efforts at moderation as the best that anyone can do.

I most certainly did not say that it’s the "best" that anyone can do and have yelled for years about how they can do it better.

But I’m saying that policy makers, the media, and random idiots in comments keep insisting they have to be perfect.

What I’m saying is not that it’s impossible to be perfect, but that it’s impossible to do well. Because it is for exactly the reasons you stated. So you’re agreeing with me while thinking you’re disagreeing.

Better moderation is possible. It’s not impossible, even if there are gonna be mistakes.

I never said that better moderation was impossible. The point I’m making is that even as they can get better, expecting it to ever be good is a mistake.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re:

"You say, moderation at scale is impossible, but you mean, moderation at scale can’t be perfect"

That’s the entire point – even if Facebook moderate at 99.9999% perfection, something will be missed. They will also get false positives and ban something that should not be banned. Therefore, the expectation on the part of politicians and the media that they can do 100% complete moderation is impossible.

"Voltaire said long ago, "the perfect is the enemy of the better" (well it was French of course, but it translates quite straightforwardly into English). That’s what social media should be held accountable for."

Nobody's saying "it's impossible to do perfectly, so why bother?" They're simply saying that it's impossible to do perfectly, so stop trying to demand perfection.

PaulT (profile) says:

Re: a furriner's curiosity

Also not an American, but this is my understanding – militias have a long history of being associated with secessionist groups and racism in the US, along with other things. There are such things as left-wing and other militias (for example, the New Black Panthers), but they seem to concentrate more on the right wing.

Stephen T. Stone (profile) says:

Re:

While left-wing militias may exist in the U.S., they would be so few in number that their existence would be insignificant. Militia groups are closely tied to right-wing causes because right-wing/conservative ideologies in the U.S. treat unfettered gun ownership, "might makes right" thinking, and the ideas expressed in the sentence "the tree of liberty must be refreshed from time to time with the blood of patriots and tyrants" as absolute moral virtues. Left-wing groups tend to be far more non-violent in their approaches, including antifascist groups, which rarely engage in violence to further political goals or intimidate people (including lawmakers).

Point is, left-wing groups don’t go around carrying “long guns” into public places on a regular basis as a “message” because they’re not violent dickbags.

This comment has been flagged by the community.

Anonymous Coward says:

I’m so glad that Ars Technica wrote an article about what the TTP found. The discussion thread over there is full of people who actually treat FB and other tech corps like the habitual liars they are.

I’m very excited for Techdirt’s future article about why Section 230 should protect Amazon for letting fly-by-night sellers get away with selling defective goods that cause people harm.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:

Amazon is a shop. They sell stuff in their shop. If they can't vouch for what they sell, they shouldn't sell it. If they can vouch for the stuff they sell by having clear contact details for the manufacturer/seller, which can then point to where the stuff was vetted by an official third party, that's fine by me. But Amazon started as a little book shop. They deliberately, and with careful planning and intent, made themselves the size they are today, and at every step of that growth process they should have grown their vouching-for-the-stuff-we-sell process as well. Amazon's Marketplace and the infrastructure (both physical warehouses and digital listings and info about sellers) that they have 100% control over for ensuring that the Marketplace exists, that Amazon is a key part of it, and that Amazon gets a cut of the transactions that happen there, are far different from Amazon being a mere facilitator of third parties.

If I buy a pint of milk in the supermarket, and it gets discovered that there is toxic stuff in the milk, it is the supermarket's job to do something. The supermarket didn't milk the cow, nor am I expecting the supermarket to sample-taste every carton of milk. But I expect them to take responsibility for what they sell. It's not hard. It may take a few million dollars off of Amazon's yearly profits, but it's not hard. I don't expect Amazon to test all the products that come through their warehouses themselves. But I do expect them to be able to properly vet the people on their Marketplace. This is a company that specializes in tools to assist deployment at scale for businesses and has already figured out how to control and monitor supply chains for its own labels.

I would argue that at the very least, with regards to Amazon as a digital storefront, a critical responsibility they should have is for keeping a line of communication open between the buyer and the seller, and if the storefront cannot do that because the ‘seller’ skipped town, that’s on the storefront for not vetting the ‘seller’, and they should bear liability. This creates incentive for Amazon to deal with reputable companies and to avoid selling dangerous or defective products.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:2 Re:

Amazon's Marketplace and the infrastructure (both physical warehouses and digital listings and info about sellers) that they have 100% control over for ensuring that the Marketplace exists, that Amazon is a key part of it, and that Amazon gets a cut of the transactions that happen there, are far different from Amazon being a mere facilitator of third parties.

With physical products you buy, the main corporation that owns the site where the products are listed gets a cut of the sale, owns the warehouses where the products are stored, and employs the people who drive up to your house and plop the package on your doorstep; all of which means the corporation has a key hand in getting it to you throughout the process. This isn't third-party content.

You care to elaborate on why you think Amazon should be able to get away with letting shady nameless Chinese vendors shove defective products into the US market and then disappear without a trace?

Rocky says:

Re: Re: Re:3 Re:

You care to elaborate on why you think Amazon should be able to get away with letting shady nameless Chinese vendors shove defective products into the US market and then disappear without a trace?

Now you are just silly. Nobody has said that; we were wondering why you thought this was related to Section 230, hence the question of whether you understand the difference between product liability and third-party content liability. One pertains to consumer safety and the other to user-generated content online; to conflate the two shows a severe lack of understanding of the issue at hand.

This comment has been flagged by the community.

Rocky says:

Re: Re: Re:5 Re:

Oh, do point out where in those articles TD conflates the two. It's quite clear from what Cathy Gellis wrote that her stance was that they were separate issues, and it was 2 of the 3 judges presiding over the case who went to some lengths to stretch their reasoning in an effort to come to the foregone conclusion that Amazon was liable regardless of who the seller was.

Tanner Andrews (profile) says:

Re: Re: Re:2 Re:

You care to elaborate how the two are connected?

Sure. They were both mentioned in the same opinion, Oberdorf v. Amazon, Inc, 930 F.3d 136 (US 3rd Cir. 2019). There, the court carefully distinguished the two, finding that Amazon was a "seller" under Pennsylvania law and Restatement § 402A. It also held that, as to failure to regulate information posted by the underlying vendor, including particularly the failure to warn, Amazon was not liable due to Section 230 immunity.

It is like the connection between frozen custard and lawn care equipment. Both are mentioned in this reply and one may be consumed after use of the other, and so they are connected. Some may view the connection as tenuous.

Anonymous Coward says:

Re: Re: Re: Re:

Amazon is a shop.

Amazon is also a marketplace. You can buy goods sold by Amazon, sold via Amazon using their logistics, and sold via Amazon where Amazon only provides ordering and payment services and the seller deals with the logistics of delivery. Which of those groups should Amazon be held liable for?
