The Impossibility Of Content Moderation Extends To The People Tasked With Doing Content Moderation

from the maybe-we-should-stop-demanding-more-be-done dept

For years, now, we’ve been writing about the general impossibility of moderating content at scale on the internet. And, yet, lots of people keep demanding that various internet platforms “do more.” Often those demands to do more come from politicians and regulators who are threatening much stricter regulations or even fines if companies fail to wave a magic wand and make “bad” content disappear. The big companies have felt compelled to staff up to show the world that they’re taking this issue seriously. It’s not difficult to find the headlines: Facebook pledges to double its 10,000-person safety and security staff and Google to hire thousands of moderators after outcry over YouTube abuse videos.

Most of the demands for more content moderation come from people who claim to be well-meaning, hoping to protect innocent viewers (often “think of the children!”) from awful, awful content. But, of course, it also means making these thousands of employees continuously look at highly questionable, offensive, horrific or incomprehensible content for hours on end. Over the last few years, there’s been quite a reasonable and growing concern about the lives of all of those content moderators. Last fall, I briefly mentioned a wonderful documentary, called The Cleaners, focused on a bunch of Facebook’s contract content moderators working out of the Philippines. The film is quite powerful in showing not just how impossible a job content moderation can be, but also the human impact on the individuals who do it.

Of course, there have been lots of other people raising this issue in the past as well, including articles in Inc. and Wired and Gizmodo among other places. And these are not new issues. Those last two articles are from 2014. Academics have been exploring this issue as well, led by Professor Sarah Roberts at UCLA (who even posted a piece on this issue here at Techdirt). Last year, there was another paper at Harvard by Andrew Arsht and Daniel Etcovitch on the Human Cost of Online Content Moderation. In short, none of this is a new issue.

That said, it’s still somewhat shocking to read through a big report by Casey Newton at the Verge, about the “secret lives” of Facebook content moderators. Some of the stories are pretty upsetting.

The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”

There may be some reasonable questions about what kind of training is being done here — and about hiring practices that might end up having people susceptible to the internet’s garbage put into a job reviewing it. But, still…

Part of the problem is that too many people are looking to the big internet companies — mainly Google and Facebook — to solve all the world’s ills. There are a lot of crazy people out there who believe a lot of crazy things. Facebook and YouTube and a few other sites are often a reflection back of humanity. And humanity is often not pretty. But we should be a bit concerned when we’re asking Facebook and Google to magically solve the problems of humanity that have plagued humans through eternity… and to do so just by hiring tens of thousands of low-wage workers to click through all the awful stuff.

And, of course, the very same day that Casey’s article came out, Bloomberg reported that Facebook’s growing roster of thousands of moderators is increasingly upset about working conditions, and Facebook’s own employees are getting annoyed about it as well — noting that for all of the company’s claims about how “important” this is, it’s weird that they’re outsourcing content moderation to third parties… and then treating those workers poorly:

The company’s decision to outsource these operations has been a persistent concern for some full-time employees. After a group of content reviewers working at an Accenture facility in Austin, Texas complained in February about not being allowed to leave the building for breaks or answer personal phone calls at work, a wave of criticism broke out on internal messaging boards. “Why do we contract out work that’s obviously vital to the health of this company and the products we build,” wrote one Facebook employee.

Of course, it’s not clear that hiring the content moderators directly would solve very much at all. As stated at the very top of this article: there is no easy solution to this, and every solution you think up has negative consequences. On that front, I recommend reading Matt Haughey’s take on this, aptly titled Content moderation has no easy answers. And that’s coming from someone who ran a very successful online community (MetaFilter) for years (for a related discussion, you can listen to the podcast I did last year with Josh Millard, who took over MetaFilter from Haughey a few years ago):

People often say to me that Twitter or Facebook should be more like MetaFilter, but there’s no way the numbers work out. We had 6 people combing through hundreds of reported postings each day. On a scale many orders of magnitude larger, you can’t employ enough moderators to make sure everything gets a check. You can work off just reported stuff and that cuts down your workload, but it’s still a deluge when you’re talking about millions of things per day. How many moderators could even work at Google? Ten thousand? A hundred thousand? A million?

YouTube itself presents a special problem with no easy solution. Every minute of every day, hundreds of hours of video are uploaded to the service. That’s physically impossible for humans to watch it even if you had thousands of content mods working for YT full time around the world.

Content moderation for smaller communities can work. At scale, however, it presents an impossible problem, and that’s part of the reason why it’s so frustrating to watch so many people — especially politicians — demanding that companies “do something” without recognizing that anything they do isn’t going to work very well and is going to create other serious problems. Of course, it seems unlikely that they’ll realize that, and instead will somehow insist that the problems of content moderation can also be blamed on the companies.

Again, as I’ve said elsewhere this week: until we recognize that these sites are reflecting back humanity, we’re going to keep pushing bad solutions. But tech companies can’t magically snap their fingers and make humanity fix itself. And demanding that they do so just shoves the problem down into some pretty dark places.


Comments on “The Impossibility Of Content Moderation Extends To The People Tasked With Doing Content Moderation”

Anonymous Coward says:

But we should be a bit concerned when we’re asking Facebook and Google to magically solve the problems of humanity that have plagued humans through eternity… and to do so just by hiring tens of thousands of low-wage workers to click through all the awful stuff.

By my calculations, posted here, it would require well over 100,000 people to review YouTube content alone. Probably an equal number for Facebook and another massive payroll for Twitter. And those are just the big 3.

This is a (non?) problem without a solution.

Anonymous Coward says:

Read that their solution would be to crowd source moderation.

So instead of removing questionable content through their own failures they will facilitate more dead souls among the masses.

Perhaps they should look to other solutions that don’t include creating more casualties…

My best opinion on the topic would be to shut down Facebook globally until regulations and moderation concerns are at least up to par with the problems this company creates.

This isn’t about free speech, this is about harm in moderation and viewing on a platform that doesn’t have a clue how to handle harm.

Anonymous Coward says:

Re: Re:

And this is why we don’t let you run anything, much less governments.

This isn’t about free speech

No, it really is. Your solution is to infringe on a bunch of people’s rights, just because you don’t like it. Too bad, it doesn’t work that way.

this is about harm in moderation and viewing on a platform that doesn’t have a clue how to handle harm.

You obviously didn’t read the article then, otherwise you would have seen that it’s not about how to handle harm, it’s that it’s impossible to do that kind of moderation on a global scale. Even China, with all its firewalls and censorship techniques, still can’t completely stop the stuff it doesn’t want from getting through.

Anonymous Coward says:

Re: Re: Re:

re: it’s that it’s impossible to do that kind of moderation on a global scale.

… and thus Facebook can’t do the job, there are not enough moderators to do the job, machine learning can’t do the job – so my question: why let Facebook continue to operate?

To me this ISN’T about free speech, it’s about a public company that can’t effectively manage the platform it created, that has no realistic solutions to the problems they facilitated (news feeds, push, content profiles, etc.) that cause REAL world harm which has led to DEATHS on more than one occasion, which has impacted elections through the company’s inability or unwillingness to tackle the problems.

I look at it akin to coal power, sure we get electricity but what about the issues of storing the coal ash (ask the Tennessee Valley Authority), is the platform worth the harm it causes and my answer is unequivocally NO.

Stephen T. Stone (profile) says:

Re: Re: Re:

why let Facebook continue to operate?

Because to give any government that power — the power to unilaterally and without due process shut down a platform for speech because “it got too big to moderate” — is to give that government a power that can and will be exploited for the personal comfort/gain of those who would wield that power.

Anonymous Coward says:

Re: Re: Re:2 why let Facebook continue to operate? [was ]

          why let Facebook continue to operate?

Because to give any government that power — the power to unilaterally and without due process…

A coalition then. A coalition formed of, say, sixty or eighty or a hundred of the world’s leading military powers and advanced industrial economies, all banding together against one puny corporation. That’s not unilateral.

As for ‘due process’, the question is always how much process is due in a particular situation? How much process is needed to declare war? Before a nation’s government can shoot someone for the common good?

Stephen T. Stone (profile) says:

Re: Re: Re:3

A coalition formed of, say, sixty or eighty or a hundred of the world’s leading military powers and advanced industrial economies, all banding together against one puny corporation. That’s not unilateral.

But it is still dangerous. If you let this coalition have the power to shut down Facebook for an arbitrary reason such as “it got too big”, what would stop them from using that power against the next platform it deems “too big”, such as YouTube or Twitter? What arbitrary standard of size or cultural influence or some other factor will it use to justify shutting down not-quite-as-big platforms after the biggest ones are all dead and buried? How far would this coalition need to go before you would consider it a “bad idea”?

cpt kangarooski says:

Re: Re: Re:3 why let Facebook continue to operate? [was ]

As for ‘due process’, the question is always how much process is due in a particular situation?

That’s a question of procedural due process. There is also the separate issue of substantive due process; finding the line where governments are prohibited from acting because the rights of individuals and/or the people at large are more important than whatever the government is trying to do.

So you must also ask why some government action is being taken, who the government will or could harm if it takes this action, whether it is more important to protect the victim of government action or to let the government act in this case, and what sorts of side effects or sloppy targeting exist such that the government action will affect too many people (causing undue harm), too few people (rendering it pointless, and thus the harm done undue because of its futility), or sometimes both.

Anonymous Coward says:

Re: Re: Re: Re:

and thus Facebook can’t do the job, there are not enough moderators to do the job, machine learning can’t do the job – so my question: why let Facebook continue to operate?

Because there is no requirement that they be able to moderate content on their platform to such a degree that no one ever sees anything harmful or offensive. That’s what it means to have freedom of speech. You get the good with the bad. You can’t have one or the other and still have freedom of speech.

Whether Facebook can do the job of successful content moderation is irrelevant. That’s not what they are in the business of, that’s not what their platform is for, and there’s no legal or moral requirement for them to do it at all, much less successfully.

To me this ISN’T about free speech

You are entitled to your opinion, that doesn’t make you right. This very much IS about free speech because you have a government, or a group of governments, telling a platform, and by extension users of that platform, what they can and cannot say on said platform. That is the literal definition of government censorship and the First Amendment says they can’t do that. So it really doesn’t matter what you think it’s about, it is about free speech.

it’s about a public company that can’t effectively manage the platform it created

You do know what an open platform means right? It means the exact opposite of managing what users say and do not say on it. Besides that, there’s nothing morally wrong or illegal about this.

that has no realistic solutions to the problems they facilitated (news feeds, push, content profiles, etc.)

In and of themselves these are technical tools and not problems.

that cause REAL world harm which has led to DEATHS on more than one occasion, which has impacted elections

You know what? So have people talking to each other in person, writing articles in old school newspapers, books, having telephone conversations, town hall meetings, hell any time one person is allowed to communicate with another person, or a group of people, it has the potential to cause real world harm and lead to deaths, and actually HAS on MANY occasions. Just look at history before the rise of the internet. How many genocides and mass instances of slavery were caused by people talking and spreading their ideas by word of mouth and print? All of them. What you’re saying is the equivalent of saying that governments should tell the people of the world what they can and cannot say because their words can cause harm and death. The medium is irrelevant, and in this case, Facebook is just the medium.

through the company’s inability or unwillingness to tackle the problems

Well, they are actually trying, it’s just impossible to do at the scale you are talking about. Literally impossible for anyone. Not just Facebook, NO ONE can successfully do what you are wanting them to do. It’s never been done in the history of the world, and when it’s been tried, it’s ended badly for everyone.

I look at it akin to coal power, sure we get electricity but what about the issues of storing the coal ash

False equivalence. One is about the rights of human beings, the other is about an inanimate object that has no rights.

is the platform worth the harm it causes and my answer is unequivocally NO.

Because coal can be scientifically proven to be harmful to the environment, and there are better ways to generate electricity. Therefore the cost-benefit analysis tips toward no.

But that’s not the case here. Here, the solution you propose is to take rights away from everybody (massive harm) because a platform that has revolutionized communication and allowed people all over the world to communicate at an unprecedented level (massive amount of good) allows people to be people, good and bad. Right, wrong, or indifferent, Facebook has a legal right to exist and run their business as they see fit within the law. Not moderating what people say on their platform is within that law. And that law is the First Amendment.

Anonymous Coward says:

Re: Re: Re: Re:

YouTube has acted: [More updates on our actions related to the safety of minors on YouTube](https://youtube-creators.googleblog.com/2019/02/more-updates-on-our-actions-related-to.html).

If your video features minors, no comments will be allowed, except for a chosen few heavily moderated channels.

So once again, the bad behaviour of a few people brings punishment down on everybody else under the guise of protecting minors.

cpt kangarooski says:

Re: Re:

Read that their solution would be to crowd source moderation.

So instead of removing questionable content through their own failures they will facilitate more dead souls among the masses

Oh, it’s so much worse than that.

It will open the sites using that up to the risk of lawsuits for emotional distress and it will allow unvetted and malicious individuals to manipulate the filtering. (And they will try; think about the weirdos that make those creepy videos for kids on YouTube for whatever reason)

Not.You says:

Youtube

As a parent of a kid just getting to the age where youtube music videos are of interest, I had given some thought to the youtube issue at any rate. Youtube could have two versions, one where all content has been 100% vetted, and then the normal version they have now, and as long as they were explicit as to which version you were browsing then at least people would have the option of being in the guaranteed "safe" zone.

In general though I recognize that the internet is full of morons and trolls and scammers and bigots and assholes of all varieties (as is the world at large) and I parent appropriately. Meaning that mostly up until now it has all been PBSKIDS and Netflix kids without parental supervision. I don’t let my kid entirely loose on the internet just yet although those days are not far off.

I don’t use the facepages so I have no suggestions there, but overall I am inclined to suspect that AI can be made better at this than it is now, and if anyone has the resources to develop an AI that can do a pretty decent job of this it would be google and facebook. Building appropriate options that recognize that AI will never be 100% effective is necessary too I think.

In the end I would rather see too much inappropriate content than too much moderation though if I have a choice between the two. As long as it is clear when I am browsing that I am in a less than fully moderated environment I can deal with it accordingly.

Anonymous Coward says:

Re: Re: Re:3 Youtube

That is a problem with corporations relinquishing freedom to sustain their bottom lines. That is why they cannot be trusted to ensure sovereignty doesn’t take a back seat to greed. They need to be held in check by a government that is governed by sovereign people. Otherwise you get monsters hovering over nations’ welfares ready to strike.

Mike Masnick (profile) says:

Re: Youtube

As a parent of a kid just getting to the age where youtube music videos are of interest, I had given some thought to the youtube issue at any rate. Youtube could have two versions, one where all content has been 100% vetted, and then the normal version they have now, and as long as they were explicit as to which version you were browsing then at least people would have the option of being in the guaranteed "safe" zone.

YouTube does have a YouTube Kids version, which I agree they probably should set up the way you’ve described. So far, it’s more that they use some unknown process to approve channels… but they don’t review each video. It does seem like YouTube Kids would be a lot better if it did involve reviewing the videos that get on there.

While that’s an impossible ask for the fully open YouTube, it does seem much more reasonable for a more limited version, such as YouTube kids, to be a more curated version of YouTube that has all the videos reviewed.

In general though I recognize that the internet is full of morons and trolls and scammers and bigots and assholes of all varieties (as is the world at large) and I parent appropriately. Meaning that mostly up until now it has all been PBSKIDS and Netflix kids without parental supervision. I don’t let my kid entirely loose on the internet just yet although those days are not far off.

Yup. I think the trick here is not to watch over every little thing your kids do, but to train them to have the tools to deal with bad behavior, and to recognize that such bad behavior exists. I’ve actually been surprised and impressed at the training the local schools provide kids here about internet safety. It is reasonable, thoughtful, and not based on moral panics.

Anonymous Coward says:

I think that the best solution is to give people the tools to control their own experience. The problem with that approach is that there are those people out there who think that if they don’t like it, nobody else should be able to see it, and such tools will never satisfy them, because they cannot impose their morals on other people.

Also, moderated platforms, and the software on which to build a moderated platform, already exist, so any individual or group could build their own platform and federate with individuals and groups with a similar outlook. Any church group could set up their own social media site and federate with other churches. Don’t like the big sites? Club together and start building a network of like-minded sites.

Thad (profile) says:

Re: Re:

I think that the best solution is to give people the tools to control their own experience. The problem with that approach is that there are those people out there who think that if they don’t like it, nobody else should be able to see it, and such tools will never satisfy them, because they cannot impose their morals on other people.

That’s not the only problem with it.

I agree with the stance that platforms should give people the tools to control their own experience — hell, I’m browsing this site right now with a script I wrote to block some trolls who were adversely affecting my experience.

But that only gets you so far.

How do you protect against targeted harassment campaigns? Or true threats?

There’s no simple answer to those questions, because to start with, it’s extremely difficult to even agree on a definition of the former, and while the latter may have a legal definition, it varies by jurisdiction. And even assuming the platforms could define those terms in a satisfactory way, there’s still the matter of accurately identifying threats and harassment (telling them apart from jokes, or descriptions of threats and harassment, etc.).

Anonymous Coward says:

Re: Re: Re:

How do you protect against targeted harassment campaigns?

The person being harassed can disengage by not following or friending those people harassing them, just like people do in real life.

Or true threats?

Those along with harassment to the point of stalking are a matter for the police. Expecting the social media sites to deal with them is simply a means of driving the problem underground, and all too often that leads to police involvement after the threats have been realised.

Thad (profile) says:

Re: Re: Re: Re:

The person being harassed can disengage by not following or friending those people harassing them

I don’t use Twitter, but that’s not my understanding of how it works. If a thousand people start directing abuse @ your account, you’re going to see it.

just like people do in real life.

…that’s…a completely baffling statement. People in real life only get harassed by people they friend or follow? What the fuck are you talking about?

Those along with harassment to the point of stalking are a matter for the police.

"Hello, police? Somebody named charizard69 on Twitter said he’s going to murder my family. Can you help me?"

Expecting the social media sites to deal with them is simply a means of driving the problem underground, and all too often that leads to police involvement after the threats have been realised.

But leaving threatening messages up and visible until law enforcement has the time and resources to investigate them doesn’t?

Thad (profile) says:

Re: Re: Re:5 Re:

How, kill their account? That only works if a person wants to use a specific name, otherwise they will soon be back, and with an increased desire to harass you.

I mean, if your reasoning is "a sufficiently dedicated harasser can sign up under another account," then a sufficiently dedicated harasser can follow me to other websites if I quit Twitter, too. Again, your reasoning is absurd; you’re starting at your conclusion and backfilling justifications for it, with no regard toward whether the justification you’re giving now is logically consistent with the one you gave two posts ago.

Thad (profile) says:

Re: Re: Re:7 Re:

If somebody is prepared to track down where you have moved your social media activity to, you need to involve the police

Okay. What crime do I report?

"Hello, officer? Somebody named charizard69 Googled my name, signed up for accounts on every messageboard I use, and every time I post he responds by saying rude things about my mother. There’s a law against that, right?"

So I ask again, what is a social media site meant to do to solve your problem?

That’s not "again", that’s literally the first time you’ve asked that question. Though nice try pretending that I’m the guy not answering questions here. Say, have you answered a single one of mine? For example, the one about what you mean by "The person being harassed can disengage by not following or friending those people harassing them, just like people do in real life"? You never explained that one. People are only harassed by people they follow or friend in real life? Huh?

What can moderators do to solve problems? They can moderate. Respond to reports of abuse; investigate them; temporarily or permanently ban the associated accounts and IPs if appropriate.

Now, that’s a simple response that hides how complex the issue actually is. What is abuse? When is it appropriate to ban someone? Those are, as I noted upthread, difficult questions, and, especially on a larger network, it’s impossible to answer them to everyone’s satisfaction.

But your "solution" — never moderate anything, by anyone, at any time or for any reason — is juvenile and reductive. And I think you know that, because you keep changing your justifications and moving your goalposts. Your argument is absurd; it’s a variation on the old "you can’t have 100% success, so you shouldn’t even try" routine. Plus a healthy dose of victim-blaming, and a deep misunderstanding of what happens when people report online harassment to the police.

You’re arguing in bad faith, and you’re wasting my time. You can keep yammering if you want, but I’m done here.

Anonymous Coward says:

Re: Re: Re:

I’m browsing this site right now with a script I wrote to block some trolls who were adversely affecting my experience.

Out of technical curiosity, how do you do that? And if I’m asking a dumb question, please excuse me. As someone who in the past dabbled with coding and HTML, and who currently works in IT, this intrigues me.

I mean, I get how adblockers work, is this similar? A plugin in your browser running a script that searches for the name of said trolls, identifies the div or frame the text is found in and blocks that div or frame from loading in your browser? It just doesn’t make sense to me since instead of being served from a third party location, the comments are more or less embedded in this page and thus a part of the page itself. How can a script block that?

Thad (profile) says:

Re: Re: Re: Re:

I mean, I get how adblockers work, is this similar? A plugin in your browser running a script that searches for the name of said trolls, identifies the div or frame the text is found in and blocks that div or frame from loading in your browser?

More or less, though it’s not a plugin itself; it’s a userscript that you can use through another plugin like Greasemonkey or Tampermonkey. You can take a look at the source code if you’d like; it’s on my website. (Used to link it from every post I wrote, but that seems to have broken at some point during the recent site update, and since nobody but me ever used the damn thing anyway, I wasn’t really sweating it.)

At any rate, it can block comments from specific usernames; I’ve also added optional whitelist functionality, and the ability to hide replies to hidden posts.

It just doesn’t make sense to me since instead of being served from a third party location, the comments are more or less embedded in this page and thus a part of the page itself. How can a script block that?

Technically it doesn’t really block the comments in question, it just hides them. The page loads; all the content on it loads; then the script runs, examines the DOM, and removes content based on specified criteria.
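
A minimal sketch of that kind of userscript might look something like the following. The class names, selectors, and usernames here are made-up placeholders for illustration, not the actual markup of this site or the actual script:

```typescript
// ==UserScript==
// @name         Hide comments from listed users (sketch)
// @match        https://www.techdirt.com/*
// @grant        none
// ==/UserScript==

// Sketch of the approach described above: after the page loads, walk the DOM
// and hide any comment whose author name is on a blocklist. The selectors and
// usernames below are assumptions for illustration only.

const blockedUsers: string[] = ["example_troll_1", "example_troll_2"]; // hypothetical names

function hideBlockedComments(): void {
  // Assume each comment is a <div class="comment"> containing the author's
  // name in a child element like <span class="comment-author">.
  const comments = document.querySelectorAll<HTMLElement>("div.comment");
  comments.forEach((comment) => {
    const author = comment.querySelector<HTMLElement>(".comment-author");
    if (author && blockedUsers.includes(author.textContent?.trim() ?? "")) {
      // Hide rather than delete, so nothing else on the page breaks.
      comment.style.display = "none";
    }
  });
}

// Run once the page (and its comments) have loaded.
window.addEventListener("load", hideBlockedComments);
```

A whitelist, or the "hide replies to hidden posts" option, would just be additional filtering rules applied at the same step.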

Anonymous Coward says:

It’s not hard to understand the toll that viewing the worst of the worst trash humans can come up with, sometimes posted live, takes on a person, if you look at the toll similar exposure takes on police and combat veterans. There is a mind shift that takes place. You must fight violence with violence, and the governments know this very well. They multiply force times 100 domestically, and in war multiply force times 1000. People are never the same after conflict. If you survive it, it will be hard to remove it from your subconscious mind. Good luck.

Uriel-238 (profile) says:

Overheard on a Facebook hangar bay...

Moff Jerjerrod: Lord Vader, this is an unexpected pleasure. We are honored by your presence…
Darth Vader: You may dispense with the pleasantries, Commander. I’m here to put you back on schedule.
Moff Jerjerrod: I assure you, Lord Vader. My men are working as fast as they can.
Darth Vader: Perhaps I can find new ways to motivate them.
Moff Jerjerrod: I tell you that this station will be operational as planned.
Darth Vader: The Emperor does not share your optimistic appraisal of the situation.
Moff Jerjerrod: But he asks the impossible. I need more men.
Darth Vader: Then perhaps you can tell him when he arrives.
Moff Jerjerrod: …The Emperor’s coming here?
Darth Vader: That is correct, Commander. And, he is most displeased with your apparent lack of progress.
Moff Jerjerrod: We shall double our efforts.
Darth Vader: I hope so, Commander, for your sake. The Emperor is not as forgiving as I am.

Anonymous Coward says:

Re: Re:

Human moderation is quite possible

Please explain how you plan to moderate 300 hours of video uploaded to Youtube every minute. That’s 432,000 hours of video uploaded every day. You would need, at a minimum, 18,000 people watching video 24/7 with no breaks. No bathroom breaks, food breaks, sleeping breaks, doing nothing but watching video. Oh, and definitely not actually taking the time to flag or approve the content, since that takes time during which you wouldn’t be watching video.

Now, say you want to hire enough people to review that content in 8 hour shifts. If I’ve done my math right, that’s 54,000 people that you would need doing nothing but watching video. Again, no breaks, just straight video watching.

Now add in lunch breaks, smoke breaks, time to actually flag or approve those videos, and do the day-to-day administrative tasks every company requires of its employees, such as checking email, clocking time, etc… You’ve just increased the amount of people you would need by likely another 10 – 20 thousand.

So far we’ve only been talking about the people needed to actually watch the videos, but now you’ve got somewhere in the neighborhood of 70,000 employees. Who manages them? Now you need team leads, managers, supervisors, additional IT people to support them, HR people. At this point you’re probably looking at far more than 100,000 people.
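
For anyone who wants to check the arithmetic, here is the same back-of-the-envelope calculation as a short script; the upload rate and shift lengths are just this comment’s rough assumptions, not official figures:

```typescript
// Back-of-the-envelope version of the staffing arithmetic above.
// Inputs mirror this comment's assumptions; nothing here is official data.

const hoursUploadedPerMinute = 300;                           // assumed YouTube upload rate
const hoursUploadedPerDay = hoursUploadedPerMinute * 60 * 24; // 432,000 hours per day

// People needed if everyone watched video 24/7 with no breaks:
const roundTheClockViewers = hoursUploadedPerDay / 24;        // 18,000

// People needed if each person watches video for an 8-hour shift:
const eightHourShiftViewers = hoursUploadedPerDay / 8;        // 54,000

console.log({ hoursUploadedPerDay, roundTheClockViewers, eightHourShiftViewers });
```

None of the overhead (breaks, management, support staff) is modeled here; it only reproduces the raw viewing-hours math.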

But wait, there’s more: the amount of video getting uploaded is probably only going to increase for the foreseeable future (yes, at some point it will hit a level-off point, but who knows where that is), so you’re going to have to hire EVEN MORE people to watch videos, and even more people to manage and support them.

And none of this takes into account the fact that humans WILL STILL GET IT WRONG.

If your definition of "quite possible" is so horrendously expensive and a logistics and management nightmare that it really isn’t all that possible because they are still going to get it wrong, then yes, you are "quite correct". And by that I mean you’re insane and don’t understand what you are talking about because no, it’s not possible.

too expensive for a business model built on using bots to filter content.

How does that make any sense? If I have to pay people to do what I could get a bot to do for free, how is using bots more expensive? Ignoring the fact that bots are really bad at content moderation and context.

Anonymous Coward says:

Re: Re: Re: Re:

False equivalence. The auto industry is not a platform for people to communicate and exercise their First Amendment rights to free speech on.

It’s also "too expensive" to have all humans answer customer support lines for major businesses, and bots aren’t nearly as good at it as humans, given the amount of complaints about the automated answering systems. Yet companies do that too.

Your argument is invalid.

Uriel-238 (profile) says:

Re: Re: Re:3 Child porn

That’s one we’re ultimately going to have to concede, given that the filtering software used by Google and YouTube to filter porn (or child porn) is extremely susceptible to adversarial data.

Child porn already leaks through (though for the moment Bing, rather than Google, has the reputation of being the go-to image search site for child porn).

Fortunately we’re in an era in which renders of human-shaped three-dimensional models are fast approaching photo-realism, and legalizing those can create a strong incentive for pornographers to use those instead of exploiting actual children in making their porn.

Of course, if we don’t decriminalize digitally rendered child porn (because child porn is gross and toxic to political careers) then digital kids will continue to be just as illegal as photos of real kids, and the latter will stay way easier to produce.

Either way, your child porn (or your war gore or terrorist manifesto or bomb-building plan) can be superimposed with a transparent panda-mesh so that Google thinks it’s looking at a panda even though human eyes see something completely different. Sure, Google can screen for the panda mesh, but then the pornographers will switch to a giraffe mesh, and there will be new moles to whack each week.

Anonymous Coward says:

Re: Re: Re:3 Re:

Most child porn is not posted to public sites because that will bring the police down on the poster’s head in a hurry. There is some content published which some consider child porn, but which is just parents sharing videos and pictures of kids being kids. The intent is not to arouse people, even if some people find the images arousing, so why should that be a reason to block content with innocent intent and purpose?

Anonymous Coward says:

Re: Re: Re:

Double your estimates at least, as the last figure I can find with a quick search is 500 hours a minute in November 2015. Also, your 54,000 figure is just hours of video per day divided by 8. A better base estimate is hours per day times 7 divided by the weekly hours of an employee, and for a 40-hour week that equates to 75,600 people.

Now let’s call it 600 hours of video a minute, with people working 40-hour weeks, and then ignoring holiday and sickness cover, you need 151,200 people just to watch every video. Add in holiday and sickness cover, management structures, and administrative staff for the personnel department, plus legal experts to deal with edge cases, and 200 thousand would not be too many. And all that is before adding enough people in the customer service department to deal with user complaints and challenges.
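
Written out as a formula, with the upload rates and the 40-hour week again being rough assumptions rather than measured figures:

```typescript
// Refinement used in this comment: a reviewer covers only `weeklyHours` of the
// 24 * 7 = 168 hours in a week, so divide weekly upload hours by weekly work hours.
function reviewersNeeded(hoursPerMinute: number, weeklyHours = 40): number {
  const hoursPerDay = hoursPerMinute * 60 * 24;
  return (hoursPerDay * 7) / weeklyHours;
}

console.log(reviewersNeeded(300)); // 75,600: the corrected base figure above
console.log(reviewersNeeded(500)); // 126,000: at the November 2015 upload rate
console.log(reviewersNeeded(600)); // 151,200: before holidays, sickness cover, and management
```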

Anonymous Coward says:

Re: Re: Re:4 Re:

Governments are not debating this. They are stating this is how they want things to be and lalalalala they aren’t willing to listen to reason and reality. That’s not reality and they are literally tilting at windmills by continuing to pursue this magical form of moderation that will solve all the world’s online and offline problems.

There is no debate. It’s just a bunch of people who are technologically illiterate running around like chickens with their heads cut off doing things just to do things.

ECA (profile) says:

Humans the ultimate assholes...

Ever wonder why they use Automation for this??
You are an editor/scanner/… and you see all this BS floating along your computer that you have to evaluate.
Then you get this GREAT idea…take the Emails and send a strange little notice..
"WOW, we have noticed what you are watching and doing, wouldnt your BOSS/SPOUSE/WORKER/everyone love to know what you are watching…and what your left hand is doing.."
"Pay us this amount and we wont send this info to Everyone on your email list, and in your city"

Yep, I got one of those letters…

dontwant2getfired (profile) says:

Do you know that these secondary content moderators are not given paid holidays? And if they work on federal holidays they are only paid 20% extra, meaning if a person is getting paid 10 an hour, the extra 20% is only 2, so on federal holidays that are observed by accenture/cognizant, a content moderator only gets paid 12 an hour. Do you know that the people who work content moderation jobs are having lots of health issues, from sitting down for long periods and eating junk food (which is free), and are not allowed to express themselves even though the hiring company said "be your true self"? Do you know that when the content moderators initially started the training, they were told that if they are not comfy with the contents, they can request to change, and now when they do, they are told to either suck it up or resign? Do you know that content moderation jobs are jobs for unskilled workers who do not have a college degree or simply have no other way to get a job but this one?
