A ContentID For Online Bullying? What Could Possibly Go Wrong…

from the let's-think-this-through dept

Let me start out by saying that I think online harassment and bullying are a significant problem — though also one that is often misrepresented and distorted. I worry about the very real consequences for those who are bullied, harassed and threatened online: it can often lead to silencing voices that need to be heard, or even cause some people not to participate at all for fear of the resulting bullying. That said, way too frequently, it seems that those who are speaking out about online bullying assume that the best way to deal with it is to push for censorship. This rarely works. Too frequently we see “cyberbullying” being used as a catchall for attacking speech people simply do not like. Even here at Techdirt, people who dislike our viewpoint will frequently claim that we “bullied” someone, merely for pointing out and discussing statements or arguments that we find questionable.

There are no easy answers to the question of how we create spaces where people feel safer to speak their minds — though I think it’s an important goal to strive for. But I fear the seemingly simple idea of “silence those accused of bullying” will have incredibly negative consequences (with almost none of the expected benefits). We already see many attempts to censor speech that people dislike online, with frequent cases of abusive copyright takedown notices or bogus claims of defamation. Giving people an additional tool to silence such speech will be abused widely, creating tremendous damage.

We already see this in the form of ContentID from YouTube. A tool that was created with good intent, to deal with copyright infringement, is all too often used to suppress speech on the site, whether to silence a critic or just through over-aggressive robots.

So, imagine what a total mess it would be if we had a ContentID for online bullying. And yet, it appears that the good folks at SRI are trying to build exactly that. Now, SRI certainly has led the way with many computing advancements, but it’s not clear to me how this solution could possibly do anything other than create new headaches:

But what if you didn’t need humans to identify when online abuse was happening? If a computer was smart enough to spot cyberbullying as it happened, maybe it could be halted faster, without the emotional and financial costs that come with humans doing the job. At SRI International, the Silicon Valley incubator where Apple’s Siri digital assistant was born, researchers believe they’ve developed algorithms that come close to doing just that.

“Social networks are overwhelmed with these kinds of problems, and human curators can’t manage the load,” says Norman Winarsky, president of SRI Ventures. But SRI is developing an artificial intelligence with a deep understanding of how people communicate online that he says can help.

This is certainly going to sound quite appealing to those who push for anti-cyberbullying campaigns. But, at what cost? Again, there are legitimate concerns about people who are being harassed. But one person’s cyberbullying could just be another person’s aggressive debate tactics. Hell, I’d argue that abusing tools like ContentID or filing false defamation claims is a form of “cyberbullying” as well. Thus, it’s quite possible that the same would be true of this new tool, which could be used to “bully” those the algorithm decides are bullying.

Determining copyright infringement is already much more difficult than people imagine — which is why ContentID makes so many errors. You have to take into account context, fair use, de minimis use, parody, etc. That’s not easy for a machine. But at least there are some direct rules about what truly is “copyright infringement.” With “bullying” or “harassment,” there is no clear legal definition to match up to and it’s often very much in the eye of the beholder. As such, any tool that is used to “deal” with cyberbullying is going to create tremendous problems, often just from misunderstandings between multiple people. And that could create a real chilling effect on speech.
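To make the problem concrete, here is a minimal sketch of the kind of context-free scoring an automated “bullying detector” might rely on. This is purely illustrative: the word list, threshold, and sample messages are invented for this example, and it is not a description of SRI’s actual system. But it shows how such a filter can flag aggressive debate and quoted fiction while missing politely worded abuse:

```python
# Purely illustrative sketch of a naive, context-free "bullying detector".
# The word list, threshold, and sample messages are invented assumptions;
# this is NOT a description of SRI's system or any real product.
import re

HOSTILE_WORDS = {"idiot", "stupid", "loser", "pathetic", "worthless", "hate"}
FLAG_THRESHOLD = 2  # messages containing 2+ listed words get "taken down"

def hostile_score(message: str) -> int:
    """Count how many words from the hostile list appear in the message."""
    tokens = re.findall(r"[a-z']+", message.lower())
    return sum(1 for token in tokens if token in HOSTILE_WORDS)

def is_flagged(message: str) -> bool:
    """A context-free decision: no sense of quotation, sarcasm, or debate."""
    return hostile_score(message) >= FLAG_THRESHOLD

if __name__ == "__main__":
    samples = [
        # Aggressive debate, not harassment: flagged anyway (false positive)
        "Only an idiot would cite that study; the methodology is stupid.",
        # A quoted movie insult: flagged, because the quoting context is lost
        '"You pathetic loser!" is my favorite line from that film.',
        # Targeted abuse phrased politely: sails right through (false negative)
        "Everyone at school agrees nobody would miss you if you left.",
    ]
    for msg in samples:
        print(f"flagged={is_flagged(msg)} score={hostile_score(msg)} :: {msg}")
```

Even this toy version illustrates the point above: without a clear legal definition to match against, the system is reduced to pattern matching on surface features, and the errors land on exactly the kind of speech (heated argument, quotation, satire) that is most in the eye of the beholder.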

Perhaps instead of focusing so much technical know-how on “detecting” and trying to “block” cyberbullying, we should be spending more time looking for ways to positively reinforce good behavior online. We’ve built up this belief that the only way to encourage good behavior online is to punish bad behavior. But we’ve got enough evidence at this point showing how rarely this actually works, that it seems like perhaps it’s time for a different approach. And a “ContentID for harassment” seems unlikely to help.

Companies: sri


Comments on “A ContentID For Online Bullying? What Could Possibly Go Wrong…”

19 Comments
Graham J (profile) says:

Think of the children

Agreed. Any good parent knows that positive reinforcement should be employed much more than threats and punishment when teaching children (and animals, for that matter) good behaviour.

In a very real sense people are being childish when they lash out on the internet – rationality goes out the window – so better to use these techniques for them too.

lars626 (profile) says:

On the other hand ...

All these content-ID schemes sound great. But like many fixes, especially software-based ones, they don’t create a complete solution. They work to shut down the bad, sometimes, and take a lot of the good with them.

When they do take down the wrong things there Must be a way to disagree and override the take down. The current YouTube system is not adequate. There should also be a way for the operators to lock out repeat offenders, including an appeal process if they disagree.

Any system that has an automated take down should have a shield setting. This would prevent content that has been determined to be ‘acceptable’ from being taken down automatically. This would cover fair use or repeated bogus take downs on content that someone finds disagreeable.

This is not a problem that will be quickly solved, if ever. What I don’t understand is why Google has not made improvements in YouTube. They must not be making any money off the thing and have a tight budget.

Michael (profile) says:

Re: On the other hand ...

When they do take down the wrong things there Must be a way to disagree and override the take down.

And that right there is the backward thinking that is the problem. People think it is acceptable to block or take down content that isn’t illegal as long as there is a way to get it back.

That is NOT OK.

In addition, anyone claiming to have a system that supposedly can identify illegal content is simply lying. Much of this content cannot be identified as illegal until there has actually been a court ruling. Anything that takes the content down and then allows it to be restored after a ruling is effectively locking people in prison until there is a trial to determine if they are guilty.

Groaker (profile) says:

An introductory course in computability might be useful for all these la-la-land ideas, though I have little reason to believe that it would be either taken or understood.

Over the past two and a half centuries, nearly a million have died for the principles of the Constitution. Are we going to throw that away on some elected official who has an “idea” (poor thing, it must be lonely)?

Anonymous Coward says:

I can see it now: someone quotes a movie, TV show, etc. and is arrested for harassment. Ex:
“You lousy cork-soakers. You have violated my farging rights. Dis somanumbatching country was founded so that the liberties of common patriotic citizens like me could not be taken away by a bunch of fargin iceholes… like yourselves.” — Johnny Dangerously (1984)

Anonymous Coward says:

what?

There are no easy answers to the question of how we create spaces where people feel safer to speak their minds

There actually are some pretty easy answers! Anonymity is one of them. And additionally, there is still a limit on how safe anyone can be anyway. You could die sitting right where you are, from a home invasion by criminals or some hot SWATting brought to you by a corrupt police dept near you!

The founders knew what was going on: stand up for what you believe in or just shut up and lose your voice. Anyone at any time could become unreasonably hostile to anything you say, because that is just life.

And as long as we expect everyone else, like corporations and the government, to keep us safe, we become nothing more than kept hamsters worthy of no safety at all.

Anonymous Coward says:

It is a much harder, more fraught problem than ContentID, simple voice recognition, driverless cars, etc.

It requires a mature intellect to identify “bullying”, and even then, it will very often be contentious.

NLP (Natural Language Processing) currently seems to have the “intelligence” of about a 5-year-old.

I can’t see this going anywhere.

That Anonymous Coward (profile) says:

Oh sweet baby FSM.

Quick, decide that your patent is the solution to all of the world’s ills and cash in, cause a bunch of problems, and walk away.

If one were to look at a majority of my online interactions with that Adam Steinbaugh fellow without the correct frame of reference, I’m a huge bully picking on poor Adam. Except he has tools to not see what I say, doesn’t have to reply, and he is pretty much in on the joke.

I’ve been accused, more than once, of bullying lawyers online. Overwrought filings with courts accuse me of mental illness, because I think they are a joke.

We have been making the world too soft and fluffy to “protect” the children. We’ve seen stories where the media loves to play up the “bullying” aspect… but saying a child looked fat once isn’t really bullying.

Humans LOVE to stick everything into clearly labeled boxes, and we’ll expand what the label covers to keep it easy to sort. So an online shouting match between the old gf and the new gf (and she dated him first for 2 whole weeks) is considered the same as a child who is the target of a malicious group who bury her in negative attention.

Once upon a time, the parent of the aggrieved would call the other kid’s parents and hash it out… now it’s a matter for the authorities. Some parents are completely clueless about how their kid behaves online, because they assume the world will watch out for them and keep them safe (and that their kids aren’t being the evil bastards they can be).

Perhaps we should spend much less time looking for a technical solution to a failure to raise kids. Many parents are failing their kids, because being a parent isn’t something we require them to do. I’m sorry my kid yelled at your kid, but you understand your kid hit him first. More often than not everyone is a special innocent child who did nothing to incite what happened… and with no adult to talk to when things spin out of control… it gets worse.

Wendy Cockcroft says:

The germ of the answer is in the last paragraph: how can we encourage people to behave better online of their own free will?

Effective moderation requires a willingness to enforce it; I’ve been in situations where the theory and practice differed wildly: people don’t like laying down the banhammer on people they are friendly with or intimidated by.

It’s true that you can’t legislate better attitudes but I’m very glad to see nuance in this article and hope that better minds than mine can come up with a more effective solution than “Censorship,” “Sod off, then,” or “Suck it up,” which is what we have now.

John85851 (profile) says:

What is bullying?

I think this is the question that needs to be answered first. Like the article says, one person’s “bullying” could be someone else’s aggressive debating.

And what happens if the “bullied” person goes along with the aggressive debate, but the automated system flags the comments as bullying? In other words, it doesn’t account for thick-skinned people.
Or what if you and I don’t think a comment is a bullying comment, but the automated system does? So now the system is being too thin-skinned.

So like one of the commenters says, the researchers should go back to their labs until the “close enough” system can take every situation into account.
