Eric Schmidt Suggests Building A 'Spell Checker' For Online Harassment And Other Bad Things Online

from the good-luck-with-that dept

Google’s executive chairperson Eric Schmidt has an opinion piece in the NY Times, in which he advocates partly for an internet that is more widely available and enabling greater freedom of expression… but also one where there are “spell-checker” like tools to identify bad stuff online like harassment and ISIS videos:

Authoritarian governments tell their citizens that censorship is necessary for stability. It’s our responsibility to demonstrate that stability and free expression go hand in hand. We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias. We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice. Without this type of leadership from government, from citizens, from tech companies, the Internet could become a vehicle for further disaggregation of poorly built societies, and the empowerment of the wrong people, and the wrong voices.

This is one of those “sounds good when you say it, but what does it really mean” kind of statements. In some ways, you could argue that his statement is almost self-contradictory. We should make it easier to see news and information from another country’s point of view… unless that point of view is one we’ve declared to be terrorists. As easy as it is to agree with that general sentiment — I’d prefer a world without ISIS and other terrorist groups, certainly — it leaves open the possibility for widespread abuse. Don’t like a particular group or country? Just declare them terrorists, and shooop down they go through the internet memory hole.

And this is the problem with these kinds of feel-good suggestions. So much depends on the idea that there is some objective standard for “good” content that we should all share and “bad” content that is evil and should be taken down. Where you draw that line can be quite different for nearly everyone, and if you’re in a position of serious power, drawing that line can result in dangerous abuses of power.

That’s why I still think a much better solution is to separate out the layers a bit. A few months ago, I talked about the importance of protocols instead of platforms. Separate out the content from the platforms, and then let many others create tools that can filter that content in different ways. Then each individual can decide for themselves which tools they want to use to create their own internet experience. Someone could create that kind of “anti-harassment/anti-terrorist” filter tool, and people who want to use it would be free to do so, but it wouldn’t impact the experience of others. Where things get tricky is that, as the internet gets more centralized, the platforms are also in charge of the filters. When that happens there are inevitable mistakes and abuses, leading to censorship and voices being silenced.

Companies: google


Comments on “Eric Schmidt Suggests Building A 'Spell Checker' For Online Harassment And Other Bad Things Online”

53 Comments
Anonymous Coward says:

With the rise of crybullies, I am extraordinarily wary of efforts to filter online harassment. The bullies accuse others of harassing them in order to have those people censored, and what counts as harassment is almost never defined or questioned.

Many crybullies define political disagreement as harassment and have successfully censored people based on this premise. This is what terrifies me the most.

Anonymous Coward says:

I agree with Mike that an automated tool is too prone to censorship. What I would suggest, however, is a community-driven approach. Any site can put whatever it wants in its TOS, so it’s not really censorship by the US Government, but censorship at the host level, which is a much better idea. So I would encourage Google, Facebook, et al, to have a community flag system, in which the users can flag things as inappropriate. I know YouTube has a system like this already, so just encourage its use.

Ed Allen says:

Eric Schmidt thinks calling censorship by another name makes it desirable

Just like China…


https://www.techdirt.com/articles/20111020/03291216428/china-great-firewall-isnt-censorship-its-safeguarding-public.shtml

Hiding ideas you don’t like does not influence the holders of those ideas.

Refusing to store the text a user types is censorship by whatever name you use.

Although maybe he is hoping to create more government jobs…


https://www.techdirt.com/articles/20060403/0216237.shtml

Has to employ people. No machine would know that:
bomb
b*o+m=b
obzo
are all the same without taking a large amount of time to scan all text. And that is without even getting to encryption.
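The commenter’s point about obfuscated spellings can be sketched in a few lines of Python: stripping punctuation and trying a ROT13 decoding collapses all three forms above to the same word. The helper names here are illustrative, not from any real filter, and a real obfuscator has endlessly more tricks than these two.

```python
import codecs
import re

def normalize(token: str) -> str:
    """Strip everything but letters and lowercase, so 'b*o+m=b' -> 'bomb'."""
    return re.sub(r"[^a-z]", "", token.lower())

def candidates(token: str) -> set:
    """Forms a naive filter would have to consider: the stripped token
    itself plus its ROT13 decoding (which turns 'obzo' back into 'bomb')."""
    stripped = normalize(token)
    return {stripped, codecs.decode(stripped, "rot13")}

# All three spellings from the comment collapse to the same word:
for form in ("bomb", "b*o+m=b", "obzo"):
    assert "bomb" in candidates(form)
```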

Good luck burning all those computer books with encryption algorithms inside.

Then, of course, we have book codes. Suppose we have a group of 250 ebooks whose order we agree on; to anybody without our list, the ordering looks like a random choice.

Now I precede each message with either a DNS name or an IP address, like 192.168.17.45 or http://www.ajax.com, which identifies the sequence of books joined for this message, and the message consists of comma-separated word counts from the joined books. As long as our lists are secret, so are our messages.

Mike and I will have a different list, same 250 books, different order.

That way Eric will not know what we are saying about him.
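A toy version of the book code described above, with short strings standing in for the 250 ebooks (the book texts and the example ordering are purely illustrative):

```python
# Toy book code. SECRET_BOOKS stands in for the shared ebooks; the
# shared secret is the *order* in which they are joined. The ciphertext
# is a comma-separated list of word positions into the joined text, so
# without the secret ordering the numbers are meaningless.
SECRET_BOOKS = [
    "the quick brown fox jumps over the lazy dog",
    "we hold these truths to be self evident",
    "call me ishmael some years ago never mind how long",
]

def corpus(order):
    """Join the books in the agreed secret order, as a list of words."""
    words = []
    for i in order:
        words.extend(SECRET_BOOKS[i].split())
    return words

def encode(message, order):
    words = corpus(order)
    return ",".join(str(words.index(w)) for w in message.split())

def decode(ciphertext, order):
    words = corpus(order)
    return " ".join(words[int(i)] for i in ciphertext.split(","))

# The prefix (an IP or DNS name, in the comment) would name this ordering:
order = [2, 0, 1]
ct = encode("never mind the lazy dog", order)
assert decode(ct, order) == "never mind the lazy dog"
```

Anyone holding the same books in a different order decodes the same numbers into gibberish, which is the whole point.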

Are they going to ban Excel spreadsheets too?

Ignorant Assholes!

That reminds me: “Definition of a hipcrime, you committed one when you picked up this book.
Keep it up, it’s your only hope.”
“Remember you are an ignorant idiot.”
Stand on Zanzibar, John Brunner

Anonymous Coward says:

>the empowerment of the wrong people, and the wrong voices.

As much as I don’t want ISIS propaganda shoved in my face, I don’t want Mr. Schmidt dictating what is right and wrong even more.

Besides, the MSM has done a better job distributing ISIS propaganda than the group itself. You can’t turn on a TV without seeing their black flag being waved and their latest goings-on being announced at the beginning of each nightly newscast.

jilocasin (profile) says:

History is written by the victors...

The expressions;

History is written by the victors.

and

One man’s terrorist is another man’s freedom fighter.

both come to mind. Speech you don’t agree with becomes terrorist propaganda while speech you do is the voice of the resistance.

Which is just a long way of saying: even if you get it, it will never be enough of what you want it to be.

To this day there are probably some folks over in England still miffed about those bloody colonial terrorists in that breakaway province now known as the United States of America.

Anonymous Coward says:

Eric Schmidt's Plan to Destroy the First Amendment

Earth to Eric (_my_ censorship is the good kind, because I’m a _good_ guy, from a company that _does no evil_) Schmidt:

We heard _exactly_ the same rhetoric from Hitler in the 1920s and 1930s.

Just how many micro-seconds do you think will elapse between the time that Schmidt’s hate-checker is installed before it becomes _required_ for all university communications (with “hate” redefined as “microaggressions”, of course) and soon thereafter, _all_ public communications ?

“spell-checkers, but for hate and harassment”

“masks the need for *common values* and *strong leadership*” (** are dog-whistles)

The only First Amendment worth having is the one that protects hateful/obnoxious/irritating/banal/… speech;
the First Amendment must do its best work when it’s hard,
not just when it’s easy.

Eric must have missed that class at Princeton.

Anonymous Coward says:

Re: Re: Eric Schmidt's Plan to Destroy the First Amendment

I don’t get people who say “(company) has the right to do what they want”.

No shit. No one is arguing about whether they have the right to do something. That’s a non sequitur that tries to change the argument.

People are saying they don’t want Google to be in the position to decide what is RightThink or WrongThink. These kinds of discussion help individuals decide how to best lobby Google for change.

Anonymous Coward says:

Re: Re: Re: Eric Schmidt's Plan to Destroy the First Amendment

Google has a first amendment right

No one is arguing if they have the right

I thought the Bernie Sanders crowd argues: Alphabet Inc., the multinational conglomerate, has no right to distribute Hillary: The Movie.

Isn’t that like the Sanders core campaign platform or something? I gotta admit, I haven’t been paying a whole lot of attention to them—I know Sanders is no Elizabeth Gurley Flynn, and that’s about all I need to know about the Sanders brand of soi-disant “socialism”.


Anonymous Coward says:

Re: Re: Eric Schmidt's Plan to Destroy the First Amendment

“Google has a first amendment right to regulate its own speech in its products, just as Techdirt has the right to moderate its comments.”

True, but irrelevant.

Schmidt wasn’t talking about Google/Alphabet in his NYT OpEd; he has much bigger plans: a position in the Hillarious administration — perhaps as “czar of the Internet red button”.

Anonymous Coward says:

Re: Re: Eric Schmidt's Plan to Destroy the First Amendment

What? … Search results are speech?
This is bullshit, you can’t have it both ways.

The link provided either re-publishes the content of others or points you to the content of others. You cannot pick and choose; you pick one and stick with it. Children like to play this game but adults are supposed to act in a mature manner – lol, wth am I talking about – yeah they do this all the time.

Mike Masnick (profile) says:

Re: Yes, but....

The lines are hard to draw, but we got section 230 passed back in 1996 on a promise of self-regulation. When Internet companies struggle about how to do that, we can’t object that the entire enterprise is illegitimate.

I don’t think that invalidates concerns about how they try to do that. I still think it makes more sense to put the power in the end users’ hands, rather than the centralized platforms.

Cooper's Loincloth (anonymous coward) says:

Website Idea

Let me be the first to say I totally lack the knowledge and skill required to do this, but I *really* wish someone would.

My idea is for a website which tests the concept of human rationality vs. the all-powerful forces of cognitive dissonance. Its operation is functionally simple: you input a quote (tweet/text of a sound bite/whatever) and it takes key words and switches them with their exact opposites, or optionally with a range of different ones (drop-down menu, natch).

The new resulting phrase can then be examined to determine if the original statement is, in fact, biased/racist/sexist/whatever-ist. People with extremist views (on either side, interestingly enough) will be exposed when their very own words are used against them.

Example from today’s news:

“The only terrorists we need to fear are domestic [white] ‘[Christian]’ [men] with easy access to guns. Vote Bernie,”

would become:

“The only terrorists we need to fear are domestic [black] ‘[Muslim]’ [women] with easy access to guns. Vote Bernie,”

When you read the new version, does it make sense, or does it make your head want to explode? If it sounds reasonable, you might want to consider whether or not the original statement is rational too. If you go the head exploding route, you should examine the original statement to see where the problem lies. If you don’t want to even consider doing that, well, yeah. Cognitive Dissonance.
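The swap-the-key-words test described above can be sketched in a few lines. The OPPOSITES table and example phrase are illustrative; a real tool would offer the drop-down of alternatives the commenter describes.

```python
import re

# Illustrative table of "exact opposite" substitutions.
OPPOSITES = {
    "white": "black", "black": "white",
    "christian": "muslim", "muslim": "christian",
    "men": "women", "women": "men",
}

def flip(text: str) -> str:
    """Swap each key word for its opposite, preserving capitalization."""
    def swap(m):
        word = m.group(0)
        repl = OPPOSITES.get(word.lower(), word)
        return repl.capitalize() if word[0].isupper() else repl
    return re.sub(r"[A-Za-z]+", swap, text)

original = "domestic white Christian men with easy access to guns"
assert flip(original) == "domestic black Muslim women with easy access to guns"
```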

Blackfiredragon13 (profile) says:

Just going to play devil's advocate for once.

The only way I would agree with that is if the guidelines the program had for identifying “harassment” be incredibly narrow and strict, and only in cases where it manages to check off as positive for all those strict and narrow guidelines that it is allowed to automatically takedown the alleged “harassment”.
If the program finds something it’s unsure of and fits a minimum of 90% of the criteria, then it’s handed to an actual human to look at and judge for themselves.
In all other cases, it ignores the content and moves on.
This should avoid the problem of false flags like with the ContentID system on YouTube, where it can flag a video it thinks contains just 5 seconds of possibly infringing content. By which I mean as few false flags as possible while still satisfying those who ask for such a thing.
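The three-way rule proposed above (automatic takedown only on a perfect match against the narrow criteria, human review at 90% or better, ignore everything else) can be sketched as:

```python
def triage(matched: int, total: int) -> str:
    """Apply the commenter's rule. The thresholds (100% for takedown,
    90% for human review) are the ones stated in the comment; the
    criteria themselves are left abstract."""
    score = matched / total
    if score == 1.0:
        return "takedown"       # every narrow criterion matched
    if score >= 0.9:
        return "human_review"   # borderline: hand to an actual human
    return "ignore"             # everything else passes through

assert triage(10, 10) == "takedown"
assert triage(9, 10) == "human_review"
assert triage(5, 10) == "ignore"
```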

Anonymous Coward says:

I see the implementation of the ‘hate checker’ to be similar to the squiggly red line underneath misspelled words. Perhaps it’s better compared to the ‘grammar checker’.

The identified phrases could be faded slightly or made smaller – so not censored, but merely identifiable to the reader who may not otherwise recognize the author’s intent.
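A minimal sketch of that “flag, don’t censor” idea, with visible markers standing in for the faded or smaller styling (the phrase list is purely illustrative):

```python
import re

# Illustrative phrase list; a real "hate checker" would be far larger.
FLAGGED = ["you people", "go back to"]

def annotate(text: str) -> str:
    """Wrap flagged phrases in markers rather than removing them, so the
    reader sees the annotation but the author's words stay intact."""
    for phrase in FLAGGED:
        text = re.sub(re.escape(phrase),
                      lambda m: "[flagged: " + m.group(0) + "]",
                      text, flags=re.IGNORECASE)
    return text

assert annotate("Why don't you people calm down?") == \
    "Why don't [flagged: you people] calm down?"
```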

Anonymous Coward says:

Re: Re:

“I see the implementation of the ‘hate checker’ to be similar to the squiggly red line underneath misspelled words.”

I am still resentful of the teacher who, when I was 9 yrs old and had done my best to write a ‘scary story’ (per assignment), docked me a point (out of 10), underlined the word “bloodthirsty” and wrote “See me” underneath it. It seems the word was “too adult”, and there I was, a little kid, thinking I could show how erudite and well-read I was.

“The identified phrases could be faded slightly or made smaller – so not censored, but merely identifiable to the reader who may not otherwise recognize the author’s intent.”

Nope, bold them, underline them and put them in large font, because they say more about the writer than the reader.

Anonymous Coward says:

Time for "Two Minutes of Hate", brought2you by Schmidt

https://en.wikipedia.org/wiki/Two_Minutes_Hate

The Two Minutes Hate, from George Orwell’s novel Nineteen Eighty-Four, is a daily period in which Party members of the society of Oceania must watch a film depicting the Party’s enemies (notably Emmanuel Goldstein and his followers) and express their hatred for them.

Between MSNBC & Fox, we have more like Twenty Four Hours of Hate.

John85851 (profile) says:

Don't censor at all

Yes, terrorist videos could be used to spread the terrorist’s ideas, but if they can see it, we can see it. And if we can see it, maybe we can use it to understand their mindset or even catch or stop them before something happens.

It’s like banning videos of crimes on YouTube. Police can watch those videos, look for clues, and actually solve the crime… especially if the video shows something stupid like the criminal showing his face.

Banning these things only drives them further underground where they’re harder to find.

That One Guy (profile) says:

"Censorship is bad! Except when we do it, then it's great!"

Authoritarian governments tell their citizens that censorship is necessary for stability. It’s our responsibility to demonstrate that stability and free expression go hand in hand.

So censorship is bad, authoritarian governments use it to squash dissent to ‘maintain stability’, but free expression isn’t an enemy of stability, got it.

We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias.

Content should be presented as-is, without a filter to get in the way, got it.

We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment.

… censorship is good now, so long as it’s for ‘bad’ or disruptive stuff?

We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice.

… the government and/or companies should act as middle-men, deciding what should and should not be allowed to be said?

Without this type of leadership from government, from citizens, from tech companies, the Internet could become a vehicle for further disaggregation of poorly built societies, and the empowerment of the wrong people, and the wrong voices.

The government should step in and censor stuff to keep the ‘wrong’ people from speaking? And help the ‘right’ people spread their speech more?

It’s like he had a stroke mid-paragraph and went from ‘censorship is used by authoritarian governments, and is bad’ to ‘censorship is awesome, and should be used all the time’. Either that or he’s not paying the slightest bit of attention to the words coming out of his mouth, and fails to see the blatant contradiction within his own statements.

If individual users want to not ‘listen’ to something, they have that right. Just because someone’s speaking doesn’t mean you have to listen, so I’m all for allowing individuals to choose to block and/or ignore people or groups they don’t care for. What I am not in favor of is for third parties to step in and decide who is and is not allowed to speak, to decide what a person is allowed to hear.
