Unintended Consequences Of EU's New Internet Privacy Rules: Facebook Won't Use AI To Catch Suicidal Users

from the beware-the-innovations-you-kill dept

We’ve written a few times about the GDPR — the EU’s General Data Protection Regulation — which was approved two years ago and is set to go into force on May 25th of this year. There are many things in there that are good to see — in large part improving transparency around what some companies do with all your data, and giving end users some more control over that data. Indeed, we’re curious to see how the inevitable lawsuits play out and whether the law will lead companies to be more considerate in how they handle data.

However, we’ve also noted, repeatedly, our concerns about the wider impact of the GDPR, which appears to go way too far in some areas, with decisions that may have made sense in a vacuum but that could have massive unintended consequences. We’ve already discussed how the GDPR’s codification of the “Right to be Forgotten” is likely to lead to mass censorship in the EU (and possibly around the globe). That fear remains.

But it’s also becoming clear that some potentially useful innovation may not be able to work under the GDPR. A recent NY Times article detailing how various big tech companies are preparing for the GDPR has a throwaway paragraph in the middle that highlights an example of this potential overreach. Specifically, Facebook is using AI to try to detect whether someone is planning to harm themselves… but it won’t launch that feature in the EU out of fear that it would breach the GDPR as it pertains to “medical” information. Really.

Last November, for instance, the company unveiled a program that uses artificial intelligence to monitor Facebook users for signs of self-harm. But it did not open the program to users in Europe, where the company would have had to ask people for permission to access sensitive health data, including about their mental state.

Now… you can argue that this is actually a good thing. Maybe we don’t want a company like Facebook delving into our mental states. You can probably make a strong case for that. But… there’s also something to the idea of preventing someone who may harm or kill themselves from doing so. And that’s something that feels like it was not considered much by the drafters of the GDPR. How do you balance these kinds of questions, where there are certain innovations that most people probably want, and which could be incredibly helpful (indeed, potentially saving lives), but which don’t fit with how the GDPR is designed to “protect” data privacy? Is data protection in this context more important than the life of someone who is suicidal? These are not easy calls, but it’s not at all clear that the drafters of the GDPR even took these tradeoff questions into consideration — and that should worry those of us who are excited about potential innovations to improve our lives, and who worry about what may never see the light of day because of these rules.

That’s not to say that companies should be free to do whatever they want. There are, obviously, LOTS of reasons to be concerned and worried about just how much data some large companies are collecting on everyone. But it frequently feels like people are acting as if any data collection is bad, and thus needs to be blocked or stopped, without taking the time to recognize just what kind of innovations we may lose.

Companies: facebook


Comments on “Unintended Consequences Of EU's New Internet Privacy Rules: Facebook Won't Use AI To Catch Suicidal Users”

35 Comments
Anonymous Coward says:

Re: Re: Responsibility

There are two reasons why Facebook would be trying to stop people from killing themselves:

1.) To keep their engagement statistics up.

2.) Because they realized the psychological experiments they performed on hundreds of thousands of people without consent led some of their users to suicide.

PaulT (profile) says:

Re: Re: Re: Responsibility

3) Because they’re big and people will attack them for anything even tangentially related to their platform and

4) Even the grandstanding politicians who use them as a scapegoat for everything wrong with society will struggle to use “they’re trying to stop teenagers killing themselves” as effective ammunition.

People are currently trying to attack social media platforms for everything from people being gullible enough to base their votes on outlandishly ridiculous fictions masquerading as news to people stupid enough to be literally eating poisonous chemicals because someone else dared them to. You don’t have to come up with any additional conspiracy theories to explain why FB would think that being visibly active in preventing teen suicide might be a good idea.

Anonymous Coward says:

"That's not to say that companies should be free to do whatever they want." -- OMG! Mr Corporatism Uber Alles agrees with me! My work here is almost done...

There he is, obliquely recognizing the rights of “natural” persons under common law, which can’t be repeated too often since it’s becoming a rare notion. — Those terms seem to annoy some ACs (but clearly fanboys) here.

But you’re moaning about a sheerly notional non-loss — because it hasn’t been done, or even been possible, except in the last few years out of the 5,000 or so since the Babylonians invented civilization — while the vastly larger up-side for 99.9% of persons is that Facebook won’t be monitoring for those who aren’t slavish nebbishes to report them to gov’t for “re-education”.

Dan says:

"Save the children"

At first I thought, yeah, bad consequence. As I read the article, I kept hearing the “save the children” mantra. So, I changed my mind. If Facebook is the only thing that catches a suicidal person, I doubt there is much that should be done. I mean, what is Facebook going to do? Perform an automated suicide swat on someone’s house?

discordian_eris (profile) says:

Re: "Save the children"

Yes, that is exactly what they will do. The cops will show up and force an involuntary commitment on anyone they feel is a risk to themselves. Then they will have the joy of being forced to take drugs that actually increase the risk of suicide, especially in anyone under the age of 25. This is why ALL SSRIs carry a black box warning about the risk to anyone under 25.

DannyB (profile) says:

Re: "Save the children"

If Facebook is the only thing that catches a suicidal person, perhaps the problem is that they are suicidal precisely because they use Facebook!

You know how every time you read an article about Facebook it seems creepy and makes the hairs on the back of your neck stand up? That’s 200 million years of evolution telling you to RUN, DON’T WALK but RUN away NOW. Except on the Internet.

Anonymous Coward says:

Re: "Save the children"

Exactly, Facebook contacts “emergency services”, which is the police in most places. The police are not trained to handle these things and treat at-risk people like dangerous criminals who must be suppressed. They go into the situation assuming they are potentially facing an armed threat, because FB doesn’t know the details to pass on.
FB isn’t responsible for helping suicidal people, but I applaud the effort because I think it is driven by a genuine desire to do good. I think they need to re-evaluate the best way to help those at risk, including more anonymity for anyone identified as needing help.
I have suffered from depression since around age 12; I have attempted suicide twice and been through a few bouts of cutting. Proper mental healthcare, as in attentive medication management and therapy, is incredibly helpful but sometimes difficult to obtain. But even with proper treatment, situational problems can seem insurmountable; this is when most of us need people to reach out and actively support us. If it’s left up to us to do the reaching out (the well-intentioned ‘call me if you need to talk’ is basically useless), we end up doing self-destructive things to signal the call for help, which might go completely unseen. I would have appreciated getting phone calls, or being contacted online, from anyone who was concerned about my well-being; it wouldn’t have had to be someone I knew, just as long as it was someone who recognized something was wrong and reached out to me to talk. I know how terribly lonely suicidal people feel; even if you’re surrounded by loving family, they don’t always know what is going on inside your head and can misinterpret your attitude to mean you need space when it’s the exact opposite.
I’m rooting for FB to get this right.

discordian_eris (profile) says:

Not Facebook’s Problem

Facebook has zero reason or responsibility to try to prevent suicide, either in the US or in the EU. In the EU, articles 3 and 4 of the Charter of Fundamental Rights make that crystal clear.

Article 3 – Right to integrity of the person

1. Everyone has the right to respect for his or her physical and mental integrity.
2. In the fields of medicine and biology, the following must be respected in particular:
– the free and informed consent of the person concerned, according to the procedures laid down by law,
– the prohibition of eugenic practices, in particular those aiming at the selection of persons,
– the prohibition on making the human body and its parts as such a source of financial gain

Article 4 – Prohibition of torture and inhuman or degrading treatment or punishment
No one shall be subjected to torture or to inhuman or degrading treatment or punishment.

Suicide is a personal decision and the state has no business interfering with it. Both the US and EU make it clear that informed consent is required for any and all medical procedures and interventions. Forcing people to take medications and/or imprisoning them in psych hospitals is a gross violation of human rights. It actually increases the risk of suicidal behaviours. There are no anti-depressants that are safe for anyone under the age of 25, and all SSRIs increase both suicidal ideation and suicide attempts. Since that is the main way suicidal people are treated, it is counter-productive and harmful. Neither Facebook nor the state has the right to try to force anyone to be ‘treated’ for having suicidal thoughts. Just because a person is ‘broken’ doesn’t mean they have no rights. And it sure as hell isn’t the state’s responsibility to force someone to live who chooses not to. Facebook needs to stick to serving ads and stay the fuck out of people’s business.

Anonymous Coward says:

Re: A New GDPR Right

From what I have read of the rules, most of it is relatively benign and light-touch, to an extent codifying logical solutions for the most egregiously sloppy treatment of data.

While there are some real backbone challenges of implementing “right to be forgotten” and “right to access”, the real fear seems to be users using the rights!

Facebook’s problem is more correctly tied to the question: to what extent has any user signed up for Facebook’s healthcare? And what about patient-doctor confidentiality?

Research is all well and good, and none of the GDPR actually prevents it. But when breaking with fundamental principles like professional confidentiality (whether legal, medical or otherwise), you’d better do it through proper channels and in pursuit of something worthwhile. Very few want Equifax-like leaks of such data.

JarHead says:

Now… you can argue that this is actually a good thing. Maybe we don’t want a company like Facebook delving into our mental states.

I’m one of this school of thought. In this particular instance, I’d say the GDPR works as intended, and this is not an unintended consequence. I’m hoping this is exactly what the GDPR drafters intended.

You can probably make a strong case for that. But… there’s also something to the idea of preventing someone who may harm or kill themselves from doing so.

Legalization for busybodies to shove their moral compass onto others? Thanks, but no.

Everybody has the right to self-destruct, limited only by that same right and the well-being of others. Meaning: you want to commit suicide? Fine, go ahead, as long as you don’t injure anyone else. Do it with a knife and go ahead. Kill yourself with a bomb and we have a problem.

Rick O'Shea (profile) says:

unintended consequences is right...

I can visualize the gun lobby slavering over Facebook Ads Manager questions like:

Select individuals with:
☐ suicidal tendencies
☐ low self esteem
☐ actualization anxiety

They may ostensibly be collecting the information to avoid self-harm, but the real question is how wide that information will spread beyond the Chinese walls of the organization. I, for one, wouldn’t trust Facebook to not capitalize on such information. Such is the nature of corporate America.

Anonymous Coward says:

Not sure if even Facebook has thought this through. Let’s see how this can go horribly wrong:

Ex. 1> mistaken identity or arriving at the wrong address, this never happens right? (http://www.post-gazette.com/local/region/2018/01/03/Meadville-federal-lawsuit-wrong-man-Eugene-Wright-police-injected-drugs-Meadville-Medical-Center/stories/201801030163)

Ex. 2> innocent bystanders are never hurt (http://www.miamiherald.com/news/local/crime/article90905442.html)

Ex. 3> Cops and good guys are always on the same page, never a “misunderstanding” between them (http://www.kansas.com/news/local/crime/article192244734.html)

I must be missing something, but I am sure we will not mind a few “broken eggs” for technological progress.

orbitalinsertion (profile) says:

Their AI should not be reading my shit, period.

That’s where the problem starts.

Let FB do a suicide watch? Are you insane? And quite frankly, if they can “do” (for various values of “do”) that, then they can: Catch all the bad guys, identify exactly who is dangerous and who is not, identify exactly what is “bad” speech in every jurisdiction, identify exactly what is fake news, etc.

I’m sorry, but fsck people’s “AI”s and their data farming. Call it “innovation”, because don’t. It’s about as real, useful, and good as the whole fake-ass financial sector, or advertising and marketing.

Anonymous Coward says:

Re: Re:

“Their AI should not be reading my shit, period.”

“Let FB do a suicide watch? Are you insane?”

Hear, hear.

Can you imagine how abusable it is? What happens if FB contacts law enforcement without checking to see if someone is actually in danger? What happens if it turns into some FB equivalent of “swatting”? You’ll either see hacks of people’s accounts, or fake accounts set up to do this sort of thing and tie up law enforcement, depending on how FB’s AI handles this situation.

Anonymous Coward says:

The summary makes two basic assumptions:

1: That Facebook ‘self-harm detection’ actually works.
2: That it will not be abused by Facebook itself, or third parties.

On the first point, we don’t have any idea how this prevention system works. We don’t know its parameters, what data it collects, how it uses it, how it determines “self harm”. We don’t know its success rate or how it could be abused. It’s a black box that automatically decides a case for intervention in someone’s life without their consent.

If you think that Facebook won’t be adding “possible mental health issues” to its vast treasure chest of personal data about their users you’re god damned naive. That’s a good enough reason to prevent it. We have no idea how this data might be abused in the future.

We can assume it will be sold to advertisers, sure, but what about health insurers or employers? What about using it for ‘nudge’ psychology which we know Facebook has experimented with in the past?

Social media as a whole is making a big PR push at the minute, because the damage and abuses it can cause are slowly bubbling to the surface, notwithstanding the massive privacy invasions and reckless profiteering. We can expect to see more of this sort of “all watched over by machines of loving grace” stuff in the future. I suspect a lot of it is just fluff.

Rekrul says:

AI shouldn’t be predicting anything. That’s like arresting someone because an AI predicted that they were going to commit a crime.

AI has come a long way, but it’s still a long way from being reliable. AI can’t even reliably tell spam from non-spam in your email, but people want to trust it to reliably predict when a person is thinking of harming themselves?

What if they make a post about a movie that includes suicide? What if they post a piece of fiction that includes suicide? What if they simply post the wrong words?

PaulT (profile) says:

Re: Re:

“That’s like arresting someone because an AI predicted that they were going to commit a crime.”

No, it’s really not. There are 2 things involved here. One is the AI prediction. The other is the action taken based on the prediction. The problem in your Minority Report example is that the person is arrested before they committed the crime. There’s a wealth of other actions that can be taken based on the prediction that are not problematic in any way. If the reaction was simply to prioritise resources to enable police to catch the guy in the act, the AI prediction would not be a problem in any way.

“What if they make a post about a movie that includes suicide? What if they post a piece of fiction that includes suicide? What if they simply post the wrong words?”

I would hope that the AI is simply flagging the account up for investigation by a human rather than taking action directly. But, given that, surely an AI flagging such things is better than waiting around and hoping that one of the person’s “friends” reports them instead?

Again, the prediction is not a problem, it’s the action taken based on that information. If someone loses his rent money on a prediction about a horse race that turned out to be wrong, it’s the action of betting the whole of his rent that’s the problem, not that the guy he spoke to tried to make a prediction on the outcome of the race.

Rekrul says:

Re: Re: Re:

No, it’s really not. There are 2 things involved here. One is the AI prediction. The other is the action taken based on the prediction. The problem in your Minority Report example is that the person is arrested before they committed the crime. There’s a wealth of other actions that can be taken based on the prediction that are not problematic in any way. If the reaction was simply to prioritise resources to enable police to catch the guy in the act, the AI prediction would not be a problem in any way.

Facebook’s page on this mentions "first responders" and "wellness checks". So in other words, they send police or doctors to check up on the person. I’m too lazy to search right now, but haven’t there been stories right here on Techdirt of "wellness checks" going horribly wrong? I know you can find news reports of such things on YouTube.

And how exactly do these wellness checks work in such cases? Is a simple denial of suicidal thoughts enough to satisfy the police, or does the person also have to submit to a psych evaluation? In other words, are they considered guilty until proven innocent?

I would hope that the AI is simply flagging the account up for investigation by a human rather than taking action directly. But, given that, surely an AI flagging such things is better than waiting around and hoping that one of the person’s "friends" reports them instead?

I’d agree with you, but…

How many times have you seen people go overboard and report jokes or completely innocent things just because they’re afraid of "missing something" and decide to err on the side of caution? Having an AI flag even more posts for them to look at increases the pool of material for them to misinterpret. Preventing suicide is a noble cause, but given the history of people freaking out over jokes and other harmless stuff, what assurance is there that a perfectly happy, well-adjusted person won’t have their life turned upside-down by someone who misinterpreted a joke or sarcastic remark and labeled them possibly suicidal? In an ideal world, they’d be checked on, declared OK and that would be the end of it. However, it’s not an ideal world, and an accusation of suicidal thoughts could lead to very real consequences such as family and friends forever being overly critical of everything they say, gossiping behind their backs, etc.

It’s the same problem as with keyword flagging in the intelligence community. This very site has argued that collecting and going through everything leads to a needle in a haystack scenario. Wouldn’t using AI to flag every post that might be suspicious lead to the same outcome?

PaulT (profile) says:

Re: Re: Re: Re:

“Facebook’s page on this mentions “first responders” and “wellness checks”.”

Yes they do. But that has nothing to do with the scenario you were discussing. In fact, it’s the opposite type of scenario.

Person suspected of being a criminal = you wait until they have committed the crime before you react. Person suspected of being suicidal = you really want to intervene before they do kill themselves. They are extraordinarily different things, which is perhaps why you’re confusing yourself by conflating them.

“haven’t there been stories right here on Techdirt of “wellness checks” going horribly wrong?”

Yes, and the answer to that is “stop giving police military hardware and people on the force itching to use it at any given opportunity” and/or “train officers in how to de-escalate situations without using one of their toys”, not “never tell authorities that someone may be in danger”.

“And how exactly do these wellness checks work in such cases?”

I don’t know. We don’t even know whether human interaction is involved or if they just send automated messages. We don’t then know how reports are dealt with from then on. But that’s a procedural issue that’s unrelated to whether or not Facebook should be providing these leads.

“How many times have you seen people go overboard and report jokes or completely innocent things just because they’re afraid of “missing something” and decide to err on the side of caution? “

How many times have you seen a devastating suicide (or worse – some people don’t only want to take themselves out), only for people to then realise all the warning signs they wish they had acted upon that could easily have saved lives?

I get what you’re saying, but Facebook are doing the right thing by flagging something, even if it doesn’t guarantee accuracy or success. Their other option – do absolutely nothing – only encourages people to blame them for the full tragedy later on.

Anonymous Coward says:

TD is becoming a bad source.

“Maybe we don’t want a company like Facebook delving into our mental states. You can probably make a strong case for that. But…”

This would be the point where you should have stopped, fleshed out that strong case, and at least used it as a proper counterbalance. Frankly, that strong case alone would be much less negligent reporting than this.

AI digging through user data will ruin far more lives than suicide- I’d argue strongly that it will lead to far MORE suicides long term… it’s an uncomfortable area to argue given recent events (Logan Paul); which is exactly why the NYT has chosen that context to frame this manipulative planted story you’ve lapped up and regurgitated with gusto. What- you’re against Facebook AI mining user data? You must be pro-suicide… Nuance-challenged people desperately trying to justify their FB addictions will love this… The NY Times, hosted by Amazon, both major advertisers on Facebook- it’s really not hard to see incentives here; no stupid conspiracies necessary.

I can’t help but think- Does that pay well? Or is it just an exercise in maintaining corporate value by not pissing off potential advertisers or M&A teams? Or are you already hopelessly tangled in the very webs of dark knowledge, parallel construction, extortion and neo-slavery that big data + AI seeks to vastly expand and automate? Maybe that last one’s over the top- maybe you just had a rushed, crappy day… but there’s an odor I sense here and it doesn’t smell right at all; maybe if I point it out you could clean it up. Hope springs eternal.

You’ve reported on both the Snowden leaks and the vault stuff- and then at some point- radio silence on many important topics- like someone had your gonads in a vice and their hand on the crank. Prime example= Intel IME (the ring -3 hardware backdoor that’s not a backdoor- because intent, I guess…) got hacked; major news on every respectable tech site- and TD’s busy pushing this oblivious (like you forgot the very same stories you reported on already) propaganda narrative about FBI/Apple and phone encryption= subtly leading people to the very incorrect conclusion that their cellphones are secure, while advertising for Apple, and giving the FBI the perfect storm of public commentary against backdooring encryption- EXACTLY what they need to push for increased access by established means that have NOTHING to do with encryption (why break the lock when you can just take the key)- ffs NIST was already caught the last time they backdoored encryption with Dual_EC_DRBG… And the ULTIMATE f’ing IRONY is that the worst case scenario you warn about in these sleight-of-hand propaganda phone/encryption pieces was literally playing out in real time with the IME hack! A universal backdoor- on the loose in the wild- That’s not even the first time that’s happened….

Boingboing covered it- f’n Boingboing has somehow become a better site for uncovering ‘tech dirt’ than Techdirt.

I used to love this site, I guess I still do on some level or I wouldn’t bother typing all this- but you guys need to get your shit together- Stop being afraid of coming off as paranoid and just report the damn facts people need to know… or talk to your handlers and make it clear how much collateral damage they’re causing- your reputation is being ruined by this transparent and largely ineffective bullshit- doesn’t matter how many shills line up in the comments. 2018- people are waking up slowly but surely. Fake news is everywhere- not trumpkins ‘fake news’- more like MIT prof Noam Chomsky’s work. No one is immune to human nature.

Anonymous Coward says:

Vital interest

The GDPR does allow processing of personal data if it’s in the vital interest of the data subject or another person. And it allows for processing of special categories of personal data, including health data, if it’s in the vital interest of the data subject or another person and where the data subject is physically or legally incapable of giving consent.

So clearly the drafters took the interest of saving a person’s life into consideration. But as is clear from the comments here, not everyone appreciates a company like Facebook monitoring what they say to determine their health status. So I don’t see what’s wrong with asking people for their consent before doing so.

It shouldn’t be hard. Facebook could just add a “life saver” setting, or whatever, that you can turn on or off, with the information that, if you turn it on, whatever you write on Facebook and/or in messages will be monitored for signs that you may hurt yourself, along with what Facebook might do to help you if it detects such signs. At least it would be a conscious decision people make on whether or not they want to give Facebook that kind of power.
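
To make that concrete, here is a minimal sketch of what a consent-gated check like that could look like. It is purely illustrative: the setting name, the functions, and the keyword-based detector are all invented for this example and say nothing about how Facebook’s actual system works; a real deployment would use a trained classifier and route matches to human reviewers.

```python
# Hypothetical sketch of an opt-in "life saver" setting.
# Nothing here reflects Facebook's actual implementation; all names are invented.

from dataclasses import dataclass


@dataclass
class UserSettings:
    life_saver_opt_in: bool = False  # explicit, revocable consent flag


def looks_like_self_harm(post_text: str) -> bool:
    """Placeholder for whatever model would score a post.

    A real system would use a trained classifier; a crude keyword check
    is only here to keep the sketch self-contained.
    """
    keywords = ("kill myself", "end it all", "don't want to live")
    text = post_text.lower()
    return any(k in text for k in keywords)


def maybe_flag_for_review(settings: UserSettings, post_text: str) -> bool:
    """Return True when the post should be routed to a human reviewer.

    Without consent, the post is never scored for health signals at all.
    """
    if not settings.life_saver_opt_in:
        return False
    return looks_like_self_harm(post_text)


if __name__ == "__main__":
    opted_out = UserSettings(life_saver_opt_in=False)
    opted_in = UserSettings(life_saver_opt_in=True)
    print(maybe_flag_for_review(opted_out, "I want to end it all"))  # False: no consent given
    print(maybe_flag_for_review(opted_in, "I want to end it all"))   # True: flagged for human review
```

The point of the sketch is simply that the consent check sits in front of any processing of the post, which is roughly what an explicit opt-in under the GDPR would require.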
