Facebook's Latest Privacy Screwup Shows How Facebook's Worst Enemy Is Still Facebook

from the get-your-act-together dept

There’s another Facebook scandal story brewing today and, once again, it appears that Facebook’s biggest enemy is the company itself and the way it blunders into messes that were totally unnecessary. When the last story broke, we pointed out that much of the reporting was exaggerated, and that people seemed to be jumping to conclusions that weren’t actually warranted by some internal discussions about Facebook’s business model. The latest big scandal, courtesy of a big New York Times story, reveals that Facebook agreed to share a lot more information than previously known or reported with a bunch of large companies (one of which, hilariously, is… the NY Times itself, a fact the paper plays down quite a bit).

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

As Kash Hill notes in a separate story at Gizmodo, this suddenly explains a story she had explored years ago, in which Amazon rejected a review of a book, claiming the reviewer “knew the author” (which was not true). However, the reviewer had followed the author on Facebook, and Amazon magically appeared to know about that connection even though the reviewer had never directly shared her Facebook data with Amazon.

The NY Times report further explains another bit of confusion that Hill has spent years trying to track down: why Facebook’s People You May Know feature is so freaking creepy. Apparently, Facebook had data-sharing agreements with other companies that let it peek through their data as well:

Among the revelations was that Facebook obtained data from multiple partners for a controversial friend-suggestion tool called “People You May Know.”

The feature, introduced in 2008, continues even though some Facebook users have objected to it, unsettled by its knowledge of their real-world relationships. Gizmodo and other news outlets have reported cases of the tool’s recommending friend connections between patients of the same psychiatrist, estranged family members, and a harasser and his victim.

Facebook, in turn, used contact lists from the partners, including Amazon, Yahoo and the Chinese company Huawei — which has been flagged as a security threat by American intelligence officials — to gain deeper insight into people’s relationships and suggest more connections, the records show.

As Hill noted on Twitter, when she asked Facebook last year if it uses data from “third parties such as data brokers” to figure out PYMK, Facebook’s answer was technically correct, but totally misleading:

Specifically, Facebook responded: “Facebook does not use information from data brokers for People You May Know.” Note that the question was if Facebook used information from “third parties” and the “data brokers” were just an example. Facebook responded that it didn’t use data brokers, which appears to be correct, but left out the other third parties from which it did use data.
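
The NYT describes Facebook matching partners’ contact lists against its own graph to power those suggestions. As a minimal sketch, assuming the matching works by joining hashed emails and phone numbers (all names and data below are invented, and this is not Facebook’s actual implementation), the mechanics could look something like this:

```python
# Purely illustrative sketch: the data, names, and matching approach are
# assumptions for this example, not Facebook's actual PYMK implementation.
import hashlib


def normalize(identifier: str) -> str:
    """Hash an email or phone number so datasets can be joined without raw values."""
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()


# Hypothetical contact lists a partner might hand over: owner -> address book entries.
partner_contacts = {
    "partner_user_123": ["alice@example.com", "+15550100"],
}

# The platform's own mapping of (hashed) identifiers to accounts.
account_by_identifier = {
    normalize("alice@example.com"): "platform_user_alice",
    normalize("+15550100"): "platform_user_bob",
}


def suggest_connections(contacts):
    """For each contact-list owner, find platform accounts that appear in their address book."""
    suggestions = {}
    for owner, identifiers in contacts.items():
        matches = sorted(
            account_by_identifier[normalize(i)]
            for i in identifiers
            if normalize(i) in account_by_identifier
        )
        if matches:
            suggestions[owner] = matches
    return suggestions


print(suggest_connections(partner_contacts))
# {'partner_user_123': ['platform_user_alice', 'platform_user_bob']}
```

The unsettling part, as Hill’s reporting suggests, is that a user can surface in these matches without ever sharing anything with the partner, simply because their email or phone number sat in someone else’s address book.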

And this is why Facebook is, once again, its own worst enemy. It answers these kinds of questions in the same way that the US Intelligence Community answers questions about its surveillance practices: technically correct, but highly misleading. And so, when what the company is actually doing finally comes out, it has completely burned whatever goodwill it might have had. If the company had just been upfront, honest and transparent about what it was doing, none of this would be an issue. The fact that it chose to be sneaky and misleading about it shows that it knew its actions would upset users. And if you know what you’re doing will upset users, and you’re unwilling to be frank and upfront about it, that’s a recipe for disaster.

And it’s a recipe that Facebook keeps making again and again and again.

And that’s an issue that goes right to the top. Mark Zuckerberg has done too much apologizing without actually fixing any of this.

One bit in the NY Times piece deserves a particular discussion:

Facebook also allowed Spotify, Netflix and the Royal Bank of Canada to read, write and delete users’ private messages, and to see all participants on a thread — privileges that appeared to go beyond what the companies needed to integrate Facebook into their systems, the records show. Facebook acknowledged that it did not consider any of those three companies to be service providers. Spokespeople for Spotify and Netflix said those companies were unaware of the broad powers Facebook had granted them. A Royal Bank of Canada spokesman disputed that the bank had any such access.

Spotify, which could view messages of more than 70 million users a month, still offers the option to share music through Facebook Messenger. But Netflix and the Canadian bank no longer needed access to messages because they had deactivated features that incorporated it.

This particular issue has raised a lot of alarm bells. As Alvaro Bedoya points out, disclosing the content of private communications is very much illegal under the Stored Communications Act. But the NY Times reporting is not entirely clear here either. Facebook did work hard for a while to try to turn its Messenger into more of a “platform” that would let you do more than just chat — so I could see where it might “integrate” with third-party services to enable their features within Messenger. But the specifics of how that works are (1) really, really important, and (2) something that should be 100% transparent to users — such that if they’re agreeing to, say, share Spotify songs via Messenger, they should absolutely be told that this means Spotify has access to whatever they have access to. A failure to do that — as appears to be the case here — is yet another braindead move by Facebook.
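
To make that concrete, here is a minimal sketch of what “being told” might look like in practice: a hypothetical consent prompt that enumerates exactly which message permissions a partner is being granted. The scope names, descriptions, and prompt format are invented for illustration and imply nothing about Facebook’s real partner APIs.

```python
# Hypothetical sketch: the scope names, descriptions, and prompt format are
# invented for illustration; they are not Facebook's real permission model.

SCOPE_DESCRIPTIONS = {
    "messages:read": "Read the contents of your private messages",
    "messages:write": "Send messages on your behalf",
    "messages:delete": "Delete your private messages",
    "threads:participants": "See everyone participating in your message threads",
}


def consent_prompt(partner: str, requested_scopes: list) -> str:
    """Render the disclosure a user would approve before a partner gets this access."""
    lines = [f"{partner} is requesting the following access to your account:"]
    for scope in requested_scopes:
        lines.append(f"  - {SCOPE_DESCRIPTIONS.get(scope, scope)}")
    return "\n".join(lines)


print(consent_prompt("Spotify", ["messages:read", "messages:write", "messages:delete"]))
```

The design point is simply that the grant is enumerated and shown to the user at the moment of integration, rather than negotiated quietly between the two companies.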

Over and over and over again we see this same pattern with Facebook. Even when there are totally reasonable and logical business and product decisions being made, the company’s blatant unwillingness to be transparent about what it is doing, and who has access to what data, is what is so damning for the company. It is a total failure of the management team and until Facebook recognizes that fact, nothing will change.

And, of course, the most annoying part in all of this is that it will come back to bite the entire internet ecosystem. Facebook’s continued inability to be open and transparent about its actions — and give users a real choice — is certainly going to lead to the kinds of hamfisted regulations from Congress that will block useful innovations from other companies that aren’t so anti-user, but which will be swept up in whatever punishment Facebook is bringing to the entire internet.

Companies: amazon, facebook, netflix, new york times, spotify


Comments on “Facebook's Latest Privacy Screwup Shows How Facebook's Worst Enemy Is Still Facebook”

GJ Guepe says:

You view total surveillance capitalism as a mere PR problem.

If the company had just been upfront, honest and transparent about what it was doing, none of this would be an issue.

I think it’s unacceptable and that the revolt point is closer than you believe. We The People don’t have to allow Facebook to exist at all. (Which, incidentally, is what you clowns who believe that corporations have an absolute right to control "platforms" always overlook. All of it can easily change overnight. Literally all we have to do is revise the FICTION of corporations.)

James Burkhardt (profile) says:

Re: You view total surveillance capitalism as a mere PR problem.

You miss the point, that if they had been transparent, people could have made that choice, and the policies could have been crushed under the weight of protest or departure. Facebook may not have risen to prominence.

Corporate fictions aren’t why this happened. A lack of understanding of the public is why this happened.

James Burkhardt (profile) says:

Re: Re: Re: You view total surveillance capitalism as a mere PR

Well, no. The article is blaming Facebook for a lack of transparency. This is all on Facebook. I was only defending the idea that transparency would have prevented the issue, not because we are all secretly OK with it, but because Facebook would have either dropped the users (who were unhappy) or the policy (because it was losing too many users/getting too much bad PR). That is in contrast to GJ Guepe, who thinks the argument is that everyone would have accepted it if it was transparent. He was misreading the premise.

Anonymous Coward says:

Re: Re: Re: You view total surveillance capitalism as a mere PR

How is it victim blaming to say that if people knew what a company is doing they’d choose differently? Victim blaming is saying they should’ve known anyway; saying they didn’t know and thus made a decision that they wouldn’t have had they really known what was happening is very, very different.

Anonymous Coward says:

Re: Re: Re: You view total surveillance capitalism as a mere PR

They didn’t entirely need to be transparent. I know for certain, 100% absolutely, that FB has used 3rd party data for PYMK for a lot of years.

I don’t have a Facebook presence but I developed a client for FB in 2013. To test it I had to create an account. I supplied my first name and zero other data. I even gave them a bogus birthday. During the course of testing I friended a member of my family so that I could send and receive messages and make sure the scopes I used provided the data I needed.

Shortly thereafter I started getting PYMK suggestions for people I actually knew who I had worked with years before, even before Facebook was available to the public and long before I created that test account. My relationships with those people ended before FB was available to the public. There is zero chance Facebook knew about them on their own, without outside data.

That invasiveness has prevented me from creating a real FB account as well as an account on any other social media. Apart from FB’s inferred relationships with people I haven’t spoken to in a very long time (and the NSA and other 3-letter fed orgs, of course), I have zero attributable footprint on the net. It’s just way too damn creepy and there is apparently no way to opt out of it.

Anonymous Coward says:

Re: Re: Re: You view total surveillance capitalism as a mere PR

That said, I don’t have a Facebook account because back in the day, I knew people who knew Mark. I couldn’t understand why anyone would want to place their personal information on a server owned and managed by a guy with such a bad record of managing other people’s personal information.

As the company matured, I continued to see zero informative answers to the very fundamental questions I had raised at the beginning, other than some handwavy “acceptable use” promises that weren’t technically capable of actually protecting anything.

Eventually they brought on people I knew to form a security team (when was that? 2013?) but they still hadn’t directly answered any of the fundamental architecture questions surrounding data security: the new security team appeared to be in place to ensure that the door swung shut after the horse bolted.

So Facebook may not have been transparent, but they were transparently evasive from the beginning. I may grudgingly accept that from a politician, but I’m not going to accept it from any government employee or private company I do business with.

Thing is, if Facebook had been transparent from the start, it really might not have made much difference until the situation was abused. People would just excuse it with “sure, that’s POSSIBLE, but they’d never actually do that” followed by “well, they actually did it, but it wasn’t intentional, and they’ve promised never to do it again.”

If people are getting something they consider to be of value, they’re willing to give up quite a bit in order to retain that value.

This is why I don’t like cloud infrastructure… the provider has to continually give you what you want in order for you to retain anything of value.

Christenson says:

Re: Re: Re:2 Fundamental Architecture Questions

Fundamental Architecture Questions

This is the crux of any mass market platform on the internet. You get lots of data on lots of people, in a conveniently standardized form, which is a moral hazard, a temptation. Facebook is just the demon du jour; we could go as far as including Google, the US credit bureaus, the IRS, and even Techdirt, which I like — remember when “blue” complained about someone not posting for a few years??

Such information will always be a temptation for abuse. The hazard is increased, possibly to the point of certainty, when “You” are the product and not paying the bills directly.

Diversity and heterogeneity aren’t really a solution. Nobody is going to maintain Mastodon data formats #1 through #N just to keep privacy; it is just too much work, and if the data is public, there will always be “bad actors” out there scraping it.

I’m not sure exactly how the credit bureaus do it, but I think there are some measures that could make the internet work a bit more like we might expect.

That expectation: The degree of customization is at the endpoint’s discretion, and the more specific the customization, the more transparency required. If I say my name on Techdirt is Blue, then it is Blue, not the user from 29.112.258.4.

Some ideas:
- Generic browsing with poor fingerprinting properties. No, server, you may *not* know what browser/OS/fonts/machine name/screen size/whatever I am using at the moment.
- Forgetting all details except final results, and sunsetting unused data.
- An attitude that says, “Well, you want to target Techdirt/Facebook/Daily Stormer users with property X? Fine, but we won’t tell you who those users are; we will tell the users what property X was, and we’ll make a report available on all such transactions.”

Anonymous Coward says:

Re: You view total surveillance capitalism as a mere PR problem.

Literally all we have to do is revise the FICTION of corporations.

And destroy any business larger than a partnership, all churches, oh, and Congress itself. Corporations exist to allow property, in particular, to be owned by an organization rather than by the people running it.

Anonymous Coward says:

Re: You view total surveillance capitalism as a mere PR problem.

Corporations are indeed a fiction. Like money, nations, courts, laws, rights, contracts, debts… All fiction, and indeed all things that we the people can change. But emphasizing FICTION as a way of attacking one particular thing you don’t like is a silly, empty argument.

Anonymous Coward says:

“Facebook also allowed Spotify, Netflix and the Royal Bank of Canada to read, write and delete users’ private messages, and to see all participants on a thread”

This is unacceptable.

Read, write, and delete privileges given to anyone with enough money. This is something that should really be divulged in huge font where it will not be missed; otherwise it is fraudulent, and possibly criminally so.

Mason Wheeler (profile) says:

If the company had just been upfront, honest and transparent about what it was doing, none of this would be an issue. The fact that it chose to be sneaky and misleading about it shows that it knew its actions would upset users.

The second sentence directly contradicts the first. If they’re doing things that they know will upset users, then it would be an issue if the users knew about it, which is why Facebook chose to be sneaky and evasive instead.

The real problem is the blatant privacy-violating they engaged in. Compared to that, trying to hide it is a much smaller offense IMO.

Mike Masnick (profile) says:

Re: Re:

I think what you actually meant to say is "yet another example of willfully malicious behavior by a company that was built from the ground up on exactly that sort of behavior".

I think it would be a mistake to believe this, and believing it would lead to very bad outcomes. The company is most certainly not willfully malicious. Most people working there really do believe that they’re making good choices for their users and building better services that are useful to them. Where they fall down (repeatedly) is in deciding for those users what will be "best" and part of that is driven, stupidly, by a focus on "growth" over "value."

A smarter company recognizes that value over time leads to growth; prioritizing growth over value does the opposite. It’s all about time horizons.

Mason Wheeler (profile) says:

Re: Re: Re:

How much does it really matter what “most people working there” believe? What’s truly relevant–as a general principle for any sizable organization, not just this one–is the character of the people at the top. And as you put it, “it would be a mistake to believe” that the people running Facebook aren’t willfully malicious.

Spointman (profile) says:

Re: Re: Re:

Mike, you might be missing the greater point here. As you’ve pointed out before, Facebook’s users are NOT its customers. And so long as its actual customers continue paying FB, it doesn’t care what happens to its users. FB has a demonstrated history where screwing over its users leads to increased profit, not decreased. And that will not change until one of two things happens. Either its paying customers stand up and say, “This is not OK” (hah!), or its existing users start quitting the platform faster than new users join (which indirectly reduces the amount of revenue customers are willing to pay). Given that FB’s been putting a lot of effort into international growth, it’s pretty clear that they realize they need sheer numbers of uninformed users rather than “sophisticated” users who care about things like privacy.

Anonymous Coward says:

Re: Re: Re:2 Re:

The 4th amendment says:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

https://en.wikipedia.org/wiki/Fourth_Amendment_to_the_United_States_Constitution

There is nothing that I can see in that wording that limits the application of the 4th amendment to government.

Anonymous Coward says:

Re: Re: Re:3 Re:

the first ten amendments, known collectively as the Bill of Rights, offer specific protections of individual liberty and justice and place restrictions on the powers of government

Where does that mention companies and corporations and their powers? Those are things that the government can regulate by means of passing laws, so long as the government does not exceed the bounds set by the Constitution.

Anonymous Coward says:

Re: Re: Re:4 Re:

In a time when most people lived off the land, the writers of the Bill of Rights apparently never foresaw the rise of the mega-corporation in America, just as they never foresaw the invention of the machine gun. Maybe if they had been aware of the ongoing ravages of the East India Company, the most genocidal corporation in the history of the world, they might have amended the Constitution to protect people against the potential tyranny of the private for-profit corporation as well as the government.

https://yourstory.com/2014/08/bengal-famine-genocide/

Thad (profile) says:

Re: Re: Re:3 Re:

There is nothing that I can see in that wording that limits the application of the 4th amendment to government.

Ah, you’re doing that thing where you read the literal text of the Constitution, remove it from the context of (1) the circumstances under which it was written and (2) the past 200 years of case law interpreting it, and act like you’re making an argument that is very clever.

In the words of David St. Hubbins, "There’s such a fine line between stupid and clever."

Anonymous Coward says:

Re: Re: Re:

“The company is most certainly not willfully malicious.”

Yes, it is.

“Most people working there really do believe that they’re making good choices for their users and building better services that are useful to them.”

Most people working at Facebook have no clue what the executive level is doing.

It’s comparable to people working in AT&T customer service having to listen to the complaints of customers as their bills go up again, since AT&T knows competition is non-existent.

It seems the NYT has confidential information showing that certain “customers” have access to data many other businesses do not, indicating that a certain price point (or an agreed relationship) is required to obtain this information.

“You miss the point, that if they had been transparent, people could have made that choice, and the policies could have been crushed under the weight of protest or departure. Facebook may not have risen to prominence.”

He didn’t miss the point. You did. The majority of Facebook users couldn’t care less about the information shared. Many claim “I ain’t got nothing to hide” as a justification to get that latest user-specific feed article into their eyeballs.

Those few who understand what’s at stake are trying to raise the alarm, but it’s going to go unheard over the laughter of imbeciles mocking another meme or tweet as their privacy is eroded.

In engineering, there’s a natural fact even Einstein could not argue: it’s impossible to fix stupid.

As much as I hate to say it, users are more responsible for the issues of Facebook than the company itself.

These people allow phones to track their every movement, bring microphones into their homes (and actually set them up for use), and don’t even bother using a VPN to protect their online history.

Facebook is 100% malicious because it’s taking advantage of this willfully.

Do not ever defend this company again.

Anonymous Coward says:

Facebook is too big to jail

As Alvaro Bedoya points out, disclosing the content of private communications is very much illegal under the Stored Communications Act.

So what? Facebook is too big to jail. It won’t even be prosecuted.

18 USC 2702 may say…

(a) Prohibitions.—Except as provided in subsection (b) or (c)—

(1) … shall not knowingly divulge … the contents…

(2) … shall not knowingly divulge … the contents…

But the code there is just saying that. It doesn’t really mean it. That provision is inoperative when it comes to Facebook. The code is a dead letter.

It will not be applied.

Just watch. The code only applies to the less wealthy.

Anonymous Coward says:

This latest NY Times scoop on Facebook should not surprise anyone who was paying attention to any of the many other Facebook scandals. Facebook has demonstrated, time after time, literally throughout its entire existence, that it’s a sneaky, underhanded company that in all probability will never change even slightly as long as that smarmy weasel Mark “they trust me, dumb fucks” Zuckerberg is at the helm.

The only thing more predictable than Mark Zuckerberg getting caught violating people’s trust once again is seeing Mike Masnick trying to defend him against justifiably angry Techdirt commenters.

Anonymous Coward says:

I believe that Facebook’s repeated anti-user behavior shows that they and other companies like them are long past due to be regulated. Continued apologies from Zuckerberg and Sandberg and empty promises that they’ll try to get it right next time aren’t going to cut it. The Facebook top brass are just as lacking in integrity as the profit-hungry execs of Electronic Arts.

Your boilerplate warnings about how “hamfisted regulations” will block “useful innovations” ring hollow. It’s very similar to the rhetoric that pro-ISP industry groups use to defend money-grubbing ISPs and Ajit Pai’s dismantling of pro-consumer regulations. Facebook and other social media companies like it have shown time and again that they don’t care about their users. They need to be brought to heel with regulatory power.

Anonymous Coward says:

Re: Re:

The pinch in this is that while deploying a regulatory cudgel might hurt Facebook and bring them to some sort of heel, it will be far more painful for that part of the Internet iceberg that sits below the waterline — specialized communities, nascent alternative platforms, and Internet-based small businesses — as these folks won’t have the resources to comply in detail with heavy-handed regulatory mandates, yet represent far easier targets for regulators looking for “quick wins”.

As a result, it will take a sustained effort by thinkers and policymakers to develop the conceptual basis needed for scalable, efficient regulation, not a “one and done” grandstanding bill hacked together from lobbyist input, emotional impulses, and misunderstandings. Thankfully, this ball is already rolling, starting with Balkin’s work on information fiduciaries, but more refinement will be needed to develop it into something Congress can use, and pushing for Congress to “do something!” in the meantime simply leads to clumsy, buggy lawmaking, rife with unintended consequences.

That Anonymous Coward (profile) says:

“A Royal Bank of Canada spokesman disputed that the bank had any such access.”

A thought that occurs to me – is this sort of the problem they had with Cambridge?

They had no real idea how much access they got out of the partnership, because FB only tells them what they want to know. Why build controlled, defined channels when you can just give them the firehose of data and tell them how to grab the info they want? No one would look at the rest of the stream to see what else they had access to: they’re the good guys, they’re giving us data, we give them data, and everyone is happy!
