Appeals Court: Yes, The FTC Can Go After Companies That Got Hacked Over Their Weak Security Practices

from the secure-your-sites-kids dept

Way back in 2004, we noted that the FTC went after Tower Records for getting hacked and leaking customer records. At the time, we wondered if this was appropriate. Companies get hacked all the time, even those with good security practices. So at what point can it be determined that a company is being negligent, rather than that those looking to crack its systems are just that good? Well, the FTC has decided that it can draw the line, and it has made clear that it will go after companies that do a particularly egregious job of not protecting user data. A few years back, the FTC went after Wyndham Hotels for failing to secure user data, and Wyndham tried to argue that the FTC had no authority to do so. Last year, a district court sided with the FTC, and now the Third Circuit appeals court has upheld that ruling, giving the FTC much more power to crack down on companies that fail to protect user data from leaking.

The ruling doesn’t fully answer the question of where the FTC can draw that line, but it certainly suggests that if your security is laughably bad then, absolutely, the FTC can go after you. And, yes, Wyndham’s security was laughably bad. From the court ruling:

The company allowed Wyndham-branded hotels to store payment card information in clear readable text.

Wyndham allowed the use of easily guessed passwords to access the property management systems. For example, to gain “remote access to at least one hotel’s system,” which was developed by Micros Systems, Inc., the user ID and password were both “micros.”…

Wyndham failed to use “readily available security measures,” such as firewalls, to “limit access between [the] hotels’ property management systems, . . . corporate network, and the Internet.” …

Wyndham allowed hotel property management systems to connect to its network without taking appropriate cybersecurity precautions. It did not ensure that the hotels implemented “adequate information security policies and procedures.” … Also, it knowingly allowed at least one hotel to connect to the Wyndham network with an out-of-date operating system that had not received a security update in over three years. It allowed hotel servers to connect to Wyndham’s network even though “default user IDs and passwords were enabled . . . , which were easily available to hackers through simple Internet searches.” … And, because it failed to maintain an “adequate[] inventory [of] computers connected to [Wyndham’s] network [to] manage the devices,” it was unable to identify the source of at least one of the cybersecurity attacks.

Wyndham failed to “adequately restrict” the access of third-party vendors to its network and the servers of Wyndham-branded hotels. … For example, it did not “restrict[] connections to specified IP addresses or grant[] temporary, limited access, as necessary.”

It failed to employ “reasonable measures to detect and prevent unauthorized access” to its computer network or to “conduct security investigations.”

It did not follow “proper incident response procedures.” … The hackers used similar methods in each attack, and yet Wyndham failed to monitor its network for malware used in the previous intrusions.

So, yeah. This wasn’t a situation where determined malicious hackers had to carefully dismantle a security apparatus. There was no security apparatus, basically. The ruling also mentions that the Wyndham website claimed to encrypt credit card data and use firewalls and other things — none of which it actually did. Oops. And, of course, hackers broke in multiple times and Wyndham did basically nothing.

As noted, on three occasions in 2008 and 2009 hackers accessed Wyndham’s network and the property management systems of Wyndham-branded hotels. In April 2008, hackers first broke into the local network of a hotel in Phoenix, Arizona, which was connected to Wyndham’s network and the Internet. They then used the brute-force method (repeatedly guessing users’ login IDs and passwords) to access an administrator account on Wyndham’s network. This enabled them to obtain consumer data on computers throughout the network. In total, the hackers obtained unencrypted information for over 500,000 accounts, which they sent to a domain in Russia.

In March 2009, hackers attacked again, this time by accessing Wyndham’s network through an administrative account. The FTC claims that Wyndham was unaware of the attack for two months until consumers filed complaints about fraudulent charges. Wyndham then discovered “memory-scraping malware” used in the previous attack on more than thirty hotels’ computer systems…. The FTC asserts that, due to Wyndham’s “failure to monitor [the network] for the malware used in the previous attack, hackers had unauthorized access to [its] network for approximately two months.” … In this second attack, the hackers obtained unencrypted payment card information for approximately 50,000 consumers from the property management systems of 39 hotels.

Hackers in late 2009 breached Wyndham’s cybersecurity a third time by accessing an administrator account on one of its networks. Because Wyndham “had still not adequately limited access between . . . the Wyndham-branded hotels’ property management systems, [Wyndham’s network], and the Internet,” the hackers had access to the property management servers of multiple hotels…. Wyndham only learned of the intrusion in January 2010 when a credit card company received complaints from cardholders. In this third attack, hackers obtained payment card information for approximately 69,000 customers from the property management systems of 28 hotels.

The FTC alleges that, in total, the hackers obtained payment card information from over 619,000 consumers, which (as noted) resulted in at least $10.6 million in fraud loss. It further states that consumers suffered financial injury through “unreimbursed fraudulent charges, increased costs, and lost access to funds or credit,” …, and that they “expended time and money resolving fraudulent charges and mitigating subsequent harm.”

And yet, still, Wyndham insisted that the FTC had no mandate to go after them for this rather egregious behavior. The appeals court agrees with the lower court in saying “of course the FTC can go after such behavior.” The main question: Is this an “unfair” practice by Wyndham? The company argued that it’s not unfair because it’s the victim here. The court doesn’t buy it.

Wyndham asserts that a business “does not treat its customers in an ‘unfair’ manner when the business itself is victimized by criminals.”… It offers no reasoning or authority for this principle, and we can think of none ourselves.

Also: it’s generally not a good thing when a court refers to your legal argument as “a reductio ad absurdum” (i.e., taking something to such an extreme as to be ridiculous).

Finally, Wyndham posits a reductio ad absurdum, arguing that if the FTC’s unfairness authority extends to Wyndham’s conduct, then the FTC also has the authority to “regulate the locks on hotel room doors, . . . to require every store in the land to post an armed guard at the door,” … and to sue supermarkets that are “sloppy about sweeping up banana peels,” … The argument is alarmist to say the least. And it invites the tart retort that, were Wyndham a supermarket, leaving so many banana peels all over the place that 619,000 customers fall hardly suggests it should be immune from liability under § 45(a).

Going for a due process move, Wyndham tries to argue that there was not “fair notice” of what kinds of security practices the FTC required. I’m actually marginally sympathetic to this argument. If the standard is amorphous, that is really challenging for companies that just don’t know whether their security practices meet the FTC’s vague, non-public standard of “okay.” But, if you’re running a company — especially one as large as Wyndham Hotels — it’s not unreasonable to suggest that your tech staff at least understand some basic fundamentals about security, like not using default passwords, encrypting credit card data, and using firewalls. This isn’t advanced computer security here. This is pretty basic stuff. Furthermore, the court basically says Wyndham doesn’t need specific rules from the FTC, but rather just should know that the law about “unfair” practices exists.

Wyndham is entitled to a relatively low level of statutory notice for several reasons. Subsection 45(a) does not implicate any constitutional rights here…. It is a civil rather than criminal statute…. And statutes regulating economic activity receive a “less strict” test because their “subject matter is often more narrow, and because businesses, which face economic demands to plan behavior carefully, can be expected to consult relevant legislation in advance of action.”

In this context, the relevant legal rule is not “so vague as to be ‘no rule or standard at all.’”… Subsection 45(n) asks whether “the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” While far from precise, this standard informs parties that the relevant inquiry here is a cost-benefit analysis,… that considers a number of relevant factors, including the probability and expected size of reasonably unavoidable harms to consumers given a certain level of cybersecurity and the costs to consumers that would arise from investment in stronger cybersecurity. We acknowledge there will be borderline cases where it is unclear if a particular company’s conduct falls below the requisite legal threshold. But under a due process analysis a company is not entitled to such precision as would eliminate all close calls.

And, the court notes, Wyndham’s behavior here is so egregious that no reasonable person could find it surprising that the FTC went after the company for its [lack of] security practices.

As the FTC points out in its brief, the complaint does not allege that Wyndham used weak firewalls, IP address restrictions, encryption software, and passwords. Rather, it alleges that Wyndham failed to use any firewall at critical network points, did not restrict specific IP addresses at all, did not use any encryption for certain customer files, and did not require some users to change their default or factory-setting passwords at all.

Which leads to the kicker in the following sentence:

Wyndham did not respond to this argument in its reply brief.

Ouch.

The court also notes that maybe Wyndham’s response would be more reasonable if the company had only been hacked once. But three times is a bit much:

Wyndham’s as-applied challenge is even weaker given it was hacked not one or two, but three, times. At least after the second attack, it should have been painfully clear to Wyndham that a court could find its conduct failed the cost-benefit analysis…. [C]ertainly after the second time Wyndham was hacked, it was on notice of the possibility that a court could find that its practices fail the cost-benefit analysis.

And thus, while I’m still a little nervous about going after companies that get hacked, in this case, where there appears to be overwhelming evidence of near-total gross negligence on Wyndham’s part in securing user data, it seems reasonable for the FTC to be able to proceed, and now both a district court and an appeals court agree.



Comments on “Appeals Court: Yes, The FTC Can Go After Companies That Got Hacked Over Their Weak Security Practices”

Mason Wheeler (profile) says:

*sigh* That’s not what reductio ad absurdum means. It means “reduction to a [logical] absurdity,” or in other words, a contradiction. Like so:

“Let us imagine that X is true. If so, [series of logical steps] and we conclude that Y = Z. But we know that Y does not equal Z, therefore X cannot be true.”

It has nothing to do with “making this proposition look silly [absurd] by taking it to a ridiculous extreme.”
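For what it’s worth, the schema described above is essentially a proof by negation, and it can be written as a one-line formal proof. A sketch in Lean (propositional variables only; nothing here is from the thread or the ruling):

```lean
-- Reductio ad absurdum in the strict sense: assume X, derive a known
-- falsehood (q, when ¬q is already established), conclude ¬X.
example (X q : Prop) (steps : X → q) (known : ¬q) : ¬X :=
  fun hX => known (steps hX)
```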

Anonymous Coward says:

Why not start at the source?

The biggest act of cyber terrorism to date, was the decision to make HTML the default MIME type in one very popular email client. There is no question in my mind that there were software engineers who fought against that move, and that some of them kept their emails.

Lucky is the lawyer who finds such an engineer, since a significant percentage of ALL security breaches worldwide derives, at some level, from that one criminally negligent decision. Further, no one who is or was in the industry at that time could reasonably argue that they didn’t have some understanding of what they were unleashing on the public when they did it.

Why not start at the source? Because the DOJ remembers what happened the last time they went down that road: Diebold, Bush, and the DOJ lawsuit quietly dismissed after the election.

Sheogorath (profile) says:

Re: Why not start at the source?

Does that make Tim Berners-Lee a criminal, then? HTML is what the World Wide Web depends on. See the little http:// in the address bar? That stands for Hypertext Transfer Protocol. Oh, and this article isn’t about someone’s email getting cracked because the default Internet media type (not MIME) is HTML, it’s about thousands of customers’ sensitive data being breached three times because a hotel company couldn’t be bothered to do a single thing to improve their abysmally shitty security. Do stay on track, please.

Mason Wheeler (profile) says:

Re: Why not start at the source?

Meh. That’s small potatoes compared to the massive vulnerability that lies at the heart of every consumer-grade computer operating system on every device in use today: the C language. Unfortunately, we’re never gonna get justice because we know exactly who did it–Dennis Ritchie–and he’s already dead.

HTML email may make phishing easier, but buffer overflows in C code have been at the heart of all sorts of truly horrendous widespread attacks, from the Morris Worm, to Blaster and Sasser, to Heartbleed and Stagefright.

It’s not like we didn’t know it was a bad idea. Long before the first buffer overflow attacks appeared on the Internet, Tony Hoare gave a warning in his Turing Award lecture about the dangers of designing a language without buffer safety built in:

Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interest of efficiency on production runs. Unanimously, they urged us not to—they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.

Unfortunately, when Dennis Ritchie created C, he left buffer safety out, and in fact designed the language in such a way that it could not be put in later without breaking tons of existing code. Even more unfortunately, the language then became popular for the one thing that an insecure language is least suited to: creating security-critical software, such as operating systems and network-facing apps.

Hoare’s right: that really should be against the law. I’m not exaggerating or being facetious here. With what we know, especially since the Morris Worm, in any sort of rational world it should be regarded as an act of criminal negligence to write network-facing software in C or any similarly insecure language. But unfortunately it’s not, and we’ve been paying the price for over 25 years now.

John Fenderson (profile) says:

Re: Re: Why not start at the source?

“I’m not exaggerating or being facetious here.”

You’re doing one of the two, or else you’re just wrong. It is 100% possible to write secure code in C, and is commonly done. The use of C all by itself means nothing, except that you can be more confident in the security of properly written C code because of the reduced attack surface involved.

Mason Wheeler (profile) says:

Re: Re: Re: Why not start at the source?

It is 100% possible to write secure code in C,

True, theoretically.

and is commonly done.

False, and therein lies the problem. If anything, the opposite is true: it’s very commonly done wrong, even by highly-trained people who should know what they’re doing. The Heartbleed bug is a shining example. The guy who inserted the vulnerability was no novice; he just made a mistake, and to err is human. Unfortunately, in such large-scale matters of security, to err is also unforgivable, which means that the only rational course of action is to remove human error from the picture as much as possible.

If secure C/C++ code were so common, such code would not have security holes. But all major operating systems receive updates on a regular basis, frequently fixing security holes caused by buffer overruns. Therefore it is quite common to write insecure code in insecure languages, even by the kind of highly-experienced wizards who end up working on OSs. (See above re: reductio ad absurdum.)

Here we are in 2015, and we’re still stuck hitting the same 1980s-era bugs over and over and over. It’s absolutely ridiculous! (See above re: the other meaning of absurd.) C is a language that’s a quarter-century late for its own funeral.

Anonymous Coward says:

Re: Re: Why not start at the source?

Java was meant to be more secure than C by automating memory management and sandboxing parts of programs, and just look at its security record. The C programmer is much less reliant on other people’s code for security than the Java programmer, who has to rely on a much more complex system provided by others and has much less control over what the code actually does.

Mason Wheeler (profile) says:

Re: Re: Re: Why not start at the source?

1) I never said anything about Java.

2) The cool thing about “relying on other people’s code for security,” as you put it, is that when they fix a sandboxing bug in the JVM, it fixes everyone’s security bugs for free. There’s nothing even remotely equivalent that you can do for C or C++ code.

Anonymous Coward says:

Re: Re: Re:2 Why not start at the source?

Heartbleed, to take the example you quoted, was a security issue where many people were impacted by common code. It was, however, a bug that, as is typical of bugs in C/C++, was the responsibility of the person who implemented the library in which it was reported. It was fixed, and the fix was being pushed out by the distributions before news of the bug made it onto the news sites.
Many bugs in complex languages like Java occur in the language implementation but are reported to application and library developers, who have to try to identify what has gone wrong and where so that they can report a bug to the language developers. This both slows down the fixing of the bug and can lead to a back-and-forth over whose code the bug is in. The application developer is then also dependent on someone else fixing the bug and getting the fix out to their users.

Anonymous Coward says:

Re: Re: Re: Why not start at the source?

“java was meant to”…

Poppycock.

Java was an appliance language that was on a nearby shelf when the world wide web took off. What you are referring to is marketing tripe that was created in order to sell Java to developers. It is, and has always been, a major security SNAFU at multiple layers.

tqk (profile) says:

Where was PCI?

“Payment Card Industry” (PCI) is supposed to set standards for businesses allowed to accept credit card payments. I did lots of work for clients who were terrified of the prospect of PCI de-certifying them. Every IT process the business was engaged in had to meet those “exacting standards” to continue.

How did Wyndham get away with this, even two times? PCI should have read them the riot act after the second intrusion, leading to a full network audit and compliance check.

Anonymous Coward says:

"Everything was working, so we fired the sysadmin."

The unfortunate reality is that the sysadmin is probably a noob just out of school (if he had any), and will probably end up getting scapegoated during this process.

And after all is said and done, the lawyers will all meet in the bar next to the courthouse for bourbons, while the sysadmin (who likely lamented the network loudly while threatening seppuku) languishes in jail.

I’ve seen quite a few articles (on sites inferior to Tech Dirt) about technocracy and the cultural separation in society. There appears to be an effort among a social elite to foster a cultural view that technicians are the current aristocratic class. It behooves technicians to consider whether this is diversionary. Particularly when people start asking us to 3d print guillotine parts.
