We only occasionally talk about video game DLC, or downloadable content, here at Techdirt. When digital distribution became a thing some years back, game makers came up with DLC as a way to achieve several goals: extend the shelf-life of games, make games more saleable through the promise of extra content, and, of course, make more money. I remember when the wave of DLC started and the generally negative reaction it drew from the gaming public. Most concerns centered on game makers charging for features that once were included in games for the original asking price. Some makers legitimized these concerns through their actions, but others did wonderful things with DLC that gamers would not wish to be without. But, as Hunter S. Thompson once imagined he could see the crest of hippie culture along the Rocky Mountains before its eventual recession, I too can see the crest of DLC greed in our time in the insanity of Train Simulator 2016’s laughable DLC offerings.
All of this became evident when Kotaku’s Alex Walker went on a quest to find the most ridiculous DLC costs among games on the market today.
My first thought was the Dynasty Warriors series. They, like many anime brawlers, have an absurd amount of costume and armour packages that are far more expensive than they should be. But then I came across Train Simulator 2016: Steam Edition. It’s US$45, which is fairly standard for niche titles with a hardcore fanbase. Dovetail Games were even generous enough to have a special on the DLC. And then I saw how much DLC there was.
As you can see at the bottom of the image, there are 230 available DLC options for sale. Next to it is an option to see them all. Walker saw them all. The results, and keep in mind that most of these are on sale for nearly half off, are hilariously expensive.
Yes, that’s over $3,000 if you were to buy all of the game’s DLC while most of it is on sale. None of this is to say, of course, that a game maker can’t charge what it likes for its game, its DLC, its box art, its communications, its support, or anything else. It most certainly can. But what this should herald for most of us is the ultimate example of DLC done wrong. Whatever costs and effort might go into making a game, the end result shouldn’t be the price of a used car in exchange for the full content. There are ways to do DLC right, and it isn’t evil to charge for great content, but what we see above is so far removed from how games were priced only a few years ago that it’s plainly obvious something ain’t right here.
“Lots of copies keep stuff safe” is an archivist mantra for preserving data for a long, long time. It certainly looks like there’s no end to the development of data storage. We have magnetic tape (in multiple varieties), CDs, DVDs, Blu-ray, HD-DVDs, hard drives, solid state drives, and the list goes on and on. Certain industries seem to make money every time there’s a shift from, say, LPs to cassettes to CDs (to streaming?), but what happens when everyone can store every song ever recorded in the palm of their hand? Technology isn’t there yet, but it might be soon.
Add Motherboard to the quickly growing list of news websites killing their comment sections because they’re so breathlessly in love with reader interaction and visitor conversation. Like The Verge, Recode, Popular Science, The Daily Beast and numerous other websites before it, Motherboard has decided that there’s simply no value whatsoever in having a healthy, on-site local community. As such, the website is shoving any and all reader interaction over to less transparent and noisier discourse avenues like Facebook, Twitter and e-mail, because comments as a “medium” are somehow inherently unhealthy:
“We at Motherboard have decided to turn off our comments section, a decision we’ve debated for a year or more. What finally turned the tide was our belief that killing comments and focusing on other avenues of communication will foster smarter, more valuable discussion and criticism of our work. What percentage of comments on any site are valuable enough to be published on their own? One percent? Less? Based on the disparity in quality between emails we get and the average state of comments here and all over the web, I think the problem is a matter of the medium.”
One, just because only some readers can be bothered to comment doesn’t magically devalue the entire comment section, since many readers simply lurk. I’m a lurking reader quick to head to the comment section to see if there’s anything a reporter may have overlooked, misunderstood, or missed entirely. Did that tech blogger screw up the Wi-Fi specs on device Y or the battery size of gadget Z? Does anybody else think this story makes light of X or misinterprets Y? Does anybody else in here feel the way I do? As a writer, I find comments similarly valuable, even if you sometimes have to dig through detritus.
And that’s just it: news comments foster community, but they also provide transparency, accountability, and crowdsourced fact checking right below the article, and that’s what many sites like least of all. They just won’t admit it.
In contrast, Motherboard pretends that their reporting will become just that much better if it doesn’t have to worry about pesky public reader interaction:
“Good comment sections exist, and social media can be just as abrasive an alternative. But for a growing site like ours, I think that our readers are best served by dedicating our resources to doing more reporting than attempting to police a comments section in the hopes of marginally increasing the number of useful comments. That doesn’t offer any real value to other readers of the site, and we’d all wager that the scorched Earth nature of comments section just stifles real conversation.”
Unlike other news websites, Motherboard at least admits that it doesn’t want to spend the time and money to cultivate a thriving local community. Still, it’s a bit disingenuous to suggest that weeding the troll comment garden comes at the cost of better reporting. In fact, some studies have shown that simply having a writer show up in the comment section and briefly treat site visitors like human beings raises the discourse bar dramatically. And as several websites have noted, having a healthy comment section pays dividends in the form of loyal visitors. By blocking comments, you’re sending that community elsewhere (not that Techdirt minds — Motherboard readers are welcome to comment here).
Motherboard seems to miss absolutely all of the benefits of on-site community, consistently coming back to this strange idea that as a “medium” comments are inherently flawed:
“Comment sections inspire quick, potent remarks, which too easily veer into being useless or worse. Sending an email knowing that a human will actually see it tends to foster thought, which is what we want.”
Because nitwits never send barely coherent single-sentence idiot bile via e-mail, right? Comments are simply a blank-slate input field. How is that a flawed “medium”? The only flaw is that it forces outlets to work just a little bit harder and doesn’t allow them to filter what gets said and heard. As such, Motherboard yearns to head back to the era of “letters to the editor,” which it may or may not respond to or publish:
“So in addition to encouraging that you reach out to our reporters via email or social media, you can now also share your thoughts with editors via email@example.com. Once a week or thereabouts we’ll publish a digest of the most insightful letters we get.”
Or hey, we might not. And that’s the problem: when only outlet-approved voices are made public, you’ve muted an entire avenue of news dialogue and correction and thrown the baby out with the bathwater, all in a misguided belief that we should try to force the open Internet back into the Walter Cronkite era of audience interaction. Of course, all of these news editors and authors are so dumbstruck and dizzy with the idea of not having to interact with snotty critics anymore that they can’t see the forest (news as a healthy, fluid public conversation) for the trees (bile-lobbing blowhards).
Last year, Dr. Edward Tobinick sued Yale physician Steven Novella over a blog post Novella had written that questioned and criticized Tobinick’s off-label use of immune-suppressing drugs to treat… Alzheimer’s patients. Here’s a short quote from the post at the center of the lawsuit:
The claims of Tobinick, however, are not in the gray area—they are leaps and bounds ahead of the evidence. Further, the conditions he claims to treat are not clearly immune-mediated diseases. It’s one thing to use an immune-suppressing drug to treat a disease that is known to be caused by immune activity, and probably the kind of immune activity suppressed by the drug.
Tobinick, however, is claiming that a wide range of neurological conditions not known to be immune mediated are treated by a specific immunosuppressant.
Tobinick first demanded Novella take the post down. When Novella refused, Tobinick sued him and Yale University. Tobinick didn’t allege defamation, as one would expect. (At least, not originally; allegations of libel were added to an amended complaint.) Instead, Tobinick claimed Novella’s post was “false advertising” and actionable under trademark law.
There are very few cases in which plaintiffs have successfully misused intellectual property laws to shut down critics, and this one is no exception. Back in June, the court granted Novella’s anti-SLAPP motion, striking Tobinick’s claims for unfair competition, trade libel and libel per se. All that was left unaddressed was Tobinick’s Lanham Act claim.
Now, the court has handed a victory to Novella, granting his motion for summary judgment and ordering the case closed. The court finds no merit to Tobinick’s argument that Novella’s critical blog posts were “commercial speech” and therefore actionable under the Lanham Act.
[T]he Court finds that the speech at issue here—that is, the First and Second Articles, published on www.sciencebasedmedicine.org—is not commercial speech. The Articles proposed no commercial transaction, and consequently do not fall within the “core notion” of protected speech. See Bolger, 463 U.S. at 66. Furthermore, the Articles do not fall within the scope of the definition expounded in Central Hudson, “expression related solely to the economic interests of the speaker and its audience.” 447 U.S. at 561. Both articles clearly state their intent to raise public awareness about issues pertaining to Plaintiffs’ treatments.
Thus, the First and Second Articles can only potentially qualify as commercial speech under Bolger. Yet the Articles differ from the pamphlets at issue in Bolger in a number of ways. First, the Articles are not conceded to be advertisements. Second, the only products referenced in the First Article are Plaintiffs’ treatments. To the extent that the Second Article mentions Defendant Novella’s practice, it is in direct response to the instant litigation as opposed to an independent plug for that practice.
The main thrust of Tobinick’s Lanham Act argument was that because Novella made money indirectly from the website, it was commercial speech. The court doesn’t care for this argument either, and points out that even certain commercial speech is still protected under the First Amendment and not subject to Lanham Act claims.
The third and final factor from Bolger, whether there was an “economic motivation” for the speech, is the primary basis for Plaintiffs’ opposition to summary judgment. Essentially, Plaintiffs contend that the Articles are commercial speech because SGU Productions, a for-profit company controlled by Defendant Novella, earns money by selling advertisements on its website (skepticsguide.net), advertisements in a podcast, memberships, and goods such as t-shirts…
Thus, even if Defendant Novella directly earns money from an organization sponsoring or producing the speech, this alone would not make the speech commercial. Furthermore, the specific evidence elicited in this case regarding SGU does not point to a strong economic motivation for the speech. Although Plaintiffs argue that “[t]he flow of money to Novella . . . is significant, as [Jay] Novella testified to over $200,000 last year,” Jay Novella also testified that, despite this profit, SGU “made no profit after expenses” because “we reinvest the vast majority of the money back into the company when we have a positive cash flow.”
The Court therefore finds that Defendant Novella’s speech in the First and Second Articles does not qualify as commercial speech, such that the Articles can form the basis of a Lanham Act claim.
Once again, we see a plaintiff learning the hard (and expensive) way that speech that may harm your commercial interests isn’t automatically a.) defamatory or b.) a violation of intellectual property laws. Of course, many litigants already know this. They’re apparently just hoping the courts don’t.
With the granting of the anti-SLAPP motion, it looks like Tobinick will be paying the costs of defending against his bogus lawsuit. But it’s not as though people looking to censor critics will be any less willing to engage in Hail Mary-esque lawsuits. Many defendants simply aren’t willing to put themselves through the financial and mental pain and suffering that accompanies litigation. Because of that, this string of IP-abusing legal failures won’t prevent similarly bogus attempts from being made in the future.
Patent trolls are a tax on innovation. The classic troll model doesn’t include transferring technology to create new products. Rather, trolls identify operating companies and demand payment for what companies are already doing. Data from Unified Patents shows that, for the first half of this year, patent trolls filed 90% of the patent cases against companies in the high-tech sector.
Core Wireless Licensing S.A.R.L. is one of the patent trolls attacking the high-tech sector. Core Wireless is incorporated in Luxembourg and is a subsidiary of an even larger troll, Canada-based Conversant. It owns a number of patents that were originally filed by Nokia. It has been asserting some of these patents in the Eastern District of Texas. In one case, a jury recently found that Apple did not infringe five of Core Wireless’s patents. In another case, it is asserting eighteen patents against LG. One of its arguments in the LG case came to our attention as an example of what patent trolls think they can get away with.
In patent litigation, patent owners and alleged infringers often disagree about the meaning of words in patent claims and ask the court to resolve the differences (a process known as “claim construction”). In Core Wireless’ case against LG, the majority of the disputes seem like the usual ones for patent litigation.
Except for the dispute about “integer.”
You may have learned what an “integer” is in high school. It’s a common concept many teenagers encounter when they take algebra. In Ontario, Canada, for example (where Conversant is based), teachers discuss integers in the 9th and 10th grades. As defined in the Ontario Curriculum, an integer is: “Any one of the numbers . . . , –4, –3, –2, –1, 0, +1, +2, +3, +4, . . . ” Here’s a PBS Math Club video with a helpful explanation:
It’s pretty clear what an “integer” is. Here are a few more definitions from various sources, all confirming the same thing: “integers” are all of the whole numbers, whether positive or negative, including 0.
But Core Wireless, the patent owner, told the court that an “integer” is “a whole number greater than 1.” Core Wireless is saying that not only are negative numbers not integers, neither are 0 or 1.
The integers are the natural numbers (whole numbers greater than zero), their negatives, and the number zero (very important). So saying that the integers are all whole numbers greater than one is a bit like saying that sweet and sour chicken is just sour sauce because you’re missing its negative, and the chicken, which is very important. Or that a turducken is just turkey: we all know that the duck and the chicken are essential.
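To make the gap concrete in plain notation (this is simply a restatement of the standard definition quoted above, not language taken from the patent), the integers are

$$\mathbb{Z} = \{\ldots,\ -3,\ -2,\ -1,\ 0,\ 1,\ 2,\ 3,\ \ldots\}$$

while the set Core Wireless now asks the court to adopt, “a whole number greater than 1,” is just

$$\{2,\ 3,\ 4,\ 5,\ \ldots\}$$

which quietly drops every negative number, zero, and one.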
To be clear: the law allows patent applicants to redefine words if they want. But the law also says they have to be clear that they are doing that (and in any event, they shouldn’t be able to do it years after the patent issues, in the middle of litigation). In Core Wireless’ patent, there is no indication that it used the word “integer” to mean anything other than what we all learn in high school. (Importantly, the word “integer” doesn’t appear in the patent anywhere other than in the claims.)
It appears that Core Wireless is attempting to redefine a word—a word the patent applicant freely chose—because presumably otherwise its lawsuit will fail. The Supreme Court has long disapproved of this kind of game playing. Back in 1886, it wrote:
Some persons seem to suppose that a claim in a patent is like a nose of wax which may be turned and twisted in any direction, by merely referring to the specification, so as to make it include something more than, or something different from, what its words express.
Just last year, the Supreme Court issued an opinion in a case called Nautilus v. Biosig Instruments emphasizing that patent claims must describe the invention with “reasonable certainty.” Using a word with a well-known and precise definition, like “integer,” and then insisting that this word means something else entirely is the very antithesis of reasonable certainty.
We hope the district court applies long-standing Supreme Court law and doesn’t allow Core Wireless to invent a new meaning for “integer.” Patent claims are supposed to provide notice to the public. The public should not be forced to guess what meaning the patent owner might later invent for the claims, on penalty of infringement damages.
Ultimately, this is just one baseless argument in a bigger case. But it reveals a deeper problem with the patent litigation system. A patent owner wouldn’t argue that “integer” doesn’t include the number one unless it thought it might get away with it. The Patent Office and lower courts need to diligently apply the Supreme Court’s requirement that claims be clear. We also need legislative reform to discourage parties from making frivolous arguments because they think they can get away with it. This should include venue reform to prevent trolls from clustering in the troll-friendly Eastern District of Texas.
Craig Mod has a fascinating article for Aeon, talking about the unfortunate stagnation in digital books. He spent years reading books almost exclusively in ebook form, but has gradually moved back to physical books, and the article is a long and detailed exploration into the limits of ebooks today — nearly all of which are not due to actual limitations of the medium, but deliberate choices by the platform providers (mainly Amazon, obviously) to create closed, limited, DRM-laden platforms for ebooks.
When new platform innovations come along, the standard progression is that they take the old thing — whatever it is they’re “replacing” — and create a new version of it in the new medium. Early TV was just radio plays where you could see the people, for example. The true innovation starts to show up when people realize that you can do something new with the new medium that simply wasn’t possible before. But, with ebooks, it seems like we’ve never really reached that stage. It’s just replicated books… and that’s it. The innovations on top of that are fairly small. Yes, you can suddenly get any book you want, from just about anywhere, and start reading it almost immediately. And, yes, you can take notes that are backed up. Those are nice. But it still just feels like a book moved from paper to digital. It takes almost no advantage of either the ability to expand and change the canvas or the fact that you’re now part of a world-connected network where information can be shared.
While I don’t think (as some have argued) that Amazon has some sort of dangerous “monopoly” on ebooks, Mod is correct that there’s been very little pressure on Amazon to continue to innovate and improve the platform. And, he argues (quite reasonably), if Amazon were to open up its platform and let others innovate on top of it, the whole thing could become much more valuable:
It seems as though Amazon has been disincentivised to stake out bold explorations by effectively winning a monopoly (deservedly, in many ways) on the market. And worse still, the digital book ‘stack’ – the collection of technology upon which our digital book ecosystems are built – is mostly closed, keeping external innovators away.
To understand how the closed nature of digital book ecosystems hurts designers and readers, it’s useful to look at how the open nature of print ecosystems stimulates us. ‘Open’ means that publishers and designers are bound to no single option at most steps of the production process. Nobody owns any single piece of a ‘book’. For example, a basic physical book stack might include TextEdit for writing; InDesign for layout; OpenType for fonts; the printers; the paper‑makers; the distribution centres; and, finally, the bookstores that stock and sell the hardcopy books.
And, on top of this, people creating “ebooks” are limited to the options given to them by Amazon and Apple and Google. And then it all gets locked down:
Designers working within this closed ecosystem are, most critically, limited in typographic and layout options. Amazon and Apple are the paper‑makers, the typographers, the printers, the binders and the distributors: if they don’t make a style of paper you like, too bad. The boundaries of digital book design are beholden to their whim.
The fact that all of these platforms rely on DRM — often at the demands of short-sighted publishers — only makes the problem worse:
The potential power of digital is that it can take the ponderous and isolated nature of physical things and make them light and movable. Physical things are difficult to copy at scale, while digital things in open environments can replicate effortlessly. Physical is largely immutable, digital can be malleable. Physical is isolated, digital is networked. This is where digital rights management (DRM) – a closed, proprietary layer of many digital reading stacks – hurts books most and undermines almost all that latent value proposition in digital. It artificially imposes the heaviness and isolation of physical books on their digital counterparts, which should be loose, networked objects. DRM constraints over our rights as readers make it feel like we’re renting our digital books, not owning them.
If ebook platforms and technology were more open, it’s quite conceivable that we’d be experiencing a different kind of ebook revolution right now. People could be much more creative in taking the best of what books provide and leveraging the best of what a giant, connected digital network provides — creating wonderful new works of powerful art that go beyond the standard paper book. But we don’t have that. We have a few different walled gardens, locked tight, and a weak recreation of the paper book in digital form.
It’s difficult to mourn for lost culture that we never actually had, but it’s not difficult to recognize that we’ve probably lost a tremendous amount of culture and creativity by not allowing such things to thrive.
Bloomberg has a weird story about Unwired Planet’s patent trolling. As we’ve discussed, Unwired Planet is a company that’s gone through many forms over the years, from Phone.com to Openwave and then Unwired Planet. It’s true that the company was something of a pioneer in early WAP browsers, but WAP browsers were a joke that never caught on. The mobile internet didn’t really catch on until the rise of smartphones and higher-bandwidth wireless data connections — which Unwired Planet had nothing to do with. So, like many failed tech companies, it decided to go full-on patent troll. A few years ago, we wrote about it buying more than 2,000 patents from Ericsson, which it has been using to shake down companies that succeeded in the very space where Unwired Planet failed.
The Bloomberg article is mostly unremarkable, other than calling the company the “inventor” of the mobile internet. That’s misleading. It was one hyped up company that helped push a failed vision of a mobile internet, that eventually went nowhere. And now it’s patent trolling. But the other bizarre part of the article is that it quotes Stanford professor Stephen Haber as claiming that consumers benefit from patent trolls:
“The losers from a world without patent litigation would, in the end, be consumers,” said Haber. Inventors won’t innovate unless they can ensure they are paid for their invention, he argued.
He may argue that, but he’s wrong. Like, really wrong. Actual research shows that the leading reasons for innovating have absolutely nothing to do with patents. Rather, people and companies tend to innovate because (1) they need something themselves or (2) they see a need in the market. And the “ensure they are paid for their invention” argument makes no sense. If they have an invention people want, then they can sell that product and make money that way. You don’t need patents for that. Yes, some others may enter the market as well, but that’s called competition, and that’s a good thing.
Amazingly, if you look at Stephen Haber’s official bio, you’d think he’d know this. After all, it says:
Haber has spent his academic life investigating the political institutions and economic policies that delay innovation and improvements in living standards. Much of that work has focused on how regulatory and supervisory agencies are often used by incumbent firms to stifle competition, thereby curtailing economic opportunities and slowing technological progress.
Incumbent firms using regulatory agencies to stifle competition is basically the definition of the patent system. Yet, instead, Haber has spent the last few years preaching the wonders of patent trolling, insisting that lots of litigation is just fine and that there’s no evidence it’s harming consumers. That’s ridiculous. Tons of studies have shown the massive costs of patent trolling on innovation.
Having a Stanford professor spout such nonsense reflects incredibly poorly on Stanford.
Before and after the FCC imposed its new net neutrality rules, you’ll recall, there was no shortage of hand-wringing from major ISPs and net neutrality opponents about how these “draconian regulations from a bygone era” would utterly decimate the Internet. We were told investment would freeze, innovation would dry up like dehydrated jerky, and in no time at all net neutrality would have us all collectively crying over our busted, congested tubes.
And, of course, shockingly, absolutely none of that is happening. Because what the ISPs feared about net neutrality rules wasn’t that it would senselessly hurt their ability to invest, but that it would harm their ability to take aggressive and punitive advantage of the lack of competition in last mile broadband networks. Obviously ISPs can’t just come out and admit that, so what we get instead is oodles of nonsense, including bogus claims that net neutrality violates ISPs’ First Amendment rights.
About a year ago, you’ll recall that companies like Netflix, Cogent, and Level 3 accused most of the major ISPs of intentionally letting their peering points get congested. The goal, these companies claimed, was to kill the long-standing idea of settlement-free peering, and drive services like Netflix toward striking new interconnection deals that would, presumably, be jacked up over time. One year on and Cogent CEO Dave Schaeffer notes that most of the congestion that plagued these interconnection points last year has somehow magically disappeared:
“Speaking to investors during the Deutsche Bank 23rd Annual Leveraged Finance Conference, Dave Schaeffer, CEO of Cogent, said that the FCC’s adoption of net neutrality rules that include Title II regulation, and passage of similar rules in the European Union, have led to ports on other networks becoming unclogged. “The adoption of the Open Internet order and Title II jurisdictional authority were mirrored in the EU and on June 30 the European Commission adopted a set of regulations that were passed by the parliament and the council,” Schaeffer said. “As a result of that we have seen significant port augmentations.”
Schaeffer proceeded to note that AT&T and Verizon “are nearly congestion free” and would be completely congestion free sometime in the fourth quarter. Negotiations with other ISPs appear to also be going well. Funny how that works, huh? And note the FCC didn’t even have to do all that much; we simply needed the mere threat of a regulator actually doing its job to make the mega-ISPs play nice. In other words, net neutrality rules that were supposed to destroy the Internet have instead resulted in companies that were at each other’s throats a year ago suddenly getting along famously, and the Internet itself working better than before.
Sure, some ISP think tankers have spent the last few weeks being paid to pretend that network investment has dried up, but there’s absolutely no indication that’s the case. In fact, the biggest ISPs historically opposed to net neutrality have announced major deployment projects since, including Comcast’s plan to deploy two-gigabit fiber to 18 million homes, Verizon’s plan to invest heavily in the fifth generation of wireless technology, and AT&T’s $68 billion acquisition and subsequent plans for fixed-wireless broadband and (when it can be bothered to get around to it) gigabit fiber.
Granted, ISPs will argue that it’s still early and that the sky will likely fall due to net neutrality any day now. A more likely explanation is that incumbent ISPs and their army of paid mouthpieces were utterly and unmistakably full of shit.
For just $25.99, you can brush up your security skills with this CompTIA Advanced Security Practitioner (CASP) training course. Obviously, you won’t get the certification with just the deal, but you’ll be better prepared to take the CASP certification exam. You will get access to courses on risk mitigation, security-privacy policies and 15 other courses that follow CompTIA authorized objectives. Keep your IT skills up to date and get this deal while it lasts.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
We’ve obviously written a few times now about the big OPM hack that was revealed a few months ago, in which it appears that hackers (everyone’s blaming China for this) were able to get in and access tons of very, very private records of current and former government employees — apparently including tons of SF-86 forms. Those forms are required to be filled out for anyone in a national security job in the government, and it basically requires you to ‘fess up to anything you’ve ever done that might, at some point, reflect badly on you. The basic idea behind it is that if you’ve already admitted to everything, then it makes it much harder for anyone to somehow blackmail you into revealing US national security secrets. But, of course, that also makes those documents pretty damn sensitive. And, by now of course you’ve heard that the Office of Personnel Management was woefully unprepared to properly protect such sensitive data.
Two recent statements made by top intelligence community leaders should again raise questions about why these guys have been put in charge of “defending” against computer attacks. First up, we have the head of the NSA, Admiral Mike Rogers. Back in August, we noted that Senator Ron Wyden had asked the National Counterintelligence and Security Center (NCSC) whether it had even considered the OPM databases “as a counterintelligence vulnerability” prior to these attacks. In short: did the national security community that was in charge of protecting computer systems even realize this was a target? As Marcy Wheeler pointed out last month, Admiral Rogers more or less admitted that the answer was no:
After the intrusion, “as we started more broadly to realize the implications of OPM, to be quite honest, we were starting to work with OPM about how could we apply DOD capability, if that is what you require,” Rogers said at an invitation-only Wilson Center event, referring to his role leading CYBERCOM.
NSA, meanwhile, provided “a significant amount of people and expertise to OPM to try to help them identify what had happened, how it happened and how we should structure the network for the future,” Rogers added.
In other words, the guy who is literally in charge of the “US Cybercommand” organization that is supposed to protect us from computer-based attacks didn’t realize until after the hack that this might be a relevant target.
Then, fast forward to last week, where Rogers’ boss, Director of National Intelligence James Clapper, testified at a Congressional hearing about the hack. After admitting that CIA employees had to be quickly evacuated from China after the hack, he more or less said that the US shouldn’t retaliate, because this was “just espionage” and that the US has basically done the same thing back to them. At least that’s the implication of his “wink wink, nod nod” statement to the Senators:
Director of National Intelligence James R. Clapper Jr., testifying before the Senate Armed Services Committee, sought to make a distinction between the OPM hacks and cybertheft of U.S. companies’ secrets to benefit another country’s industry. What happened in OPM case, “as egregious as it was,” Clapper said, was not an attack: “Rather, it would be a form of theft or espionage.”
And, he said, “We, too, practice cyberespionage and . . . we’re not bad at it.” He suggested that the United States would not be wise to seek to punish another country for something its own intelligence services do. “I think it’s a good idea to at least think about the old saw about people who live in glass houses shouldn’t throw rocks.”
Now, he’s actually making a totally valid point concerning what the US’s response should be. Escalating this issue by hitting back at China isn’t going to help anything. Rather, of course, the US government should have done a much better job protecting the information in the first place.
But when you look at these statements together, it shows the somewhat cavalier attitude of the US intelligence community towards actually protecting key US assets. And that’s because the US intelligence community is — as Clapper basically admits — much more focused on hacking into other countries’ systems. For a while now, people have questioned why the NSA should be handling both the offensive and defensive “cybersecurity” programs. The theory has long been that because the NSA is so damn good at the offensive side, it’s better positioned to understand the risks and challenges on the defensive side. Yet, given that the NSA’s overall mission is so focused on breaking into other systems, it seems that whenever the two conflict, the offensive side wins out and less is done to protect us. The simple fact that the US intelligence community is basically admitting that we do exactly these kinds of attacks on China, yet never considered the same might be done to us, should raise pretty serious questions about why we let the intelligence community handle protecting us against such intrusions in the first place.