Air Force, Lockheed Martin Combine Forces To 'Lose' 100,000 Inspector General Investigations

from the Up-in-the-air!-Into-the-wi34dz.eea.3rdek))we$#21....-[A]BORT,-[R]ETRY,-[F]AIL dept

In an era where storage is insanely cheap and the warning to schedule regular backups has been ringing in the ears of computer users for more than four decades, there’s seemingly no explanation for the following:

The U.S. Air Force has lost records concerning 100,000 investigations into everything from workplace disputes to fraud.

A database that hosts files from the Air Force’s inspector general and legislative liaison divisions became corrupted last month, destroying data created between 2004 and now, service officials said. Neither the Air Force nor Lockheed Martin, the defense firm that runs the database, could say why it became corrupted or whether they’ll be able to recover the information.

The Air Force didn’t lose investigations dating back to the mid-60s and stored on archaic, oddball-sized “floppies.” It lost more than a decade’s worth of investigatory work — from 2004 going forward, right up to the point that Lockheed discovered the “corruption” and spent two weeks trying to fix it before informing its employer. At which point, the USAF kicked it up the ladder to its bosses, leaving them less than impressed.

In a letter to Secretary James on Monday, Sen. Mark Warner, D-Va., said the lost database “was intended to help the Air Force efficiently process and make decisions about serious issues like violations of law and policy, allegations of reprisal against whistleblowers, Freedom of Information Act requests, and Congressional inquiries.”

“My personal interest in the [Inspector General’s] ability to make good decisions about the outcomes of cases, and to do so in a timely manner, stems from a case involving a Virginia constituent that took more than two years to be completed, flagrantly violating the 180-day statutory requirement for case disposition,” Warner wrote.

Some notification is better than no notification, even if the “some” notification is extremely minimal and arrives well after the fact. Senator Warner remains underwhelmed.

“The five-sentence notification to Congress did not contain information that appeared to have the benefit of five days of working the issue,” Warner wrote.

The Air Force says there’s no evidence of malicious intent, as far as it can tell. But there’s also no evidence of competence. Why is it that files related to oversight of a government agency have no apparent redundancy? It’s small details like these that show the government generally isn’t much interested in policing itself.

If anything’s going to be recovered, it’s going to be Lockheed’s job, and it’s already spent a few weeks trying with little success. There may be some files stored locally at bases where investigations originated, but they’re likely to be incomplete.

While I understand that the inherent nature of bureaucracy makes it difficult to build fully functioning systems that can handle digital migration with any sort of grace, it’s completely incomprehensible that a system containing files collected over the last decade would funnel into a single storage space with no backup. It would be one thing if this were just the Air Force’s fault.

But this is more Lockheed’s fault, and Lockheed, beyond its position as a favored government contractor, is known for its innovation and technical prowess. Neither of those qualities is on display in this public embarrassment. And if it can’t recover the data, it has effectively erased more than a decade’s worth of government mistakes, abuse, and misconduct. And while no one’s going to say anything remotely close to this out loud, there have to be more than a few people relieved to see black marks on their permanent records suddenly converted to a useless tangle of 1s and 0s.

Companies: lockheed martin


Comments on “Air Force, Lockheed Martin Combine Forces To 'Lose' 100,000 Inspector General Investigations”

50 Comments
Mason Wheeler (profile) says:

In an era where storage is insanely cheap and the warning to schedule regular backups has been ringing in the ears of computer users for more than four decades, there’s seemingly no explanation for the following:

Unfortunately, it’s not quite that simple. Making a backup is easy; restoring it afterwards is not always so easy, for a variety of technical reasons. Horror stories about losing everything, thinking you had it backed up, and then not being able to restore from backups abound.

Anonymous Coward says:

Re: Re: Re:

Copying stuff to external drive(s) works when you’ve got individual files that mean something, like MP3s, .docs, etc., and when your external drives can actually hold all of the data.

It doesn’t work as well when you’re talking about multiple terabytes of database spread across a RAID.

It works even less well if you’re not permitted to take the system down for maintenance – i.e., you can’t simply snapshot the system.

I agree that if you can’t restore it, it is worthless.

I submit, though, that if you try to restore it and find some data is seriously damaged … then go back to the original and find that it was, er, faithfully copied from the original … THEN you have a problem.

Anonymous Coward says:

Re: Re: Re: Re:

It doesn’t work as well when you’re talking about multiple terabytes of database spread across a RAID.

I’m sure that many mere ignorant newbies such as yourself actually believe that. You’re wrong. It’s really quite easy for anyone equipped with sufficient intelligence and experience.

For example, I’ve been backing up an operation that has about 4/10 of a petabyte of operational disk. Monthly (full) and daily (partial) backups are done. They all get compressed and encrypted and copied to external USB drives (currently: 4T drives, soon: 6T drives). Yes, they’re tested. Yes, I’ve had to restore from them — many times. Yes, it works. Everything is logged, everything is cataloged, everything is cycled through a retention process that ensures both backup/restore capability and disaster recovery.

It wasn’t hard. I used standard Unix tools and a little scripting. The largest cost was buying all the drives.
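
For illustration only, here’s a minimal Python sketch of that kind of compress-encrypt-copy-verify step; the paths, GPG recipient, and drive mount point are invented placeholders, not the commenter’s actual setup:

#!/usr/bin/env python3
# Hypothetical sketch of a nightly "compress, encrypt, copy to external drive" step.
# All paths and the GPG recipient are placeholders, not anyone's real setup.
import hashlib
import shutil
import subprocess
import tarfile
from datetime import date
from pathlib import Path

SOURCE = Path("/srv/records/exports")        # data to protect (hypothetical)
STAGING = Path("/var/backups/staging")       # local scratch space (hypothetical)
EXTERNAL = Path("/mnt/usb_backup")           # mounted external drive (hypothetical)
GPG_RECIPIENT = "backup-admin@example.org"   # placeholder key ID

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def run_backup() -> None:
    STAGING.mkdir(parents=True, exist_ok=True)
    archive = STAGING / f"backup-{date.today().isoformat()}.tar.gz"
    encrypted = archive.with_name(archive.name + ".gpg")

    # 1. Compress the source tree into a tarball.
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(str(SOURCE), arcname=SOURCE.name)

    # 2. Encrypt with GPG (assumes the recipient's public key is in the keyring).
    subprocess.run(
        ["gpg", "--batch", "--yes", "--encrypt",
         "--recipient", GPG_RECIPIENT,
         "--output", str(encrypted), str(archive)],
        check=True,
    )

    # 3. Copy to the external drive and confirm the copy hashes identically.
    dest = EXTERNAL / encrypted.name
    shutil.copy2(encrypted, dest)
    if sha256(encrypted) != sha256(dest):
        raise RuntimeError(f"copy verification failed for {dest}")
    print(f"backed up and verified: {dest}")

if __name__ == "__main__":
    run_backup()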

I expect anyone of modest ability to be able to do the same. Anyone who can’t is incompetent and stupid.

Stephen says:

Re: Re: Re:3 Re:

There is software you can buy which will do backups of large amounts of data. The IT department of the university I used to work for had one such piece of software, and they had hundreds of terabytes of data which had to be backed up every night.

I would also point out that backing up in and of itself is not a full solution. Large organisations (like the US Air Force) need an offsite backup system as well. That is, you need a place away from your main backup server where you store copies of your data. That way, if the worst happens and disaster does strike (e.g. a fire or a nuclear blast), wiping out not only your main copy of the data but your main backup as well, you aren’t left with absolutely nothing. You would still have at least one offsite copy. It might be a little dated, but that would be better than nothing at all.

In the case of this particular fiasco, I suspect the underlying problem is that the US Air Force decided to outsource their storage of that database to Lockheed without fully investigating what it was they would get in exchange.

One of the consequences for the rest of us is that we can now see one of the downsides of storing data out in the cloud: in all likelihood there are no backups of that data. If your cloud service loses it (and these things do happen), it will be a case of Tough Luck Kiddo.

Ninja (profile) says:

To be honest, we’ve been hearing the call for backups for decades, but most of us still don’t do it properly even though we are well aware. At least in my experience, there are very few people who do backups flawlessly. Kind of a side note I wanted to point out.

Not to ignore the fact that it’s a major company that should know better, the Government should have double checked if there was redundancy too. If I were to hire a company to back up my stuff, I would want not only to see the separate server/farm that’s doing the work but also to select random content from my server/data center, retrieve it from the backups, and compare hashes. Of course, I would probably be interested in preserving such files, whereas we can’t be so sure where the Government is concerned.
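
A minimal sketch of that kind of spot check, assuming the live data and the backup copy are both visible as directory trees; the paths and sample size are made up for illustration:

#!/usr/bin/env python3
# Hypothetical spot check: hash a random sample of live files and compare
# them against the corresponding files in the backup copy. Paths are placeholders.
import hashlib
import random
from pathlib import Path

LIVE = Path("/data/live")        # primary data (hypothetical)
BACKUP = Path("/data/backup")    # mirrored or restored backup copy (hypothetical)
SAMPLE_SIZE = 25                 # how many random files to check

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def spot_check() -> bool:
    candidates = [p for p in LIVE.rglob("*") if p.is_file()]
    sample = random.sample(candidates, min(SAMPLE_SIZE, len(candidates)))
    ok = True
    for live_file in sample:
        backup_file = BACKUP / live_file.relative_to(LIVE)
        if not backup_file.exists():
            print(f"MISSING in backup: {backup_file}")
            ok = False
        elif sha256(live_file) != sha256(backup_file):
            print(f"HASH MISMATCH: {live_file}")
            ok = False
    print("spot check passed" if ok else "spot check FAILED")
    return ok

if __name__ == "__main__":
    spot_check()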

Also, conspiracy theories. Maybe this was intended? I wouldn’t be surprised.

Mason Wheeler (profile) says:

Re: Re:

Not to ignore the fact that it’s a major company that should know better, the Government should have double checked if there was redundancy too.

…and then you end up with a fun balancing act. The more copies of your data that exist, and the more places they exist in, the more likely it is that one of these copies will be the subject of a data breach at some point. When dealing with sensitive information, this is something that has to be taken into account.

DannyB (profile) says:

Re: Re:

Yep.

One of the biggest problems with backup ages ago was that it was either:
1. expensive and somewhat convenient
2. inexpensive and highly inconvenient

Today it can be inexpensive and fairly convenient.

Today a 2 TB pocket hard drive, which can be disconnected, labelled and then locked in a fire safe, costs less than what once was an expensive, slow, and inconvenient sequential access backup tape that required a very expensive tape drive. And usually required overnight backup. And probably various differential or partial backups in order to not use up too much backup capacity.

Today, you can back up, well, probably everything, to one or two pocket drives in a fairly short time. The more clever can rsync to a backup drive.

For what you once invested in 14 days worth of backup tapes, you can now spend on 14 days worth of pocket hard drives that are easy to use.

With databases, things are more complex. But you could have automated backups to a specified folder. And that folder could get backed up to other storage (like pocket drives) which go in a fire safe. Databases could also be replicated across multiple machines. And with backups.

Databases could be dumped to text SQL scripts that can reconstruct the database, and those are extremely compressible.
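
As a rough illustration of that dump-and-compress idea, assuming a PostgreSQL database (the comment doesn’t name one) and invented paths:

#!/usr/bin/env python3
# Hypothetical nightly dump of a PostgreSQL database to a compressed SQL script.
# The database name and output folder are placeholders for illustration.
import gzip
import shutil
import subprocess
from datetime import date
from pathlib import Path

DB_NAME = "case_tracking"                    # hypothetical database
DUMP_DIR = Path("/var/backups/db_dumps")     # folder that itself gets backed up

def dump_database() -> Path:
    DUMP_DIR.mkdir(parents=True, exist_ok=True)
    sql_path = DUMP_DIR / f"{DB_NAME}-{date.today().isoformat()}.sql"
    gz_path = sql_path.with_name(sql_path.name + ".gz")

    # pg_dump writes a plain-text SQL script that can rebuild the database.
    with sql_path.open("wb") as out:
        subprocess.run(["pg_dump", "--no-password", DB_NAME],
                       stdout=out, check=True)

    # Plain SQL dumps compress very well, as noted above.
    with sql_path.open("rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    sql_path.unlink()
    return gz_path

if __name__ == "__main__":
    print("wrote", dump_database())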

These schemes are easy to verify. And at least once in a while you should set up a VM with a database server and try doing a restore of the database. Maybe yearly. And you could just keep a snapshot of that VM (before the restore) to practice doing the restore with again next year. In fact, that VM is worth backing up, because it is what you use to do your annual testing of your restore procedure. What handier way to know that you can restore? Even if software changes, you’ve got a VM that less than a year ago was able to restore the backed-up media.

These days with clusters, if you can automate builds and deployments of systems, you could automate backups, and restores to separate systems just to prove every night’s backup actually can restore and simply get daily reports on the success.
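
A minimal sketch of such a nightly restore drill, again assuming PostgreSQL plus hypothetical paths and database names: load the newest dump into a throwaway database, run a sanity query, and report the result.

#!/usr/bin/env python3
# Hypothetical nightly restore drill: load the newest dump into a throwaway
# PostgreSQL database and confirm it answers a basic query. Names are placeholders.
import gzip
import subprocess
from pathlib import Path

DUMP_DIR = Path("/var/backups/db_dumps")   # where the dumps land (hypothetical)
SCRATCH_DB = "restore_test"                # throwaway database for the drill

def latest_dump() -> Path:
    dumps = sorted(DUMP_DIR.glob("*.sql.gz"))
    if not dumps:
        raise FileNotFoundError("no dumps found to test")
    return dumps[-1]

def restore_and_check() -> None:
    dump = latest_dump()

    # Recreate the scratch database from scratch for every drill.
    subprocess.run(["dropdb", "--if-exists", SCRATCH_DB], check=True)
    subprocess.run(["createdb", SCRATCH_DB], check=True)

    # Feed the decompressed SQL script to psql.
    with gzip.open(dump, "rb") as f:
        subprocess.run(["psql", "--quiet", SCRATCH_DB],
                       input=f.read(), check=True)

    # Sanity check: the restored database should contain at least one table.
    result = subprocess.run(
        ["psql", "--tuples-only", "--no-align", "--command",
         "SELECT count(*) FROM information_schema.tables "
         "WHERE table_schema = 'public';", SCRATCH_DB],
        capture_output=True, text=True, check=True)
    tables = int(result.stdout.strip())
    if tables > 0:
        print(f"restore test OK: {tables} tables restored from {dump.name}")
    else:
        print(f"restore test FAILED: no tables found in {dump.name}")

if __name__ == "__main__":
    restore_and_check()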

I could go on, but I agree with your basic premise. This is either extreme incompetence (not surprising for government work) or a conspiracy to cover up something (also not surprising).

Anonymous Coward says:

Not quite enough information to determine what happened.

They may have been doing backups, but they weren’t aware that the backups weren’t any good. Notice that the article mentioned that the files were “corrupted.” A database has a lot of files and virtually none of them are plain text. They tend to be indexes into other files which have blocks of data, etc., etc., etc. It’s entirely possible for those indexes and data blocks to be corrupted due to a programming error while the database continues to look functional and backups continue to be made. But eventually, enough damage occurs to the files and they become so corrupted that the database crashes. Then the backups are examined, and it’s discovered that they too are corrupt. Like I said, the article just doesn’t have enough technical information to make an assessment one way or the other.

PaulT (profile) says:

Re: Not quite enough information to determine what happened.

Sure, but because this is a known risk, you are also meant to periodically test the backups to verify that they recover correctly. Then you store those somewhere safe so that even in cases of absolute catastrophic failure the database is still available. That 12 years of data was lost suggests that any backups were never verified and/or were in a position to be affected by whatever caused the production version to fail.

Anonymous Coward says:

Re: Re: Not quite enough information to determine what happened.

True enough, but then again, they may not have been permitted to do a test restore of the backups, or for that matter they may not have even been permitted to make backups. For some older database programs the database needs to be quiescent in order to make a backup, and that means that users can’t access the database while it’s being backed up. If the user community has enough pull, it may be impossible to make a backup, so instead they may attempt to rely on RAID. But that too has its issues, since RAID mitigates hardware failures but doesn’t protect against corruption by faulty software.

PaulT (profile) says:

Re: Re: Re: Not quite enough information to determine what happened.

“they may not have been permitted to do a test restore of the backups, or for that matter they may not have even been permitted to make backups”

If that’s a law or a rule, then responsibility for the incident goes to whoever was responsible for such rules rather than the poor admin who was ordered to follow it. It’s still incompetence in that case, just not on the part of LM’s tech crew.

If someone prevented the tech team from doing their job by refusing to let them shut the DB down when required, then this needs to be brought up in the investigation to ensure that lower-level lackeys are not blamed for having to follow the chain of command (I know how likely that is, but still…).

Either way, this was a predictable risk and should have been mitigated. Presuming no deliberate sabotage, someone somewhere was incompetent.

Anonymous Coward says:

Re: Re: Re: Not quite enough information to determine what happened.

they may not have been permitted to do a test restore of the backups, or for that matter they may not have even been permitted to make backups

Do you have a source for that being the case or are you just making crap up?

Anonymous Coward says:

Re: Re: Re:2 Not quite enough information to determine what happened.

Do you have a source for that being the case or are you just making crap up?

What part of “Not quite enough information to determine what happened” did you not understand?

Some years back, I did work in the military and was assigned to WHCA. Believe me that when a high-ranking, technically ignorant individual wants something, they get it, even if it’s a rather stupid thing. And if the affected database is being used over a wide geographic area (e.g. being accessed worldwide), there would be more than enough idiots who think the database can NEVER go down because it would impact users.

Now with more modern file systems (think ZFS in Solaris), it’s trivial to perform backups of entire file systems snapshotted at a moment in time, even if there are other processes actively updating the file system. And that capability is fantastic for databases that have to be up 24/7, because a well-designed database engine is capable of recovering a coherent database even if there is a power failure and the system goes down hard. But such a database usually isn’t capable of performing the recovery if the various files are not internally consistent, which would be the case if simple file copies were made while the database was being actively updated. Hence the requirement for the database to be quiescent when being backed up.

But the question is “Were they using such a system?” Somehow I doubt they were, since ZFS was introduced in late 2005 and the article mentioned records from 2004.
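
For what it’s worth, the snapshot-while-live step described above really is a one-liner on ZFS; here’s a hedged Python sketch of a snapshot-and-replicate job, with the pool, dataset, and target names invented for illustration:

#!/usr/bin/env python3
# Hypothetical ZFS snapshot-and-send step of the kind described above.
# The pool/dataset and the receive target are invented names; needs ZFS and root.
import subprocess
from datetime import datetime

DATASET = "tank/case_db"                     # hypothetical ZFS dataset
TARGET_HOST = "backup@vault.example.org"     # hypothetical receiving host
TARGET_DATASET = "tank/case_db_copy"         # hypothetical receiving dataset

def snapshot_and_send() -> str:
    snap = f"{DATASET}@backup-{datetime.now():%Y%m%d-%H%M%S}"

    # Point-in-time snapshot; safe to take while the filesystem stays in use.
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # Stream the snapshot to another machine: zfs send | ssh ... zfs receive.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", TARGET_HOST, "zfs", "receive", "-F", TARGET_DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")
    return snap

if __name__ == "__main__":
    print("replicated snapshot", snapshot_and_send())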

Anonymous Coward says:

Re: Not quite enough information to determine what happened.

Reminds me of a startup I worked at many years ago. eBay had a crash because a particular SQL statement corrupted their entire database, and every time they tried to recover, they replayed the SQL statement and destroyed the restored database.

I realized we could have the same problem, so I commandeered a machine with enough storage to hold the production database and then set it up so it was always 15-30 minutes behind the production database. The idea being we would have time to stop the replication if the production database went down, or we could switch to it with minimal loss if necessary.
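
The comment doesn’t say what database that was; as a rough modern equivalent, PostgreSQL can keep a standby deliberately lagging via its recovery_min_apply_delay setting, and a sketch of a lag check (with placeholder names) might look like this:

#!/usr/bin/env python3
# Hypothetical check that a deliberately delayed standby is lagging as intended.
# Assumes a PostgreSQL standby configured with recovery_min_apply_delay = '30min';
# the commenter's actual database and tooling are unknown.
import subprocess

STANDBY_DB = "case_tracking"   # placeholder database name on the standby host

def replay_lag_seconds() -> float:
    # On a standby, this reports how far behind the primary the replay is.
    result = subprocess.run(
        ["psql", "--tuples-only", "--no-align", "--command",
         "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()));",
         STANDBY_DB],
        capture_output=True, text=True, check=True)
    return float(result.stdout.strip())

if __name__ == "__main__":
    minutes = replay_lag_seconds() / 60
    # A delayed standby is *supposed* to lag; alert only if it drifts too far.
    status = "OK" if 15 <= minutes <= 60 else "ALERT"
    print(f"{status}: standby is {minutes:.1f} minutes behind the primary")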

Within a week of me leaving, the machine had been repurposed, because obviously we didn’t need a warm spare database. After all we had backups.

About a week after that, a planned database test resulted in a destroyed database and the discovery that the backups didn’t work. They were down for weeks.

A few years before that, I suspected that the IT department wasn’t backing up one of the databases, even though the head of IT assured me that all the databases were being properly backed up.

I quietly started dumping the database nightly to a development machine. A couple of weeks after I left, I was forwarded a message from the head of IT that said, “Oh, that database. We don’t back up that database. We only backup the MS SQL databases.” With a note attached saying, “Where did you say you put those backups again?”

OldMugwump (profile) says:

Heads need to roll

I see only two possibilities:

1 – Lockheed, a major defense contractor with decades of experience with computers and IT systems, for 12 years running, failed to back up or check their backups of a critical USAF system needed to verify USAF compliance with law.

or,

2 – The system was deliberately corrupted to cover up criminal activity.

It’s hard to tell which is more likely.

Either way, heads need to roll. Every person in the management chain responsible for this debacle needs to be fired, from the CEO on down.

How many millions did the Pentagon pay Lockheed to screw this up?

JoeCool (profile) says:

Re: Re:

She already tried that. She had her personal assistants go through the emails to sort out “personal” email from business email, printed the business emails, then nuked the server drive. Unfortunately for her, there was a backup she clearly didn’t know about that was used to find the thousands of classified emails she was keeping on the server.

Steve Swafford (profile) says:

smh

I can’t believe how cynical I have become but I just can’t imagine believing anything at all that anyone from any dept of the federal government says about anything anymore lol. I seriously question anything and everything they ever say about anything. Why bother? They’re just going to lie more and in the end, no one is going to do anything about any of it anyway. Wtf has happened to me lol

Sasparilla says:

LockMart would benefit the most

This all sounds like having the fox in charge of storage/backup of records of wrongdoing in the henhouse.

Of all the contractors who could benefit from this “all the backups don’t work” issue, Lockheed Martin is one of the biggest (F-35, F-22, Atlas V, and on and on) and probably had a lot of entries in that database.

Scandalous is what this is – there is no way one of the biggest prime contractors for the Air Force should be in charge of the Air Force Inspector General’s DB and storage backups like this; the temptation for self-interested manipulation is too great.

OldMugwump (profile) says:

Re: Let me get this straight

Yes, you understand correctly.

You work for a for-profit firm. Which, being for-profit and owned by investors who hope to make money, is inherently evil.

While the government, on the other hand, works for the peeple. So they’re inherently good, and don’t need any watching.

Because democracy, you see. Each peeple gets a 1-in-300-million say in what the government does. So the government would never do anything to hurt a peeple.

While an investor would of course kill anyone for a penny, if they could get away with it.

Jim says:

do not believe this story

I worked for the federal government for 38 years and know for a fact that all records have a backup hard copy, typically burned onto two write-only, high-density disk drives. One is stored on site and the other is stored on the opposite coast, just in case war or a natural disaster takes out one copy. This directive has been in place for at least 20 years, to prevent an “accidental” loss of the primary. So do not believe this story.
