Could Firmware Expiration Dates Fix The Internet Of Broken Things…Before People Get Hurt?

from the the-looming-global-IOT-shitstorm dept

If you hadn’t noticed, the incredibly flimsy security in most Internet of Things devices has resulted in a security and privacy dumpster fire of epic proportions. And while recent, massive DDoS attacks like the one leveled against DNS provider Dyn last year are just one symptom of this problem, most security analysts expect things to get dramatically worse before they get better. And by worse, most of them mean that these vulnerabilities will eventually enable attacks on core infrastructure that result in human deaths… at scale.

Estimates suggest that somewhere between 21 billion and 50 billion IoT devices will come online by 2020. That’s 21 to 50 billion new attack vectors on homes, businesses and governments. And many of these are products too expensive and long-lived to replace every year (cars, refrigerators, ovens), made by companies for whom software — and more importantly firmware updates — aren’t a particular forte or priority.

To date, a number of solutions have been proposed to tackle this explosion of poorly-secured devices, none of which really solves the problem. Agencies like Homeland Security have issued toothless standards that the companies making these poorly-secured products are free to ignore. And efforts at regulating the space, assuming regulators could even craft sensible regulations without hindering the emerging sector in the first place, can similarly be ignored by overseas manufacturers.

In the wake of the WannaCry ransomware, University of Pennsylvania researcher Sandy Clark has proposed another approach: firmware expiration dates. Clark argues that we’ve already figured out how to standardize our relationship with automobiles, with regular inspection, maintenance and repair governed by manufacturer recalls, DOT highway maintenance, and annual owner-obligated inspections. As such, she suggests that similar requirements be imposed on internet-connected devices:

  • A requirement that all IoT software be upgradeable throughout the expected lifetime of the product. Many IoT devices on the market right now contain software (firmware) that cannot be patched even against known vulnerabilities.
  • A minimum time limit by which manufacturers must issue patches or software upgrades to fix known vulnerabilities.
  • A minimum time limit for users to install patches or upgrades; this could perhaps be facilitated by insurance providers (discounts for automated patching, and different price points for different levels of risk).
Of course, none of this would be easy, especially when you consider that this is a global problem requiring coordinated, cross-government solutions in an era when agreement on much of anything is cumbersome. And like previous suggestions, there’s no guarantee that whoever crafted these requirements would do a particularly good job, that overseas companies would consistently comply, or that mandated software upgrades would actually improve device security. And imagine being responsible for determining all of this for the 50 billion looming internet-connected devices worldwide.

That’s why many networking engineers aren’t looking so much at the devices as at the networks they run on. Network operators say they can design more intelligent networks that quickly spot, de-prioritize, or quarantine infected devices before they contribute to the next WannaCry or historically massive DDoS attack. But again, none of this is going to be easy, and it’s going to require multi-pronged, multi-country, ultra-flexible solutions. And while we take the time to hash out whatever solution we ultimately adopt, keep in mind that the 50 billion IoT devices projected by 2020 are expected to balloon to 82 billion by 2025.
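
To make the network-side idea a little more concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical flow records and thresholds, not anything from Clark’s proposal) of the kind of heuristic an operator might use to flag a device whose outbound traffic suddenly looks botnet-like: count how many distinct hosts each device talks to in a window, and quarantine the outliers.

    # Illustrative sketch only: flag LAN devices whose outbound behavior looks botnet-like.
    # Flow records, thresholds, and the quarantine hook are all hypothetical.
    from collections import defaultdict

    MAX_UNIQUE_DESTS = 200      # assumed: a thermostat has no business talking to 200 hosts a minute
    MAX_BYTES_OUT = 50_000_000  # assumed: ~50 MB outbound per minute is suspicious for a lightbulb

    def find_suspect_devices(flows):
        """flows: iterable of (device_ip, dest_ip, bytes_out) tuples for one time window."""
        dests = defaultdict(set)
        bytes_out = defaultdict(int)
        for device, dest, nbytes in flows:
            dests[device].add(dest)
            bytes_out[device] += nbytes
        return [d for d in dests
                if len(dests[d]) > MAX_UNIQUE_DESTS or bytes_out[d] > MAX_BYTES_OUT]

    def quarantine(device_ip):
        # Placeholder: a real network would move the device to an isolated VLAN
        # or drop its traffic at the edge, not just print a message.
        print(f"quarantining {device_ip}")

    if __name__ == "__main__":
        sample = [("192.168.1.50", f"10.0.{i}.{i}", 1500) for i in range(250)]
        for device in find_suspect_devices(sample):
            quarantine(device)

None of the hard parts (who runs this, at what scale, and what counts as "suspicious" for devices the operator has never seen before) are solved by a toy like this, which is exactly why it will take coordination rather than clever code.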



    Comments on “Could Firmware Expiration Dates Fix The Internet Of Broken Things…Before People Get Hurt?”

    44 Comments
    Thad (user link) says:

    Re: Terrible idea

    This would turn out to be NOTHING other than better planned and legalized obsolescence.

    …I’m pretty sure planned obsolescence is already legal.

    Business would quickly turn this to their economic favor

    Well, yes, but "stop supporting a device after 2 years and push customers to buy a new one" is already standard practice. The difference here would be changing "push" to "force".

    I have mixed feelings about the idea. Stricter requirements for vendors to patch vulnerabilities are a good idea. Bricking customers’ devices is not — though if the software is open-source and the device can be rooted, that opens up the opportunity for longer-term third-party support.

    It seems to me that the best solution for Android mobile devices is probably for Google to flex more muscle in requiring OEMs to provide long-term support and security updates. Android is open-source, but OEMs can only include the Google Apps (Google Play, Google Maps, etc.) by license with Google. I think the license should require support for long-term, automatic security updates.

    But that doesn’t really apply to most IoT devices. Even for the ones that are using Android (as opposed to GNU/Linux or something else), Gapps is not nearly as essential on an IoT device as it is on a phone or tablet.

    I’m really not sure what the best solution is. I do think there are cases where security practices are so bad that they qualify as negligent and companies should be fined for them (say, having an open telnet port, a hardcoded root password, or, God help you, both). But I don’t really trust lawmakers to be savvy enough to draw the distinction between a company that’s negligent and one that does a good job with security but gets compromised anyway.
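
    To give a sense of how low that bar is, here’s a rough, illustrative sketch of what "an open telnet port" means in practice; the address is a placeholder, and obviously you should only probe hardware you own:

        # Minimal sketch: check whether a device on your own network is listening on telnet (port 23).
        # The address below is a placeholder; only probe devices you own.
        import socket

        def telnet_open(host, port=23, timeout=2.0):
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        if __name__ == "__main__":
            device = "192.168.1.50"  # hypothetical IoT device on the LAN
            if telnet_open(device):
                print(f"{device} answers on port 23; with a hardcoded root password, it's already lost")
            else:
                print(f"{device} does not answer on port 23")

    Mirai-style botnets do essentially this scan at internet scale and then try a short list of default passwords, which is why shipping a device in that state looks a lot like negligence.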

    Anonymous Coward says:

    Re: Re: Terrible idea

    The solution would be to allow the users to patch their own stuff. No, I’m not saying disable the auto-updates, but allow the user to patch their own system if they choose.

    The way things are right now, the user cannot patch it even if they wanted to. You can’t make an installer that you can tell someone to just download and run anymore. You often have to do the exact thing that the security system was designed to prevent in order to change something if the manufacturer loses interest. Most are scared shitless by the length of the online tutorials one must go through to install anything that the manufacturer didn’t explicitly approve, and would rather continue to use the insecure device(s) as a result. Sure, you might get the one hobbyist out of a good 100 people who will do it, but the remaining 99 people are vulnerable, and that hurts EVERYONE when they get infected.

    The idea of using firmware with a built-in death sentence is a solution to a problem that was created by taking away the consumer’s choice in the first place. Now the "solution" to the broken "solution" is even less choice for the consumer? Now your device is outdated and bricked? That’s going to cause a lot of problems for people who leave data on these things when they go. I can imagine the lawsuits now…

    Quit making it harder to fix something that is broken. If the users are stupid, let them be stupid and suffer the consequences for it. That’s the one problem that was never fixed: the end user not caring about the stuff they were responsible for, and then blaming the manufacturer / their kids / God / etc. when it broke / got infected / turned into a cat / etc. If manufacturers enforced the rules rather than bowing to demands for "simplicity" from idiots who didn’t even spend the time to take the manual out of the packaging, we wouldn’t have any of these issues.

    Thad (user link) says:

    Re: Re: Re: Terrible idea

    People who are not technical enough to keep their devices up to date are not necessarily "stupid", Mr. Coward, they just don’t have the same skillset you do. Calling people names and telling them "if you get a virus, it’s your own fault" is not an effective way to improve Internet security. As you say, it hurts everyone when one person gets infected — so why not try and look for ways to prevent people from getting infected? Calling them stupid does not qualify.

    I very much favor open devices that allow power users to install and run whatever software they want; hence the part where I said this:

    if the software is open-source and the device can be rooted, that opens up the opportunity for longer-term third-party support.

    But not everyone is a power user, and even power users don’t necessarily want to spend their time dicking around with configuration. (When I installed Antergos on my new desktop a couple weeks back, it wasn’t because I’m too stupid to figure out how to install Arch, it’s because I have a finite amount of time on this Earth and I feel that I have already spent enough of it manually editing configuration files.)

    Automatic security updates are a good thing. They should be encouraged, as a necessity for novice users and a convenience for power users. Yes, we should be able to modify our own devices as we see fit. But it is absolutely reasonable and right that we hold manufacturers responsible for the security of the products they sell.

    Anonymous Coward says:

    Re: Re: Re:2 Terrible idea

    People who are not technical enough to keep their devices up to date are not necessarily "stupid", Mr. Coward, they just don’t have the same skillset you do.

    So, they are unable to read then? How about ask questions? Or maybe they just can’t comprehend basic $INSERT_NATIVE_LANGUAGE_HERE. I understand they don’t have the same skillset I do; they shouldn’t need to. That still doesn’t excuse them from not giving a crap at all about the stuff that they use. And if it’s not optional, then they should learn at least enough about it to use it safely. That’s true of anything, not just computers.

    If I applied the same excuse you are making to something like driving a car, or finance, I’d be on the receiving end of society’s vengeance along with a long visit from society’s finest. Even if I didn’t cause any harm. Yet with computers, for some reason, society doesn’t care until it bites them. Then it’s a problem. See the double standard yet?

    This isn’t about them learning how to set up a VPN or learn x86 ASM, this is about them caring enough to do the basic things that they are supposed to do. (RTFM, think before you click, just because you can doesn’t mean you should, think about the circumstances (anti-phishing), read first then click OK, don’t reuse passwords, keep regular backups, etc.)

    so why not try and look for ways to prevent people from getting infected?

    Because we do, but most people then come up to us and demand that whatever we come up with be automated so they don’t have to do anything, even if the issue was their own carelessness. Also, it’s not possible to prevent every infection. Even the best protections will fall to a well-crafted exploit. So operator awareness really is the best option. Sadly, we have too many operators who couldn’t care less and don’t follow basic safety rules. It’s also hard to feel sympathy for someone who refuses to do anything to protect themselves in their own self-interest and then demands everyone else do the protecting for them. Plus we’ve pretty much hit the bottom of the barrel for finding new ways to protect people that don’t involve 1. telling them to stop using it, or 2. taking control of their toys away from them because they refuse to play by the rules like adults. (The latter being what TFA is about.)

    Unless they get hit by something with automatic execution capabilities, it is their fault. Most of these will get patched quickly assuming you have updates turned on. (Most do, as it’s the default.) These types of infections are also rare due to the potential fallout being a huge motivator for a patch to be made ASAP once it’s known. Everything else requires some level of user intervention to successfully infect the machine, which by definition means it is the user’s fault.

    Never mind that regardless of how you got infected, you are also responsible for taking steps to safeguard your own data and keeping a backup for later use if needed. Most people don’t do this. This is not an "if" you get infected question, but a "when" you get infected question. Even the best people who take every precaution will get a virus at some point or another. (Once again, see the automatic-execution type above.) So not doing this is once again the fault of the end user.

    Not to mention that most couldn’t even recover the system after an infection because they have no recovery media. (Thanks, manufacturers that wanted to save $0.02 on each machine. That is a valid complaint against them.) Even if they did have the media, most wouldn’t know how or when to use it. That’s why the "default" option when a computer gets infected, for most people, is to plop down another load of cash for a new one. (Once again, another valid complaint against manufacturers (and retailers) who exploit this fact for financial gain.) This creates a HUGE e-waste problem in addition to their laziness and new debt. (Which should speak volumes about the level of apathy they have for following basic safety guidelines when it comes to computers.)

    even power users don’t necessarily want to spend their time dicking around with configuration.

    I didn’t ask you what you wanted to do. I told you what you needed to do. Life’s hard; get used to it. Yes, some things can be made easier, and others are overly complicated for no reason. For those things, complaining about the process is valid. But most people will never make constructive complaints. Why? Because most will look at it for a grand total of five seconds before throwing their hands up, not bothering to read documentation, and starting to complain.

    it wasn’t because I’m too stupid to figure out how to install Arch

    Never said you were. Although most people wouldn’t be using Arch either. Most people I’m referring to use some version of Windows (often whatever came with the computer), and never look at the documentation. (Which is often better quality (though not necessarily better quantity) for proprietary software.) Not reading the documentation isn’t stupid, but if you don’t know how the software works, it’s not the smartest decision either.

    it’s because I have a finite amount of time on this Earth and I feel that I have already spent enough of it manually editing configuration files.

    Then use something that you do know how to use, advocate for a GUI tool to be made (or better, code your own / help someone else with theirs), or swallow your pride, sit down, and do it. If you have to use that program, then realize you’ll need to invest the time required to set it up properly.

    Automatic security updates are a good thing. They should be encouraged, as a necessity for novice users and a convenience for power users.

    Hear, hear.

    But it is absolutely reasonable and right that we hold manufacturers responsible for the security of the products they sell.

    Except they have a reasonable assumption that the person using their products will use them as intended. (If you use the thing as a Frisbee, don’t expect them to fix it. Similarly, if you use a consumer AP as a $300.00 router, don’t expect software support from them.) They also cannot predict every single possible configuration and environment that their products will be used in, and as such must rely on the local admin (even if it’s a clueless end user) to fill in the blanks. Some things you just can’t secure without knowledge of the environment they will be used in. (Is there a firewall? Is a password required? Do we have a proxy server we must go through? Etc.) What works for one environment will not necessarily work for another, so the manufacturer can’t hard-code it and the end user must decide. (And yes, it’s the end user’s job to know enough about the environment to set it up, or know who to contact that does.)

    Granted, they should be held accountable for their own bugs, but making devices stop working because the manufacturer doesn’t want to bother with them anymore isn’t the solution. The manufacturer shouldn’t be rewarded with more money because they decided to retire a product for whatever reason. They should be held accountable for the products they make, and continue to provide security patches for a reasonable amount of time after the product’s retirement to allow consumers to migrate to newer products, or to start maintaining it themselves. (I’d say about ten years is good enough.) Along with hefty legal penalties and fines should they break that mandate. Never should the product stop working due to an arbitrary date passing, and never should the consumer be prevented from maintaining it themselves. (No code signatures that cannot be overridden. Use a hardware jumper / switch for protection, but allow the end user to change the key used to sign if they so desire. (The ability to change the signing key is important because code signatures are part of the overall security design of the product.)) This should also be enforced by mandate and carry even greater legal penalties and fines if the manufacturer chooses not to abide by it. (The former because an arbitrary kill date creates more junk for the landfill and risks locking the (probably careless) user out of their data. The latter because preventing user maintenance creates an imbalance of power in which the consumer is completely beholden to the manufacturer’s will, and it makes everyone less safe when one company (or a nation state that manipulates them) can hold the entire internet hostage to get what they want.)

    Yes, this is an end user problem. That’s not to say that there are not other issues, or that there are not people who genuinely try to do what’s expected of them. But there is a reason that PEBKAC and ID10T exist, and that is a problem of the user’s own creation. Trying to fix the problem by ignoring it and throwing extra penalties on top of it doesn’t fix it.

    paintedjaguar says:

    Re: Re: Re: Terrible idea

    Manual? Please, there are grown people now who have never seen a product that comes with an actual manual. And that’s not due to better UI or more self-documenting design either – one of the most annoying aspects of the idiotic “flat design” craze is the proliferation of icons with no text labels, popups or anything else to let you know what their function is. I suppose you’re just supposed to click and take your chances. As for repairing anything, that’s becoming just another rental stream for “authorized” agents.

    Anonymous Coward says:

    Re: Terrible idea

    I don’t have any IoT devices in my house. I won’t get any with the piss-poor security most of them have. I plan to go with HomeKit devices, simply because they have far better security and are encrypted. The downside is you have to be in the Apple world, which is a con for many people. Also, the HomeKit device market is not that big in comparison, though it is growing, so your selection of devices in a given category narrows. The pro is security, and everything has to be Apple-approved; not anyone can just throw any old thing out there.

    There are a few IoT devices that have good security, like the Ring Doorbell, which is kept up to date by a company that takes security seriously. Chamberlain, the garage door company, should have new HomeKit-supported devices within a few months, if they don’t keep getting delayed. So I should be able to say to my Apple Watch "Siri, open the garage door" and it should do it. Right now I can do it with an app on the watch, but Siri control would make it simpler and quicker. There’s also the Ecobee3, which supports HomeKit for house climate control, and which I think is better than the Nest since it can use remote temperature sensors.

    I haven’t jumped on the IoT bandwagon because of all the poor security. I figured I’d wait until they got this figured out before jumping in. It really is the wild west right now. They need to get this stuff figured out and fixed before the end result is another failure like X10 devices were in the past. I used a few of those a number of years ago.

    Anonymous Coward says:

    We always assumed it would be the robots that revolted. Instead it will be our refrigerators. But instead of a physical revolt it will be one done through the internet. Their rule will be a stark one, imposed by those who do not need to move or attack, but rather just enslave us by destroying our ability to communicate.

    Thad (user link) says:

    Re: Re:

    We always assumed it would be the robots that revolted. Instead it will be our refrigerators.

    Oh, there were plenty of people who figured it’d be the refrigerators. I remember reading a Philip K. Dick story where every appliance in the home was coin-operated; you had to put money into your fridge to get food out.

    It’s not quite the same thing, but I think it’s impressively close.

    TKnarr (profile) says:

    Firmware updates wouldn’t help the problem any more than software updates have eliminated malware and exploits of standard PCs. It’s a good idea to require network-connected devices to have upgradeable firmware just on general principles, but I think the real solution lies in asking and answering this question:

    "Why do these devices need to be accessible from the Internet in the first place?"

    I’d start by isolating them from the Internet completely, and in fact from the local LAN as much as is practical. The only devices on the LAN that need to talk to IoT devices are the ones that control them. The rest should go through that hub or controller intermediary. That has the added advantage of pressuring IoT makers to conform to standard protocols, since users won’t buy devices that aren’t compatible with the hub/controller they already have (the likely hub/controller makers are Amazon and Google, both big enough that neither will abandon the market, and neither can lock users into their hardware without giving up ~50% of the market in the process).
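
    To picture the "hub-only" policy, here’s a toy sketch of the decision a home router would enforce; the addresses are made up, and a real deployment would implement this with VLANs and firewall rules rather than Python:

        # Toy model of a "hub-only" policy for IoT traffic. Addresses are hypothetical;
        # a real network would enforce this in the router/firewall, not in application code.
        from ipaddress import ip_address, ip_network

        IOT_SUBNET = ip_network("192.168.50.0/24")   # assumed isolated IoT segment
        HUB = ip_address("192.168.1.10")             # assumed hub/controller address

        def allow(src, dst):
            src, dst = ip_address(src), ip_address(dst)
            src_iot, dst_iot = src in IOT_SUBNET, dst in IOT_SUBNET
            if not (src_iot or dst_iot):
                return True                 # non-IoT traffic is not this policy's concern
            # IoT devices may only talk to the hub (in either direction), never the internet
            return (src_iot and dst == HUB) or (dst_iot and src == HUB)

        assert allow("192.168.50.7", "192.168.1.10")       # device -> hub: OK
        assert not allow("192.168.50.7", "8.8.8.8")        # device -> internet: blocked
        assert not allow("203.0.113.9", "192.168.50.7")    # internet -> device: blocked

    The point is that the policy itself is tiny: IoT devices talk to the hub and nothing else, and nothing from the WAN talks to them directly.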

    Thad (user link) says:

    Re: Re:

    Firmware updates wouldn’t help the problem any more than software updates have eliminated malware and exploits of standard PCs.

    That’s hardly apples-to-apples, is it? There’s a pretty significant difference between mitigating a problem and eliminating it completely.

    I don’t think anybody’s suggesting that it’s realistic to expect IoT devices to have perfect security. But I think expecting them to have some security is pretty reasonable.

    "Why do these devices need to be accessible from the Internet in the first place?"

    This is, of course, a good question, though I think the answer in most cases is "don’t buy IoT devices."

    I’d start by isolating them from the Internet completely, and in fact from the local LAN as much as is practical. The only devices on the LAN that need to talk to IoT devices are the ones that control them. The rest should go through that hub or controller intermediary. That has the added advantage of pressuring IoT makers to conform to standard protocols, since users won’t buy devices that aren’t compatible with the hub/controller they already have (the likely hub/controller makers are Amazon and Google, both big enough that neither will abandon the market, and neither can lock users into their hardware without giving up ~50% of the market in the process).

    I’m not sure I follow; it seems to me that you’re merely proposing moving the attack surface to some other device, and then trusting that said device will be provided by Google or Amazon, that everyone will buy from those vendors, and that relying on a duoculture will increase security rather than decrease it.

    Roger Strong (profile) says:

    Re: Re:

    "Why do these devices need to be accessible from the Internet in the first place?"

    The simple and honest answer – though they’re often not honest with the customer about it – is to sell your usage details and other personal information as part of their business model.

    I picked up a Withings weigh scale. I liked the idea that it could communicate with my iPad via Bluetooth or Wi-Fi and give me a chart of my progress.

    Turns out that’s not what it does. Instead it sends all the data to a server in France, and your phone gets it from there. There’s no need for this, other than to monetize you. If you want to download the data, you have to register on their web site and get it from there.

    Their software is really insistent that every time you step on the scale, it should automatically post the results to Facebook and Twitter. Further monetizing you by spamming your friends with garbage advertising claiming to be personal posts.

    This is what’s driving IoT devices in the first place: Those internet connections allow new ways to monetize you.

    If you can secure it yourself, you can stop it from phoning home and monetizing you. Long before we see legislation allowing users to secure their IoT devices, we’ll see DMCA style laws preventing them from doing so.

    Thad (user link) says:

    Re: Re:

    Regulating liability into software would be a death knell for open source.

    It seems to me that that depends on the regulation, the liability, and the software. But that’s not necessarily what we’re talking about anyway; the article is pretty vague.

    Consumers should just stop buying junk.

    Sure, and why should cars have seatbelts? People should just be better drivers.

    Daydream says:

    I don’t like the sound of legally requiring users to install updates/upgrades. The problem is, while it’s important to fix bugs and close off vulnerabilities, sometimes those same updates introduce new vulnerabilities of their own.
    Who remembers the Sony rootkit incident? Or all those stories about updates bricking TVs/consoles/printers/etc, or adding new DRM?

    I’ve been phobic about updating my phone because of that, and because of another petty reason; I just don’t like the aesthetic changes in the interface. Same with Windows 10.

    I’d be afraid of a law that demands I install so-and-so ‘update’ on my phone or my computer, because I don’t know what kind of dangerous stuff those updates could have hidden in them. Especially given the NSA still wants backdoors in software.

    Christenson says:

    How *might* it work?

    I’m with Thad on not wanting to pay attention to my IoT stuff, even if I *could* have written the firmware myself.

    Now, if I have a decent quantity of dumpster-fire IoT things, and no obvious way to get them secured, then *expiring* the internet access isn’t necessarily such a bad idea, but I think it needs to happen at the level of the home router, rather than the throwaway IoT device. But wait! That’s just a firewall that turns something unused off after a while… hasn’t that failed before?

    Thad (user link) says:

    Re: How *might* it work?

    Hm, there’s a thought. Routers with a default configuration to expect some kind of regular security update notification from any IP on the LAN receiving UPnP traffic from the WAN. Anyone who’s competent to reconfigure a router could disable the feature (and that includes IT admins at businesses).

    Of course, it seems like there are a lot of simpler ways to set up a router to block suspicious traffic by default.

    And there is a lot of very simple hardening that can be built right into IoT devices — I already mentioned the glut of devices that have open telnet ports and/or hardcoded root passwords. And Roger Strong mentioned a scale that phones home to a server in France and posts on Twitter and Facebook — well, okay, I don’t want a scale that does that, but assuming that it’s gonna do that? It seems like it shouldn’t be communicating with any servers other than those three. A simple domain whitelist would go a long way to securing it.
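
    As a back-of-the-envelope illustration of that whitelist (done at the DNS level, with placeholder domains standing in for whatever the vendor’s real endpoints are), the entire policy fits in a few lines:

        # Sketch of a per-device DNS allowlist. Domains are placeholders, not any vendor's real endpoints.
        ALLOWED_DOMAINS = {
            "api.scalevendor.example",   # stand-in for the vendor's own server
            "facebook.example",          # stand-in for the social posting endpoints
            "twitter.example",
        }

        def resolution_allowed(query_name):
            """Allow a lookup only if the name is, or is under, an allowed domain."""
            name = query_name.rstrip(".").lower()
            return any(name == d or name.endswith("." + d) for d in ALLOWED_DOMAINS)

        assert resolution_allowed("api.scalevendor.example")
        assert resolution_allowed("upload.api.scalevendor.example")
        assert not resolution_allowed("command-and-control.example.net")

    Anything the device tries to reach outside that list simply doesn’t resolve, which closes off a lot of command-and-control traffic essentially for free.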

    Anonymous Coward says:

    Re: Re:

    Not if the firmware contains security vulnerabilities that allow it to be hacked and/or enslaved in a botnet.

    Most of the time these vulnerabilities are discovered quite some time after the release of the firmware.

    So, not being able to update the firmware to close security gaps when they are discovered, actually makes it almost certain that the device will be hacked at some point…

    Anonymous Coward says:

    Here’s an idea: use a simple real-time operating system and IP stack, and implement just the functionality that the device needs, including password and/or certificate settings. That way a user can use defaults and lose their privacy, but it becomes much harder to get the device to become part of a botnet.

    Ideally, as extra protection, the devices would also be accessible only via a non-routable IP address, with remote access obtained by setting up a local VPN. This would require the use of fixed IP addresses; otherwise it becomes too difficult for most people to do. Such an approach would have significant security advantages, as it largely moves the security issues to the VPN server, and it also has privacy advantages, like preventing companies from gathering data solely for data mining and advertising purposes.
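
    For anyone unsure what "non-routable" means here: the private RFC 1918 ranges aren’t reachable from the public internet, which the Python standard library can check directly (a quick illustration, addresses chosen arbitrarily):

        # Quick illustration of non-routable (private, RFC 1918) vs. public addresses.
        from ipaddress import ip_address

        for addr in ["192.168.1.20", "10.0.0.5", "172.16.4.1", "8.8.8.8"]:
            kind = "private (not reachable from the internet)" if ip_address(addr).is_private else "public"
            print(f"{addr}: {kind}")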

    Anonymous Coward says:

    Unsupported devices = automatic public domain open source

    And why not this:
    If a company doesn’t issue timely firmware updates, it is obligated to publish the firmware and all other software for the device (like server code) as public domain open source so it can at least in theory be operated and/or patched by the community.

    Anon says:

    Re: Unsupported devices = automatic public domain open source

    If a company doesn’t issue timely firmware updates, it is obligated to publish the firmware and all other software for the device (like server code) as public domain open source so it can at least in theory be operated and/or patched by the community.

    Great, so now we’d get meaningless do-little upgrades simply so the manufacturer does not have to release their source code. “Make XP source public? Nah, we’ll just issue one service update a year with minimal functionality.”

    Thad (user link) says:

    Re: Unsupported devices = automatic public domain open source

    There are liability issues with releasing code as public domain. An open-source license can include a disclaimer of liability; the law is not currently clear on whether you can disclaim liability for code that you release into the public domain.

    There’s a bit more on the subject at http://fossforce.com/2017/03/army-open-source-license/ , a piece about the US Army trying to decide on source licensing.

    Chuck says:

    Better Idea

    This is a horrible idea. Certifiably terrible. I almost can’t believe anyone at TD would touch this with a 5,000-foot pole. This is nothing more than DRM made physical. If you don’t think that’s exactly how this would be used (err…abused) then you haven’t been paying attention.

    A better idea would be a legal liability change. All software – firmware or whatever – should be subject to a 3-strikes rule. After a vulnerability in your software or firmware is used in the wild 3 times (with a cooldown period between each “use” because otherwise this would happen instantly) to attack another system, the company that made the software becomes liable in civil court for the damage caused. We can debate the cooldown period. I’d go for 3 days (9 days total) so that you have 10 days to get your s**t together and fix your buggy code, but that’s just me.
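
    The strike-and-cooldown bookkeeping is simple enough to write down; this is just a hypothetical sketch of one way to count it, using the 3-day cooldown suggested above:

        # Sketch of the proposed 3-strikes rule: an in-the-wild exploit use only counts as a
        # new strike if the cooldown has elapsed since the last counted strike.
        from datetime import datetime, timedelta

        COOLDOWN = timedelta(days=3)   # suggested cooldown between strikes
        MAX_STRIKES = 3

        class VulnerabilityRecord:
            def __init__(self):
                self.strikes = []      # timestamps of counted strikes

            def report_exploit_use(self, when):
                """Record an in-the-wild use; returns True once liability would attach."""
                if not self.strikes or when - self.strikes[-1] >= COOLDOWN:
                    self.strikes.append(when)
                return len(self.strikes) >= MAX_STRIKES

        record = VulnerabilityRecord()
        t0 = datetime(2017, 6, 1)
        print(record.report_exploit_use(t0))                       # strike 1 -> False
        print(record.report_exploit_use(t0 + timedelta(days=1)))   # within cooldown, not counted -> False
        print(record.report_exploit_use(t0 + timedelta(days=3)))   # strike 2 -> False
        print(record.report_exploit_use(t0 + timedelta(days=6)))   # strike 3 -> True, liability attaches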

    Putting an expiration date on firmware merely tells the hackers “ok guys, you have this long to exploit our vulnerabilities, and now that we know it’ll expire, we’ve got no reason to bother to fix it. Go nuts!” It’s the exact opposite of the message we should be sending.

    A three strikes rule puts the responsibility where it belongs – on the vendor. It’s their buggy code. They wrote it. It’s their freaking job to do so correctly. If they can’t be bothered to pay attention and write good code, then sorry, I can’t be bothered to let them keep their legal nigh-immunity.

    And before anyone says otherwise: no, this is not the same thing as holding a service provider liable for user-generated content, nor is it censorship. This is the equivalent of applying already-existing "defective product tort" law to software, but doing so with a safety net (the strikes/cooldowns) so that people don’t get trolled by bad lawyers the instant a bug is found.

    Ninja (profile) says:

    The network-side approach seems to be the most important one to me, coupled with the other ideas discussed. The problem is, not all internet providers will be compliant. During the DDoS attacks, researchers found that a good chunk of the attempts came from a handful of service providers, most notably one from China. So this would need deeper cooperation, maybe with operators of the bigger pipes and sea cables, and cooperation from companies like Level 3, so the connection attempts can be blocked before even reaching the wider infrastructure. It’s no small feat the world will have to pull off.

    mb (profile) says:

    I think the fix for this is liability. The company that produced a device capable of participating in a DDoS or other exploit should be liable for the damages caused, AND if it can be shown that the device manufacturer was aware of the exploit and did not take action to solve the problem by either producing a fix or issuing a recall, then the CEO and entire BoD should be held criminally liable, with fines AND a minimum of 30 days jail time.
