Luddite Redux: Don't Kill The Robots Just Because They Replace Some Jobs

from the first,-do-no-harm dept

Here are a couple points to ponder:

Fun fact #1: California prison guards are expensive.

Fun fact #2: South Korea’s getting robot prison guards.

I’m sure the prisoners welcome their new robot overlords, but I bet the prison guards union doesn’t. Or any other union for that matter. And they’re not alone. Over the past few weeks, tech industry commentators spent slightly more time than usual wringing their hands over whether technology was killing jobs. I think this video captures the debate pretty well.

It might sound paradoxical, but this replacement of humans by machines is actually a good reason to limit secondary liability for the robotics industry. And I’m not just referring to secondary liability in the copyright sense, but to any liability incurred by robot manufacturers because of how others use their robots.

This isn’t a theoretical issue. Automation and efficiency have always threatened certain jobs and industries — and one of the standard reactions is to blame the technology itself and seek to hinder it, quite frequently through over-regulation. The extreme version of this is where the term “luddite” came from: an organized effort to attack more efficient technology that resulted in violence against the machines. More typical were overly burdensome regulations, such as “red flag laws,” which said automobiles could only be driven if someone walked in front of them waving a red flag to “warn people” of the coming automobile. Supporters of those laws, like supporters of secondary liability laws for robots, can and will claim that there are “legitimate safety reasons” for such laws, and that holding back innovation and extending the lifetime of obsolete jobs is merely a side benefit. But like those red flag laws, applying secondary liability to robotics would significantly hinder a key area of economic growth.

Techdirt has covered the question of a secondary liability safe harbor for robots before, and Ryan Calo’s written a great paper about the legal issues coming out of the robotics arena. But an even more important (and specific) point is exactly why these safe harbors matter for job creation — even as some continue to argue the other way (that such safe harbors will destroy jobs).

Technology has been replacing human labor since humans invented, well, technology. But while technology may get rid of inefficient jobs, it eventually creates replacements. To cite one commonly-used example, automatic telephone switching put operators out of a job, but it created plentiful new jobs for telemarketers (and for other businesses that relied upon the switched phone network… including everything built on and around the internet today). The problem is that while it was obvious how many operators would be out of a job, it wasn’t immediately clear how lucrative (or annoying) telemarketing could be, let alone the eventual transformation of the phone lines into a vast global information-sharing network, and the hundreds of millions of new jobs created because of it.

Erik Brynjolfsson and Andrew McAfee examine this problem in detail in their book Race Against the Machine, which I recommend. But much of it boils down to this. Technology creates jobs, yet it’s not obvious where the new jobs are, so we need bold, persistent experimentation to find them:

Parallel experimentation by millions of entrepreneurs is the best and fastest way to do that. As Thomas Edison once said when trying to find the right combination of materials for a working lightbulb: “I have not failed. I’ve just found 10,000 ways that won’t work.” Multiply that by 10 million entrepreneurs and you can begin to see the scale of the economy’s innovation potential.

This is especially important for robotics. It’s obvious how robots make certain jobs obsolete — e.g. driverless cars don’t need drivers — but it’s less clear what new job opportunities they open up. We need to try different things.

Unfortunately, secondary liability creates problems for robot manufacturers who open up their products for experimentation. Ryan Calo explains this in more detail, but the basic problem is that, unlike computers, robots can easily cause physical harm. And under product liability law in most states, when there’s physical harm to person or property, everyone involved in the manufacturing and distribution of that product is legally liable.

Ideally, we’d want something like a robot app store. But robot manufacturers would be unwilling to embrace commercial distribution of third-party apps if it increased their chances of being sued. There’s evidence that Section 230’s safe harbors (and, to some extent, the DMCA’s safe harbors) play a key role in facilitating third-party content on the web. Absent a similar provision for robots, manufacturers are more likely to limit their liability by sticking to single-purpose robots or simply locking down key systems. That’s fine if we know exactly what we want our robots to do — e.g. replace workers. But if we want robots to create jobs, it’d help to limit secondary liability for the robotics industry, open things up, and let widespread experiments happen freely.
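To make that lock-down tradeoff concrete, here’s a minimal, hypothetical sketch — no real robot SDK is being described, and every name below is invented — of the kind of architecture a safe harbor would encourage: the manufacturer exposes a narrow, platform-enforced API to third-party apps instead of raw hardware access.

```python
# Hypothetical sketch of a manufacturer-enforced "safety envelope" for
# third-party robot apps. All names are invented for illustration; this
# is not any real robot SDK.

class RobotPlatform:
    """The narrow API surface a manufacturer might expose to apps."""

    MAX_SPEED = 0.5  # m/s, enforced by the platform rather than the app

    def __init__(self):
        self._speed = 0.0

    def set_speed(self, requested: float) -> float:
        # Clamp every request so no app can command unsafe motion.
        self._speed = max(0.0, min(requested, self.MAX_SPEED))
        return self._speed

    def read_range_sensor(self) -> float:
        # Read-only sensor access; apps never touch actuators directly.
        return 2.5  # stub value for the sketch


def run_app(app, platform: RobotPlatform) -> None:
    """Third-party apps receive only the restricted platform object."""
    app(platform)


def overeager_app(robot: RobotPlatform) -> None:
    # The app asks for 5 m/s; the platform quietly caps it at 0.5.
    granted = robot.set_speed(5.0)
    print(f"requested 5.0 m/s, granted {granted} m/s")


run_app(overeager_app, RobotPlatform())
```

The point of the sketch is the division of responsibility: the platform, not the app, enforces the safety envelope, which is the sort of design a manufacturer could plausibly stand behind while still welcoming third-party experimentation.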


Comments on “Luddite Redux: Don't Kill The Robots Just Because They Replace Some Jobs”

Jake says:

I’m pretty sure that this is one job that really, really can’t be done more effectively by a robot. More cheaply, yes, in the same way that a CCTV camera in every cell with one guy in an office at the other end of the building monitoring them could do the same job more cheaply. But in terms of preventing inmates harming themselves or each other, this is not going to be an improvement over boots on the ground.

Anonymous Coward says:

Actually, I believe robots would be better suited to keep an eye on prisoners or even subdue them. Robots don’t get angry, robots don’t hold a grudge, and robots don’t use more force than they’re supposed to because they lack emotions. That could greatly reduce one of the primary causes of prison riots: mistreatment.

You can’t scare robots; you can argue with them, you can pound and pound, and they will just be there in front of you until your anger goes away.

Of course, it depends on how you program them; they can also be ruthless and use deadly force for no good reason at all, which could increase aggressive behavior inside an enclosed, highly stressful environment.

The thing is, robots are nowhere near those capabilities yet. They can be teleoperated, though; we have the hardware to do it, but we don’t have the AI to make it a reality.

It must be possible to do those things, because if it were impossible, humans wouldn’t be able to do them either.

Anonymous Coward says:

Re: Re:

“robots don’t hold a grudge,”

Are you sure? You wouldn’t program the robot to act a little differently towards someone who is a chronic problem?

“robots don’t use more force than they’re supposed to because they lack emotions,”

Well, they won’t use more force because of emotions. But they might use more force because they LACK emotions such as compassion. They might also use more force due to a less intuitive grasp of the situation, programming errors, malfunctions, etc.

“You can’t scare robots; you can argue with them, you can pound and pound, and they will just be there in front of you until your anger goes away.”

Somehow I don’t think this would be good for the prisoners on a psychological level.

You can’t assume a perfect AI. You can say it’s “possible” to make one, but that doesn’t mean we’re actually going to be able to do that in the next hundred years…

Andrew F (profile) says:

Re: Re: Re:

There’ll be things humans are better at for a long time. But we are starting to see exponential growth in certain areas of robotics. Seven years ago, researchers couldn’t make an autonomous car drive eight miles across a desert. Last year, Google was testing them in city streets.

One plus for robot prison guards is that they’re easier to fix. Suppose a robot does make a mistake and uses excessive force. Once a programmer identifies what went wrong, the fix can be pushed to all of the other robots very quickly. In contrast, remedying police brutality requires extensive training. And a lot of what appears to be excessive force may really be a gut self-protective instinct on the part of the officer that’s very hard to figure out.

Will we replace all cops with machines? Probably not; you want a human to have final say over use of force, for Isaac Asimov-type reasons. But I wouldn’t be surprised if, in 30 years, we saw a 3-to-1 ratio of robots to humans in corrections and law enforcement.

Andrew F (profile) says:

Re: Re:

“Everyone else” isn’t homogeneous. If you get a lemon of a car, you can often go after everyone from the dealer to the car manufacturer to the guy the manufacturer bought screws from.

In contrast, Apple isn’t liable when an iOS app wipes out all your data. And Internet companies get special safe harbors under Section 230 and the DMCA.

Under existing law, robots are treated more like cars than smartphones. The proposal is that, once you start installing apps on your robot, it makes more sense to flip that.

nasch (profile) says:

Re: Re: Re:

And Internet companies get special safe harbors under Section 230 and the DMCA.

You’re not wrong, but my understanding is the safe harbors are not there to remove any liability service providers would have ordinarily, but to ensure that the liability is placed where it ought to have been anyway: on the user actually performing the illegal act. I think it’s more of a defense against technologically illiterate judges and juries than anything else.

Anonymous Coward says:

Re: Re: Re:

And then the car companies claim that the lemon of a car is actually a robot, because it can parallel park itself.

“Under existing law, robots are treated more like cars than smartphones. The proposal is that, once you start installing apps on your robot, it makes more sense to flip that.”

But as was mentioned in the article, “the basic problem is that, unlike computers, robots can easily cause physical harm.”

(Even though I’m arguing on this side, I’m not totally convinced I’m right, by the way.)

rosspruden (profile) says:

It’s excellent to see a Techdirt piece on how safe harbor protections apply to more than just art. Well done, Andrew.

I once heard Isaac Asimov speak in the early 80s. At the time, the Japanese had just started introducing robots into the auto assembly line, and reporters were calling up Asimov for comment since he had coined the term “robotics”. The article above hints at the future Asimov wrote about in all his books, and he even alluded to it in his lecture: when robots can replace humans, humans can finally move on to do more important things… but wait, robots are replacing humans! It’s the paradox of efficiency: the more work you have taken away, the less work you have to do.

What makes this article so interesting to me is that it shows why secondary liability protection is so important when the “worker” wades closer into tort law. A telephone switchboard can’t hurt anyone, and it put many, many people out of a job. But a robot worker whose laser can slice you in half? Yeah, problem.

Do those automated Predator drones have secondary liability protections?

Andrew F (profile) says:

Re: Re:

>> Do those automated Predator drones have secondary liability protections?

Doubtful.

But suppose the cops took a military-grade Predator drone and installed their own custom software on it, and … bad things happen. I wouldn’t hold the Predator manufacturer liable for that just because they let people install custom software on the drones. It might be a different story if the manufacturer was actively involved in making the custom software.

Michael Ho (profile) says:

Re: Re: Civilian Drones

ask and ye shall receive:
http://seattletimes.nwsource.com/html/nationworld/2016882681_drones29.html

Police agencies want drones for air support to find runaway criminals. Utility companies expect they can help monitor oil, gas and water pipelines. Farmers believe drones could aid in spraying crops with pesticides.

“It’s going to happen,” said Dan Elwell, vice president of civil aviation at the Aerospace Industries Association. “Now it’s about figuring out how to safely assimilate the technology into national airspace.”

ethorad (profile) says:

not defective?

And under product liability law in most states, when there’s physical harm to person or property, everyone involved in the manufacturing and distribution of that product is legally liable.

I think the link isn’t quite the right one – it goes to a section on liability where the product is defective, which isn’t quite the point you’re making?

In any case, for non-defective robots I would hope that legal suits focus more on the user than the manufacturer. If they didn’t, I’d be amazed that you are still able to buy guns, cars and even hammers in the US – after all, they are surely used in causing harm every year.

nasch (profile) says:

Re: not defective?

In any case, for non-defective robots I would hope that legal suits focus more on the user than the manufacturer. If they didn’t, I’d be amazed that you are still able to buy guns, cars and even hammers in the US – after all, they are surely used in causing harm every year.

I think the difference is the difficulty in correctly determining liability. If the robot was modified and then harmed someone, how do you determine why it harmed someone? Sometimes it might be clear, but at other times an analysis of the customer’s modifications won’t yield an obvious conclusion.

This uncertainty may lead some manufacturers to stay out of the business, or to try to prevent modifications. Possibly products will just be a lot more expensive because of all the insurance and lawyers. I guess the best case scenario would be complicated waivers you have to sign to buy a robot, and by “sign” I don’t mean tick a box online.

All these drawbacks make it worth considering some kind of rule to attempt to draw the liability line more clearly. That won’t be easy, though, since any simple rule (e.g. “manufacturers are not liable for anything that happens, no matter what”) will almost certainly be a bad one.

alternatives() says:

I'd rather interact with a robot

I’m sure the prisoners welcome their new robot overlords

One would hope the robots are not programmed to be sadistic.

With humans you get variability, and some of the guards are going to be worse human beings than the best humans who are locked up. And the robots are not going to have emotions get in the way of their interactions with the prisoners. I.e., Prisoner #6 interacts with Prisoner 571-AZ, and Prisoner #6 spits in the face of prison guard Zimbardo. Guard Zimbardo then abuses Prisoner 571-AZ out of frustration because he can’t get to Prisoner #6. Prisoner 571-AZ is being abused because he knows #6 and #6 did something to Guard Zimbardo – where is the justice in this situation?

Mike42 (profile) says:

Prison Guards...

Hey, anyone out there know any prison guards? I do. My best friend is one. I can’t tell you how happy he is to be supervising 20 or 30 inmates, many of whom are armed, and his defensive weapon is… a button. If anything bad happens, he presses it, and hopefully no one sticks him before the armed guards show. And when a fight between two gangs DID break out on his watch, the only thing that saved him was the respect he showed the prisoners on a daily basis. One of the gang-bangers actually started for him, but another gang-banger stopped it.

In another situation, a prisoner asked for a second helping of lunch. A (new) surly guard told him no. The prisoner said, “I’m in for life. I’ve got no reason to take this,” and proceeded to beat the guard to a pulp before someone pulled him off. The guard found out the hard way that respect is the best policy.

My buddy says there ARE guards with bad attitudes. They also have very short life expectancies.

Frost (profile) says:

All non-creative work will be automated.

There is no theoretical reason why we won’t eventually automate everything that doesn’t explicitly require human ingenuity and creativity. There simply aren’t enough of those jobs to go around, or rather there aren’t enough employers to hire the entire world population to be creative.

This, of course, is only a problem for as long as we cling to the outmoded idea that you need a “job” to get “money” so you can get everything else. If we can the idea of money and start running the world on sensible real-world premises and just provide people with what they need, then automation isn’t a threat to us – it’s the single greatest thing that has ever happened to humanity. 100% unemployment for all – and all the housing, clothing, food etc everyone could possibly need in spite of or even because of that.

Society is broken right now. It’s not the fault of the one thing that has ever been necessary to and instrumental in raising human standards of living – technological progress.

nasch (profile) says:

Re: All non-creative work will be automated.

There is no theoretical reason why we won’t eventually automate everything that doesn’t explicitly require human ingenuity and creativity.

Robots have already painted pictures. Someday computers will create music, literature, and other pieces of art that will be indistinguishable from human handiwork.

If we can the idea of money and start running the world on sensible real-world premises and just provide people with what they need, then automation isn’t a threat to us – it’s the single greatest thing that has ever happened to humanity. 100% unemployment for all – and all the housing, clothing, food etc everyone could possibly need in spite of or even because of that.

Sounds beautiful. It will be a rocky road to get there, though.
