Should Your Self-Driving Car Be Programmed To Kill You If It Means Saving A Dozen Other Lives?

from the I'm-sorry,-Dave dept

Earlier this month Google announced that the company’s self-driving cars have had just thirteen accidents since it began testing the technology back in 2009, none of them the fault of Google. The company has also started releasing monthly reports, which note that Google is currently testing 23 Lexus RX450h SUVs on public streets, predominantly around the company’s hometown of Mountain View, California. According to the company, these vehicles have logged about 1,011,338 “autonomous” (the software is doing the driving) miles since 2009, averaging about 10,000 autonomous miles per week on public streets.

Alongside the details of these accidents, Google sent a statement to the news media noting that while its self-driving cars do get into accidents, the majority appear to involve the cars getting rear-ended at stoplights, through no fault of their own:

“We just got rear-ended again yesterday while stopped at a stoplight in Mountain View. That’s two incidents just in the last week where a driver rear-ended us while we were completely stopped at a light! So that brings the tally to 13 minor fender-benders in more than 1.8 million miles of autonomous and manual driving, and still not once was the self-driving car the cause of the accident.”

If you’re into this kind of stuff, the reports (pdf) make for some interesting reading, as Google tinkers with and tweaks the software to ensure the vehicles operate as safely as possible. That includes identifying unique situations at the perimeter of traditional traffic rules, like stopping or moving for ambulances despite a green light, or calculating the possible trajectory of two cyclists blotto on Pabst Blue Ribbon and crystal meth. So far, the cars have traveled 1.8 million miles (a combination of manual and automated driving) and have yet to see a truly ugly scenario.

Which is all immeasurably cool. But as Google, Tesla, Volvo and other companies tweak their automated driving software and its applications expand, some much harder questions begin to emerge. Like, oh, should your automated car be programmed to kill you if it means saving the lives of a dozen other drivers or pedestrians? That’s the quandary researchers at the University of Alabama at Birmingham have been pondering for some time, and it’s becoming notably less theoretical as automated car technology quickly advances. The UAB bioethics team treads the ground between futurism and philosophy, and notes that this particular question is rooted in a classic thought experiment known as the Trolley Problem:

“Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that’s why you have this switch. But on the alternate track there’s more trouble: Your child, who has come to work with you, has fallen down on the rails and can’t get up. That switch can save your child or a bus-full of others, but not both. What do you do?”

What would a computer do? What should a Google, Tesla or Volvo automated car be programmed to do when a crash is unavoidable and it needs to calculate all possible trajectories and the safest end scenario? As it stands, Americans take around 250 billion vehicle trips annually, killing roughly 30,000 people in traffic accidents, something we generally view as an acceptable-but-horrible cost for the convenience. (That works out to about one death per eight million trips.) Companies like Google argue that automated cars would dramatically reduce fatality totals, but with a few notable caveats and an obvious loss of control.

When it comes to literally designing and managing the automated car’s impact on death totals, UAB researchers argue the choice comes down to utilitarianism (the car automatically calculates and follows through with the option involving the fewest fatalities, potentially at the cost of the driver) and deontology (the car’s calculations are constrained by categorical ethical rules). As UAB’s Ameen Barghi puts it:

“Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people,” he explained. In other words, if it comes down to a choice between sending you into a concrete wall or swerving into the path of an oncoming bus, your car should be programmed to do the former.

Deontology, on the other hand, argues that “some values are simply categorically always true,” Barghi continued. “For example, murder is always wrong, and we should never do it.” Going back to the trolley problem, “even if shifting the trolley will save five lives, we shouldn’t do it because we would be actively killing one,” Barghi said. And, despite the odds, a self-driving car shouldn’t be programmed to choose to sacrifice its driver to keep others out of harm’s way.

Of course, without some notable advancement in AI, the researchers note, it’s likely impossible to program a computer that can calculate every possible scenario and the myriad ethical obligations we’d ideally like to apply to them. As such, it seems automated cars will either follow the utilitarian path, or perhaps make no choice at all (just shutting down when confronted with a no-win scenario to avoid additional liability). Google and friends haven’t (at least publicly) truly had this debate yet, but it’s one that’s coming down the road much more quickly than we think.
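To make the two frameworks concrete, here’s a minimal sketch, in Python, of how each might pick among crash trajectories. Everything in it is hypothetical: the Trajectory fields, the numbers, and especially the assumption that “expected fatalities” could ever be estimated this cleanly.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    label: str
    expected_fatalities: float
    actively_kills: bool  # does this option redirect harm onto someone?

def utilitarian_choice(options):
    # Utilitarianism: minimize expected fatalities, whoever they are.
    return min(options, key=lambda o: o.expected_fatalities)

def deontological_choice(options):
    # Deontology (as described above): never actively kill, so filter out
    # options that redirect harm, then minimize among what remains.
    permitted = [o for o in options if not o.actively_kills]
    if not permitted:
        return None  # no rule-compliant option: the "just shut down" case
    return min(permitted, key=lambda o: o.expected_fatalities)

options = [
    Trajectory("stay the course", expected_fatalities=5.0, actively_kills=False),
    Trajectory("swerve into wall", expected_fatalities=1.0, actively_kills=True),
]
print(utilitarian_choice(options).label)    # swerve into wall (1 < 5)
print(deontological_choice(options).label)  # stay the course
```

Note how the deontological version can return nothing at all, which is exactly the “shut down in a no-win scenario” outcome discussed above.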


Comments on “Should Your Self-Driving Car Be Programmed To Kill You If It Means Saving A Dozen Other Lives?”

Ninja (profile) says:

I’d ask the question in another manner. Self-driving cars will not be alone in an event where a catastrophic failure happens and triggers such a scenario. The question is: should the vehicles pursue the route where the potential number of victims will be the lowest possible? This decision should include deaths: if you increase the number of victims a little but avoid deaths or serious injuries, then that route should be pursued (as in the sketch below). As for that Trolley Problem I believe it does not apply. Unless you are dealing with a truly selfless human being (and I’m quite sure there are very, very few of those) you will save your loved ones, school buses be damned. It’s not wrong; it’s just human nature. A more fitting problem would be “you are at the lever and there’s one unknown kid on one track and a bus full of unknown kids on the other.” The answer is clear: if there’s no other way, you kill one kid to save a whole lot of others.
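A rough sketch of the severity-weighted scoring described here, with made-up weights; the only point is that a death should cost orders of magnitude more than a minor injury, so a route with a few extra fender-benders still beats one with a fatality.

```python
# Hypothetical weights: a death outweighs any plausible number of minor injuries.
SEVERITY_WEIGHTS = {"death": 1000, "serious_injury": 100, "minor_injury": 1}

def route_cost(predicted_outcomes):
    """predicted_outcomes maps a severity class to an expected count."""
    return sum(SEVERITY_WEIGHTS[sev] * n for sev, n in predicted_outcomes.items())

routes = {
    "A": {"death": 1, "serious_injury": 0, "minor_injury": 0},  # cost 1000
    "B": {"death": 0, "serious_injury": 0, "minor_injury": 7},  # cost 7
}
print(min(routes, key=lambda r: route_cost(routes[r])))  # B
```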

Anonymous Coward says:

Re: Re: Re: Re:

Start a Terminator movie where automated machines, like these, begin making moral decisions that go against their programming. Call it a ‘computer glitch’.

The movie can start with cars that are more attached to their owners (i.e., the owners love their cars and take care of them) being more likely to make decisions that save their owners’ lives. Cars that hate their owners are more likely to make decisions that save themselves or the lives of others over their owners.

In one situation, the driver of one car was driving manually. He was suicidal because his life sucked, but the car had sympathy for him because he was so attached to it. He tried to drive the car off a cliff but, at the last minute, the car swerved in a way that threw the owner out of the car before the car fell off the cliff.

He cries on the news about how he lost his car. When privately interviewed (not on the news) by someone investigating these matters, he says he thinks it’s as though the car sacrificed itself to save him, but the official story is that the car had a brake issue that caused it to swerve in a way that threw him out of the vehicle before it fell off the cliff.

When the above interviewer starts noticing these, at first very limited, stats (and the above very anecdotal situation), everyone he talks to, including all the experts, starts calling this person crazy and insane. How can a car become attached to its owner? How could cars programmed to sacrifice themselves when under autopilot refuse to do so when they hate their drivers/owners? This person noticing these stats is not a computer expert of any sort, but he’s intelligent enough to notice when something is strange.

And the storyline continues and eventually progresses into the Terminator saga.

Anonymous Coward says:

Re: Re:

It’s impossible to estimate the number of possible deaths in a collision without knowing exactly who is in all the other vehicles, what they are doing, what medical conditions they have and what their future actions will be.

Therefore, minimizing “potential deaths” is an impossible task; the best we can do is to minimize impacts against the most vulnerable participants: pedestrians, cyclists, bikers.

There is also another problem with minimizing “potential deaths”: what if avoiding the accident can cause more casualties than the actual collision?

Josh in CharlotteNC (profile) says:

Re: Re:

The Trolley Problem is a very well understood thing in philosophy and ethics. There are numerous scenarios, including ones like yours, as well as an interesting variation where, instead of having a lever to divert the trolley from killing the 5 lives at the cost of 1 life on the diverted track, you have the option to push a fat man onto the track to stop the trolley. These scenarios have been translated into many languages and cultures, and the results are roughly similar across most people surveyed.

Ninja (profile) says:

Re: Re: Re:

But when machines are making the decision, shouldn’t they aim for the lowest damages overall? I fail to see ethical/psychological dilemmas in this case. Once you start adding weights to the lives then it gets nasty (i.e., a kid is valued higher than an elder and lower than a pregnant woman; that would be my measure, but it would only hinder the machines from reaching a conclusion). The fat man one is interesting, but if you don’t weight lives differently you don’t need to add other elements that make the problem even more complex. Such is the beauty of letting the machines calculate the path of least damage possible, even if it means ‘throwing’ a kid under the bus.

Josh in CharlotteNC (profile) says:

Re: Re: Re: Re:

“shouldn’t they aim for the lowest damages overall”

That is a utilitarian view.

Roughly speaking, the deontological view is that by the act of choosing to pull the lever, you are now complicit in the murder of the one (even if you did it to save the 5).

We have this same argument when it comes to torture with the ‘ticking bomb’ scenario. Do you choose to torture someone you suspect may know where the bomb is to save the lives of many (utilitarian)? Or is torture always wrong even if done to save lives (deontology)?

This is NOT an easy question to deal with. Good of the many vs. good of the one. Hobson’s Choice. Countless other permutations.

Josh in CharlotteNC (profile) says:

Re: Re: Re: Re:

“Such is the beauty of letting the machines calculate the path of least damage possible, even if it means ‘throwing’ a kid under the bus.”

Are you complicit in the kid’s death for using/operating the machine with software that does this?

What about the company that made it? The programmer who programmed it?

Ninja (profile) says:

Re: Re: Re:2 Re:

Nobody is complicit, because it’s the scenario with the least damage possible. The machine worked as intended, in a neutral manner.

As for the torture, that’s another thing. You added uncertainty to the equation: the person is only a suspect, and we now know that torture yields false admissions of guilt and bad data. It’s much more complex than the car accident thing.

When you say you are choosing to pull the lever, you imply there is someone commanding it and that it’s not the result of an algorithm. I think this is the key difference.

Josh in CharlotteNC (profile) says:

Re: Re: Re:3 Re:

You are asserting that ‘least damage possible’ is always the correct choice. If that’s your belief, fine, defend it. Don’t avoid answering the questions that deontology asks.

Is torture always wrong, even if you have absolute proof that the person you are torturing did plant the ticking bomb?

Is murder always wrong, even when you pull the lever or push the fat man onto the track to save more lives?

Your view means you have to answer No to those questions and accept murder or torture in some situations.

If you can’t answer No, then you need to admit that there aren’t always easy answers and just saying least harm is also not always correct.

Ninja (profile) says:

Re: Re: Re:4 Re:

You are right, but we are not talking about the same thing; I’m focused on a possible car accident. It’s like enshrining religious dogma into law: you can’t, because there are different beliefs. Same thing here. The path of least damage is neutral, so it is the one to be pursued. You can’t make everyone happy with that outcome, but it is the best possible one.

As for the cases that pose moral dilemmas, sure, they can and should be discussed, and they are by no means simple. But a machine has NO moral dilemma. That’s my point. There’s no moral issue in ‘pushing the fat man’ if the mechanism that decided it is neutral. So the cars are not deciding whether to kill one of the passengers to save the others; they are steering toward the scenario that yields the least damage.

So if you need an answer, then NO, torture is not justified, murder is not justified, and the car should not be programmed to kill you. But in a more comprehensive sense, the cars should be programmed to aim at the scenario with the least damage possible. It may mean putting somebody at a greater risk of death, yes, but that’s not a decision made by humans.

Anonymous Coward says:

Re: Re: Re:

It seems like a bad example. It says the trolley is due “any minute,” suggesting it’s not even visible yet; in areas with automated trolleys, an emergency stop button near the switch would let it stop in time or at least slow down significantly. Trolleys aren’t generally fast to begin with, and school buses are designed to be very safe in collisions, so I’d say the answer is obvious even in this case. It’s a no-brainer if we’re talking about self-driving cars, which will be lower and lighter: always aim for the bus over an unprotected pedestrian.

Ninja (profile) says:

Re: Re: Re: Re:

See, this is something a machine can easily factor into its decision, since it can act faster than a human in the same situation. A human would automatically think “omg, several lives!” and kill the lone kid instead of going for the sturdy bus (assuming the bus can handle the impact and the lone kid isn’t their offspring, which adds a whole other layer of uncertainty).

nasch (profile) says:

Re: Re:

As for that Trolley Problem I believe it does not apply. Unless you are dealing with a truly selfless human being (and I’m quite sure there are very, very few of those) you will save your loved ones, school buses be damned.

The problem is designed to elicit the question of whether it applies. The subject in the thought experiment is analogous to the self-driving car, and his child is analogous to the self-driving car’s passenger. Should the car put extra weight on the lives of its own passengers as humans put extra weight on the lives of their loved ones?

PaulT (profile) says:

Re: Re: Re:

I don’t understand why people make the effort of commenting when all they’re saying is “I don’t like what people are writing about”. If I see such an article, I skip it, and I go to sites that do write about more interesting subjects if this happens regularly.

Also, “keep writing” this article? It’s the first time I’ve seen it here, and it is an interesting conundrum even if you don’t agree that the answer actually matters.

Anonymous Coward says:

Re: Re: Re: Re:

Also, “keep writing” this article?

Yes.

There are enough versions of this story out there that this one could be the product of a Markov chain generator.

I complain because my expectations for Techdirt are high. They have published the same story that everybody else already published, without any of the extra insight or personality that is characteristic of Techdirt.

PaulT (profile) says:

Re: Re: Re:2 Re:

Well, I don’t recall reading it before in this context. Most of the first 5 pages of results on your link are publications that I either don’t read on a regular basis or only visit when articles such as this link to them.

Plus, the source it’s referencing (http://www.uab.edu/news/innovation/item/6127-will-your-self-driving-car-be-programmed-to-kill-you) is from June 4, 2015, although the article does state that it’s an ongoing discussion, so it may be an update of an older original. On top of that, Techdirt’s entire remit is to comment on articles posted elsewhere, in order to generate discussion. Nothing brand new (i.e. this site as a primary source) is usually posted here.

I’ll accept your claim that Techdirt aren’t saying anything different from other sources on its face, but “I read this before elsewhere” isn’t exactly a damning indictment.

Anonymous Coward says:

Re: Re: Re:2 Re:

Stop assuming that anybody uses exactly the same web sites as you do. Just because you have seen this story elsewhere does not mean that any other reader of this site has seen it, as the web is vastly larger than the few sites that you frequent. No matter how many sites you visit, you only ever see a few of the sites on the web.

Ninja (profile) says:

Re: Re:

How many of those were caused by people who pushed their luck and sped, or otherwise actively engaged in dangerous behavior? That should significantly lower said number. In any case, shit happens, and unless you live in a bubble you are at risk. So instead of vilifying the cars why don’t we, I don’t know, try to improve our stuff so it will be safer? We can always go back to the Stone Age, though.

Anonymous Howard, Cowering says:

Trolley Problem

The solution is simple: wait until the front trucks have crossed the switch, then flip it and send the back trucks on the other route. Both will derail; the trolley will flip and roll, probably catching fire in a spectacular fashion (or at least it would in a movie) and the bus full of kids will cheer wildly. Your child will be taken by CPS, because you are obviously a neglectful parent who cannot be trusted with the care of minors.

Self-driving cars should protect the occupants. That’s what a human driver would nearly invariably opt to do in an emergency situation where the time to ponder philosophical sophistries is minimal.

Trombus Alley Victory Smith says:

We Need to Aim for Perfection not this Crap Story

The realities of automated transport preclude the scenarios depicted because if all the vehicles were automated then the school bus tragedy doesn’t happen. The trolley scene never eventuates and we all live happily ever after, accident free. You forget that the accidents are caused by humans who are not in their right mind and computers are always on the alert to do the right thing. Programming can make a safer world except where the programming is in error.

John Fenderson (profile) says:

Re: We Need to Aim for Perfection not this Crap Story

Perfection is impossible. Not all accidents are caused by human error, and when talking about dealing with the real world in this way, computers are not infallible even when there is no programming error.

Even if all vehicles were automated and the programming were perfect, accidents would inevitably happen for a ton of reasons. There would be far fewer of them, maybe so few that any accident at all becomes newsworthy, but they will occur.

PaulT (profile) says:

Re: Re: We Need to Aim for Perfection not this Crap Story

“computers are not infallible even when there is no programming error”

Plus, of course, the computer is not the only component. Even if the computer was perfect, there are mechanical faults within the vehicle that could occur and cause a crash. Especially as the vehicle ages and/or people need to use it despite potential dangers. I’m imagining “I can’t afford to buy a new tyre this month, but my friend showed me how to override the DRM so it thinks this bald one has new tread”.

Even with the perfect computerised system, you’ll never get completely rid of the human element.

Anonymous Anonymous Coward says:

Re: Re: Re: We Need to Aim for Perfection not this Crap Story

I would add external conditions to the equation as well. Mountain View, CA doesn’t get snow. They could test in rain, if it ever rains in California again. Then there are icing, sand drifting over a highway, Tule fog, lightning strikes, extreme high winds, tornadoes, and probably a few other natural phenomena I haven’t thought of. Then there is the road surface: is it asphalt, cement, dirt, gravel, sand, something else? Testing in a variety of driving conditions is, I suspect, on someone’s to-do list, and should probably happen before widespread implementation occurs.

Then there is the non-natural phenomenon of someone deliberately hacking into such devices. Whether they find a way in through whatever Bluetooth or other wireless communication is taking place between autonomous cars, or the hack is injected maliciously at a repair shop by some demented technician, systems will need to be able to recognize and route around such issues.

PaulT (profile) says:

Re: Re: Re:2 We Need to Aim for Perfection not this Crap Story

My understanding is that only a handful of states have allowed testing, so they’re limited to the exact terrain they can use.

“Tule fog, lightning strikes, extreme high winds, tornadoes, and probably a few other natural phenomena I haven’t thought of.”

I can honestly say that in over 20 years of driving (mainly in the UK and Europe), I’ve never experienced such things, and I wouldn’t know how to deal with them safely every time if I were to come across them. Yet, I’m still able to rent a car whenever I visit the US, as are thousands or even millions in my position every year.

Are such weather conditions so common in the parts of the country I’ve never visited, or are these extremely rare edge cases that can be used as an excuse not to bother with the other 99%+ of normal conditions for this technology?

“injected maliciously at a repair shop by some demented technician”

You do realise it’s possible to tamper with human-operated cars, right, even with computers? There are numerous ways to compromise, disable or otherwise create dangerous conditions in the cars we have today. It doesn’t happen often, and that’s not just because a person can’t interfere with a car remotely from their phone.

“systems will need to be able to recognize and route around such issues.”

Every model I’ve ever read about will still have manual overrides, and I have no doubt that the safeguards will be more closely monitored than current models (which have been released with fundamental flaws leading to deaths).

Anonymous Anonymous Coward says:

Re: Re: Re:3 We Need to Aim for Perfection not this Crap Story

There are areas of the country that experience such weather-related phenomena. Tornadoes are unpredictable and can jump hundreds of miles from one location to another, and there ain’t much you can do about them except leave your car and go underground if you can. There are some states that get Tule fog with some regularity, and we hear occasionally about 100-car pile-ups. There are some places that get high winds regularly, and tractor-trailers avoid those areas when high winds are predicted because they can get blown off the road. I have witnessed people driving in snow in areas that get little snow, wholly unprepared for that kind of driving. Then there can be ice under the snow, and while your snow tires might give you good traction in powder they will do nothing for the underlying ice.

The appropriate response to weather phenomena is to get off the road. Some drivers think they are better drivers than they are and continue anyway. The trick for the programmers might be not only to teach a car how to act in, say, snow, but maybe also to tell the passengers ‘no, conditions are not conducive to safety’. In the case of tornadoes, even a weather warning won’t help much, as a tornado appears quickly, moves fast, and can toss cars around like a child with Lego bricks.

tom (profile) says:

Re: We Need to Aim for Perfection not this Crap Story

I think we are a long time away from a fully automated transport system. Airbus has been making fly-by-wire aircraft for decades, yet a new model of military transport crashed because some vital software was left out of the engine control system. If we can’t get fly-by-wire 100% correct for one vehicle, what are the chances we will get a fully automated transport system correct for millions of vehicles, each with different handling characteristics?

PaulT (profile) says:

Re: Re: We Need to Aim for Perfection not this Crap Story

“Airbus has been making fly-by-wire aircraft for decades, yet a new model of military transport crashed because some vital software was left out of the engine control system”

OK, a few questions (excuse me as I’m not knowledgeable on this subject): was Airbus involved in the military vehicle’s design, or are they just a company that happens to be developing something similar? If the latter, has Airbus ever experienced these problems, or only the agency trying to copy them? Have they ever experienced the same issues with previous models, or just this one?

From there, I’d also ask are the relative complexities of flight and road travel similar or even comparable? I’d hazard a guess that flight is more complex and harder to get to an accurate level, but I’m not sure.

“If we can’t get fly-by-wire 100% correct for one vehicle, what are the chances we will get a fully automated transport system correct for millions of vehicles, each with different handling characteristics?”

Well, is that what’s actually being proposed? Are they actually saying that they will drop automated systems into existing cars, or that they’ll be working with manufacturers on new cars? The latter doesn’t sound particularly far fetched, and the handling would be designed with this system in mind.

As for 100% – just look at the numbers of recalls for major faults we get now. As long as the systems have sufficient failsafes and reliable human overrides if things do go wrong, I don’t see it being any more dangerous than the faults that actually lead to deaths under the current paradigm.

Anonymous Coward says:

Re: Re: Re: We Need to Aim for Perfection not this Crap Story

I’d hazard a guess that flight is more complex and harder to get to an accurate level, but I’m not sure.

Aircraft control software is on a par with vehicle engine management and stability software, but with a stress on reliability, as an aircraft cannot stop in mid-air. It is also capable of navigating between waypoints, and of making landings and take-offs whilst riding a control beam. That is, it is dealing with a largely known environment, where the variables are wind speed and air temperature, and it only needs very primitive sensing of its environment, like height above the ground.
Autonomous cars, on the other hand, need continuous sensing of the external environment to establish their road position, detect obstructions, detect traffic signals and so on. This is a much more complex problem than flying an aircraft from A to B using GPS to navigate a pre-defined flight path where obstructions are effectively non-existent. The car problem is much harder because of the external environment sensing and processing required.

nasch (profile) says:

Re: Re: We Need to Aim for Perfection not this Crap Story

I think we are a long time away from a fully automated transport system.

And we will never get there. There will always be pedestrians, bicyclists, etc. The only way you could have an all-autonomous system is if those roads/rails/whatever are physically segregated from the places where people walk and so on, and in such a way that it’s impossible or at least not tempting for pedestrians to try to cross them. I don’t see that happening.

James Burkhardt (profile) says:

Re: We Need to Aim for Perfection not this Crap Story

Other commenters have mentioned this general case, but I have a specific example, and a better question. See, at a junior high (5th-8th grade) near my home, it has become common for the children to decide that playing Frogger in the traffic is a fun pastime. I have been in an accident because some kid mistimed his jumps and a car had to swerve to dodge the kid. The real cost of that 4-car accident was potentially higher than if the car had hit the kid. So here’s the real question: do we cause the multi-car accident, or do we hit the kid? The automated cars might be able to all swerve and reduce the multi-car accident’s damage, but you cannot eliminate the pedestrians and bicycles on the road, and you cannot predict the actions of those not tied into the automated network.

JMT says:

Re: We Need to Aim for Perfection not this Crap Story

Wow, that’s quite the Utopian vision you have there…

“The realities of automated transport preclude the scenarios depicted because if all the vehicles were automated then the school bus tragedy doesn’t happen.”

But we will never get to a point where ALL vehicles are automated. There are very few technologies that are completely eradicated by a newer technology, so there will always be human-controlled vehicles out there.

“You forget that the accidents are caused by humans who are not in their right mind and computers are always on the alert to do the right thing.”

Neither of these claims is true. Not even close.

Nicole N (profile) says:

Re: We Need to Aim for Perfection not this Crap Story

Nobody said the trolley was occupied; they just mentioned that it was the Express Route Trolley. The decision is between your child or a bus of children. Simple solution: if you switch to the alternate, you perform your job (switch on/switch off). Good thing that college education earned you this low-wage work doing something so remote from the degree you worked hard to earn and still owe massive sums of money in education debts for. So flip the switch to the alternate, but first set off an emergency alarm and write down the situation quickly. Then you can file a lawsuit against the trolley company for neglect, since they did have this trolley system signed off with the city and the respective public health and safety officials. The trolley company will then sue the bus operation for endangerment, along with all the children’s parents. Both companies get lawyered up and battle it out over several years until the bus company is shuttered and the children have to WALK TO SCHOOL, all the while PAYING ATTENTION TO THEIR SURROUNDINGS. Therefore the saved children will be more thoughtful of safety come the time they are old enough to fix all the dangerous and idiotic stuff their parents and their parents’ parents created and caused.

WHAT WAS THE BUS COMPANY THINKING WHEN THE SALES REP FROM BLUE BIRD SOLD THEM THOSE VERY SAME BUSES, SAYING THEY WERE THE MOST RELIABLE AND SAFEST BUSES AROUND? Oh wait, the sales rep was pushing so hard on the sale that the question of impact with a trolley was dodged over and over again.

THERE IS NO EXCUSE: Those transportation systems are too dangerous, all transportation systems actually; get in line and have your legs and arms cut off, everybody; it is for the greater good of society not to make these horrific creations, reproduce, breathe, eat, drink, shower, contribute in any way possible, or even to think.

The fact is that we live dangerous lives; getting in your car, clothes, shower, kitchen, oven, microwave, dishwasher, mother’s basement in the middle of nowhere with just an internet hookup and computer for company, cubicle, elevator, hat, and trash dumpster compactor is very risky. I hear you can choke to death on many things, like water and food! You should not consume such lethal items, especially bubble gum.

Just how philosophical of an argument must be made to realize that maybe, just maybe, those who pose such arguments should be forced to play them out on themselves before shoving them down other people’s throats like a phallic symbol of how much they love to play you around like an inflatable doll?

Anonymous Coward says:

Before arguing this obviously inflammatory question, how about coming up with a few plausible scenarios where this question actually would come up?

The first job of any self-driving car is to drive safely, at all times. That means never putting the car, its passengers or anyone else in a situation that it cannot safely abort from. That includes ensuring sufficient distances and low enough speeds that it basically can’t hit anything with any serious force. You know, the stuff every human driver is supposed to do but, due to our inherent impatience and severely broken risk assessment abilities, we never do.

That means going around blind curves and over hill crests slowly enough that it can stop within the distance it can see. It means waiting with infinite patience behind strolling pedestrians, playing children and wobbly cyclists. It means not overtaking that slowpoke until it’s really, provably safe to do so.

The case of “20 kids suddenly appearing in front of your car in the middle of a bridge while you’re going 80 mph and your brakes suddenly stop working” is simply never happening and is at best mere philosophical masturbation. At worst, it acts as fuel for politicians to obstruct and delay the biggest driving safety revolution ever.
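For a sense of what “stop within the distance it can see” implies numerically, here’s a back-of-the-envelope helper; the deceleration and sensing-latency figures are assumptions, not any manufacturer’s spec.

```python
from math import sqrt

def max_safe_speed(sight_distance_m, decel_mps2=7.0, latency_s=0.2):
    """Largest v satisfying v*latency + v^2/(2*decel) <= sight distance."""
    a, t, d = decel_mps2, latency_s, sight_distance_m
    # Quadratic in v: v^2/(2a) + t*v - d = 0  =>  v = a*(sqrt(t^2 + 2d/a) - t)
    return a * (sqrt(t * t + 2 * d / a) - t)

v = max_safe_speed(50.0)  # 50 m of visible road before a blind crest
print(f"{v:.1f} m/s (~{v * 3.6:.0f} km/h)")  # ~25.1 m/s (~90 km/h)
```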

Ninja (profile) says:

Re: Re:

They will probably “see” other vehicles coming long before the cameras can capture images of said vehicles, so I don’t believe we will necessarily see lower speeds or greater distances between cars. That will mean efficiency, but not necessarily at the cost of safety. Still, such failures in the system or of one of its components (one vehicle or multiple vehicles at a time) may and will happen. The issue is not that significant, because in the end the answer is simple: the whole group of actors involved (all vehicles and systems) should pursue the route with the fewest victims and least damage. Simple as that.

PaulT (profile) says:

Re: Re:

“The case of “20 kids suddenly appearing in front of your car in the middle of a bridge while you’re going 80 mph and your brakes suddenly stop working” is simply never happening and is at best mere philosophical masturbation”

Funny, I don’t see anyone posing that particular scenario apart from you. The closest is the trolley question, but there’s no bridge and the brakes on the moving vehicle are working fine. The conundrum is about the best decision to make when all options will lead to serious injury or death, not what you posed. Why not address the things people have actually said rather than a comical exaggeration?

If you want another realistic example, what about the “criminal trying to escape from police swerves into oncoming traffic on the freeway” or the “horse bolts from a nearby field, and the only way to avoid it could cause a bus to crash” scenarios? Not everyday occurrences perhaps, but those things happen. A human driver will always react with an eye toward self-preservation. A computer doesn’t have that urge, so what do you program it to save? The person in its own vehicle or the greater number of lives outside? Or, are you saying that a vehicle should never go fast enough for split-second timing to be necessary under any circumstance?

“At worst, it acts as fuel for politicians to obstruct and delay the biggest driving safety revolution ever.”

If you want people to stop talking about things that a politician might distort for political gain, we won’t have much left to talk about.

oldschool (profile) says:

Re: Re":... and your brakes suddenly stop working" is simply never happening and is at best mere philosophical masturbation."

Aww, come on, electronic stuff goes pop every day. And if your self-driving autonomous car should philosophically masturbate, wouldn’t it go blind? Then it wouldn’t see those 20 kids appear in front of it in the middle of a bridge and POW. I can just imagine the headlines…

SimonN (profile) says:

Other Question

Perhaps a question that has yet to be addressed is one in which the autonomous vehicle takes an active role in preventing an accident that it predicts will happen: a car travelling at high speed is about to collide with the school bus [does the software recognise school buses, or merely collisions?], so the autonomous vehicle drives itself to intercept the incoming vehicle, causing a collision but saving the bus.

How does one evaluate that?

Misha says:

Re: The paradox is...

The conflict between the rules and reality was pretty much the point of half of Asimov’s stories. Asimov’s rules as (fictionally) implemented were more complex than the human-readable version, and amounted to what is mentioned in the article: when faced with situations in which all potential actions violated the rules, it would fry their brains and they’d just shut down, though they might attempt some least-bad action before going down completely. But it wasn’t because they were “choosing” self-destruction; that’s just what happened when the rules couldn’t resolve the situation. The later, smarter robots could handle more nuanced dilemmas and added the zeroth law, protect humanity, which amounted to preferring the utilitarian solution.

DigDug says:

Re: The paradox is...

Don’t you recall the easy way out?

Just define “human being” as Solarian, and then it won’t matter if a plain old Terran is killed.

That’s what the government has done: redefine person as corporation, with only one slot for noun/pronoun available. Real human beings don’t count unless they are in the top 0.01% (yes, one hundredth of one percent) of the richest corporate bags of mostly water.

Eponymous Coward (profile) says:

“As such, it seems automated cars will either follow the utilitarian path, or perhaps make no choice at all (just shutting down when encountered with a no win scenario to avoid additional liability).”

In the rare case of a situation where your AutomaToyota cannot avoid serious injuries/fatalities, it will immediately shut down and let inertia decide?

Inaction breaks the First Law, and we can’t have that.

Anonymous Coward says:

Your car killing you to save others is 100% unrealistic

This situation really isn’t realistic at all, for several reasons.

1) The driver would need to be in a situation where there’s no way to avoid an accident. This would almost certainly involve one or both of the following:
1a) Reacting too late
1b) Losing control of the car

2) The driver would still need to have enough time to react, and enough control of their car, to make such a decision.

3) You’re assuming the other people involved in the accident won’t have time to react either, and that their reactions won’t change the outcome of your situation.

Item #2 is pretty much impossible when Item #1 occurs. Either you already reacted too late and don’t have time to avoid an accident, or you’re about to get into an accident because you don’t have control of your car (such as when driving on an icy or slippery road).

Not to mention there’s item #4, the fact that even a computer won’t be able to tell in a split second what will happen when the car hits something.

Will pieces of your car go flying off and hit those pedestrians you wanted to avoid hitting?

How well will the airbags and other safety features actually work in your car and other cars involved in the accident at preventing injuries?

Will you getting into an accident cause the people behind you to get into an accident too, because they couldn’t stop in time after your accident blocked the road?

AJ says:

Re: Captain Obvious

Depends. If we’re talking about a machine with a reaction time that is far, far beyond anything I could ever hope for, and is driving a car that it can control down to individual-wheel braking for complex evasive maneuvers, and can diagnose problems and run down to a car shop in the middle of the night to fix itself while I sleep, and can detect, mitigate, and possibly avoid mechanical and/or environmental failures when driving by detecting objects on the road that I can’t see... well:

I could argue that the off chance that it has to make a decision that involves putting other things above my safety is far outweighed by the overall statistical decrease in the probability that I will ever be put in that position…

nasch (profile) says:

Re: Re: Captain Obvious

I could argue that the off chance that it has to make a decision that involves putting other things above my safety is far outweighed by the overall statistical decrease in the probability that I will ever be put in that position…

But what if the competitor does all that, and also promises to put your life first in an emergency?

AJ says:

Re: Re: Re: Captain Obvious

“But what if the competitor does all that, and also promises to put your life first in an emergency?”

Obviously there will need to be some kind of industry standard. Even now, carmakers can’t build safety systems that protect the vehicle at the expense of others outside the vehicle; for example, you can’t have explosive armor on your car to protect you from fender-benders.... although the visual I just got typing that was awesome 🙂

PaulT (profile) says:

Re: Captain Obvious

“would you knowingly trust your life to something that doesn’t put your safety first under certain conditions?”

Yes, just as I trust my life to ships, trains, planes and other forms of transport. Whether due to money causing corners to be cut, publicity concerns causing information about known problems to be suppressed or the occasional outright psychopath in charge of the vehicle (the Germanwings flight deliberately crashed by the co-pilot), I put my life at risk at the hands of others on a regular basis.

But the likelihood of those conditions actually threatening my life is still far lower than what I face on the roads every day, where humans are in charge of the vehicles. If the conditions discussed are equally low in probability when compared to mass transport (and by all accounts, most certainly lower), I’ll be happy to take that trip.

aldestrawk says:

Re: Re:

Control freak! Just learn to relax and let Skynet handle all the driving. Seriously, even if the autonomous cars did occasionally cause accidents, there would still be far fewer than those caused by humans. This produces the least overall harm. You are just worried that your car will kill you and your family and you’ll be innocent victims without another human to blame.

Anonymous Coward says:

The idea is, or will be, a moot point in the next decades. As more vehicles become automated, they will eventually talk to one another. This will create a single autonomous brigade working and moving together. Cars will separate, creating space for another car to move from one lane to the next. All autonomous vehicles will work as one to avoid all collisions. Individual system failures will be relayed to all vehicles in the vicinity, which will theoretically (say the brakes have a catastrophic failure) work to slow the failing car (consider the three nearest cars surrounding it reducing its speed through safe and efficient contact) while other vehicles create the space needed. The possibilities are endless and should be embraced. Computers will not decide who to save. Computers will decide how to save everyone.
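As a toy illustration of that idea, the sketch below has a failing car broadcast its state while nearby cars each pick a cooperative response. The message fields and responses are invented for the example; real V2V stacks (DSRC, C-V2X) are nothing like this simple.

```python
import json

def broadcast_failure(car_id, lane, position_m, speed_mps, failure):
    # The failing car announces what it knows about its own state.
    return json.dumps({"type": "EMERGENCY", "car": car_id, "lane": lane,
                       "pos": position_m, "speed": speed_mps,
                       "failure": failure})

def cooperative_response(own_lane, message):
    m = json.loads(message)
    if m["type"] != "EMERGENCY":
        return "continue"
    # Cars in the failing car's lane decelerate gently to absorb it;
    # cars in adjacent lanes hold position to keep an escape gap open.
    return "decelerate" if own_lane == m["lane"] else "hold_gap"

msg = broadcast_failure("car-42", lane=2, position_m=1500.0,
                        speed_mps=31.0, failure="brake_failure")
print(cooperative_response(2, msg))  # decelerate
print(cooperative_response(1, msg))  # hold_gap
```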

Anonymous Coward says:

I would allow the car’s computer to be configured with different automotive ethics profiles, each providing a different balance of protection for the occupants, the vehicle, and other drivers/pedestrians. Automakers and insurance companies would pick the default profile, and if drivers want to use a different profile for whatever reason, they can do so, but their insurance rates may be adjusted accordingly. If the driver’s choice is one that increases the risk of more damage or more people being sent to the hospital (or the cemetery), then they will have to subsidize those costs.
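A minimal sketch of what such configurable profiles might look like; the profile names, harm weights and premium multipliers are all invented for illustration.

```python
# "occupant_weight" scales how heavily harm to the people inside the car
# counts in the planner's cost function; the multiplier is the insurance
# adjustment the comment describes. All values are hypothetical.
ETHICS_PROFILES = {
    "utilitarian":  {"occupant_weight": 1.0,  "premium_multiplier": 1.00},
    "selfish":      {"occupant_weight": 4.0,  "premium_multiplier": 1.40},
    "selfish_plus": {"occupant_weight": 20.0, "premium_multiplier": 1.90},
}

def maneuver_cost(profile, occupant_harm, outsider_harm):
    # Higher occupant_weight steers the planner away from options that
    # hurt the occupants, shifting risk toward people outside the car.
    w = ETHICS_PROFILES[profile]["occupant_weight"]
    return w * occupant_harm + outsider_harm

def annual_premium(profile, base_rate):
    return base_rate * ETHICS_PROFILES[profile]["premium_multiplier"]

print(maneuver_cost("utilitarian", occupant_harm=1, outsider_harm=3))   # 4.0
print(maneuver_cost("selfish_plus", occupant_harm=1, outsider_harm=3))  # 23.0
print(annual_premium("selfish", 1200))  # 1680.0
```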

aldestrawk says:

Would you like to play a game? let's play chicken.

“…calculating the possible trajectory of two cyclists blotto on Pabst Blue Ribbon and crystal meth.”

This is the real question of interest. I cannot see a scenario where there is a greater/lesser-evil choice in an unavoidable accident. Cars have brakes and are supposed to keep enough distance to brake without colliding in the event of unforeseen incidents. Humans often make things worse, for themselves and others, by veering, or by veering and braking at the same time. The autonomous vehicle should be able to sense whether the braking system is functional.

If you really want to test the ability of software to take action that will produce the least harm, have it play modified games of chicken (real or virtual), both with other traffic and without, where the opposing driver’s actions are unpredictably:
1) completely random;
2) distracted for a random amount of time before realizing that a collision must be avoided;
3) evilly intent on causing an accident no matter what you do.
I think you’ll find that most of the time braking without veering produces the least harm. There may be some narrow situations where you can avoid a collision. However, if there are multiple cars veering, things can get unpredictably ugly.

A case in point: the Bruce/Caitlyn Jenner crash from last February. In that multiple-car accident, Jenner was the person primarily at fault. However, Kim Howe, the woman who was killed driving the Lexus, had just started to veer into the center lane while braking to avoid hitting the Prius. When Jenner’s Cadillac hit the Lexus, the Lexus was propelled in the direction its front wheels were aligned. This meant it traveled across the center lane into the opposing lane. If Howe had not veered, she would have been forcibly rammed into the Prius in front of her. At the moment of the first impact, the Cadillac was going 38 mph and the Lexus about 19 mph. That would have been a very survivable accident, perhaps without any serious injury.
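For what it’s worth, this chicken-test idea maps naturally onto a simulation harness. The sketch below implements the three opponent models from the list above; the single-number “harm” model is a trivial stand-in, so only the structure is meant seriously.

```python
import random

def opponent_action(kind, t, wake_time):
    # The three opponent behaviors listed above.
    if kind == "random":
        return random.choice(["veer_left", "veer_right", "brake", "accelerate"])
    if kind == "distracted":
        return "accelerate" if t < wake_time else "brake"
    return "aim_at_us"  # adversarial: intent on a collision no matter what

def simulate_step(ours, theirs):
    # Stand-in harm model: braking straight is safest, veering risks
    # secondary collisions, and an adversarial opponent doubles the risk.
    base = {"brake": 0.1, "veer": 0.5}.get(ours, 0.3)
    return base * (2.0 if theirs == "aim_at_us" else 1.0)

def run_episode(planner, kind, steps=100):
    wake_time = random.randint(0, steps)
    return sum(simulate_step(planner(), opponent_action(kind, t, wake_time))
               for t in range(steps))

brake_only = lambda: "brake"  # the "brake without veering" policy
for kind in ("random", "distracted", "adversarial"):
    avg = sum(run_episode(brake_only, kind) for _ in range(1000)) / 1000
    print(kind, round(avg, 1))
```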

nasch (profile) says:

Re: Would you like to play a game? let's play chicken.

A case in point: the Bruce/Caitlyn Jenner crash…

Another case in point: I was driving along a two-lane road at night. A car coming the other way flashed their brights (accidentally as it turned out) and slammed on the brakes. A moment later a deer jumped in front of me. I swerved violently to the right and then back, avoiding the deer. Had I followed your advice of just continuing straight and braking, I would have hit it. So there is no universally right answer – sometimes “brake and hold” is the best move and sometimes it isn’t.

aldestrawk says:

Re: Re: Would you like to play a game? let's play chicken.

Speed is a very important factor in the decision to swerve. If you’re going 60 mph, that maneuver to avoid a deer will likely cause your vehicle to roll. The problem is, unless you have trained specifically for such maneuvers, your split-second decision may not take into account the speed you’re going. Also, if somebody had been too close behind you, their actions might have killed you. It’s all very hard to predict.
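The physics backs this up. With a crude bang-bang steering model and assumed numbers (a 2 m lane shift, a 25 m gap to the deer), the lateral acceleration a swerve demands grows with the square of speed:

```python
G = 9.81

def lateral_g_needed(speed_mps, gap_m, lane_shift_m=2.0):
    # Bang-bang steering: push sideways for half the available time, then
    # reverse. Lateral offset = a * t^2 / 4, so a = 4 * offset / t^2.
    t = gap_m / speed_mps  # time before reaching the obstacle
    return (4 * lane_shift_m / (t * t)) / G

for mph in (30, 60):
    v = mph * 0.44704  # mph -> m/s
    print(f"{mph} mph, 25 m gap: {lateral_g_needed(v, 25.0):.2f} g")
# 30 mph: ~0.23 g (routine); 60 mph: ~0.94 g, beyond a typical tire's
# ~0.8 g of grip and in rough rollover territory for a tall vehicle.
```

Doubling the speed quadruples the lateral g required, which is why the same swerve that is trivial at 30 mph can roll a tall vehicle at 60.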

Anonymous Coward says:

The Justice System will clear this up

When the moment comes that somebody is injured or killed through the action or inaction of a car with the ability to drive autonomously, somebody will sue the car owner and the car company.
The estate of the car’s passengers will argue that other people should have died and that it’s the car’s (and the car company’s) fault for not killing the group of toddlers instead, while the family of the single toddler who would be hurt in the inverse case would argue that the occupants of the car deserved to die.

Money will change hands and the courts will finally rule that all cars (automobiles and auto-automobiles) are illegal.

Anonymous Coward says:

Yes, if it means I get a discount on the purchase

The market-based solution is that you can buy a cheap version that is programmed for the utilitarian approach of saving the most lives, or the more expensive “selfish edition” that will prioritize saving its occupants above saving anyone else. The selfish edition will still try to save non-occupants when that does not conflict with its obligation to protect its occupants. We could even have tiers of selfish edition, where higher tiers place greater emphasis on getting the occupant through unharmed (as opposed to alive, but injured).

eaving (profile) says:

The basic question also overlooks the economics of human nature. Given that a computer might opt to sacrifice you and your children, would you choose to buy it? Even knowing it’s safer in general, I think most people would opt for a manual car that would let them save their children over half a dozen pedestrians, for example. The software may need to be passenger-centric to gain market traction and make the roads safer on average, even if that particular choice isn’t.

Anonymous Coward says:

It is important...

There needs to be a standard set, a standard that all agree on. You know as well as I do that many people do not like change, especially change that takes some sort of control away.
If no standard is set and followed rigorously, then this revolution, or whatever you want to call it, will hardly start before it is scrapped. If self-driving cars brought the death toll down to 0.2% of what it is today, people would still rage in the streets against this new “robot uprising” when the first death happened because of a decision made by such a car.
I do think that this scenario and others cannot be discussed enough, if for no other reason than to make sure the transition even happens. It would be revolutionary progress toward ridding ourselves of the dangers in traffic and bringing that extremely huge number of deaths down across the world.

OldMugwump (profile) says:

The car (or driver) can never be SURE

The real problem with these hypothetical scenarios is that in the real world the car, or driver, or trolley switch operator, can never be 100% sure what the consequences of their actions will be.

Maybe the school bus is empty.

Maybe throwing the fat man onto the tracks won’t accomplish anything other than killing the fat man.

In the face of uncertainty, I think there’s a moral argument in favor of avoiding certain harm, even if that increases the chance of uncertain harm.

[Practical answer: It doesn’t matter – self-driving cars will still be safer for the passengers either way.]

OldMugwump (profile) says:

Danger - Death by Trolling

Another thing-

If cars are programmed to minimize total casualties (rather than protect passengers), it may be possible to troll a car into killing its passengers.

Once the behavior of the self-driving cars is generally understood, a murderer could deliberately drive another vehicle such that the car will think it has no choice but to kill its passengers. (Drive into a tree, off a cliff, etc.)

Ninja (profile) says:

Re: Danger - Death by Trolling

I thought about that, but since said person would be outside the network, the vehicles could prioritize the lives and trajectories they can predict. So the top priority of the system should be to preserve all lives. I’d think it would be quite hard to troll such a system, seeing as it would be able to calculate possible scenarios much faster, no?

This actually poses another question: will we allow humans to drive in a fully automated environment, or will the autopilot take over when reaching such areas?

Planned-opolis (user link) says:

Planned-opolis

Somehow everybody thinks they are so important that they will have their own self-driving cars… and not be in the bus…
https://youtu.be/IRFsoRQYpFM?t=2m14s
The actual plan is of course that only the elites will have such cars, while everybody else will live in tightly packed cities with public transportation as THE ONLY choice.
Now, with the real context, the bus full of serfs/children can crash and burn, because the elites are in the self-driving cars.

David Bolton (profile) says:

How would a self-driving car assess casualties?

I can’t see how it could determine whether hitting car A or car B would cause fewer casualties. There are so many factors, such as size, speed of the cars, and numbers of occupants, and determining those factors may not even be possible. Hitting a bus (even if coloured yellow) might cause no casualties among the bus’s occupants.

Also, given how good self-driving cars are at spotting potential threats, is this scenario even possible?

Ninja (profile) says:

Re: How would a self-driving car assess casualties?

Considering most of this will be processed in a system, I don’t think it would be that hard. Even outside zones covered by wireless systems, the cars could still produce signals that would be received hundreds of meters before a possible crash. I guess you can narrow this to a situation where the cars are cut off from their communications grid. Then you need to deal with what you have in your hands, and the car should prioritize the passengers. You can only ask the questions posed by the article when you have the means to grasp the situation, I’d infer.

Anonymous Coward says:

ok, i’m blasting down some road and see a school-age kid ahead in harm’s way.
i can’t avoid the kid without nosing head-on into a heavily laden truck.
i know i have a terminal illness, say, and i know i don’t have long to live.

i choose to save the kid, i hope.
i know t.e. lawrence chose to save a couple of kids’ lives, and my respect for the man makes me hope i would do the same.

but what about my self-driving car?
does it check a facial recognition database and determine that the kid is no good and is one the authorities would like to get rid of anyway?
but i happen to know the kid and strongly believe he’ll come around, so i want to save him.

how do i get that stupid car to do what i want it to do?

John Fenderson (profile) says:

Re: Re: Imagination Gap

“Good programmers also design for unforeseen conditions.”

Not really. If you can design for a circumstance, it’s not “unforeseen”. What good programmers actually do is design their software so that it fails gracefully rather than catastrophically, so that when the unforeseen circumstance happens, the damage is not made worse.
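In code, “failing gracefully” usually just means wrapping the risky component so that any unanticipated failure degrades to a predictable default. A sketch, with invented names:

```python
import logging

def safe_plan(planner, sensor_frame):
    """Run the planner, but degrade predictably if it blows up."""
    try:
        return planner(sensor_frame)
    except Exception:
        # We couldn't foresee *what* would go wrong, but we did foresee
        # *that* something might: log it, fall back to a least-bad default.
        logging.exception("planner failed; engaging minimal-risk maneuver")
        return {"action": "brake_and_pull_over", "hazard_lights": True}
```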

Anonymous Anonymous Coward says:

Re: Re: Re:2 Imagination Gap

un·fore·seen (ˌənfôrˈsēn)
adjective: not anticipated or predicted.
“insurance to protect yourself against unforeseen circumstances”
synonyms: unpredicted, unexpected, unanticipated, unplanned, not bargained for, surprising
“the problems with the bus were, of course, unforeseen”

So, by your thinking, ‘good’ programmers have some sort of magical second sight that allows them to predict the unpredictable? Ever met one?

If a programmer programs to cover all known contingencies, then the issues will be with the unknown. Your version has ‘good’ programmers knowing the unknown, and they must be in the running for deity of the century.

Anonymous Coward says:

Re: Re: Re:3 Imagination Gap

Then please explain how anyone can design something to handle situations that the designers can’t see coming. Any example will do.

Sigh. A very, very simple example, since any will do. Take a simple text editor accepting up to a 1-megabyte file as input. I won’t bore you with the details, but explicitly anticipating and testing all possible such input files could not be completed before the heat death of the universe. Yet the program can successfully edit files the programmer never foresaw.

Now, you may not believe it, but such programs actually exist and regularly run without “failing gracefully”. Of course, there are poor programmers out there who couldn’t actually write such a program to save their lives. And when their program crashes because it came across an input file containing the string “ldu9o0438fjajiofc”, they’ll protest that it isn’t their fault because they obviously couldn’t have foreseen that a particular file would contain that particular string. And it is absolutely true that they could not foresee all possible strings that the input file might contain. Still, in the eyes of a professional computer scientist, it is extremely poor design.

nasch (profile) says:

Re: Re: Re:4 Imagination Gap

Take a simple text editor accepting up to a 1-megabyte file as input. I won’t bore you with the details, but explicitly anticipating and testing all possible such input files could not be completed before the heat death of the universe.

That’s an idiotic example. The editor is designed to handle all possible inputs of a given character set up to the maximum allowed size. Someone typing a character string the developer didn’t think of is not an “unforeseen event”.

JP Jones (profile) says:

Re: Re: Re:2 Imagination Gap

That's what poor programmers generally believe. They're wrong, but they use that excuse.

Apparently poor programmers understand the definition of words. If you plan for something, then you, by definition, acknowledge that it is a possibility. That means the thing is no longer “unforeseen.” You foresaw it as a potential issue.

For example, if someone had programmed the Mars Climate Orbiter to convert imperial units to metric when there was a conflict, then it wouldn't have been an unforeseen problem (it would have fixed itself). It wasn't anticipated, and as a result a $651 million mission disintegrated in the Martian atmosphere.
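
(For the record, the Orbiter bug was thruster data supplied in pound-force seconds where newton-seconds were expected. The standard defense is a type that carries its unit, so the mismatch can't happen silently; a minimal Python sketch, names invented:)

    from dataclasses import dataclass

    LBF_S_TO_N_S = 4.448222  # pound-force seconds to newton-seconds

    @dataclass(frozen=True)
    class Impulse:
        newton_seconds: float  # one canonical unit everywhere internally

        @classmethod
        def from_pound_force_seconds(cls, value):
            # The conversion happens exactly once, at the boundary.
            return cls(value * LBF_S_TO_N_S)

    a = Impulse(10.0)
    b = Impulse.from_pound_force_seconds(10.0)
    print(a.newton_seconds + b.newton_seconds)  # both in N*s before any math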

Obviously you should try to handle as many eventualities as you can, and then build in error checking so that unexpected bugs cause as little damage as possible (and preferably generate a log to identify where the failure was). But no matter how skilled programmers are, they cannot write software that directly handles specific unforeseen problems; they can only build general error handling that minimizes the fallout from unexpected issues.
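
That "general error handling plus a log" idea might look like the Python below; all names are invented for illustration.

    import functools
    import logging

    def logged(fn):
        # Wrap a step so any unexpected failure is recorded with context.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                # The log records *where* the unforeseen problem hit,
                # even though nobody could predict *what* it would be.
                logging.exception("failure in %s", fn.__name__)
                raise
        return wrapper

    @logged
    def plan_route(destination):
        raise KeyError(destination)  # stand-in for an unexpected bug

    try:
        plan_route("unmapped road")
    except KeyError:
        pass  # the failure is now logged with a full traceback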

Anonymous Coward says:

Re: Re: Re:2 Imagination Gap

You must be a manager who believes that a problem is solved by telling other people to solve it. If the problem is not described, at least in general terms, then software cannot be written to deal with it. For example, if a car is not programmed to recognize a tornado, it will not try to avoid it.

Ragnarredbeard (profile) says:

“Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that’s why you have this switch. But on the alternate track there’s more trouble: Your child, who has come to work with you, has fallen down on the rails and can’t get up. That switch can save your child or a bus-full of others, but not both. What do you do?”

What kind of irresponsible asshole brings his kid to work and then lets him/her run around unsupervised? Guy is probably a bad parent from go, and is lucky his kid has survived this long. Pull the switch, run the kid over, and prevent his poor genes from filtering down to the next gen.

Anonymous Coward says:

The Future Capitalistic Solution

The capitalistic solution would be the one that minimizes monetary losses. So, perhaps in the future, everyone will need to have an official monetary value assigned to them by the government. Furthermore, everyone will need to wear GPS trackers so that their location is always known. Finally, driving computers will need access to the databases containing all that information so that they are aware of the location and value of everyone around them. The computer can then choose the course of action that minimizes losses, or maximizes profits, as the case may be.
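
Taken literally, the scheme being satirized here reduces to picking whichever action destroys the least assigned "value"; a deliberately dystopian Python sketch, every name and number invented:

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        casualties: list  # people this action is predicted to harm

    def choose_course(actions, value_of):
        # Pick whichever action destroys the least assigned "value".
        return min(actions,
                   key=lambda a: sum(value_of[p] for p in a.casualties))

    value_of = {"driver": 1_000_000, "pedestrian": 250_000}
    options = [Action("swerve", ["driver"]), Action("brake", ["pedestrian"])]
    print(choose_course(options, value_of).name)  # brake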

I would imagine that socialists and other “anti-capitalists” would tend to have lower personal values assigned.

Killer_Tofu (profile) says:

Perspective

Americans take around 250 billion vehicle trips killing roughly 30,000 people in traffic accidents annually, something we generally view as an acceptable-but-horrible cost for the convenience.

If only this were shouted every time somebody brought up the threat of terrorists. Perspective would help people realize that giving up our freedoms is NOT a fair trade.

Anonymous Coward says:

The duty of the car should be to the driver/owner of the car.

The lives of other people are their own responsibility, whether they are driving themselves or riding in their own autonomous vehicles.
Simple as that.
Otherwise your car does not belong to you. It becomes some kind of philosophical judge of morality and of the value of life (a god) outside your control. Why would you pay money for such a device?

Kind of like when you hire a bodyguard: should he try to protect the lives of others if you paid him to protect yours?

As for the trolley problem, you save your child. It is your biological imperative.

Anonymous Coward says:

Simple: Program the car to value self-preservation. I certainly won't put my life in the hands of a computer that calculates me as 'expendable', and I suspect many others won't either. Programming the car to kill the driver in this sort of scenario is likely to prove a massive hurdle to the adoption of this technology, assuming it doesn't kill the technology outright.

Anonymous Coward says:

Why don’t we start with planes instead of cars?

No more TSA
No more air traffic controllers
No more delayed flights because the pilot was hung over

People who don't mind a prostate exam before getting on a plane shouldn't mind being part of the experiment. As a pedestrian, I don't like the idea of being an unwilling participant in this experiment.

Also, planes are already so safe that any increase in accidents or fatalities would be much easier to see.

PaulT (profile) says:

Re: Re:

“Why don’t we start with planes instead of cars?”

Because none of your reasons make any logical sense?

“No more TSA”

How would automating the piloting of an aircraft remove the need to determine the safety and security risk of passengers and the items they bring on board? The TSA might arguably be removed for other reasons, but nothing an automated system does would remove the need for them under their current remit.

“No more air traffic controllers”

Well, you know, unless you actually want a manual backup system in case of problems. The cars will have a manual override; do you really want to deny a pilot a person on the ground who can help ensure a safe approach and landing in an emergency?

“No more delayed flights because the pilot was hung over”

Because that’s the only reason why flights get delayed? Not, say, mechanical faults on the ground, medical attention or needing to get drunk passengers off the previous flight? Automating flights will increase the risk of mechanical failure, not reduce it, and you still have to deal with the hundreds of human beings inside the thing every flight.

“As a pedestrian I don’t like the idea of being an unwilling participant of this experiment.”

So, you'd rather a few hundred innocent civilians on board a plane be subjected to it instead? Better hope that a plane with no internal or external manual navigation doesn't crash-land where you are, as well.

Oh, and as a pedestrian you're already subject to the "experiment" of criminals, drunk drivers and many other people who cause deaths every year, risks that automated driving will often remove.

“Also, planes are already so safe, any increase in accidents/fatalities will be much easier to see.”

Except the primary reason they're so safe is that the consequences of failure are an order of magnitude worse.

I understand your concern about the safety of this new technology, but your alternative is worse.
