Breaking: Self-Driving Cars Avoid Accident, Do Exactly What They Were Programmed To Do

from the I-can-and-will-do-that,-Dave dept

We just got done talking about how, after logging 1,011,338 autonomous miles since 2009, Google’s automated cars have had just thirteen accidents — none of which were the fault of the Google vehicles. By and large the technology appears to be working incredibly well, with most of the accidents the fault of inattentive human drivers rear-ending Google’s specially-equipped Lexus SUVs at stop lights. But apparently, the fact that this technology is working well isn’t quite interesting enough for the nation’s technology press.

A Reuters report making the rounds earlier today proclaimed that two self-driving cars from Google and Delphi Automotive almost got into an accident this week in California. According to the Reuters report, Google’s self-driving Lexus “cut off” Delphi’s self-driving Audi, forcing the Audi to take “appropriate action.” This apparently got the nation’s technology media in a bit of a heated lather, with countless headlines detailing the “almost crash.” The Washington Post was even quick to inform readers that the almost-crash “is now raising concerns over the technology.”

Except it’s not. Because not only did the cars not crash, it apparently wasn’t even a close call. Both Delphi and Google spokespeople told Ars Technica that both cars did exactly what they were programmed to do and Reuters apparently made an automated mountain out of a molehill:

“I was there for the discussion with Reuters about automated vehicles,” she told Ars by e-mail. “The story was taken completely out of context when describing a type of complex driving scenario that can occur in the real world. Our expert provided an example of a lane change scenario that our car recently experienced which, coincidentally, was with one of the Google cars also on the road at that time. It wasn’t a ‘near miss’ as described in the Reuters story.”

Instead, she explained how this was a normal scenario, and the Delphi car performed admirably.

“Our car did exactly what it was supposed to,” she wrote. “Our car saw the Google car move into the same lane as our car was planning to move into, but upon detecting that the lane was no longer open it decided to terminate the move and wait until it was clear again.”

In other words, as Twitter’s Nu Wexler observed, the two cars did exactly what they were programmed to do, though that’s obviously a far less sexy story than Reuters’ apparently hallucinated tale of automated automotive incompetence.

Companies: delphi, google


Comments on “Breaking: Self-Driving Cars Avoid Accident, Do Exactly What They Were Programmed To Do”

Ninja (profile) says:

Breaking: Self-driving cars avoid accident, doing exactly what they are programmed to do

This doesn’t generate as much hype as

BREAKING: Skynet takes over, self-driving cars nearly cause apocalypse. John Connor called for help.

As much as I like driving, I look forward to a future where humans won’t touch the wheel. Or a word processor, it seems, given the media’s idiocy in this case 🙂

Anonymous Coward says:

Re: Re:

Agree with Ninja. I guaran-dang-tee you that self-driving cars will use their turn signals, will stop at stop signs and red lights, and will run with headlights operating.

If people cannot be trusted to operate a complex piece of machinery, perhaps the machine can run itself better. No question in the case of cars. I doubt more than one person out of four driving today actually knows how to drive in compliance with traffic laws.

Roger Strong (profile) says:

Keep in mind that those 1,011,338 autonomous miles were driven almost entirely in the same small area, over and over and over.

That area is mapped out electronically in far more detail than you find elsewhere. That includes mapping the location of traffic lights so that the car will notice them. Temporary traffic lights are a problem, and the car will not notice a police officer signaling the car to stop.

And with much of the processing done in the cloud, be sure to limit your driving to areas with decent cellular coverage.

The average person won’t be able to operate a self-driving car in the next few years, but it IS wonderful progress.

Taxi services will soon be able to use them. The dispatcher can check to see if the pick-up location, destination and points in between are suitable for a self-driving car. If not, they can send one of their remaining human drivers.

I assume that this is Uber’s business model. Use humans to build up their business using their own cars. They’ll be established in many cities just in time for a self-driving fleet to become practical.

jupiterkansas (profile) says:

Re: Re:

The processing is not handled in the cloud. Each car is capable of driving using only its sensors and onboard computers. The cloud helps with maps and directions but isn’t required.

They can also recognize stoplights, though maybe not a police officer’s hand signals – but that’s a problem that’s easily solved.

And the areas may be mapped out – but they’re mapped out by the cars themselves. Any area where self-driving cars spend any considerable time would also get mapped out in great detail. It’s one of those situations where the more it’s implemented, the better it works.

Roger Strong (profile) says:

Re: Re: Re:

Cloud computing is a big part of the system.

Technology Review: Impressive progress hides major limitations of Google’s quest for automated driving.

Maps have so far been prepared for only a few thousand miles of roadway, but achieving Google’s vision will require maintaining a constantly updating map of the nation’s millions of miles of roads and driveways.
[…]
If a new stop light appeared overnight, for example, the car wouldn’t know to obey it.
[…]
Among other unsolved problems, Google has yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor has it tackled big, open parking lots or multilevel garages.

Anonymous Coward says:

Re: Re: Re: Re:

Yeah, snow would be my major concern. A little loss of traction would throw off any navigation aids based on how far the car thinks it’s traveled, heavy cloud cover could ruin GPS reception, and snow cover hides the road lines… Is the car capable of picking up the more subtle cues of where the road is?

If they’re scared to even test the thing in heavy rains, then it’s nowhere near ready for actual deployment.

Anonymous Coward says:

Re: Re: Re:3 Re:

…These are tests and proof of concept vehicles…

Which NOW need testing on ACTUAL roads, not controlled test tracks.

Supposedly Nevada has approved some of their highways for such testing, albeit with caveats, such as requiring a licensed driver in the vehicle even if they are not controlling it. And the ‘tests’ have their own caveats: IIRC the vehicles to be tested currently do NOT have the ability to enter or exit the highway, nor can they do a passing maneuver.

jupiterkansas (profile) says:

Re: Re: Re: Re:

When a Google car sees a new permanent structure such as a light pole or sign that it wasn’t expecting it sends an alert and some data to a team at Google in charge of maintaining the map.
[…]
The car’s video cameras detect the color of a traffic light.

As I said, the cars themselves help map the terrain, so it’s not as though they can’t see what’s around them and operate independently of the maps in the cloud. And luckily, stoplights don’t appear overnight. It would be easy for the city to work with Google to incorporate road changes and construction zones (or for the city to maintain the maps themselves).
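
To make the quoted camera claim concrete, here’s a toy sketch of classifying a traffic light’s color from a cropped camera frame. It assumes OpenCV, and the HSV thresholds and the `classify_light` helper are purely illustrative – this is not Google’s actual vision pipeline, which would also have to handle glare, occlusion, and distance.

```python
import cv2
import numpy as np

# Hypothetical HSV ranges; a production system would calibrate these
# per camera. Red wraps around the hue circle, so it needs two ranges.
HSV_RANGES = {
    "red":    [((0, 120, 120), (10, 255, 255)),
               ((170, 120, 120), (180, 255, 255))],
    "yellow": [((20, 120, 120), (35, 255, 255))],
    "green":  [((45, 80, 80), (95, 255, 255))],
}

def classify_light(bgr_crop: np.ndarray) -> str:
    """Return the dominant signal color in a cropped traffic-light image."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    counts = {}
    for color, ranges in HSV_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        counts[color] = int(cv2.countNonZero(mask))
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "unknown"
```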

Snow and ice are definitely serious issues though. Right now that means in certain types of weather, you’ll just have to drive yourself.

And these are today’s cars – not cars 10 years from now. Go back and look at where they were in 2004, and I’m not too worried that these issues won’t be overcome.

Anonymous Coward says:

Re: Re: Re: Re:

The car has to be able to deal with local conditions and changes all on its own, as it is never guaranteed to have an up-to-date map, especially when dealing with temporary lights and lane closures due to road works etc. Also there is nothing to stop a passenger giving the car directions, like take the next road, or drive on the right, or stop here.
Spotting a stop light is no harder than spotting a cyclist or pedestrian or other unmapped obstruction. All such processing has to be done on-board, as otherwise an Internet dropout could cause an accident.

Hideous says:

robocars to use time-tested CSMA/CD "collision detection" protocol...

Autonomous vehicles will adopt the simple, well-understood CSMA/CD with BEB protocol from computer networking. When two autonomous vehicles see an open lane or intersection at the same time, they will both enter it. If they collide, they will sense the collision and back off. Each will choose a pseudorandom delay time and proceed after it expires. Each car that collides again will back off and delay for twice the previous delay time before attempting to go again.
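
For anyone who missed the networking joke: CSMA/CD with binary exponential backoff really is how classic shared-medium Ethernet arbitrated collisions. A minimal sketch of the retry logic, with the channel-sensing and transmit callables left as stand-ins (the car application is, of course, satire):

```python
import random

# Classic Ethernet-style binary exponential backoff (BEB):
# after the n-th consecutive collision, wait a random number of
# slot times drawn from [0, 2**min(n, 10) - 1].
SLOT_TIME_US = 51.2   # one slot time in microseconds (10 Mbps Ethernet)
MAX_EXPONENT = 10     # Ethernet caps the backoff window at 2**10 slots
MAX_ATTEMPTS = 16     # give up and report failure after 16 collisions

def backoff_delay_us(collision_count: int) -> float:
    """Random delay before the next transmission attempt."""
    exponent = min(collision_count, MAX_EXPONENT)
    slots = random.randint(0, 2 ** exponent - 1)
    return slots * SLOT_TIME_US

def transmit(channel_is_busy, send_and_detect_collision) -> bool:
    """Attempt to transmit, retrying with BEB on each collision."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while channel_is_busy():          # carrier sense: wait for idle
            pass
        if send_and_detect_collision():   # collision detected mid-send
            delay = backoff_delay_us(attempt)
            print(f"collision #{attempt}, backing off {delay:.1f} us")
            # (a real NIC would sleep for `delay` microseconds here)
        else:
            return True                   # frame went through
    return False                          # too many collisions: abort
```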

A. Nnoyed (profile) says:

Programming makes bad decisions for driver!

1. Push-button ignition switches killed some people when the throttle stuck and the driver could not shut off the engine, resulting in fatal crashes. There was a way to stop the engine in an emergency, but drivers were not advised how to do so.

2. A driver was killed because they could not exit the vehicle in time when railroad crossing gates trapped them. A news story showed how a driver could easily drive through and break off a railroad crossing arm, because the arms are designed to be easily broken off. If a driver is caught between crossing arms and the vehicle’s programming will not let them drive into and break off an arm to escape a collision with a train without a series of steps to override the collision-avoidance system, that is dangerous.

Uriel-238 (profile) says:

Re: The bear rule.

Are these actual incidents or hypothetical ones?

The push-button ignition issue only indicates that the system needs to check that the transmission is out of gear before engaging the engine. That’s a fixable bug.

The second one sounds avoidable by the guidance system detecting the position of the train and not proceeding onto the tracks if it is too close.
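
That rule is simple enough to sketch. A toy version, with every threshold and sensor input hypothetical – nothing here reflects any real vehicle’s logic:

```python
# Purely illustrative guard logic for the grade-crossing scenario
# discussed above; `may_enter_crossing` and its inputs are hypothetical.

STOP_MARGIN_S = 10.0  # don't enter unless the nearest train is at least
                      # this many seconds away (hypothetical threshold)

def may_enter_crossing(train_distance_m: float,
                       train_speed_mps: float,
                       exit_clear: bool) -> bool:
    """Enter the crossing only if the car can fully clear it in time."""
    if not exit_clear:            # never stop on the tracks
        return False
    if train_speed_mps <= 0:      # no train approaching
        return True
    time_to_crossing = train_distance_m / train_speed_mps
    return time_to_crossing > STOP_MARGIN_S
```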

I expect that, because people are ambitious and creative in their idiocy, self-driven cars will sometimes get into accidents, some fatal. But they don’t have to be casualty-free, just safer than human-driven cars.

Granted this may create a liability problem, but that’s a different issue. There are similar liability problems with public transit.

A. Nnoyed (profile) says:

Re: Re: The bear rule.

1) Read this; the driver could not shut the engine off:
http://abcnews.go.com/Blotter/toyota-pay-12b-hiding-deadly-unintended-acceleration/story?id=22972214
ABC News first reported the potential dangers of unintended acceleration in an investigation broadcast in November 2009. The report said hundreds of Toyota customers were in “rebellion” after a series of accidents were apparently caused by the unintended acceleration. Two months before, Highway Patrolman Mark Saylor and three members of his family had been killed after the accelerator in his Lexus had become stuck on an incompatible floor mat. Saylor was able to call 911 while his car was speeding over 100 miles per hour and explain his harrowing ordeal right up until the crash that ended his life.
EXCLUSIVE INVESTIGATION: Runaway Toyotas

2) Driver killed when vehicle became trapped between crossing arms; pages 1 to 4 here:
http://www.cbsnews.com/pictures/metronorth-train-accidents/

Uriel-238 (profile) says:

Re: Re: Re: The bear rule.

Yeah, regarding the second one, I can’t find an analysis showing the Jeep SUV failure was due to collision detection.

The first one is a problem with controls. It sounds like it’s not a problem with power acceleration (fly-by-wire) but with a foot pedal getting stuck. That’s actually a situation in which a smart automatic driving system could help, especially if the acceleration control was fly-by-wire, as it could then override the stuck control and slow the car down regardless.

MrTroy (profile) says:

Re: Programming makes bad decision for driver!

Re 2, why have a gate crossing arm on the exit side of the crossing?

Question asked and answered, sigh: https://www.azatrax.com/controller.html

“To reduce collisions at grade crossings, railroads are installing four quadrant gate systems on high speed rail corridors, commuter lines, light rail systems and in areas with high concentrations of foolish drivers.”

At least once people stop driving their own cars, we can return to more sane two-quadrant crossing gates.

Derek Kerton (profile) says:

Warning! You Could Be In Danger...stay tuned

Later, in the news: drone-like, repetitive, clickbaity and formulaic news articles basically write themselves. How the formula for creating a moral panic may threaten the human element and editorial touch we used to value in news reporting.

Click to read more…but first, Is something in your garage planning to kill you?
