Uber Self-Driving Crash

November 7th, 2019
self-driving, tech, transit
Content warning: discussion of death

A year and a half ago an Uber self-driving car hit and killed Elaine Herzberg. I wrote at the time:

The dashcam video from the Uber crash has been released. It's really bad. The pedestrian is slowly walking their bike left to right across a two lane street with streetlights, and manages to get to the right side of the right lane before being hit. The car doesn't slow down at all. A human driver would have vision with more dynamic range than this camera, and it looks to me like they would have seen the pedestrian about 2s out, time to slow down dramatically even if not stop entirely. But that doesn't matter here, because this car has LIDAR, which generates its own light. I'm expecting that when the full sensor data is released it will be very clear that the system had all the information it needed to stop in time.

This is the sort of situation where LIDAR should shine, equivalent to a driver on an open road in broad daylight. That the car took no action here means things are very wrong with their system. If it were a company I trusted more than Uber I would say "at least two things going wrong, like not being able to identify a person pushing a bike and then not being cautious enough about unknown input" but with Uber I think they may be just aggressively pushing out immature tech.

On Tuesday the NTSB released their report (pdf) and it's clear that the system could easily have avoided this accident if it had been better designed. Major issues include:

  • "If we see a problem, wait and hope it goes away." The car was programmed to, when it determined things were very wrong, wait one second. Literally. Not even gently apply the brakes. This is absolutely nuts. If your system has so many false alarms that you need to include this kind of hack to keep it from acting erratically, you are not ready to test on public roads.

  • "If I can't stop in time, why bother?" When the car concluded emergency braking was needed, and after waiting one second to make sure it was still needed, it decided not to engage emergency braking because that wouldn't be sufficient to prevent impact. Since lower-speed crashes are far more survivable, you definitely still want to brake hard even if it won't be enough.

  • "If I'm not sure what it is, how can I remember what it was doing?" The car wasn't sure whether Herzberg and her bike were a "Vehicle", "Bicycle", "Unknown", or "Other", and kept switching between classifications. This shouldn't have been a major issue, except that with each switch it discarded past observations. Had the car maintained this history it would have seen that some sort of large object was progressing across the street on a collision course, and had plenty of time to stop.

  • "Only people in crosswalks cross the street." If the car had correctly classified her as a pedestrian in the middle of the road you might think it would have expected her to be in the process of crossing. Except it only thought that for pedestrians in crosswalks; outside of a crosswalk the car's prior was that any direction was equally likely.

  • "The world is black and white." I'm less sure here, but it sounds like the car computed "most likely" categories for objects, and then "most likely" paths given their categories and histories, instead of maintaining some sort of distribution of potential outcomes. If it had concluded that a pedestrian would probably be out of the way it would act as if the pedestrian would definitely be out of the way, even if there was still a 49% chance they wouldn't be.

This is incredibly bad, applying "quick, get it working even if it's kind of a hack" programming in a field where failure has real consequences. Self-driving cars have the potential to prevent hundreds of thousands of deaths a year, but this sort of reckless approach does not help.

(Disclosure: I work at Google, which is owned by Alphabet, which owns Waymo, which also operates driverless cars. I'm speaking only for myself, and don't know anything more than the general public does about Waymo.)

