In March 2018, one of Uber’s self-driving cars became the first autonomous vehicle reported to have killed a pedestrian. A new report by U.S. authorities suggests that the accident occurred because the car’s software was programmed to ignore jaywalkers.

What happened: The National Transportation Safety Board released the results of an investigation into Uber’s self-driving AI. According to the agency’s analysis, the model failed to classify the victim as a pedestrian because she wasn’t near a crosswalk, a cue the model relied on to identify people in the roadway.

What the report says: The vehicle’s computer log from the moments leading up to the crash highlights a number of flaws in the system:

  • The vehicle’s radar first picked up the victim 5.6 seconds before impact. The self-driving Volvo SUV was in the far right lane, while the pedestrian was walking her bicycle across the street from the left. The system classified her as a vehicle but didn’t recognize that she was moving.
  • The lidar pinged the victim repeatedly over the next several seconds. The system assigned various classifications — car, bicycle, “other” — but it didn’t associate one classification with the next. It reset the tracking system each time and thus didn’t recognize that she was moving into the car’s path (a simplified sketch of this failure mode follows the list).
  • 1.5 seconds before impact, the victim was partially in the SUV’s lane, so the system generated a plan to swerve around the obstacle, which it considered to be unmoving.
  • A few milliseconds later, the lidar identified her as a moving bicycle on a collision course. The system abandoned its previous plan, which didn’t account for the bicycle’s motion.
  • Uber’s developers had previously disabled the system’s emergency steering and braking functions because they were known to behave erratically. Instead, the vehicle began to decelerate gradually; 1.2 seconds before impact, it was still traveling at 40 miles per hour.
  • One second later, the self-driving system alerted its human safety driver that it had initiated a controlled slowdown. The safety driver grabbed the wheel, disengaging the self-driving system. The SUV struck the victim, and the driver slammed on the brakes.
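To make the tracking flaw in the second bullet concrete, here is a minimal, hypothetical sketch (illustrative Python only, not Uber’s software; the Track class, the detection values, and the velocity estimate are all assumptions invented for this example). When the track history is wiped on every reclassification, the object appears stationary; when the history is kept, even a crude estimator sees it drifting into the lane.

```python
# Hypothetical illustration only -- not Uber's code. Shows how discarding an
# object's track history on every reclassification hides its motion.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Track:
    # Lateral offsets from the SUV's lane (meters), one per detection.
    positions: List[float] = field(default_factory=list)

    def update(self, position: float) -> None:
        self.positions.append(position)

    def velocity(self) -> float:
        # Crude per-observation velocity estimate; zero if there is no history.
        if len(self.positions) < 2:
            return 0.0
        return self.positions[-1] - self.positions[-2]

def final_velocity(detections: List[Tuple[str, float]],
                   reset_on_reclassification: bool) -> float:
    """Feed (label, position) detections to a single-object tracker and
    return the last velocity estimate."""
    track = Track()
    last_label = None
    for label, position in detections:
        if reset_on_reclassification and label != last_label:
            track = Track()  # history discarded, as the report describes
        track.update(position)
        last_label = label
    return track.velocity()

# Made-up detections: the classifier flip-flops while the object closes in on the lane.
detections = [("vehicle", 6.0), ("other", 4.5), ("bicycle", 3.0), ("other", 1.5)]

print(final_velocity(detections, reset_on_reclassification=True))   # 0.0  -> looks stationary
print(final_velocity(detections, reset_on_reclassification=False))  # -1.5 -> moving toward the lane
```

The point is not the arithmetic but the design choice: a tracker that ties motion history to a stable classification loses that history exactly when the classifier is least certain.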

Aftermath: Immediately after the accident, Uber took its autonomous test vehicles off the road. The victim’s family sued the company and settled out of court. Uber has since resumed self-driving tests in Pittsburgh (issuing a safety-oriented promotional video to mark the occasion). Responding to the NTSB report, Uber issued a statement saying the company “has adopted critical program improvements to further prioritize safety” and “look[s] forward to reviewing their recommendations.”

Why it matters: Next week, the NTSB will hold a hearing at which it will announce its judgment of Uber’s role in the accident. Federal legislators and state authorities will be watching; the hearing is likely to produce a number of recommendations for ensuring that the self-driving car industry operates safely.

We’re thinking: Government oversight is critical for progress on autonomous vehicles, both to hold companies accountable for safety and to ensure that safety information is widely disseminated. Regulation has made safety a given in commercial aviation; airlines compete on routes, pricing, and service, not on how safe they are. Similarly, the autonomous vehicle industry’s commitment to safety is something consumers should be able to take for granted. And, while we’re at it, let’s build sensible rules for testing AI in other critical contexts such as health care, education, and criminal justice.
