The timing of this year’s National Autonomous Vehicle Day (May 31) couldn’t have been worse.
Car manufacturers (and other companies such as Google and Uber) have been tirelessly working to make self-driving cars a reality, and it has been accepted for some time that autonomous vehicles are an inevitability.
However, this week saw the publication of a report by the US National Transportation Safety Board (NTSB), highlighting oversights in Uber’s driverless testing programme that resulted in the death of a 49-year-old pedestrian in Arizona back in March.
The car’s sensors detected the woman, but the self-driving system failed to slow or stop because its software did not classify the object as a person pushing a bicycle. As a result, the car struck her at about 40mph.
The car’s emergency braking system had been disabled, and Uber had not installed a warning system to alert the human safety driver, who was expected to take control in an emergency, the NTSB’s report also found. Uber said the decision was intended to “reduce the potential for erratic vehicle behaviour”, claiming that other road users could be confused by a car that is too cautious and stops too often or too abruptly.
Pravin Varaiya, professor of electrical engineering and computer sciences at the University of California, Berkeley, said that emergency braking was switched off to give a comfortable ride. In an article published by the Financial Times, he said: “This may be a persistent problem in autonomous vehicles: too many false positives will entail too much emergency braking, which no one will tolerate.”
This isn’t the first time a self-driving car has been involved in a fatal road accident. In the same month as the Uber incident, a self-driving Tesla Model X SUV slammed into a concrete highway lane divider and burst into flames. The driver died shortly afterwards at the hospital.
Remarkably, this is the second fatal collision to occur while Tesla’s semi-autonomous Autopilot function was controlling the car. Tesla confirmed that Autopilot was engaged and that the car’s adaptive cruise control following distance was set to the minimum. Visual and audio warnings are triggered if the driver takes their hands off the wheel for too long, and the car will bring itself to a safe stop and turn on its hazard lights if the driver doesn’t respond.
Data pulled from the wrecked car suggests that the driver would have had about five seconds and 150 metres of unobstructed view of the concrete barrier before the crash. The idea of playing Tetris or watching a film while our car drives us to the shops is easy to appreciate, but it is clear that while the technology is advancing fast, it still has a long journey ahead of it.