Google recently released data showing that its self-driving cars have been involved in 11 minor crashes over the past six years, which has raised questions about when such autonomous vehicles will be ready for prime time.
The report suggests that most of the crashes were likely due to human driver error, and may not have been preventable, said Steven Shladover, a researcher at the Partners for Advanced Transportation Technology at the University of California, Berkeley.
Still, while some levels of automation are already in existing cars, completely driverless cars — with no steering wheels or brakes for human drivers — would require much more innovation, Shladover said.
"If you want to get to the level where you could put the elementary school kid into the car and it would take the kid to school with no parent there, or the one that's going to take a blind person to their medical appointment, that's many decades away," Shladover told Live Science.
From ultra-precise maps to fail-proof software, here are five problems that must be solved before self-driving cars hit the roadways.
Driving in the United States is actually incredibly safe, with fatal crashes occurring roughly once every 3 million hours of driving. Driverless vehicles will need to be even safer than that, Shladover said.
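To see what that safety bar implies, here is a back-of-envelope sketch. The 3-million-hour figure comes from the article; the average speed is purely an illustrative assumption, not a statistic from the report:

```python
# Rough scale of the safety bar for driverless cars, using the
# article's figure of one fatal crash per ~3 million driving hours.
HOURS_PER_FATAL_CRASH = 3_000_000   # from the article
ASSUMED_AVG_SPEED_MPH = 30          # illustrative assumption only

miles_per_fatal_crash = HOURS_PER_FATAL_CRASH * ASSUMED_AVG_SPEED_MPH
print(f"~{miles_per_fatal_crash:,} miles between fatal crashes")
# → ~90,000,000 miles between fatal crashes
```

Matching, let alone beating, a rate that low is hard even to demonstrate: a test fleet would need to log an enormous number of miles before a statistically meaningful comparison is possible.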
Given existing software, "that is amazingly difficult to do," he said.
That's because no software in laptops, phones or other modern devices is designed to operate for extended periods without freezing, crashing or dropping a call — and similar errors would be deadly in a car. Right now, Google's self-driving cars avoid this by having both a backup driver and a second person as a monitor, who can shut off the system at the first hint of a glitch. But coming up with safety-critical, fail-safe software for completely driverless cars would require reimagining how software is designed, Shladover said.
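One common pattern for detecting a frozen or crashed process is a heartbeat watchdog, similar in spirit to the human monitor who can shut off Google's system at the first hint of a glitch. The sketch below is a hypothetical illustration of that general idea, not anything from Google's actual software:

```python
import time

class Watchdog:
    """Hypothetical safety monitor: if the driving software stops
    sending heartbeats within the deadline, flag it as unhealthy so
    a fallback (human or fail-safe mode) can take over."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def heartbeat(self) -> None:
        # Called periodically by the monitored software while it is alive.
        self.last_beat = time.monotonic()

    def is_healthy(self) -> bool:
        # True only if a heartbeat arrived within the deadline.
        return (time.monotonic() - self.last_beat) < self.timeout_s

wd = Watchdog(timeout_s=0.1)
wd.heartbeat()
print(wd.is_healthy())  # healthy while heartbeats keep arriving
```

A watchdog catches hangs, but it only shifts the problem: the fallback it triggers must itself be trustworthy, which is why safety-critical software for fully driverless cars demands far more than patterns like this one.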
"There is no current process to efficiently develop safe software," Shladover said. For instance, when Boeing develops new airplanes, half of their costs go to checking and validating that the software works correctly, and that's in planes that are mostly operated by humans. [Photos: The Robotic Evolution of Self-Driving Cars]
Nowadays, Google's self-driving cars seem to operate seamlessly on the streets of Mountain View, California. But that's because the company has essentially created a kind of Street View on steroids, a virtual-world map of the town. That way, the self-driving cars know exactly how the streets look when empty, and only have to fill in the obstacles, such as cars and pedestrians, reported The Atlantic.
According to the article, driverless vehicles, with their current sensors and processing, may not be able to operate as smoothly without such a detailed map of the rest of the world. Yet so far, Google has mapped only about 2,000 miles (3,220 kilometers) of the roughly 4 million miles (6.4 million km) of roadway in the United States.
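Those two mileage figures make the scale of the mapping gap concrete. A quick calculation using the article's round numbers:

```python
# Fraction of US roadway Google has mapped in detail (article's figures).
mapped_miles = 2_000
total_us_road_miles = 4_000_000

fraction = mapped_miles / total_us_road_miles
print(f"{fraction:.2%} of US roads mapped")
# → 0.05% of US roads mapped
```

In other words, the detailed virtual maps cover only about one mile in every two thousand.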
Before people can all toss out their driver's licenses, a self-driving car must be able to distinguish between dangerous and harmless situations.
"Otherwise, it's going to be slamming on the brakes all the time for no reason," Shladover said.
For instance, potholes or a nail lying in the road are incredibly hard to spot until just before a car hits them, while a paper bag floating across the highway may be very conspicuous but not very dangerous, he said.
The cars also need to decide in sufficient time whether a pedestrian waiting on the sidewalk is likely to walk into traffic, or whether a bike is going to swerve left. Human brains do a masterful job of sorting and reacting to these hazards on the fly, but the current crop of sensors just isn't equipped to process that data as quickly, Shladover said.
Once driverless cars begin to proliferate, they will need a much better way to communicate with other vehicles on the road. As different situations emerge, these cars will need to flexibly adjust to other cars on the roadways, reroute on the fly and talk to other driverless cars. But right now, communication among individual self-driving cars is minimal.
"If they don't have the communication capability, they will probably make traffic worse than it is today," Shladover said.
And then there are the ethical issues. Sometimes, a driver must decide whether to swerve right or left, for instance — either injuring three people in a truck or potentially killing a person on a motorcycle. Those types of ethical dilemmas would require the software in a self-driving car to weigh all the different outcomes and come to a final solution on its own.
A machine that can do that would be unprecedented in human history, Shladover said. Even drones that target enemies in war are remotely operated by a human who has final say about the killing, he added.
"There's always a human on the other side who has to make that decision about using deadly force," Shladover said.
Originally published on Live Science.
Tia is the managing editor and was previously a senior writer for Live Science. Her work has appeared in Scientific American, Wired.com and other outlets. She holds a master's degree in bioengineering from the University of Washington, a graduate certificate in science writing from UC Santa Cruz and a bachelor's degree in mechanical engineering from the University of Texas at Austin. Tia was part of a team at the Milwaukee Journal Sentinel that published the Empty Cradles series on preterm births, which won multiple awards, including the 2012 Casey Medal for Meritorious Journalism.