The short answer is "No." Self-driving cars can never really be safe. They will be safer! So much safer that it's worth a few minutes to understand why.
Humans are very dangerous
First and foremost, according to the National Highway Traffic Safety Administration (NHTSA), 90% of all traffic accidents can be blamed on human error. Next, according to the AAA Foundation for Traffic Safety, nearly 80% of drivers expressed significant anger, aggression, or road rage behind the wheel at least once in the past year. Alcohol-impaired driving fatalities accounted for 29% of the total vehicle traffic fatalities in 2015. And, finally, of the roughly 35,000 annual traffic fatalities, approximately 10% (3,477 lives in 2015) are caused by distracted driving.
Remove human error from driving, and you will not only save a significant number of lives, you will also dramatically reduce the number of serious injuries associated with traffic accidents. (There were more than 4.4 million traffic accidents in the United States in 2015.)
Data begins to make a case
In May 2016, a 40-year-old man named Joshua Brown died behind the wheel of a Tesla cruising in Autopilot mode on a Florida divided highway. He was the first.
Rage against the machine quickly followed, along with some valid questions about whether Tesla had pushed this nascent technology too fast and too far. Everyone expected the accident to be the fault of a software glitch or a technology failure, but it was not.
The NHTSA investigation found that "a safety-related defect trend has not been identified at this time and further examination of this issue does not appear to be warranted." In other words, the car didn't cause the crash. But there was more to the story. The NHTSA's report concluded: "The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation." In reality, while Mr. Brown's death was both tragic and unprecedented, the investigation highlighted a simple truth: semiautonomous vehicles crash significantly less often than vehicles piloted by humans.
What do you mean by 'safe'?
The same NHTSA report mentioned that automakers representing 99% of the U.S. auto market had agreed to make Automatic Emergency Braking (AEB) standard on all new cars by 2022, a move projected to prevent 28,000 crashes and 12,000 injuries by 2025. The AEB program addresses only rear-end crashes, but there are a host of other semiautonomous features in the works -- and by the numbers, all of them will make us safer.
That said, this is very new technology, and regulators will need to define what they mean by "safe." Must our autonomous vehicles drive flawlessly, or do they just need to be better at it than we are? The RAND Corporation think tank puts the bar in perspective: "A fleet of 100 cars would have to drive 275 million miles without failure to meet the safety standards of today's vehicles in terms of deaths." For comparison, at the time of the fatal May 2016 crash, Tesla owners had logged 130 million miles in Autopilot mode.
Transitioning to fully autonomous vehicles
In April 2016, Ford, Google, Lyft, Uber and Volvo Cars established the Self-Driving Coalition for Safer Streets to "work with lawmakers, regulators, and the public to realize the safety and societal benefits of self-driving vehicles." They have their work cut out for them.
In January 2017, Elon Musk tweeted that a software update featuring Shadow mode was being pushed to all Teslas with HW2 Autopilot capabilities. This enabled the car's autonomous driving AI to "shadow" its human drivers, comparing the decisions it would have made with the decisions the human driver actually made. Think of it as self-driving AI in training.

The auto industry and several tech giants are working as fast as they can to make autonomous vehicles mainstream. To speed the process, they may need to share some data. Will they? My guess is, absolutely.
Hacks and crashes
In September 2016, Chinese researchers discovered some "security vulnerabilities" in the Tesla Model S and remotely hacked into the car -- notable because it was the first time anyone had remotely hacked a Tesla.
We have a thesis here at The Palmer Group: "Anything that can be hacked, will be hacked." Is this going to be an issue? Yes, but it's also going to be an arms race. I'm betting on the good guys, but to be fair, hacking across every digital touchpoint is a never-ending battle. We will do our best to combat the bad guys.
As for computer crashes, yes, it is possible for the computer that runs your self-driving car to crash, but it will happen so infrequently that, by the numbers, you will be significantly safer in an autonomous vehicle than if you were driving yourself.
Fear and assessment of risk
Some people are afraid to fly. When you point out that flying is the safest form of travel by several orders of magnitude, the response is always some version of, "But when a plane crashes everyone dies."
Human beings are not very good at assessing risk. If you don't have a gas pedal, a brake pedal, or a steering wheel, and your car crashes, you will feel helpless and out of control. And you may die. But, by the numbers, tens of thousands of people will not die or be injured, because semiautonomous driving and ultimately fully autonomous driving will be much safer than pure human driving. Some will counter that it's cold comfort if you're the one who is killed or injured, no matter how rare it is. I agree. But again, by the numbers, if you were going to make a policy decision for our society at large, you have to agree that saving tens of thousands of lives and preventing millions of injuries is a worthy endeavor.
I'm pretty sure that before 2030, if you are under the age of 25 or over the age of 70, you are going to need a special permit to manually drive a car. I'm also pretty sure that you will not be allowed to manually drive on certain streets and highway lanes because you will pose too great a threat to the caravans of autonomous vehicles on those roads.
With any luck, the fear-mongers and bureaucrats will get out of the way, and we will all be much safer sooner.