Exactly.
The industry fallaciously believed it could leapfrog 10-20 years of technological development, social acclimation, and legal precedent and go from "automobiles with no driver assistance" to "fully autonomous vehicles". It was a stupid assertion 10 years ago and it's a stupid assertion today in the age of "AI" (for very loose definitions of "AI"). Here are the problems:
1. Driving is HARD.
Choosing actions while driving is easy; what's hard is taking in the MASSIVE amount of data that humans do while driving and synthesizing all prior knowledge about driving, context, and human behavior to inform literal millisecond decision-making *resolutely and correctly*. Signage, striping, markings, laws, customs, fashion, novelty, probability, etc. all play a part, with human senses, brains, and reactions being key.
You may say, "Any fool can learn to drive moderately safely," but that says nothing about the ease of driving. Instead, it gives massive context: some of the dumbest and least reliable adult humans can drive mostly safely, yet thousands of scientists with billions of dollars can't make a single self-driving car that is similarly capable. Driving is HARD. It's why simply being buzzed while driving is an absolutely massive risk to your safety and the safety of those around you.
2. Autonomous vehicles must be PERFECT because we haven't sorted out liability.
Even if you could create a nearly perfect level 5 autonomous car right now, no company could handle the lawsuits from wrongful deaths. Last year, nearly 43,000 people died on American roads. Let's say GM's Cruise had a breakthrough and their vehicle reduced fatalities by 99%, meaning that if they replaced every single vehicle in the United States with their nearly perfect autonomous vehicles, only 430 people would die on the roads per year. Everyone would love the idea conceptually... but GM probably wouldn't accept the offer, because they would then be financially liable for the negligent deaths of 430 people per year. They wouldn't survive a single year of litigation.
If a human driver kills another person on the road through negligence, they may do some jail time, pay massive fines and compensation to the victim's family, and lose their driving privileges for some time. You could expect the same of GM in this scenario, except wait... they're going to be responsible for FOUR HUNDRED AND THIRTY deaths, all with the same driver: their autonomous driving program. GM (being a mega-corporation) would end up paying tens of millions of dollars per death and would be forced to remove their "driver" from the road. Now what?
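To put rough numbers on that exposure, here's a back-of-envelope sketch. The 43,000 figure and 99% reduction come from the scenario above; the per-death settlement amount is purely an illustrative assumption, not actual data:

```python
# Back-of-envelope liability exposure for a hypothetical "99% safer" fleet.
# Settlement figure is an assumption for illustration only.

us_road_deaths_per_year = 43_000      # approximate annual US road deaths (from above)
fatality_reduction = 0.99             # hypothetical breakthrough from the scenario
settlement_per_death = 25_000_000     # ASSUMED wrongful-death payout in USD

remaining_deaths = us_road_deaths_per_year * (1 - fatality_reduction)
annual_liability = remaining_deaths * settlement_per_death

print(f"Remaining deaths per year: {remaining_deaths:,.0f}")   # 430
print(f"Annual liability exposure: ${annual_liability:,.0f}")  # $10,750,000,000
```

Even if the per-death figure is off by an order of magnitude, the point stands: a single corporate "driver" concentrates billions in annual exposure on one defendant.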
Just ask yourself: If your child was in a GM level 5 autonomous vehicle and it made a bad decision resulting in your child's death, would you say, "Oh well... we're better off on balance..." or would you go after GM?
Thus, until we either have PERFECT level 5 autonomous vehicles or we acclimate as an entire society to "driver assist" features over decades of vehicle turnover, we'll never change how automotive liability works.