Autonomous driving is a hard AI problem with many constraints and the capacity to actually kill people, both inside and outside the car, so it must be taken very seriously. The mere idea of privately developed, closed-source software driving tons of hardware at highway speed without supervision is enough to evoke nightmares.
AI has made a lot of progress lately. The moderately hard AI problems we can solve today include facial recognition, language translation, categorising the content of images, and helping with the diagnosis of some illnesses such as skin cancer. These systems are incredibly useful, but notice that they still have a non-zero error rate. The current state of the art on ImageNet image classification has an error rate in the order of 3%, and such models are relatively easy to fool. If you have ever used Google Translate, you know its limitations.
Autonomous cars need a precise image of their surroundings out to a few hundred metres in all conditions (day, night, wet, etc.), updated very frequently, and they need to be able to anticipate other drivers as well as other hazards on the road. The technology is simply not there yet, even with the $70k sensor suites that Waymo uses.
We all want this technology to arrive and to be better than humans at driving. This sounds easy, since most of us drivers think everyone else drives like an idiot; however, the fatality rate in cars is in the order of 1 death per 100 million miles travelled, which is actually very low. This presents a challenge to the autonomous driving research community: to certify that their system is actually better than humans, they would have to travel significantly more than hundreds of millions of miles, in real, not simulated or recorded, conditions. The cost of doing this is astronomical, and it must be repeated every time a new version comes out.
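To get a feel for the scale involved, here is a minimal back-of-the-envelope sketch. It assumes the standard "rule of three" style bound from statistics (observing zero failures over an exposure of n miles gives a 95% confidence upper bound on the true rate of roughly 3/n), and uses a human fatality rate of about 1 per 100 million miles; the exact figures are illustrative, not from the RAND report itself.

```python
import math

# Assumed human fatality rate: roughly 1 death per 100 million miles,
# as cited in the text (RAND uses a similar figure).
human_rate = 1.09e-8  # fatalities per mile

# If a fleet drives n miles with zero fatalities, a 95% confidence
# upper bound on its true fatality rate is about -ln(0.05) / n ≈ 3 / n
# (the "rule of three"). To claim the system beats humans at 95%
# confidence, we need that bound below human_rate, i.e.
# n > -ln(0.05) / human_rate.
confidence = 0.95
required_miles = -math.log(1 - confidence) / human_rate

print(f"Miles needed with zero fatalities: {required_miles / 1e6:.0f} million")
```

Under these assumptions the answer comes out to roughly 275 million fatality-free miles, and that is the optimistic case: if any fatality occurs, or if you want to show the system is meaningfully safer rather than merely no worse, the required mileage grows further, and the whole exercise restarts with each new software version.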
Trusting this software is going to be very hard in practice.
Source: RAND Corporation.