Most people try to pin the blame for an accident on a single cause. Most liability laws are based on this same (erroneous) concept.
Airline accident investigations are very good at demonstrating how an entire chain of events led up to an accident, and how any single factor happening differently could have prevented it. The Concorde crash, for example, required all of the following: (1) debris on the runway from a faulty repair on a previous plane, (2) failure of the Concorde's tires when they struck that debris, (3) failure of the undercarriage to withstand tire debris striking it from a blowout at take-off speed, and (4) the manufacturer having no procedures or provisions for recovering from a double engine failure on a single side, because it was considered so unlikely. Take away any one of those factors and the Concorde doesn't crash.
Safety systems layer multiple accident-avoidance measures on top of each other. This redundancy means an accident happens only when all of those measures fail at once. Consequently, even if the self-driving car was not legally at fault, the fact that it was involved in an accident at all still points to a possible problem. For example: if I'm approaching an intersection and I have a green light, I don't just blindly pass through because the law says I have the right of way. I take a quick glance to the left and right to make sure nobody is going to run their red light, that there aren't emergency vehicles approaching which might legitimately run it, and that there's nobody in the crosswalk parallel to me who might suddenly enter my lane (a cyclist falls over, a dog or child runs out of the crosswalk, etc.).
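To put a rough number on why those overlapping checks matter, here's a minimal sketch of the layered-safety idea. The layer names and failure rates are entirely made up for illustration, not real statistics; the point is only that independent layers multiply, so an accident requires every one of them to fail at the same time.

```python
from math import prod

def accident_probability(layer_failure_rates):
    """With independent safety layers, an accident requires every
    layer to fail together, so the probabilities multiply."""
    return prod(layer_failure_rates)

# Hypothetical rates for the green-light intersection example above:
layers = {
    "other driver runs their red light": 0.001,
    "my glance left/right misses them": 0.05,
    "last-moment braking fails to avoid impact": 0.10,
}

p = accident_probability(layers.values())
print(f"all layers fail together: {p:.0e}")  # ~5e-06

# Drop any single layer and the risk jumps by orders of magnitude,
# e.g. without the glance the remaining product is 0.001 * 0.10 = 1e-04.
for name in layers:
    rest = prod(v for k, v in layers.items() if k != name)
    print(f"without {name!r}: {rest:.0e}")
```

Under this (simplified, independence-assuming) model, an accident is evidence that some layer failed which shouldn't have, because each remaining layer should have made the accident far less likely on its own.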
So even if the autonomous car wasn't legally at fault, that's not the same thing as saying it did nothing wrong. There may still be lessons to learn: safety systems that were supposed to work but didn't, and ways to improve the autonomous car to prevent similar accidents in the future.