Personally, I am slightly baffled whenever this "yeah but human drivers make mistakes too" or "human drivers make more/worse mistakes" whataboutism pops up on /.
To me, it's a diversion tactic to draw attention away from the issue at hand, which is that self-driving cars make mistakes they should not be making. They cause health and safety risks, they cause injuries, even deaths. Don't compare them to human drivers, compare them to other tech. Why should self-driving cars get less stringent safety requirements than other tech? Because human drivers make mistakes too? Sorry, that's not going to fly. You should introduce tech that is safe by design or get off the road. Don't use the real world (the production environment) as your test bed. Other tech developers don't do that, and mostly aren't allowed to anyway. (Microsoft being the glaring exception.)
Can a self-driving car be made safe by design? In theory, yes - but the real-world cases make me wonder. This one, the dog case a day or two before, and a number of earlier ones. I have no doubt that self-driving cars can perform excellently and even surpass most human drivers in many cases - in well-defined, by-the-book cases. The problems arise when things don't go by the book, and in the real world, they seldom do. It's an open world out there and not everybody is playing by the book. Not to mention unexpected accidents, bridge collapses, natural disasters, and so on, which in turn make others around you react in unpredictable ways. Heck, people and other living things are unpredictable by nature. It is simply impossible to list every imaginable situation and tell the AI how to react in each one; the list of possible scenarios and outcomes is endless.
To react correctly in unexpected situations, you need to read the whole situation and react quickly. Humans do this instinctively - they may not always make the right call, but at least their capability to analyze unexpected situations still far exceeds that of an AI trained on a closed set of rules and scenarios.
I am not sure AI at its current level of maturity can be used to handle such unusual situations. What we read in the news doesn't suggest so.
An AGI might be needed, and that is not on today's menu.