Congrats on winning a strawman argument.
It's not a strawman argument. Earlier in the conversation, you explicitly wrote:
And while this debate continues, those cardboard boxes containing small children are being run over by drunk or inattentive humans on a regular basis.
The implication there is that, because drunk or inattentive humans regularly run over the hypothetical (and occasionally real) cardboard boxes containing small children, it is OK if humans do it. That's not a strawman; it's a direct response to what _you_ wrote.
That said, did you even win? Clearly we allow babies to be on or near a road, and we still allow cars to drive on those same roads. If we as a society were not ok with babies being run over, then a lot more effort would have been put into preventative measures. We could, for example, put ankle monitors on all babies that make loud noises when they crawl near a road. The fact that that's not even being discussed shows that we're actually ok with babies being run over.
It wasn't really about trying to "win"; it was mostly to clarify your position. Honestly, though, the rest of your paragraph is just wacky. Your solution is to put ankle monitors on babies? The whole conversation is about whether self-driving cars should and do avoid objects on the road. There's an even more detailed argument about the technical problems involved that we're not having, including how to identify objects on the road and weight them by how important it is to avoid them: leaves vs. icy patches vs. gum wrappers vs. fallen trees, and so on.

So, once again: when a human runs over a dog on the road, we can chalk that up to known factors, because we are human and have a reasonable understanding of human nature and behavior in such situations. When a self-driving car does it, however, there are questions we logically have to ask, at this stage of the technology, about why. Those questions include the ones I have mentioned numerous times: Was the car even aware the object was there beforehand, and did it realize it had hit something afterward? If it was aware, did it take any action to avoid the object, and if not, why not? Was it because the object fell onto a list of things considered OK to hit? Was it because it fell onto a list of things considered not OK to hit, but a flaw in the programming didn't actually allow braking or avoidance as options? Was it because the object was unidentified and anything unidentified is considered OK to hit? Etc.
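To make those questions concrete, here is a purely hypothetical sketch of the kind of decision structure they are probing. Everything in it (the class names, the weights, the confidence policy) is invented for this comment; it is not how any real self-driving stack is built:

```python
from dataclasses import dataclass

# Hypothetical avoidance weights: how important it is to avoid each class of
# object. Names, numbers, and policy are invented for illustration only.
AVOIDANCE_WEIGHT = {
    "leaf": 0.0,          # considered OK to drive over
    "gum_wrapper": 0.0,   # considered OK to drive over
    "icy_patch": 0.6,     # avoid or slow down when safely possible
    "fallen_tree": 1.0,   # must brake or steer around
    "animal": 0.9,        # should brake or steer around
    "unknown": 0.5,       # the interesting case: what does the system do here?
}

@dataclass
class Detection:
    label: str         # classifier output, possibly "unknown"
    confidence: float  # classifier confidence, 0..1

def avoidance_priority(det: Detection) -> float:
    """How hard should the planner try to avoid this detection?

    The questions above map directly onto this function: was the object
    detected at all (if not, this is never called), which list did its label
    fall onto, and what policy applies when the label is "unknown" or the
    confidence is low?
    """
    weight = AVOIDANCE_WEIGHT.get(det.label, AVOIDANCE_WEIGHT["unknown"])
    # One possible (and possibly flawed) policy: scale by confidence. A bug
    # or a bad choice here is exactly what an incident review needs to surface.
    return weight * det.confidence

for d in (Detection("leaf", 0.95), Detection("animal", 0.4), Detection("unknown", 0.2)):
    print(d.label, round(avoidance_priority(d), 2))
```

The incident questions above are, in effect, asking which branch of something like this was taken, and why.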
The point is that we have a good idea how a good human driver will react to these things, what they will be experiencing and understanding about the situation, and the various ways they might react. Example: I remember driving along a country road late on a winter night when, ahead of me, an animal started coming over a snowbank. At first I thought it might be a dog, but I had not fully classified it before I braked and came to a stop. That worked out for me, because it was a deer, and not just one: it was the lead deer of a whole herd that raced across the road in front of me. If I had hit them, my car would have been totaled, not to mention that injury or death to myself was a possibility.

So, the question is, why did I brake as soon as I saw the head of an animal breach the top of the snowbank? I didn't know what kind of animal it was. I am not even 100% sure whether my mind had actually identified it as truly an animal at that point, or whether it was just a pareidolia heuristic telling me: probably an animal, probably about to cross the road. I didn't know the actual size yet, and I definitely did not know it was a herd (although, once again, I do know that animals of various kinds travel in groups). I knew there were no closely following cars, which is something I maintain awareness of while driving, so there was no obstacle to braking, and I braked based on a vague possibility of a collision. The gamble paid off massively: a moment of inconvenience stopping, versus a car-destroying and possibly life-ending collision.
So, for me, the important question for self-driving cars is: what will they do in such situations? What is the current state of the art? What does the system see and recognize, and what assumptions does it make about other objects on the road and how they may act and move? What about objects that are not on the road yet? Does it know what a snowbank is? What a deer is? Does it need to recognize something as a deer before it will act as if it may be a deer? You may be uninterested in the details, but I am not.
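The deer story can be restated as a simple expected-cost argument: braking on a vague "probably an animal" signal is cheap, while not braking when it turns out to be a herd is catastrophic. A toy sketch of that trade-off, with numbers I made up, would be something like this (it is not how any actual planner works):

```python
# Toy expected-cost comparison for "brake on an uncertain detection".
# All numbers are invented; units are arbitrary "cost".
COST_OF_BRAKING = 1.0          # a moment of inconvenience
COST_OF_COLLISION = 10_000.0   # totaled car, possible injury or death

def should_brake(p_hazard: float, rear_clear: bool) -> bool:
    """Brake if the expected cost of not braking exceeds the cost of braking.

    Mirrors the reasoning in the story: with no one following closely
    (rear_clear), even a low-probability hazard justifies braking.
    """
    if not rear_clear:
        # In reality this branch would trade off rear-end risk instead;
        # omitted to keep the sketch small.
        return False
    return p_hazard * COST_OF_COLLISION > COST_OF_BRAKING

# "Probably an animal, probably about to cross": even a 1% guess is enough.
print(should_brake(p_hazard=0.01, rear_clear=True))      # True
print(should_brake(p_hazard=0.00005, rear_clear=True))   # False
```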
Let me give you some other examples using humans. Once upon a time, my girlfriend had a best friend who gave a restaurant a drive-through by, well, driving through it, with my girlfriend in the passenger seat. She managed to just miss the diners and the main gas line. On another occasion, years later, my girlfriend and I were sitting in the back seat while this friend was driving and her boyfriend was in the passenger seat. It was night, and we reached a red light. When it turned green, the change freaked her out; she took her hands off the wheel and refused to put them back on, even though she kept driving. Her boyfriend steered from the passenger seat for the next few miles. We have not seen her in ages, but it is a miracle she is still alive. I don't recall whether that was the first time I was in a car with her driving, but it was definitely the last. I would not trust her to drive me around.

Then there's my father. He is, generally speaking, good at driving, but I would not consider him all that good a driver, mainly because he has always had a tendency to drive a bit like he's James Bond in the middle of a car chase. This is especially annoying when you're supposed to be following him somewhere and he drives like you're a tail he's trying to lose. Generally, though, while his driving takes more risks than I would like, he is in control of the vehicle and does not do weird, unpredictable things. I have no problem being driven places by him, although I will note that I cannot sleep when he is driving.

Then there's my girlfriend. She is definitely not as skilled at driving as my father, but from a safety perspective she is a better driver. In some sort of competitive driving situation my father could probably beat her, even at his current age, but the way she drives is statistically safer.
So, let's compare AI driving to those examples. On average, AI drivers may be better than either my father or my girlfriend. They pretty much have to be better than my girlfriend's friend. They won't take many of the risky maneuvers my father would, and they would probably do better at the plain driving style my girlfriend uses, because they have better situational awareness and can deal with some emergency situations better than she can. The question for the AI driver, however, is whether it is completely free of the kind of bizarre things my girlfriend's friend would do while driving. Will it drive through a restaurant? Tests have shown that some self-driving systems actually will if, for example, a convincing-looking road is painted on the obstacle. Will it do things like ignoring the stop signs on school buses? We know that they may, because there's a story up on the main page about Waymos doing exactly that. Will they exhibit bizarre emergent behavior during close maneuvers in parking lots? Yes, we know that will happen, because we hear complaints about them honking their horns all night at depots. The reason for the honking is that the cars automatically warn other cars by honking, and also act defensively by backing up, if another vehicle moves into a danger zone in front of them without stopping. Except that, when backing up, the cars don't respect the same danger zone behind them. So, when one of them backs up with another behind it, it will honk its horn and back up... right into the danger zone of the car behind it, which will then honk its horn and back up... you see where this is going, I hope.

The point is that self-driving cars are at the point where they are good enough for certain uses, but they still may handle edge cases in bizarre, inhuman, unreasoning ways. Without true general AI (and probably even then), the way to handle bizarre edge cases is generally to identify them, come up with a plan for handling them, then develop a way for the system to recognize them and apply that plan. This is clearly an ongoing process for self-driving cars. So, it makes sense, when something that may be an unhandled edge case comes up, not to simply assume that what happened was unavoidable, but to verify whether that is actually the case and, if not, to see whether it could have been handled better. This is not an unreasonable position.
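The depot honking loop described above comes down to an asymmetry: each car reacts to an intrusion into the danger zone in front of it by honking and reversing, but (as reported) applies no equivalent check to the zone behind it while reversing. A toy one-dimensional simulation of that asymmetry, with made-up distances and no relation to any vendor's actual code, shows how one reversal cascades down a line of parked cars:

```python
# Toy sketch of the reported honk/back-up cascade. Distances and rules are
# invented purely to illustrate the asymmetry: each car checks the gap in
# FRONT of it and reacts by honking and reversing, but never checks the
# space BEHIND it while reversing.
DANGER_ZONE = 2.0   # front gap (metres) that triggers honk + reverse
BACKUP_STEP = 2.0   # how far a car reverses per reaction
SPACING = 3.0       # initial nose-to-tail gap, outside the danger zone

def simulate(n_cars: int, rounds: int) -> None:
    # All cars face +x; positions[0] is the rearmost car in the line.
    positions = [i * SPACING for i in range(n_cars)]
    # The lead car starts reversing (say, manoeuvring into a spot) and, per
    # the asymmetry, never checks what is behind it.
    positions[-1] -= BACKUP_STEP
    for r in range(rounds):
        honking = []
        for i in range(n_cars - 1):            # every car except the lead
            if positions[i + 1] - positions[i] < DANGER_ZONE:
                honking.append(i)
                positions[i] -= BACKUP_STEP    # reverse, rear zone unchecked
        print(f"round {r}: honking cars {honking}")
        if not honking:
            break

simulate(n_cars=4, rounds=6)
# In an open line the cascade eventually runs off the end; in a crowded
# depot there is always another car behind, so the honking never stops.
```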
So how well do you understand the human brain? The behavior of that piece of safety-critical autonomous equipment is also unknown and varies widely across the population. And in a few cases where the behavior is known, it acts in fail-dangerous ways.
That is missing the point. For starters, we do actually hold humans to certain standards before they may perform certain jobs. Additionally, holding automated systems to no higher a standard than an unexaminable human brain is ridiculous: unlike a brain, their behavior can be logged, analyzed, and patched.
Even if it were the case that the average self-driving car is worse than a tiny fraction of human drivers, so what? Nobody should be making policies to cater to the extraordinary. Yeah, some humans can pull a commercial jet. That doesn't mean you should replace aircraft tugs with people.
You keep getting hung up on the same problem: confusing general performance metrics with focused performance metrics. There are specific situations that occur while driving that the majority of humans handle quite well but that self-driving cars are either bad at or completely incapable of. The honk/backup behavior I mentioned above is an example. So is the stopping-for-school-buses example. So is the time a self-driving car confused a turning truck for a billboard and had the top of the car (and the passenger) cut off. My point is simply that self-driving is a work in progress, so incidents need to be scrutinized, while your argument is... honestly, I am still not really sure. Are you actually arguing that self-driving is now a solved problem that no longer needs any development? I don't see how you could be arguing from that position, but it seems to me like the only position from which you could complain about my desire for incidents involving self-driving cars to be evaluated and publicly disclosed in detail.
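The general-versus-focused distinction is easy to show with invented numbers (these are not real crash statistics): a fleet can beat humans on the overall, mileage-weighted rate while still being dramatically worse in a couple of specific scenarios, and the aggregate number hides exactly the thing that needs fixing.

```python
# Invented, purely illustrative numbers -- not real crash statistics.
# Each scenario: (share of total miles, human rate, robot rate),
# with rates in incidents per million miles.
scenarios = {
    "routine driving":       (0.98, 4.0, 1.0),
    "stopped school bus":    (0.01, 0.5, 20.0),
    "unusual road markings": (0.01, 1.0, 15.0),
}

def overall_rate(column: int) -> float:
    """Mileage-weighted incident rate (column 1 = human, 2 = robot)."""
    return sum(row[0] * row[column] for row in scenarios.values())

# The general metric says the robot wins...
print(f"overall: human={overall_rate(1):.2f}, robot={overall_rate(2):.2f}")
# ...while the focused metrics flag exactly the scenarios that need work.
for name, (_, human, robot) in scenarios.items():
    if robot > human:
        print(f"{name}: robot is {robot / human:.0f}x worse")
```

That is all "work in progress" means here: the average can look fine while specific, identifiable failure modes still need engineering attention.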
Self-driving cars will probably not beat Max Verstappen on a good day for many decades. If that takes 50 years, are you okay with 2,000,000 people dying as a result of your demand for perfection?
This statement is clear evidence that you do not understand my position. So that we can hopefully reach a mutual understanding, can you please clearly state what you think my position actually is?
No, those are rational reasons to replace the accident-prone humans with self-driving cars. Once we have that, then we can issue a patch that recognizes deer. Then a bit later we can patch more animals.
See, this is the thing. You are not understanding my position. My position is that the work that leads to those patches should be an ongoing process, and that it should be done publicly, so that people clearly understand what the current state of the art in self-driving is and what it isn't. I am all for self-driving cars; I am just not blind to the flaws of current self-driving systems or of the corporations producing them.
There's nothing you can do with human drivers to improve the situation in any way. And yes, we've been trying for many decades.
Yes, there are things we can do, and we're doing them. They primarily include self-driving and self-driving-adjacent features like auto-braking, lane following, etc. Once again, for I don't know how many times now: I am all for self-driving. My entire point is that there are areas that need improvement, so we should scrutinize incidents to understand what happened and what needs to be improved.
Waymo engineers should have the information and they should work on it at some point. But from a policy perspective, it doesn't matter. Humans are already worse on average and even the best drivers have bad days.
Waymo engineers are a corporate black box. I think I see the fundamental problem here: you don't seem to know that Slashdot is supposed to be a nerd site. People here are supposed to care about the details, not just declare them somebody else's problem and not sweat them. Unlike you, I actually want to know what the real state of the art in self-driving is.