Comment Re:Baby steps (Score 1) 289
I'm equally sure that there will be exponentially more situations where standard automation will make better decisions, and produce better outcomes, than average (or even well above-average) human drivers.
I absolutely agree with you that there are probably already "exponentially more situations where standard automation will make better decisions." Human drivers make stupid decisions all the time -- driving too fast, following too closely, changing lanes abruptly without signaling, etc. But thankfully, humans are also adaptable enough to deal with a lot of the unexpected situations that arise from those bad decisions.
I'm less certain whether I agree with you that AI will "produce better outcomes" in "exponentially more situations" anytime soon, mostly because of articles like this one. It sounds like AI is great for dealing with the expected, and it probably survives well by having detailed information about the route along with pointedly NOT making all those poor decisions that human drivers make (i.e., actually using a safe following distance, not weaving between lanes, etc.).
But the question is -- in real life, where significant adaptability is required -- which factor will win out? Will AI perform better because all of those "better decisions" prevent more accidents, or will AI's lack of adaptability cause more accidents than all the "better decisions" prevent? What really matters is the number of serious and fatal accidents per X number of miles -- an AI may make "better decisions" than a human 99% of the time, but it's those 1% of cases where accident avoidance is critical that adaptability matters... and if AI doesn't have it, AI's stats may not beat humans' in terms of outcomes for a while.
I tend to agree with GP on this: it will be decades before AI achieves enough adaptability to handle ALL roadway conditions on unknown roads (or at least roads with unknown novel hazards) well enough to outperform GOOD human drivers (not stupid humans who drive like maniacs).
That doesn't mean that AI won't be able to perform well under controlled conditions on well-known routes -- the question is just when that limited functionality becomes good enough for drivers, safe enough that regulatory agencies will allow it to be sold to anyone, and safe enough that the legal problems that could arise (liability issues, insurance issues, etc.) can be adequately resolved.
I'm sorry, but "there will always be situations where a human performs better than AI" sounds an awful lot like "I won't wear a seat belt because it might trap me in a burning car".
I really don't mean to be a jerk about this, but didn't you just utter pretty much those exact words? From earlier in your post:
I'm sure that there will always be a few situations where a skilled human driver will make better decisions, and produce better outcomes, than standard automation.
So, given that you said that and that you were "sure" of that statement, does that mean you also don't wear a seat belt because you're afraid of dying in a car fire? Just wonderin'.