OK, here's the thing with this generation of driverless cars: their motion is governed by neural nets. I am going to assume that everyone here is familiar with this programming paradigm. If not, the Wikipedia entry on it is adequate.
While in the end NNs are just another form of Turing machine, currently no one can divine the algorithm of a trained neural net well enough to express it in IF THEN ELSE WHILE form.
That means that, given a trained NN which is 100% correct 100% of the time, no one could write an imperative or procedural (broadly speaking) program which captured the logic the neural net is de facto using to solve the problem (NNs don't have IF THEN ELSE logic except what is implicitly embedded in their activation rules).
That means the algorithm the NN has arrived at is not open to analytical inspection and confirmation, except very indirectly.
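To make the point concrete, here is a minimal sketch (all weights hand-picked for illustration, not taken from any real system): a tiny network that computes XOR correctly, yet whose "logic" lives entirely in the numbers. There is no IF THEN ELSE to read off, only weighted sums and activations; a net actually trained by gradient descent would be far less interpretable than this.

```python
def relu(x):
    # Standard rectified-linear activation.
    return max(0.0, x)

def xor_net(x1, x2):
    # Two hidden units and one output. The weights below happen to
    # implement XOR, but nothing in the code says so explicitly.
    h1 = relu(1.0 * x1 + 1.0 * x2)
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)
    return 1.0 * h1 - 2.0 * h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # matches a XOR b
```

The behavior is fully correct, but inspecting the weights tells you almost nothing about *why*; that is the inspection problem in miniature.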
This is OK for a wide variety of predictive tasks in which human life does not hang in the balance. In medicine, the diagnostic results from NNs and even Good Old Fashioned AI expert systems are reality-checked by human doctors.
Neural nets ALWAYS run the risk of coming to the right conclusion for the wrong reason often enough to fool humans into thinking the net "understands" the problem domain in a way that is analogous to a human. A NN so trained will fool or lull human observers into a false sense of security until the BIG ACCIDENT happens, and then a post-mortem reveals the shocking truth about what the NN was actually focusing on to make its decisions.
The Big Idea behind NNs is that, through a combination of evolutionary forces and billions of iterations, the NN will learn using the same Hebbian activation principles the brain appears (for now) to use, and that with enough training, the exceptional cases I am describing will be found and rooted out.
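For reference, the Hebbian idea ("neurons that fire together wire together") reduces to a very simple update rule. This is a toy sketch with an invented learning rate and patterns, not any particular training algorithm:

```python
def hebbian_update(w, pre, post, lr=0.1):
    # Each weight grows in proportion to the product of presynaptic
    # and postsynaptic activity; nothing else constrains it.
    return [wi + lr * pre_i * post for wi, pre_i in zip(w, pre)]

w = [0.0, 0.0]
# Repeatedly present a pattern where only the first input is active
# while the output fires: that connection strengthens, the other
# stays flat.
for _ in range(10):
    w = hebbian_update(w, pre=[1.0, 0.0], post=1.0)
print(w)  # first weight strengthened toward 1.0, second unchanged
```

Note what the rule rewards: correlation, not correctness. Whatever cue happens to co-occur with the right answer gets reinforced, which is exactly how the "wrong reason" failures above can take root.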
But even in nature, this doesn't happen reliably. Take, for example, the Australian Jewel Beetle. Over perhaps millions of years, it has of course evolved a robust way to recognize desirable mates and procreate. That is as basic an evolutionary task as you can imagine: it has to work or the species is doomed.
However, the male's algorithm for mating is not as robust as you might imagine. It seems that what males rely on to select a mate is a very, very limited set of perceptual cues. As it turns out, the male is looking for big, glossy, brown, curved things. When it sights one, it alights and starts humping away.
Well, Australian beer bottles fit this description *and fit it better than the female of the species*. People toss empty beer bottles in the outback, and the result was that the male beetles preferred the beer bottles to such a degree that the species was headed for extinction. Australia had to pass a law to change the appearance of its beer bottles.
http://blogs.scientificamerica...
This is a cautionary tale for those who think evolutionary forces produce only *robust* algorithms. What evolution actually produces is *good enough so far* algorithms. What well-trained NNs produce are similarly good-enough algorithms. In both cases we have to do science to get at what it is they are relying on: what features they are *really* trained on. And we don't know there's a problem until tragedy happens, and we don't know how ridiculous the problem is until we do the science.
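The beetle's failure mode translates directly into code. Here is a deliberately silly sketch (all feature names and numbers invented) of a "good enough so far" classifier: it scores perfectly on everything it was ever tested against, and the shortcut it relies on only becomes visible when a new kind of object enters the world:

```python
def looks_like_mate(glossy, brown, size):
    # The learned shortcut: big glossy brown thing => mate.
    return glossy and brown and size > 0.5

# The "training" distribution: female beetles vs. rocks and leaves.
train = [
    ({"glossy": True,  "brown": True,  "size": 0.6}, True),   # female beetle
    ({"glossy": False, "brown": True,  "size": 0.6}, False),  # rock
    ({"glossy": True,  "brown": False, "size": 0.2}, False),  # leaf
]
assert all(looks_like_mate(**x) == y for x, y in train)  # 100% accurate

# Deployment: a beer bottle matches the cue even better than a female.
beer_bottle = {"glossy": True, "brown": True, "size": 0.9}
print(looks_like_mate(**beer_bottle))  # the shortcut misfires: "mate"
```

Nothing in the training data could reveal the flaw, because the spurious cue and the true category were perfectly correlated there; that is the "we don't know until tragedy" problem in three lines.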
This is different from procedural programming which, the Halting Problem notwithstanding, CAN be analytically examined for correctness. Procedural-style programming plus sensors is what runs water stations, trains, planes, etc. The military does use NNs to try to recognize things, but it has humans making the final decision, and when the missile gets launched, it's not left to a NN to decide where it finally lands.
Moreover, self-driving cars under the control of a NN can and will be attacked by the usual mix of 14-year-old kids, pranksters, criminals and terrorists, each of whom will look for the hidden fragility, the Black Swan, in those algorithms.
I would not drive my car straight into a bridge abutment at full speed without braking because a prankster had taken chalk and marked up that abutment to crudely resemble a road. That is the stuff of Road Runner cartoons. But with a NN, it's entirely possible such a ruse would play beer bottle to the NN's Australian Jewel Beetle. In fact, a successful hack wouldn't even have to resemble a road in any recognizable way; the hacker only has to use their creativity to find and mimic whatever it is the NN is focusing on.
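The adversarial-examples literature has formalized exactly this attack: nudge each input feature a tiny amount in whichever direction most increases the model's score, in the spirit of the fast-gradient-sign method. A toy sketch on an invented linear "road detector" (weights and inputs made up for illustration) shows how small, meaningless-looking changes flip the decision:

```python
def sign(v):
    # -1, 0, or +1 depending on the sign of v.
    return (v > 0) - (v < 0)

def score(w, x, b):
    # Linear model: positive score means "road", negative means "not road".
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w = [2.0, -3.0, 0.5]   # toy detector weights
b = -1.0
x = [0.2, 0.3, 0.4]    # a scene the model correctly rejects
eps = 0.4              # size of the per-feature nudge

# Attacker's move: push every feature eps in the direction the
# model is most sensitive to -- no need to look like a road at all.
x_adv = [xi + eps * sign(wi) for xi, wi in zip(x, w)]
print(score(w, x, b))      # below 0: "not a road"
print(score(w, x_adv, b))  # above 0: "road"
```

Real NNs are nonlinear, but the principle carries over: the attacker optimizes against whatever the net is *really* keyed on, not against what a human would call a road.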
Think of the liability. You couldn't even create a click-n-fuck-you TOS since innocent bystanders will always be put in peril.
There are lots of alternative ways to achieve the interesting goal of creating robust self-driving cars, including using sensors on the road and cheap, defined tracks à la a train or trolley but with alternative technology. Commercial jet planes are essentially self-flying, owing to a combination of restricting airspace so as not to create random chaos (birds are a problem) and well-defined, predetermined trajectories.
Actually, we've made that move with respect to road transportation once before: when we switched from horses to cars. With a horse, you went anywhere the horse could travel, using any trajectory you felt like. The world was your road. Then we traded that freedom (and lots of people didn't like the loss of that freedom) for efficiency, speed and capacity.
We achieved that by doing something which would have been thought logistically and financially impossible: we paved the crap out of the entire land mass.
So too we can rework our existing road system to accept virtual electronic or even physical "tracks" that cars are outfitted to run on. Yes, there is a loss of latitudinal freedom implied there, but the recapture of the time and attention we spend on driving will probably be such that most people go along with it.
NNs and evolutionary or genetic programming are having a heyday, and lots of researchers and investment money are captivated by the idea. AI in general is currently going through one of its cyclical periods of hype and wild enthusiasm. People like Kurzweil declare that "the singularity is near," and every paper and blog declares that soon computers will be smarter than we are, that they are going to take all our jobs, and wonders whether they will keep us around after they learn to pop the tab on a beer can and find the channel the Bachelorette is on, and all this kind of talk.
We've been here before, in the 60s then again in the 80s. Since those times we've gotten the electronic calculator, Adobe Photoshop and the Internet. Do you feel obsoleted by any of those?
Self-driving cars via NN are, most likely, another interesting experiment from which we'll learn a lot but which will ultimately fail to live up to the hype.