That may be true. However, self-driving cars are an entirely different matter. While they are really cool, do you really want to be in one hurtling down the highway at 85 MPH (I'm in Utah) and trusting that the automated systems will know the difference between a coyote and a tumbleweed? There are an incredible number of obstacles that a person can instantly recognize that, even today, a computer can't. If a child and a dog run into the street at the same time from opposite sides, do you trust the car to make the right decision about which one it will run over? How would you like to be legally responsible for your self-driving car if it runs over a child? What about black ice? What if a person is in the road and the car has a choice between running over the person or crashing and possibly killing you? Do you trust the car to make the right decision?
As much as I like software (and writing it), there are, IMHO, too many judgement calls for a computer, and in many situations too many even for a lot of (supposedly sane) people.
The only way I can see self-driving cars really working is to build special roads to carry them, isolated from regular traffic and from most of the usual road hazards. These would be in many ways analogous to a set of railroad tracks. (You don't often see trains running into problems with obstacles -- though when they do, the train usually comes out ahead.) Once you get near where you plan on going, you jump off and drive the rest of the way manually.