That may be true. However, self-driving cars are an entirely different matter. While they are really cool, do you really want to be in one hurtling down the highway at 85 MPH (I'm in Utah), trusting that the automated systems will know the difference between a coyote and a tumbleweed?
Yes. In fact, hopefully it's going (much) faster, since self-driving cars will be so much more reliable than meat-popsicle cars.
There are an incredible number of obstacles that a person can instantly recognize but that, even today, a computer can't. If a child and a dog run out into the street at the same time from opposite sides, do you trust the car to make the right decision about which one it will run over?
First, a person can't instantly recognize anything; we have significantly longer reaction times than computer systems do. If a child and a dog run out into the street at the same time, a self-driving car has a better chance of hitting neither of them. A human, on the other hand, will take much longer to start braking at all and, given such short time scales, will in all probability put no actual thought or reasoning behind the reaction: they'll just try to swerve out of the way, possibly hitting both.
How would you like to be legally responsible for your self-driving car if it runs over a child? What about black ice? What if a person is in the road and the car has a choice between running over the person or crashing and possibly killing you? Do you trust the car to make the right decision?
I wouldn't really like to be liable for those things, but I wouldn't like to be liable if I did them myself, either. That said, you are absolutely correct that there are deep legal questions to be answered before we can have ubiquitous self-driving cars. At first blush, the manufacturers seem like the correct place to put the liability, since in a properly designed system the only input the driver should have is the destination. You can obviously expect them to fight tooth and nail to avoid taking on that liability, though. It's a very interesting question, but let's actually try to answer it instead of just saying it doesn't work in the existing legal framework.
As much as I like software (and writing it), there are, IMHO, too many judgement calls for a computer, and in many situations too many for a lot of (supposedly sane) people as well.
The only way I can see self-driving cars really working is to have special roads to carry them. These would be isolated from regular traffic and from most of the usual road hazards. They would be in many ways analogous to a set of railroad tracks.
That's one possibility, yes, but the reason for it would be to keep the dangerous meat-popsicle cars away from the much safer, much faster, much more efficiently packed autonomous cars.
(You don't often see trains running into problems with obstacles; though when they do, the train usually comes out ahead.)
I see you don't watch the news. Trains derail all the time, wrecking much of their cargo, sometimes spilling nasty chemicals.
Once you get to wherever you generally plan on going, you hop off and drive the rest of the way manually.
So what you are proposing is... a train network. That seems to have worked out incredibly well. Sarcasm aside, it absolutely has worked for its use case, but you still see millions of single-occupant cars on the road every day for a reason, and that's not going away.