If the promise of self-driving cars includes the idea that they will rarely be in error, and will at least be far less likely to err than a human driver, then it seems probable to me that a human staying alert and intervening to correct the self-driving car may actually make things worse, both legally and in terms of actual outcomes.
Also, market forces might push insurance companies to offer lower rates for self-driving cars, and eventually much lower rates, because they know they will almost never have to pay out and they want that market. At some point, courts and the general public will figure out that in any given collision it was almost certainly the fault of the human-driven car, and so liability may end up falling nearly always on the human driver, driving up insurance for cars built for, or intended to be used by, human drivers.
Human drivers might end up being priced out by the rising costs of fuel (self-driving cars will probably be more economical), liability, and so forth. Once a car can truly drive itself, we can probably have pretty damn cheap self-driving taxis and "minivans" that use algorithms to pick up multiple people in a small area who want to go to a similar place, further driving down the costs of transportation, fuel, liability, etc.
It might also have an impact on health care costs, since crashes cost the state, insurers, and patients a lot of money in hospital care. And avoiding the negative economic impact of losing valuable people (aren't we all valuable?) to car crashes will probably also fuel legislation that makes it ever harder, costlier, or outright illegal for a human to drive a car on a public road.