Not sure if the "you" in your post was me or the Googles of the world making self-driving cars. If it's me, I'll just point out that I never proposed that handwringing over decisional ethics was the one thing holding SDCs back.
My point was that questions like the one in TFS are mental masturbation. The question ought to be: are we at a point where they're safer (in aggregate) than humans, driving in real-world conditions? You and I both agree that currently the answer is no. For optics and liability reasons, they'll probably have to be an order of magnitude safer than the average human driver before they gain wide traction. I think that day is closer than you seem to, but that's just fortunetelling.
I disagree about rushing to a blanket ban, and I don't grok your main complaint about jackasses with half-capable systems. Is there a big mod/DIY community out there outfitting their Suburbans with hand-rolled CarLinux or something?
FWIW, I agree the whole "don't worry, the driver's gonna be right there to take over at a moment's notice" line is absurd. I buy it for these early test runs, where the drivers are paid to make sure no one gets flattened by a Google logo, but a real self-driving future actively discourages engagement and driving skill.