You can't synthesize a general rule from systemic failures? Keep It Simple Shithead.
Planes do fail due to software errors.
http://en.wikipedia.org/wiki/Q...
http://it.slashdot.org/story/1...
http://en.wikipedia.org/wiki/A...
Antilock brakes are very simple systems, and you have a mechanical backup as well. But, for the record, I don't like computer-controlled brakes. I drive a mechanical car.
If ABS does fail or malfunction, I doubt anyone is keeping track of how or when. Since no one keeps track, you can't perceive systemic failure as a problem. They'd have to fail massively for anyone to care.
Surgical robots don't operate very often, and frankly I certainly don't want a piece of software cutting on me. It's not outlawed for the same reason automated cars aren't outlawed: not enough experience to perceive failure, and an unwillingness to acknowledge failure when it does happen. And civilized countries allow voting via computer programs as well - the ultimate in unperceivable failure.
Pacemakers can fail via deliberate malware infestation, or an EMP attack or accident, or a software bug. Just because you don't know of a failure doesn't mean it doesn't happen.
Here are some automated-software injuries:
http://en.wikipedia.org/wiki/T...
http://www.ccnr.org/fatal_dose...
As to your point about a software bug on Twitter being different from a software bug in a car running half a billion lines of code:
You make my point for me. Twitter failed from one point. Just one point. Half a billion lines of code have damn near an infinite chance of:
1. Failure through complexity. Any real-world programmer knows that hyper-complex systems can have cascading weirdness.
2. Failure through sensor failure, processor failures, bus failures, and similar failures we can't anticipate.
http://it.slashdot.org/story/1...
http://www.cs.tau.ac.il/~nachu...
And Google's robot car had to be rebooted twice during its certification run.
3. Failure through the inability to program a computer to anticipate all the possibilities a car faces while swarming with other cars in a real-world situation. One can't program that.
4. Failure through vulnerability to outside attack. Software on a network is very vulnerable; one hundred percent so. Physically, a high-energy radio pulse fired at a car, or a whole highway of cars, would cause carnage. Carnage meaning mutilation and death - what happens when steel boxes swerve randomly around at 70 mph with no driver.
5. The problem isn't about ALL cars failing. One car can fail and crash the cars around it. For the system to work, all cars have to work 100% perfectly all the time.
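To put rough numbers on point 5, here is a minimal sketch of the arithmetic. The failure rate and fleet size below are assumed, purely illustrative figures, not real data - the point is only that a tiny per-car failure probability multiplied across a large fleet still produces regular failures:

```python
# Illustrative only: p and n are assumed numbers, not measured data.
# If each automated car fails independently with probability p in a
# given hour, the chance that NO car in a fleet of n fails that hour
# is (1 - p) ** n, and the expected number of failures is n * p.
p = 1e-6        # assumed per-car, per-hour failure probability
n = 100_000     # assumed number of cars on the road at once

p_all_ok = (1 - p) ** n
expected_failures = n * p

print(f"Chance of a fully failure-free hour: {p_all_ok:.3f}")
print(f"Expected failures per hour: {expected_failures:.2f}")
```

Even with a one-in-a-million hourly failure rate, a 100,000-car fleet sees a failure roughly every ten hours - and per point 5, each one can take the cars around it with it.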
A car: the driver is eating a sandwich. A car computer failure could crash the car instantly, depending on the circumstances. Carnage.
An airplane: the plane is, generally speaking, high in the air most of the time. If the computers somehow fail, the pilot can take control with enough time to avoid contact with other planes or the ground.
Car: failure gives milliseconds to react, and the car may not even let you drive. Plane: seconds or minutes to recover and land.
I'm only pointing out the obvious failure points. Others will happen. I wistfully recall posting on Slashdot about the vulnerability of an NFC card to being read without the owner's knowledge; I was mocked as an ignoramus. I simply pointed out that physics didn't rule out building a concealed reader, or a very powerful pulse generator. Both have since happened.
I await the stories of failed robot cars in the coming years, and either the panicked response or the determined refusal to acknowledge a problem.
Try driving your own cars. If that isn't safe - and it ISN'T; cars kill more people than wars - think about building decent public transit.