Actually, this may not be so bad. If they're not government agencies, then they're not immune to lawsuits, and when they bust into the wrong house, that person can sue the hell out of them, right?
Does this come as a surprise? If they WEREN'T doing this, then the people running the company would be incompetent and should be tossed out the door.
The irony is that he's 180 degrees off from the main problem with his story, which is that he thinks robots are magic too. The reason robots will not be making ethical decisions is that they can't. Getting them to reason ethically would require us to agree on a system of ethics for them to follow, and even if they had such a system, they don't have enough data to act on it with the degree of accuracy the article's premise requires. The author essentially assumes that these car-driving robots will be omniscient, or that they will be able to trust the omniscience of the robots in the other cars they're communicating with. The first supposition is nonsensical; the second is unlikely to be true, for the same reason that video game cheats are a problem.
He does no such thing. He assumes that the programmers who write the algorithms that control the robots will consider various possible responses to an emergency situation and will apply ethical judgment in deciding how to code those algorithms. There may indeed be circumstances where the robot does not have all of the data that would be needed to make a valid ethical decision. Robots will certainly not be omniscient. Their sensors will not be infallible, nor will they be able to accurately discern all of the factors in all of the cases. But that doesn't mean there are no cases in which ethics will play a factor. A robot would almost certainly be able to tell the difference between a bus and a small passenger car, and it's reasonable to assume that the bus carries more passengers than the car, even if there are some cases where that would not be true. If a bus turns left in front of you when you have the right-of-way and the robot calculates that it is unable to avoid a collision altogether, should it hit the bus or swerve into the next lane, hitting the passenger car there? Some variant of that scenario will almost certainly happen if self-driving cars become common, and it's one the algorithm should take into account. It doesn't at all mean the robot-cars are capable of thinking, of calculating ethics, or are omniscient. The question is how the programmers writing the algorithms should code the decision-making tree.
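To make that concrete, here's a minimal sketch of what one branch of such a pre-coded decision tree might look like. This is entirely hypothetical: the vehicle classes, occupancy figures, and harm metric are my own illustrative assumptions, not anyone's real autonomous-vehicle code.

```python
from dataclasses import dataclass

# Rough average-occupancy assumptions per detected vehicle class.
# These numbers are made up for illustration.
ESTIMATED_OCCUPANTS = {
    "bus": 20.0,
    "passenger_car": 1.5,
    "motorcycle": 1.0,
}

@dataclass
class Maneuver:
    name: str
    target_class: str        # vehicle class the maneuver would collide with
    impact_speed_mps: float  # estimated residual speed at impact after braking

def estimated_harm(m: Maneuver) -> float:
    # Crude proxy chosen by the programmer ahead of time: expected occupants
    # involved, scaled by impact energy (proportional to speed squared).
    # A real system would need far richer data than this.
    occupants = ESTIMATED_OCCUPANTS.get(m.target_class, 2.0)
    return occupants * m.impact_speed_mps ** 2

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # The "ethics" live in the pre-coded scoring function, not in any
    # reasoning the robot does at runtime: pick the lowest-scored option.
    return min(options, key=estimated_harm)

options = [
    Maneuver("brake and hit bus", "bus", 5.0),
    Maneuver("swerve into next lane, hit car", "passenger_car", 8.0),
]
best = choose_maneuver(options)
```

The point of the sketch is that the ethical choice is baked in by the programmer through the scoring function; the car itself is just minimizing a number its sensors can only estimate, imperfectly.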
True, they did not, but I would put that at the level of a mistake rather than being unreasonable.
I'm reasonably certain that the OpenSSL team did not do this on purpose. It likely wasn't sabotage by a malicious developer, and I seriously doubt anyone paid the team to install the bug intentionally. You're almost certainly right that it was a mistake. But arrogance, ignorance, and other weaknesses lead to mistakes that should not be made, and when they happen, it's fair to point the finger. Just because it was a mistake doesn't mean it was out of their control.
Always draw your curves, then plot your reading.