It is easy to argue that a perfect driverless car should act according to strict utilitarian principles, maximizing the number of lives saved. But a perfect driverless car, bug-free and unassailable, is still decades away, if it is even achievable. Imperfect driverless cars are close, but the rules are different. They must be. Until it can be proven that a driverless car is bug-free and utterly immune to outside attack, there must be no code path that allows it to deprioritize the lives of its own occupants. The three reasons for this are simple: bugs, attacks, and buggy attacks.
The issues with bugs and attacks are clear: bugs can cause random deaths, and attacks designed to kill the passenger create a new tool for those who would murder. But buggy attacks (that is, attacks not designed to harm anyone, but that do so anyway because of faulty attack code) may be the biggest threat of all. More than one piece of malware, particularly among the early viruses and worms, has proved far more destructive than its creators ever intended, all due to bugs not in the code of the system being attacked, but in the attack code itself. Even if the code in a driverless car's system can be guaranteed bug-free, we cannot assume the same of attack code, which is what makes immunity to attack so important.
We are not yet at a point where we can guarantee such security. Until we are, we must not allow driverless cars to deprioritize their own occupants' safety, even in cases where doing so holds great philosophical appeal. Doing so would almost certainly take far more lives than it would save.