The robot should give priority to its owner. If it must instead treat all humans as equal, it has to resolve ambiguity and uncertain intent in its environment, and that opens some disturbing possibilities. A car that values two lives over one could be manipulated into killing its own passenger: two pedestrians deliberately step out in front of it on a narrow bridge, and it is left with "no choice" but to drive off the edge, because two people count for more than one. Or it might swerve away from pedestrians (who are likelier to die in a collision) into oncoming traffic, where the oncoming vehicle may be empty, may carry a passenger, or may be a school bus. By always maximizing the survival of its own passenger, I suspect overall deaths would be minimized.