We just need to be clearer about where we allocate blame. If I launch a robot, and the robot kills someone, the responsibility for that killing is mine. If I did so carelessly or recklessly, because the robot was badly programmed, then I am guilty of manslaughter or murder, as the courts may decide. Bad programming is no more an excuse than bad aim. A robot that I launch is as much my responsibility as a bullet that I launch, or a stone axehead that I wield.
So the three laws, present or absent, are a problem for the launcher of the robot weapon. We don't need complex international laws about AI; we just need a wholehearted implementation of "You broke it, you pay for it".
Which is just as well, because by and large attempts to ban "immoral" weapons have failed. The only fairly successful instance is the ban on chemical weapons, which has held because chemical weapons are actually rotten weapons, far too likely to misfire or backfire. Whatever rules are made, automated weapon systems will come in. In fact, they already have: what is the significant difference between a mine which explodes when it detects a man, tank or ship, and a gun which fires when it detects a man, tank or ship?