Why are the ethics of an autonomous killing machine different from those of a non-autonomous one?
Because "autonomous" means "non-manned". A drone has no dreams, hopes or an anxious family back home waiting for its return. The only thing getting hurt when one is shot down is the war budget, and even that money lost turns into delicious pork in the process.
If you don't have to worry about casualties on your own side, it changes the ethics of your tactics quite a bit, and like it or not, those ethics matter a lot in the information age.
To me that sounds like just another case of "it involves computers, so it must be more dangerous, because I don't understand computers".
It is, to Elon Musk. He's high up in the current system, and thus has little to gain and a lot to lose from any change to the status quo.
Figure out a way to raise humans so that they don't turn out bad. Then apply the same method to other neural networks.
If you don't go out of your way to abuse children, they usually turn out okay. The problem is that society is more than just a collection of individuals. A decent person still has limited personal strength and can give in to peer pressure, and once they have, their compliance, or at least their silence, helps put pressure on others. That is how places like North Korea can persist, at least for a while.

Nor can peer pressure simply be written off as an unfortunate defect and eliminated from the design of an artificial intelligence: it also helps keep various not-so-decent impulses and urges in check, and it's not possible to sustain a technical civilization if you can't make any assumptions about the behaviour of someone you've never met before.
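The compliance cascade described above is essentially a threshold model of collective behaviour (Granovetter's). Here is a minimal sketch of that dynamic; the population size, the uniform threshold distribution, and the seed sizes are all illustrative assumptions, not anything claimed in this thread:

```python
# Toy threshold model of a compliance cascade.
# Each agent complies once the fraction of already-compliant peers
# meets or exceeds its personal threshold; each new complier raises
# that fraction and can tip the next agent over.
import random

random.seed(1)

N = 100  # arbitrary population size (assumption)
thresholds = sorted(random.uniform(0.0, 1.0) for _ in range(N))

def run_cascade(seed_compliant: int) -> int:
    """Return how many agents end up compliant, given an initial seed."""
    compliant = seed_compliant
    while True:
        frac = compliant / N
        # Everyone whose threshold is already met complies.
        new_total = sum(1 for t in thresholds if t <= frac)
        new_total = max(new_total, seed_compliant)  # the seed never defects
        if new_total == compliant:  # fixed point reached
            return compliant
        compliant = new_total

for seed in (1, 5, 10, 20):
    print(f"seed={seed:3d} -> final compliant: {run_cascade(seed)}")
```

Depending on how the thresholds fall, a small seed either fizzles out or tips the whole population, which is the point of the argument: the same mechanism that stabilizes decent behaviour can just as easily lock in compliance with something indecent.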