I see... Artificial general intelligence is hard, so anyone worrying about its consequences is uninformed. Wait. What? And as long as a machine doesn't have artificial general intelligence, there shouldn't be any problem with giving it control over lethal weapons. Wait. What? Maybe instead artificial general intelligence is a long-term existential threat regardless of whether current technology is particularly close to achieving it, and Elon Musk knows a little more than his critics give him credit for. And maybe transferring decisions about the use of lethal weapons to machines is always a very bad idea. Unless, of course, you hope to make money selling such devices to the military. In which case, this report sounds like an excellent strategy.