It is certainly interesting that deciding whether or not to kill some fleshy humans can be demonstrated to be circumscribed by the halting problem; but it's always a bit irksome to see another proof-of-limitations-of-Turing-complete-systems that (either by omission, or in more optimistic cases directly) ignores the distinct possibility that humans are no more than Turing complete.
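The reduction being gestured at is presumably the standard one: a total decider for "will this targeting code ever issue a kill order?" would let you decide halting, which is impossible. A minimal sketch, with every name hypothetical and purely illustrative:

```python
# Hedged sketch of the standard reduction: a perfect "will this program
# ever fire?" decider would solve the halting problem. All names here
# are hypothetical illustrations, not any real API.

def would_ever_fire(program_source: str) -> bool:
    """Hypothetical perfect decider: True iff running `program_source`
    ever issues a kill order. No such total decider can exist."""
    raise NotImplementedError("no such total decider exists")

def halts(machine_source: str) -> bool:
    """If would_ever_fire existed, halting would be decidable:
    wrap an arbitrary machine so it fires exactly when it halts,
    then ask the decider about the wrapper."""
    wrapper = (
        machine_source + "\n"
        "run()          # simulate the arbitrary machine\n"
        "issue_kill()   # reached only if run() halts\n"
    )
    return would_ever_fire(wrapper)
```

The point of the sketch is only that "kill/no-kill" inherits undecidability from halting in the general case; it says nothing about approximate, bounded-time decisions, which is where the rest of the argument lives.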
Humans are certainly enormously capable of approximating solutions to brutally nasty problems (e.g. computational linguistics vs. the average human toddler); but that is very different from a demonstration that, say, humans possess an oracle, or are some sort of hypercomputational system, rather than simply being enormously good at hard-but-not-theoretically-intractable problems in certain areas.
In this instance it's especially galling because we've been philosophizing about acceptable losses, 'just war', legitimate casus belli, 'proportionality', and whatnot for about as long as we've been chucking spears at one another. It's a pure commonplace that a mixture of overkill and underkill is an effectively certain outcome when you go to war. It is interesting that, in principle, kill/no-kill is subject to the halting problem; but has anyone (aside from sleazy assholes hyping 'smart' weapons) ever asserted that kill decisions would be anything but imprecise?