Your point is taken, but:
If a machine is capable of autonomous behavior (i.e., continuing to carry out an objective even when completely cut off from remote control), and it's capable of delivering lethal force, it's an autonomous killing machine. Or, to use the term the military uses (which I just looked up), it's a Lethal Autonomous Weapon System (LAWS).
It's possible to think of a scenario where a LAWS (LAWSes?) will accomplish an objective without harming a hair on anyone's head -- as you have done in your post. The problem is that it's rather easy to think of *other* scenarios, ranging from the mundane to the extreme, with a very different outcome. To repeat the point I made earlier: war zones tend to be cluttered with large numbers of human beings, some of them combatants, some not.
The current status of "LAWS" (which, again, I just Googled) seems to be this: The US military doesn't (officially) field any of them right now, but it has no policy against doing so in the future, and there are no international treaties that would dissuade it from doing so.
(Not that I believe for a second that any treaty would make a bit of difference. The US still uses *land mines*, to a limited extent, despite the fact that most other countries in the world have signed a treaty forbidding their use. The Russians and Ukrainians are using land mines right now -- Ukraine despite being a party to that treaty, and Russia having never signed it at all. I suppose a land mine would technically qualify as a type of primitive LAWS.)
So this is a real-world issue.