I once worked on the camera portion of a semi-autonomous weapon which, once a target was designated, would continually analyze the live image to maintain lock on, track, and intercept that target. A key part of the system was a human-in-the-loop abort, which would cause the system to veer off target before impact should the operator see something he or she didn't like: not the intended target, high probability of collateral damage, etc.
The point is, all judgments about selecting the target, aborting the mission, or changing targets were in the hands of a human. The automated parts were vehicle operations, corrections for terrain and weather, tracking an operator-designated object, and so on: all things that required no risk assessment, moral judgment, or ethical consideration.
That's the difference between autonomous and semi-autonomous: a human identifies the target and monitors the system, ready to issue a stand-down order as new information becomes available.
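To make that division of labor concrete, here's a deliberately toy sketch in Python. Nothing in it comes from the actual system; every name and number is invented. It shows only the structural point: the automated loop tracks and closes on a designated object with no judgment of its own, and a single operator-controlled flag is the machine's only path to a stand-down.

```python
import threading
import time
import random

# Hypothetical sketch of the human-in-the-loop split described above.
# The operator's flag is the only source of an abort decision; the
# guidance loop itself makes no judgment calls.

abort = threading.Event()  # set only from the operator's station

def operator_station():
    # Human side: watches the live feed and orders a stand-down if
    # anything looks wrong (wrong target, collateral-damage risk, etc.).
    input("Operator: press Enter at any time to abort\n")
    abort.set()

def guidance_loop(distance_to_target: float) -> str:
    # Machine side: no risk assessment, no ethics; it just maintains
    # track and closes on the operator-designated object.
    while distance_to_target > 0:
        if abort.is_set():
            return "aborted: veering off target"     # stand-down order
        distance_to_target -= random.uniform(5, 15)  # simulated closing rate
        time.sleep(0.5)                              # simulated frame interval
    return "intercepted"

if __name__ == "__main__":
    threading.Thread(target=operator_station, daemon=True).start()
    print(guidance_loop(distance_to_target=100.0))
```

Press Enter before the simulated intercept completes and the loop breaks off; leave it alone and it runs to completion. Selecting the target and deciding to abort both live outside the loop, with the human.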
(It's also the only weapon system I ever worked on, and it caused me great conflict. Though the intended use had merit, the possible unintended uses made me very uncomfortable. No, I can't be more specific.)