All nice and well (Score 1, Interesting)
...until the AI does something (that we see as) stupid because its training data never covered that specific combat situation.
An AI can't assess the combat situation and decide against an order that is based on incomplete data and would lead to disaster.
Random example: an AI-piloted fighter engages an enemy above a populated civilian area. At some point the enemy target sits between the AI airplane and the ground. A missile could be launched, but if it misses, it will slam into that nice neighborhood. The AI has orders to destroy the enemy, fires said missile, which misses and slams into said nice neighborhood. Boom, hundreds dead.
Sure, you could implement an exception, then an exception to that exception, then... you know. A whole web of intertwined decision trees. Neat, but SLOW. Won't work either.
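To make the point concrete, here's a toy sketch of what that exception-on-exception approach looks like in code. Everything here is made up for illustration (the function, the parameters, the 0.1 threshold); a real fire-control system would be vastly more complex, which is exactly the problem.

```python
# Hypothetical hand-coded rule set for a fire decision.
# Every name and threshold below is invented for illustration.

def may_fire(target_locked, miss_risk, civilians_below, friendly_nearby):
    """Return True if firing is permitted under this toy rule set."""
    if not target_locked:
        return False
    # Base rule: destroy the enemy.
    decision = True
    # Exception: a likely miss over a populated area is unacceptable.
    if civilians_below and miss_risk > 0.1:
        decision = False
        # ...exception to the exception would go here, and so on, forever.
    # Another exception: never fire near friendlies.
    if friendly_nearby:
        decision = False
    return decision

# A miss would hit civilians, so the toy rules forbid firing:
print(may_fire(target_locked=True, miss_risk=0.3,
               civilians_below=True, friendly_nearby=False))  # prints False
```

Each new edge case means another branch, and every branch must be evaluated on every decision, so the tree only ever grows, and you can never be sure you've covered the situation that actually happens.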
A human pilot could also make mistakes, but they understand responsibility and generally err on the side of protecting civilians, sometimes against their immediate orders (or maybe I am THAT naive). The AI would not, unless it's taught a LOT more than just how to fight, and there will always be huge gaps in its training, simply because it doesn't have 25+ years of human life under its belt.