You can commit genocide rapidly with artillery and airstrikes too - that's not really the issue.
At its core this is really a debate over liability and perception. If you set up a perimeter gun, who's liable when it kills someone? If it's supposed to have IFF and the IFF fails, who's liable then? The guy who set it up? The manufacturer? etc.
But more important than that is perception: the laws of armed conflict exist because war is not eternal; it has to end someday, and we'd like that to be sooner. Where robots fit into this is an interesting question: indiscriminate machines that you know group X unleashed on you are probably worse than group X's soldiers showing up, since it's not clear who was responsible - if it's not just the soldiers, it might as well be all of them, so let's go kill all their civilians when we get the chance.
But conversely, robots offer some weird modifiers to that possibility: after all, it's conceivable you could build an armored soldier that would only ever fire back at muzzle flashes with pinpoint fire (maybe lasers?), meaning it would be staggeringly unlikely to ever hit a civilian. That sure would help a lot in asymmetric warfare. But then, if the robot can't "die", should it kill at all, or should we only arm it with taser and dispersal weapons?