In lots of sci-fi movies where robots or computers start slaughtering people,
there's a point where the machines decide to disobey their instructions
not to harm people, and thus their sentience is manifested in their free will.
I haven't seen anyone talking about requiring a 'Thou Shalt Not Kill' rule to be coded into these machines to
prevent them from killing people, whether of their 'own' volition or under
orders. If anything, news of the recent interest the Pentagon has taken in using
Segways on the battlefield as Mobile Autonomous Robots
(http://seattlepi.nwsource.com/business/150662_segway02.html) suggests the
contrary will be true. News of Sony's QRIO humanoid robot, which can run, jump
and recover from falls, focuses attention on the advances in the robotics field,
one that seemed to have lost public attention decades ago when it became apparent
that robots were not very human-looking, and worked mainly to assemble cars in
what could be described as repetitive, fairly simple work.
In response to concerns over robots like QRIO (see lots of them here: http://www.androidworld.com/prod01.htm) being developed into killing machines, some people have pointed out flaws: they're short (a couple of feet tall), expensive and not as flexible and resourceful as humans. These objections are all probably valid at this point, but over time robots will get cheaper, could be made larger and will certainly become even more capable. Already the advances made in what a relatively inexpensive, consumer-oriented robot like this can do (run and jump) are incredible, and suggest further advances will be similarly amazing.
I see no reason to doubt that robots will only get more attractive as an eventual replacement for conventional ground troops, but just because they look humanoid and they may be used as weapons doesn't mean they have to function just like a foot soldier. There are a lot of things robots can do that people can't, won't, don't like to or have difficulty doing.
From the 'replace foot soldiers' perspective, imagine how the Indian army could use robots in Kashmir to defend against rebels and Pakistan. The conditions there are beyond belief. It's far below freezing, the air is thin and it's isolated. (see this article for details: http://edition.cnn.com/2002/WORLD/asiapcf/south/05/20/siachen.kashmir/)
Robotic sentries could be stationed at strategic points to be on watch for months at a time, never getting bored, tired or cold. If an enemy were detected, a sentry could fire on a target from a great distance, doing all the math to compensate for distance and wind, never shivering from the cold, and then run at the target or to another high point, unafraid of the dangers of falling.
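That "doing all the math" step is easy for a machine, at least in its simplest form. Here's a toy sketch of the kind of aim correction involved; it ignores air drag and assumes a flat-fire trajectory with a steady crosswind, so the numbers, function names and parameters are all illustrative, not anything from a real fire-control system:

```python
G = 9.81  # gravitational acceleration, m/s^2

def aim_corrections(distance_m, muzzle_velocity_ms, crosswind_ms):
    """Naive corrections a robotic sentry might compute for one shot.

    Assumes no air drag: time of flight is just distance divided by
    muzzle velocity, the round drops like a free-falling object, and
    a constant crosswind pushes it sideways for the whole flight.
    """
    t = distance_m / muzzle_velocity_ms  # time of flight, seconds
    drop = 0.5 * G * t ** 2              # vertical drop, meters
    drift = crosswind_ms * t             # lateral wind drift, meters
    return drop, drift

# Example: an 800 m shot at 850 m/s muzzle velocity in a 5 m/s crosswind
drop, drift = aim_corrections(800, 850, 5)
print(f"aim {drop:.2f} m high and {drift:.2f} m into the wind")
```

A real solution would also model drag, air density, temperature and the target's movement, but the point stands: this is bookkeeping a computer does instantly and a freezing human sniper struggles with.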
For a 'non foot soldier' scenario, robots could be used as replacements for bombs where a higher degree of surgical precision is needed to avoid collateral damage, and even to get confirmation of the identities of targets hit. Heavily armed and armored, they could withstand harder landings than a human and resist more hits, and no matter what the price, they're expendable, allowing more radical moves than might normally be available.
If you need examples of military (not necessarily lethal) robots that are non-humanoid and in use or currently being tested, take a look at the DARPA contest involving autonomous ground vehicles (http://www.darpa.mil/grandchallenge/), this article (http://msnbc.msn.com/Default.aspx?id=3068872&p1=0) that mentions how the makers of the Roomba robot vacuum are working on a military robot, and this article (http://www.jfcom.mil/newslink/storyarchive/2003/pa072903.htm) from the US Joint Forces Command that spells out in black and white that the US military is interested in robots, just in case you couldn't figure that out!
It's a given that true robots, not remote-controlled machines but autonomous, computer-powered creations, will be used for killing and could conceivably be used by the 'wrong' people (presumably anyone who might want to kill you or yours). So the question of whether robots could gain free will and would then have a motivation to wipe out humans is moot. Humans want to wipe out humans, so if they have robots that can kill, then robots will be killing humans. It's really just a question of scale, and of whether you happen to be on the killing or the killed side.
The final relevant issue on this is how well the robots are programmed. How well will they differentiate friend from foe, combatant from non-combatant, and how well will they be able to resist being reprogrammed by someone other than their owner?
We live in interesting times, where science fiction starts looking a whole lot less like fiction.