U.S. Army Robots Break Asimov's First Law 821

buanzo writes "The US Army is deploying armed robots in Iraq that are capable of breaking Asmov's first law, that they should not harm a human. SWORDS (Special Weapons Observation Reconnaissance Detection Systems) robots are equipped with either the M249 machine gun, which fires 5.56-millimeter rounds at 750 rounds per minute, or the M240, which fires 7.62-millimeter rounds at up to 1,000 per minute." Update: this story refers to this article from 2005. But c'mon, robots with machine guns! I don't get to think about that most days!
This discussion has been archived. No new comments can be posted.

U.S. Army Robots Break Asimov's First Law

Comments Filter:
  • Not really... (Score:5, Insightful)

    by jargoone ( 166102 ) * on Wednesday March 15, 2006 @10:50AM (#14923778)
    From TFA:

    They are still connected by radio to a human operator who verifies that a suitable target is within sight and orders it to fire.

    While they are harming a human, it's ultimately a human that makes the decision to fire. And who cares about fictional "laws", anyway?
    • Re:Not really... (Score:5, Insightful)

      by Anonymous Coward on Wednesday March 15, 2006 @10:58AM (#14923873)
      And who cares about fictional "laws", anyway?

      I think it's this point that is the most salient. Asimov's laws are interesting, and make for good "debate over your adult beverage of choice" fodder, but they are just one person's take on a single use case for a particular technology. Those laws might make sense for industrial and domestic helper robots, but wouldn't apply to military (obviously) or law-enforcement roles. Certainly a law-enforcement robot could be trained to limit the amount of harm it inflicts on a perp to neutralize him, but some amount of harm may be necessary.

      Bottom line is that as robots actually do start entering more into our mainstream lives, some "real" thought needs to be given to how to make them as non-harming to humans as possible. These laws, while laudable, can't be "programmed" as is, making the task much more complex.
      • Re:Not really... (Score:5, Insightful)

        by macdaddy357 ( 582412 ) <> on Wednesday March 15, 2006 @11:25AM (#14924167)
        A lot of people think Asimov's laws are real, and don't get it that he was a sci-fi writer, not a scientist in the field of robotics. He was even asked to speak at universities as an expert on robotics when all he had done was write some stories. If they had read the robot novels, they would have noticed that even Asimov's robots did not always obey the laws.
        • Re:Not really... (Score:5, Insightful)

          by rk ( 6314 ) * on Wednesday March 15, 2006 @11:46AM (#14924385) Journal

          I thought the point of Asimov's stories was that they always obeyed the laws, but not necessarily in ways humans would. Most stories in "I, Robot" show that these seemingly excellent and fault-tolerant laws could have unexpected and sometimes dangerous consequences of their own, and that the real world is too complicated to ever be dealt with by hard-and-fast rules alone.

          You're right though, I never understood why people took Asimov's laws as a great thing to use as a reference for robot behavior when the same author who created them proceeds to point out their flaws for an entire book's worth of short stories.

        • by Frazbin ( 919306 ) on Wednesday March 15, 2006 @12:11PM (#14924659)
          Sci fi writer? Pfft! Next you'll tell me we don't have advanced humanoid robots with positronic brains, and that U.S. Robotics is just a shitty winmodem manufacturer!
        • Re:Not really... (Score:5, Insightful)

          by AlterTick ( 665659 ) on Wednesday March 15, 2006 @12:16PM (#14924711)
          A lot of people think Asimov's laws are real, and don't get it that he was a sci-fi writer, not a scientist in the field of robotics. He was even asked to speak at universities as an expert on robotics when all he had done was write some stories. If they had read the robot novels, they would have noticed that even Asimov's robots did not always obey the laws.

          Indeed, I think anyone who reads "I, Robot" and comes away with the notion that the Three Laws are a good idea should be barred from working in robotics entirely. Asimov's short robot stories drive home again and again how those hard-coded, inviolable laws are a very, very bad thing, and taken to their ultimate end, could result in the human race basically being reduced to animals in a robot zoo! Seriously, I think too many people read "I, Robot" when they were too young to grasp the serious philosophical point behind it, and haven't bothered to re-read it since.

          The book uses robots as an analogy for a very serious philosophical point about humanity: codified rules are not a suitable replacement for people educated in ethics, science, and rational thinking. No set of laws, commandments, edicts, or mandates passed from On High will ever match every situation. Knowledge is the only way forward.

          • by conJunk ( 779958 ) on Wednesday March 15, 2006 @12:51PM (#14925085)
            alright buddy, take your common sense and your accurate reading and go somewhere else, okay! i've got a fantasy to live in here
          • Re:Not really... (Score:3, Interesting)

            by UserGoogol ( 623581 )
            Yes, but I'm not of the opinion that Asimov ever portrayed "the human race being reduced to animals in a robot zoo" as being a bad thing. In the particular stories in I, Robot where he directly touches upon the idea of robots entering politics ("Evidence" and "The Evitable Conflict"), he's sort of nervously optimistic on the subject. In "Evidence", he very clearly states that robot overlords would be awesome, and in "The Evitable Conflict" he's a bit grimmer, but I still don't think it's depicted as a "bad" thing.
        • Re:Not really... (Score:5, Insightful)

          by roman_mir ( 125474 ) on Wednesday March 15, 2006 @12:51PM (#14925082) Homepage Journal
          If they had read the robot novels, they would have noticed that even Asimov's robots did not always obey the laws. - you should reread the short stories then, because those robots always obeyed the laws, since they were hardwired to break if they tried to violate the first law. The point of Asimov's stories was to show that in this world the idea of absolute laws doesn't work. The absolute laws do not cover every situation, and often paradoxes are created where a law, followed too strictly, causes some sort of unintended and often harmful result. This happens because the robots followed the letter but not the spirit of the law.

          Same thing obviously applies to humans, which is why Asimov's stories are such an interesting read and will never go out of date.
        • Re:Not really... (Score:4, Interesting)

          by smoker2 ( 750216 ) on Wednesday March 15, 2006 @02:33PM (#14926010) Homepage Journal
          Yeah, there's that guy Arthur C. Clarke too, he writes sci-fi, and made up something called a geostationary satellite, that will never happen either!

          Oh, wait ...

        • Re:Not really... (Score:3, Informative)

          A lot of people think Asimov's laws are real, and don't get it that he was a sci-fi writer, not a scientist in the field of robotics. He was even asked to speak at universities as an expert on robotics when all he had done was write some stories.

          All he had done? Dude, doctor Asimov INVENTED the word "robotics".

          If they had read the robot novels, they would have noticed that even Asimov's robots did not always obey the laws.

          If YOU had read them, you would have noticed that they ALWAYS obey the laws. The laws
      • Re:Not really... (Score:3, Insightful)

        by SpacePunk ( 17960 )
        When robots become autonomous then I'll consider some sort of 'law' programming. As they are now, they are either simply remote control devices or have less 'intelligence' than a cockroach.
      • Re:Not really... (Score:5, Interesting)

        by arivanov ( 12034 ) on Wednesday March 15, 2006 @12:02PM (#14924556) Homepage
        They have an immediate major failing which was duly noted by quite a few other SciFi writers.

        There is at least one missing law: The robot must know that he is a robot.

        Without this one the primary three make no sense.

        • Don't forget:

          The Robot must strive to understand the concept of love by asking humans, 'What is this thing you call...LOVE?'
      • The real purpose of the three laws was to create a plot device. Asimov was clear that robots could be used to kill people -- that was the excuse for bringing up the three laws to begin with: people were scared that robots would run around killing them, so the three laws were created to keep people calm.

        That violent future history of robotics leading to the creation of the three laws could have made a story in and of itself, but Asimov relegated it to a footnote -- because that sort of story would be somet

      • Is it hard reality? No, of course not. But neither are the depictions in Frankenstein, Fahrenheit 451, 1984, Jurassic Park, Gattaca, or Animal Farm or Watership Down for that matter. But clearly these are important pieces of thinking on the issues they address--issues like surveillance technology, abuse of political power, genetic manipulation, etc.

        Speculative fiction is often where the implications of technological change are first addressed. The most successful practitioners are literally thought leaders,
    • by PIPBoy3000 ( 619296 ) on Wednesday March 15, 2006 @10:58AM (#14923876)
      . . . a place where Asimov's Laws, like the US Constitution or the Geneva conventions, don't really apply.
    • Re:Not really... (Score:3, Informative)

      by shoptroll ( 544006 )
      Agreed. They're called a "plot device", outside of Asimov's books they have no meaning whatsoever.
    • by Alien54 ( 180860 ) on Wednesday March 15, 2006 @11:25AM (#14924170) Journal
      And who cares about fictional "laws", anyway?

      Many researchers are spending lots of time researching AI, and the problems for which the Laws of Robotics are an attempted solution; namely, how do you keep the robots from taking over and/or indiscriminately killing mere humans, as seen in so many Hollywood movies. So fictional laws are important as experiments in looking at potential solutions to a real problem.

      As I see it, the main problem consists of two factors. One factor develops as a result of the first.

      The first factor is consciousness, also known as self awareness. The second factor sounds like it is the first, but it includes other areas.

      The second factor is Identity. Identity is not restricted to Self Awareness, but also includes group awareness, etc., in expanding circles to include universes, subjective and otherwise. When someone else is considered part of a group identity, as "one of us", then you tend not to act against yourself. When the other person is seen as being "one of the Not Us but Them", then you tend to get an opposition, etc.

      In wars, it is more a universe thing, the Hitler Universe vs the Churchill Universe, for example. Or Religious Figure One (tm) vs Religious Figure Two (tm). Or a religious universe vs a scientific universe.

      Part of the problem of psychopaths, sociopaths, etc. is that they tend to group their victims into the "One of the Not Us/Not Me" category. No sense of being or identity is allowed or granted to the other person, and so, to one degree or another, this rationalizes pigeon-holing people into things that can be abused one way or another. Or else the identity given is some other alteration of reality that legitimizes criminal activity.

      This is difficult enough to deal with in humans. Psychologists and psychiatrists have no cure for psychopaths, since it is seen as being in the genes. You can't make a pill for it, and no psychopath would take it as they do not have the luxury of seeing that anything is wrong with themselves.

      Now we try to apply this to Robotics. Probably the only real solution for the problem is to redefine Human as self-aware creatures from earth, and incorporate this awareness somehow into robots, to some slight degree, so that Robots see Humans as "One of Us".

      It is a little touchy how you would do this. It exposes some of the potential hypocrisy of humans in actions towards other potentially self-aware creatures on earth, as well as each other. A self-aware robot could see the hypocrisy without the emotional justification people exhibit. At this point, we could be in trouble.

      • A few thoughts.... (Score:5, Insightful)

        by Savage-Rabbit ( 308260 ) on Wednesday March 15, 2006 @12:01PM (#14924544)
        First: two observations:

        1) SWORDS is remote controlled; it is not autonomous, like I always thought a true robot in the Asimovian sense had to be.
        2) Since we are now including remotely operated vehicles in the definition of a true robot, SWORDS is not that different from a Paveway bomb or a Hellfire missile, except SWORDS doesn't self-destruct when it destroys the target.

        This raises the question: wasn't Asimov's first law broken decades ago, perhaps even by the V1, which was, strictly speaking, a remotely operated vehicle?

        Personally I won't begin to worry about Asimov's laws as long as humans are on the other end.
        • by Ian Peon ( 232360 )

          wasn't Asimov's first law broken decades ago, perhaps even by the V1 which was strictly speaking a remote operated vehicle?

          I was thinking more about the CIWS system (being an ex-Navy type). It has its own computer system to detect a target, track it, decide to engage, fire, and assess the kill - it even looks like a ship-mounted robot; I usually describe it to people as looking like R2D2 with a gatling gun. Its targets are not limited to inbound missiles; it will also take down aircraft.

          Or, how about an AEGI []

  • by cookiej ( 136023 ) * on Wednesday March 15, 2006 @10:50AM (#14923783)
    ... You have thirty seconds to comply..."
  • Phalanx... (Score:5, Informative)

    by JDSalinger ( 911918 ) * on Wednesday March 15, 2006 @10:51AM (#14923795)
    I guess it depends what you consider to be a robot? And under what conditions it could kill another human? The Phalanx defense system, currently employed on U.S. warships, would allow itself to shoot down an enemy aircraft if it were attempting to crash into the ship. The Phalanx uses radar to detect incoming missiles and shoots them out of the sky by unleashing an insane amount of bullets in the direction of the target. Pictures and info here.
  • by LiquidCoooled ( 634315 ) on Wednesday March 15, 2006 @10:51AM (#14923796) Homepage Journal
    There are lots of robots designed for this purpose.

    Of course, they are just toys and the big deal is that this will be rolled out, but here's a couple of things I thought of:

    USB Air Darts

    Controllable from the computer :D

    Automatic sentry gun
    Uses a built-in camera to detect and aim at moving targets.

    It's all very Half-Life-ish, but plenty of fun.
  • Fluff Piece (Score:5, Informative)

    by AKAImBatman ( 238306 ) * <{akaimbatman} {at} {}> on Wednesday March 15, 2006 @10:51AM (#14923798) Homepage Journal
    Don't bother with the Inquirer story. It's practically a verbatim copy of the source story here. The only difference is that the source story adds the following comments:
    As I pointed out in the article (and the comments), these devices are not autonomous. For some, this would disqualify them from being true robots. However, the military and the manufacturer both refer to the SWORDS device as a robot, and it certainly fits common usage. The word "robot" comes from the Czech robota (from Capek's play R.U.R.) meaning "forced labor" or "drudgery." This device surely does an unpleasant task usually done by a person. Also, consider that, strictly speaking, an autonomous cruise missile is a self-guided machine, and is therefore a "robot" although most people wouldn't think of it that way.

    These are actually robots, but they're not the fully-autonomous solutions that Asimov was suggesting that mankind needed protection from. Thus the "laws" of robotics don't apply here, because it's still a human who's doing the thinking for the machine.

    In effect, this is a safe way for ground troops to line up a kill zone, then cause lots 'o bad guys to get torn to shreds. Prior to this, troops needed to use a vehicle-mounted machine gun to get this sort of rate of fire. That was extremely limited in close quarters, where a Humvee or tank might not fit. While it was theoretically possible to carry a machine gun to the combat zone, such weapons are difficult to transport, set up, and use in close quarters.
    • Re:Fluff Piece (Score:3, Insightful)

      by jcr ( 53032 )
      These are actually robots

      Nope, they're just remote-controlled weapons. They're not programmable.

    • Not quite (Score:3, Informative)

      by Soulfader ( 527299 )
      The M-249 is a belt or cartridge fed light machine gun, also known as the SAW (Squad Automatic Weapon). It fires the same rounds as the M-16, just a bit faster. It's heavier, but very much man-portable, and is a personal weapon. The M-240 is the 7.62mm replacement for the old M-60 of the Vietnam era. It is freaking heavy, and considered a crew-served weapon, but doesn't require a vehicle to move. You CAN mount either weapon on a Humvee turret, but it's hardly required. Again, SAWs are usually consider
  • by daveschroeder ( 516195 ) * on Wednesday March 15, 2006 @10:51AM (#14923806)
    no "law" at all.

    If the submitter wants to troll about the military, the least he could do is spell Asimov's name correctly.

    What makes a "robot"? Progressively more complex machinery has been able to inflict bodily harm, and kill, for quite some time.
  • by TripMaster Monkey ( 862126 ) * on Wednesday March 15, 2006 @10:52AM (#14923812)

    THE US Army is deploying armed robots in Iraq that are capable of breaking Asmov's first law that they should not harm a human.

    Sorry to break it to the folks over at the Inquirer, but Asimov's Laws do not actually exist....any more than his 'positronic brain' does. It's fiction.
    Next week on the Inquirer: Computers Built That Break The Orange Catholic Bible's Commandment of 'Thou shalt not make a machine in the likeness of a human mind'.

    They are still connected by radio to a human operator who verifies that a suitable target is within sight and orders it to fire.

    So they're not even robots, then. They're telepresence devices.

    Then the robot has the job of making sure lots of bullets are sent towards the target.

    Statement from the Iraqi forces regarding the use of these 'robots':
    OMFG! u r fukn gay! u hack, i know it! fucking aimbot! tak ur aimbot bs to nothr country, asshats!

    Nice to know we can take what we've learned in FPSs and apply them to the real world.

    Later the US plans to replace the control system of the bots with a "Gameboy" type of controller hooked up to virtual reality goggles.

    Yes! Finally, all my training has paid off! I can be a soldier from the comfort of my basement! Where do I sign?
    • by RyoShin ( 610051 ) <> on Wednesday March 15, 2006 @11:49AM (#14924427) Homepage Journal
      Yes! Finally, all my training has paid off! I can be a soldier from the comfort of my basement! Where do I sign?

      That may not be far off, actually. If this kind of technology takes off, you'll hear less and less about Army recruiting numbers. Why? Because they'll be recruiting "l33t" Counterstrike players (or the Army's own game). Many of these kinds of players have the skills that would be needed to effectively control these robots - pit them against regular soldiers (both controlling robots), and the soldiers will most likely lose. Not because the CS players have better training or instinct, but because they are more adept at handling the controls and the limits that would be placed upon them.

      While I'm sure the robots cost a lot per unit, the price will go down as manufacturing continues, and it sure as hell sounds better to say "20 robots were destroyed in the raid" than "20 men were killed in the raid". Plus, it would send a psychological element into battle, where the enemies cower because they face adversaries that stare down the barrel of a gun and charge.

      The main problem would be making sure that the CS players aren't hasty about sending their unit out- I highly doubt the Army is working on respawn technology. (I suppose the robot could take a lot more hits than a player in CS could, though, a fact to their benefit.)

      Another positive benefit is that the army would not have to pay to recruit and train men lost in battle, just worry about getting their "Army Players" another bag of Cheetos (TM).

      I can't wait to tell my grandkids stories about the 14th Interactive Division.
  • by whyrat ( 936411 ) on Wednesday March 15, 2006 @10:52AM (#14923817)
    These robots will have a pre-set kill limit.

    The enemy must merely send wave after wave of men until that limit is reached and they will shut down.
  • Bright Side (Score:4, Funny)

    by th1ckasabr1ck ( 752151 ) on Wednesday March 15, 2006 @10:52AM (#14923818)
    Well look on the bright side - at least it seems to stick to the second and third laws.

    (assuming you ignore all that "except where such orders would conflict with the First Law" stuff)

  • Really? (Score:3, Interesting)

    by AnonymousYellowBelly ( 913452 ) on Wednesday March 15, 2006 @10:54AM (#14923831)
    As the robot is not intelligent enough - or isn't considered as such - to make the decision of opening fire, I personally don't think this breaks Asimov's law. These robots are more like 'extensions' of a soldier's body, IMHO.

    I guess that if we are to consider this a violation of Asimov's laws, the computers of guided missiles have been illegally killing people for a long time.
  • Not a robot (Score:5, Informative)

    by akheron01 ( 637033 ) on Wednesday March 15, 2006 @10:55AM (#14923847) Homepage
    I don't know why people seem to want to classify everything that moves as a robot; this is a waldo rather than a robot. To be a robot it has to make its own decisions through some form of artificial intelligence or simulated intelligence. This is little more than a glorified remote control car with a gun strapped to it.
  • Oh no! (Score:5, Funny)

    by gEvil (beta) ( 945888 ) on Wednesday March 15, 2006 @10:56AM (#14923856)
    Oh no! Robots are breaking fictitious laws!!! Someone call the Fiction Police!
  • by digitaldc ( 879047 ) * on Wednesday March 15, 2006 @10:57AM (#14923860)
    This is great, now we can sit back, watch the News and see the Robots destroying each other in real time!
    'Honey, pass me a beer, the robot wars are on.'
  • by Hao Wu ( 652581 ) on Wednesday March 15, 2006 @10:59AM (#14923881) Homepage
    What is a Predator drone but a flying robot?

    Or is Slashdot more stuck on Hollywood myths than anyone, convinced that robots must have anthropomorphic traits, flashing non-functional lights, and a canned monotone voice...

  • by cparisi ( 136611 ) on Wednesday March 15, 2006 @10:59AM (#14923885) Homepage
    Now they just need to find some video game ace and tell him they want him to test out the "latest virtual reality video game". Even better if he's young and named "Ender"
  • Ridiculous Laws (Score:5, Insightful)

    by Illserve ( 56215 ) on Wednesday March 15, 2006 @11:00AM (#14923892)
    The very idea of a rule against hurting humans implies that a robot knows:

    1. What hurting means
    Is it pain? Death? Financial impact? What about indirect effects? If I help human 1 build a better mousetrap, I am indirectly harming some other human's way of life.

    2. What people are

    3. Where they are

    These are highly nontrivial problems. In fact, they're unsolvable to any degree of certainty. They only make sense in a *science fiction* book in which a highly talented author is telling you a story. In the real world, they are meaningless because of their computational intractability.

    In the real world, we use codes of ethics and/or morality. Such codes recognize the fact that there are no absolutes and sometimes making a decision that will ultimately cause harm to someone is inevitable.

    So can we please stop with these damned laws already?
  • by freg ( 859413 ) on Wednesday March 15, 2006 @11:01AM (#14923900)
    So what this really shows us is that the winner of future wars will be determined by the country who has the most skilled gamers. I think I like the direction things are headed. Let's be sure to stay friends with the Japanese tho.
  • by SisyphusShrugged ( 728028 ) <.moc.draregi. .ta. .em.> on Wednesday March 15, 2006 @11:04AM (#14923941) Homepage
    Does anyone remember the movie Screamers (and the Philip K. Dick story "Second Variety", on which it was based)? In the movie, the robots that were trying to wipe out humanity were called SWORDs. Maybe Bush really wants to wipe out all those annoying voters who are messing up his approval ratings....
  • Not the First... (Score:4, Insightful)

    by MadMorf ( 118601 ) on Wednesday March 15, 2006 @11:07AM (#14923967) Homepage Journal
    I would argue that Cruise Missiles (US Navy's Tomahawk and USAF's ALCM and GLCM) are more robotic than this remote controlled toy...

    Hey, almost any "fire and forget" missile qualifies for this distinction...

  • by DaveV1.0 ( 203135 ) on Wednesday March 15, 2006 @11:16AM (#14924061) Journal
    In my mind, a robot operates on its own. It is a mechanical device that can be programmed in advance to perform a specific function and then operates independently.

    A lot of what are called robots are just fancy remote-controlled cars. In this case, a fancy remote-controlled car with guns. Fun, but not a robot.
  • by vandelais ( 164490 ) on Wednesday March 15, 2006 @11:27AM (#14924195)
    Lesson from Battletech: Don't store machine gun ammunition in the head.

    Don't ask.
  • by blueZ3 ( 744446 ) on Wednesday March 15, 2006 @11:40AM (#14924328) Homepage
    What the Army is using is not a "robot" in the traditional sci-fi sense. The devices are not autonomous, and are under the control of a soldier who is the one making the decisions to pull or not pull the trigger. This is more of a "remote controlled gun platform" than a robot.

    The distinction is hard to get non-geeks to make, though, as all sorts of remote-controlled devices are talked about as "robots." The term is misused all the time when talking about devices used to search dangerous locations for earthquake survivors, for instance. Those devices are like remote-controlled cars with a camera on the front (and are not wirelessly controlled - they drag a cable behind them for power and control), but the news calls them "robots" all the time.
  • by robertjw ( 728654 ) on Wednesday March 15, 2006 @12:51PM (#14925083) Homepage
    Fortunately a Carnegie Mellon research scientist has written a handy guide named How to Survive a Robot Uprising. Might be a good reference.
  • by cr0sh ( 43134 ) on Wednesday March 15, 2006 @01:02PM (#14925201) Homepage
    First off, people need to understand that the so-called "Three Laws of Robotics" were nothing more than a literary device Asimov used to help advance the plots in his stories revolving around robots. His robotic protagonists could be considered embodiments of "the ideal man", a conception of an individual who strove to be helpful while preserving integrity and life. Since people are, well, human - and thus prone to fallibility - the concept of a robot with built-in laws to guide it was a perfect literary device allowing Asimov to explore the possibilities of a future where mankind, forced by his own creations, became a more altruistic species. Granted, in these stories (and I have by no means read them all) ways were found around these laws by both men and robots, but this is again nothing more than another magnifying glass being focused on mankind's faults.

    Indeed, it is this and other devices which Asimov employs in his fiction-based studies of human nature which make his books masterpieces in the hard-science fiction genre. He could have just as easily have written about ordinary men under regular law, during just about any era in history, but such stories wouldn't have likely had the same impact as what he ultimately wrote. His work is great social commentary and insight about the human condition wrapped in a gauze of fiction. Unfortunately, so many people seem to not realize this or choose to ignore it. So much for reading comprehension, I guess.

    With that said, is it really any wonder why we would make automated war machines (especially ones which fail the "Three Laws of Robotics")? Throughout history, technology (amongst other things) has ultimately been spurred on more by violence than by any other force. Information technology and the machines which manifest themselves from it are no different (save for the other great driving force, sex, which also has proven to be a factor in the spread of information and the technology that controls it). Violence and sex - war and pornography - these are ultimately the two great driving forces of information technology in human society.

    Where's my fembot, damn it?!

  • Huh? (Score:3, Insightful)

    by rspress ( 623984 ) on Wednesday March 15, 2006 @01:18PM (#14925331) Homepage
    Are there any robots in existence that are Three Laws Safe? Are there any robots at all that have any of Asimov's laws?
  • robots.txt (Score:3, Funny)

    by Eudial ( 590661 ) on Wednesday March 15, 2006 @02:02PM (#14925733)
    Clearly this hasn't been thought through, I mean, seriously, don't you think people will set up a robots.txt blocking this specific robot?

    This is going to adorn pretty much every wall.

    User-agent: Military attack robot
    Disallow: *
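    (As an aside, in the real robots.txt format Disallow takes a path prefix, so blocking everything is spelled "Disallow: /" rather than "*". A quick sketch with Python's standard-library parser - the user-agent string is, obviously, the joke's hypothetical one:)

```python
# Playing along with the joke: a syntactically valid robots.txt that bars
# the attack robot from every path, checked with the stdlib parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Military attack robot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The attack robot is barred from everything on this wall...
print(rp.can_fetch("Military attack robot", "http://example.com/front-door"))  # False
# ...while any other agent may pass.
print(rp.can_fetch("Mozilla/5.0", "http://example.com/front-door"))  # True
```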
  • by mr_burns ( 13129 ) on Wednesday March 15, 2006 @02:23PM (#14925929)
    Both these weapons (m249 and m240g) get really hot. You have to fire them in 3-second strings and swap out the barrels every 300 rounds or so. While you're letting the barrel cool between strings or changing out the barrel... that's when the enemy attacks you.

    So a common technique is "talking machine guns". You have 2 gun crews and they take turns with the firing strings, so there are always rounds going down range and the barrels stay relatively cool. Hopefully you can stagger changing out the barrels too.

    So how do the robots handle this? You'd need moving parts that handle the ammo chain. Either it would have to be able to reload from standard chains by itself or troops would have to link many chains together and load them into a drum beforehand. If you've got a long chain you need an armature to twist the chain in case of a runaway gun. And then there's the barrels. You need more moving parts to change those out. And what if it drops one?

    So to deal with those cooling issues with these weapons you may need 2 weapons per robot or 2 robots working in tandem.

    But even that's not ideal. A minigun is a far better weapon for this kind of thing. The ones on the Black Hawks would be perfect. We already can order them in bulk, the barrels stay cool, and in the case of a runaway gun, you just cut power to the motor. And the moving parts are far less complicated. Much easier to maintain in the field.

    The only advantage I can see to deploying the m249 or m240g is that the robot and troops could share ammo and the troops know how to service them. But the m134 minigun already uses the same ammo as the m240g and if you're going to service a robot, you probably are going to get special training anyway.

    Oh, and in peacetime can they clean my carpets?
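    (The "talking guns" idea above can be sketched as a toy simulation - my own illustration with made-up per-string round counts, not figures from any actual system: two guns alternate short firing strings so one barrel cools while the other fires, and each swaps its barrel after roughly 300 rounds.)

```python
# Toy model of "talking machine guns": alternate firing strings between
# two guns and count barrel swaps per gun. ROUNDS_PER_STRING is an
# assumed figure; BARREL_LIFE is the ~300-round limit from the comment.
ROUNDS_PER_STRING = 40
BARREL_LIFE = 300

def talking_guns(total_rounds):
    """Alternate strings between two guns; return rounds sent and swaps."""
    fired = [0, 0]          # rounds since last barrel change, per gun
    swaps = [0, 0]
    sent = 0
    gun = 0
    while sent < total_rounds:
        burst = min(ROUNDS_PER_STRING, total_rounds - sent)
        fired[gun] += burst
        sent += burst
        if fired[gun] >= BARREL_LIFE:   # hot barrel: swap it out while
            swaps[gun] += 1             # the other gun keeps firing
            fired[gun] = 0
        gun = 1 - gun                   # hand off to the other gun
    return sent, swaps

print(talking_guns(1200))  # → (1200, [1, 1])
```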
  • by core plexus ( 599119 ) on Wednesday March 15, 2006 @02:44PM (#14926126) Homepage
    This article describes the unmanned Strykers that the Army is testing.

    "Yesterday we ran a 100-mile test where the lead vehicle was being driven manually and the robot was following," Jaczkowski said. "We did this successfully where the average speed was about 22 miles per hour. You may think that 22 miles per hour is not that fast when operational convoys are going 60 to 70 miles per hour. But you have to take into account that we did 68 right turns.

    "You don't take right turns at 50 miles per hour, especially with a 20-ton robot."

  • by Feanturi ( 99866 ) on Wednesday March 15, 2006 @03:27PM (#14926521)
    In order for a law to be broken, it has to exist first. A killing machine such as this is merely a gun with a remote control. It's not a robot in the sense that there would even be a place for such a law in its programming.
  • by Animats ( 122034 ) on Wednesday March 15, 2006 @03:55PM (#14926753) Homepage
    America's suburban lifestyle requires oil. Getting that oil has a price in blood. America's robot armies will ensure that less of that blood is American.

    That's the reality.

Kill Ugly Processor Architectures - Karl Lehenbauer