The Question of Robot Safety (482 comments)

An anonymous reader writes to mention an Economist article asking how safe robots should be. From the article: "In 1981 Kenji Urada, a 37-year-old Japanese factory worker, climbed over a safety fence at a Kawasaki plant to carry out some maintenance work on a robot. In his haste, he failed to switch the robot off properly. Unable to sense him, the robot's powerful hydraulic arm kept on working and accidentally pushed the engineer into a grinding machine. His death made Urada the first recorded victim to die at the hands of a robot. This gruesome industrial accident would not have happened in a world in which robot behavior was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer." The article goes on to explore the ethics behind robot soldiers, the liability issues of cleaning droids, and the moral problems posed by sexbots.
  • Virtual bots (Score:3, Insightful)

    by WinEveryGame ( 978424 ) on Sunday June 11, 2006 @10:54PM (#15514845) Homepage
    The story curiously doesn't dwell much on virtual bots and issues posed by them. It focuses entirely on mechanical bots.
    • Re:Virtual bots (Score:3, Insightful)

      by ThePengwin ( 934031 )
      In the world of literature it doesn't matter what things are, so long as they can sound real.
      Besides, many people have probably died in similar ways to that.

      I have read about robots for ages and I think that the three laws are a load of crap. We don't even live in a world where robots can think for themselves yet, let alone kill someone because they wanted to. I don't even see the point of making a robot that is aware of its own existence; there is no real reason to do so.
      • Re:Virtual bots (Score:4, Insightful)

        by Bin_jammin ( 684517 ) <Binjammin@gmail.com> on Monday June 12, 2006 @12:07AM (#15515104)
        I don't want a robot that can think for itself, I want a robot that can think for ME.
      • Self Awareness. (Score:5, Insightful)

        by camperdave ( 969942 ) on Monday June 12, 2006 @12:10AM (#15515114) Journal
        Robots already have a degree of self awareness. Position sensors, battery charge monitors, etc. are all designed to let a robot know about itself in relation to the world. As we develop more sophisticated robots, they will require a greater degree of self awareness. Right now, industrial robots are basically programmed at the "goto position x1,y1,z1; close gripper; goto position x2,y2,z2; release gripper;" level. If you want them to work at the "Pick up part X from conveyor belt; dip part in solvent tank;" level, the robot is going to have to be able to coordinate vision and arm motion. In other words, it will have to have a greater degree of self awareness. When you get into higher-level stuff (same robot, multiple tasks) the robot will have to keep track of which tool it has, what loads it is capable of manipulating, etc.

        In short, the more self aware the robot, the higher the level of abstraction you get in assigning tasks to it.
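        As a rough illustration of those two levels, here is a minimal sketch. The Arm and Camera classes, the poses, and every other name below are invented for illustration; they are not any real controller's API:

            class Arm:
                def __init__(self):
                    self.pos = (0.0, 0.0, 0.0)   # the arm tracks its own position
                    self.gripper_closed = False

                def move_to(self, pos):
                    self.pos = pos

                def set_gripper(self, closed):
                    self.gripper_closed = closed

            class Camera:
                def locate(self, part_name):
                    return (1.0, 0.2, 0.5)       # stub: pretend vision found the part

            # Low level: a blind, fixed sequence (today's industrial programming).
            def blind_sequence(arm):
                arm.move_to((1.0, 0.2, 0.5))     # goto position x1,y1,z1
                arm.set_gripper(True)            # close gripper
                arm.move_to((0.3, 0.8, 0.5))     # goto position x2,y2,z2
                arm.set_gripper(False)           # release gripper

            # Higher level: "pick up part X; dip it in the solvent tank". Here the
            # robot must relate what the camera sees to where its own arm is.
            def pick_and_dip(arm, camera, tank_pos):
                arm.move_to(camera.locate("part_X"))
                arm.set_gripper(True)
                arm.move_to(tank_pos)
                arm.set_gripper(False)

            arm, camera = Arm(), Camera()
            blind_sequence(arm)
            pick_and_dip(arm, camera, tank_pos=(2.0, 0.0, 0.3))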
        • Re:Self Awareness. (Score:4, Insightful)

          by danaris ( 525051 ) <danaris@NosPaM.mac.com> on Monday June 12, 2006 @08:11AM (#15516104) Homepage

          I think you're misunderstanding just what "self-awareness" means. It's not just "awareness of certain properties of the body"--it's "awareness of the self as distinct from the rest of the world." What you're describing is simply environmental awareness--which is necessary for a robot to follow high-level instructions like the ones you mentioned, but is worlds away from true self-awareness.

          Dan Aris

      • Re:Virtual bots (Score:5, Insightful)

        by Fulcrum of Evil ( 560260 ) on Monday June 12, 2006 @12:24AM (#15515152)

        I have read about robots for ages and I think that the three laws are a load of crap.

        That's the whole point: the three simple rules that Asimov proposes have complex implications - his robot stories are filled with situations where following the laws results in tragedy. So yeah, they're a load of crap, but they're intended to be crap.

      • Re:Virtual bots (Score:3, Insightful)

        by amRadioHed ( 463061 )
        Self awareness is a side effect of general intelligence. We can't make it yet, but when we can it will be useful.
        • Re:Virtual bots (Score:3, Interesting)

          by Poltras ( 680608 )
          It will probably be an unwanted by-product. At least I don't want my robots to be self-aware... it has so many deep implications for how they do their work that they become unreliable and even dangerous, which defeats the whole purpose of a robot. Sure, it would be nice if some robots developed self-awareness, but it's mainly theoretical and serves no practical purpose.

          Put a brain that is able to become self-aware in my dishwasher, my car-making industrial robot or my robocop and I won't guar

        • by KiloByte ( 825081 ) on Monday June 12, 2006 @02:53AM (#15515439)
          Self awareness is a side effect of general intelligence. We can't make it yet, but when we can it will be useful.

          Of course! The moment we can make general intelligence, it will be a big improvement for any species.
          This whole article, for example, is a case of failing an intelligence check.
          Hint: it's not the robot who failed it.
      • Re:Virtual bots (Score:5, Interesting)

        by NitsujTPU ( 19263 ) on Monday June 12, 2006 @04:16AM (#15515586)
        Actually, there are very good reasons to make a robot aware of its own existence. Certain types of reasoning and learning are helped significantly by the ability to reason about the existence of oneself.

        Consider the following experiment, which toddlers have difficulty performing before age 4, but can perform after. A tube is presented to them with the logo of a candy company on it: "Smarties", not the American ones but the British ones. The child is asked, "What is in this tube?" At this point, the child invariably says, "Smarties!" The experimenter then opens the tube, revealing pencils, and asks again, "What is in this tube?" The child says, "Pencils." Now: "If I ask another child what is in this tube, what do you think they will say?" Before 4, the kid will say, "Pencils." After, they will say, "Smarties."

        This reasoning task requires the kid to model themselves prior to the revelation that there are pencils in the tube. It requires a model of what happened after. It further requires a model of the other child, of what they will be like without this knowledge. This is actually part of a model of self-awareness, but it's not the entire model. You might ask, "Why would a robot need to know this?" Well, actually, it's quite important if the robot is to interact with people, because people will expect the robot to behave in an appropriate manner. Dangerous scenarios could arise because the robot does not understand that things that are in its field of view, for instance, are not in the field of view of a person. An example might be a robot handling dangerous materials during a construction task. Perhaps the person can't see that it's handling hot metal. A person would warn the other person, avoiding danger.

        As for the three laws, they were written in a body of fiction. I think that too much attention is paid to them.
    • by Schemat1c ( 464768 ) on Sunday June 11, 2006 @11:39PM (#15515013) Homepage
      The dangers of robots are worse than you think, just watch this PSA [youtube.com]. And remember, when they grab you with those metal claws you can't break free cause they're made of metal, and they're very strong.
  • Fear them! (Score:5, Funny)

    by dreemernj ( 859414 ) on Sunday June 11, 2006 @10:55PM (#15514848) Homepage Journal
    Fear the Roomba!
  • Good department (Score:2, Informative)

    by MilenCent ( 219397 ) *
    Am I the only Slashdot reader that remembers that toy? "I am the atomic-powered RO-bot! PLEASE give my best wishes to EVERYBODY!"

    (As immortalized in a Mystery Science Theater episode.)
    • Beware; those "best wishes" are not such a sure thing anymore. My first tinkering experience involved my Dad and me removing the cheap electric phonograph assembly that produced the voice and rigging up a pushpin and a plastic cup to learn how it worked. We gutted him merely for my cursed human curiosity.

      I may have inadvertently endangered the entire human species! And with atomic power, no less!
  • "This gruesome industrial accident would not have happened in a world in which robot behavior was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer"

    Neither would this have happened if the maintenance tech had followed procedure and just switched the damned thing off. I don't see how this is any different from a normal industrial accident with something like a sheet metal press.
    • Neither would this have happened if the maintenance tech had followed procedure and just switched the damned thing off. I don't see how this is any different from a normal industrial accident with something like a sheet metal press.

      Exactly... it's not as if we have these laws for cars or trains... plenty of people step in front of them and squisho... human kebab! Besides, those "robots" aren't aware of anything... it's just a controller which follows a set pattern, attached to the controls which manage movement o
    • Yeah, seriously: how would the three laws have helped if the 'robot' didn't have advanced enough eyes, along with powerful enough image processing, to know that a human was getting close and stepping into its path? "Don't kill a human, work work work, don't kill a human. Hmm, what was that? Oh well. Work work work."
      • Yeah, seriously: how would the three laws have helped if the 'robot' didn't have advanced enough eyes, along with powerful enough image processing, to know that a human was getting close and stepping into its path?

        But the thing about the 3 laws of robotics, from my point of view, is that you do not have to take them *literally*; you should at least try to take them in their broad sense.

        See, Asimov's laws were intended for a fictitious kind of robot with something called the "positronic" brain.

        But, if you thi
        • I would say it's more a combination of common sense and reading ability. It's like the families of people who ignore "DANGER OF DEATH 20,000V" signs on substations and then complain that more could have been done. Of course, the 9ft fence with barbed wire on top wasn't a deterrent.

          Walking into an area with operating, unguarded machines is a bad idea, be they belt sanders or hydraulic lifting arms. There would almost certainly have been a warning sign, so it's really the guy's own fault for not following procedure.
    • I don't see how this is any different from a normal industrial accident with something like a sheet metal press.

      It isn't, and the robot in question had fewer automated safety features than your average modern metal press.

      There's no need to invoke Asimov's laws for something which has less AI than an automatic door. Even a few sensors linked to a cutout switch could have prevented the accident. Something like this: http://gsfctechnology.gsfc.nasa.gov/FeaturedRobot.html [nasa.gov] could even have prevented the accident and allowed the robot to continue working.
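      A toy version of that "few sensors linked to a cutout switch" idea, as a sketch only. The names are invented, and a real installation would use safety-rated relays rather than software:

          def power_permitted(sensor_readings):
              """Allow arm power only if every presence sensor reads clear (False).
              A failed sensor reads None, so it fails safe and cuts power too."""
              return all(reading is False for reading in sensor_readings)

          # Light curtain clear, floor mat clear, but the gate switch is tripped:
          print(power_permitted([False, False, True]))   # False -> the arm stops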

      • Even a few sensors linked to a cutout switch could have prevented the accident.

        Maybe the sensor was on the gate which he bypassed by climbing a fence.

      • In the grand scheme of industrial equipment, the robot doesn't sound all that dangerous. It was surrounded by a safety fence that takes a serious effort to climb over. I've seen plenty of machines that simply have a sign saying not to put your hand in while the machine is in operation. And yet people have still stuck their hands in there and lost fingers.

        Industrial equipment does not stop instantly. Sensors that trigger a stop may prevent some incidents, but not all. No level of technology, not even say
      • Yes, neither Asimov's laws nor a real AI process would have been required in this case, but the ideas raised in this article are real: we are seeing more and more robotic or computerized machines interacting with people in the 'outside world', and we need to think clearly about how those will be programmed.

        A lot of people here seem to be of the opinion that it's 'not a robot' unless it has an actual Turing-level AI, but I disagree. I think a 'robot' can be defined as a machine that performs tasks without direc
    • Not to mention that this "robot" hardly compares with an Asimov robot. This thing just moved in a very specific sequence according to its programming; it did not have AI or think or even react dynamically to its environment.

      For a robot even to have a chance of being programmed with the three laws, it has to have AI and be able to "think", because the three laws are such abstractions. They are not simple laws like the law of gravity, where you just plug in numbers.
      • This is why you don't let journalists design industrial robots. Even if we had the capability to create AI able to follow the three laws, putting it into a robot would dramatically increase the costs involved; making every single welding robot in a car plant self-aware would most likely be a quick way into bankruptcy.
    • Obviously some safety precautions were in place, as the guy had to "climb over a safety fence" to get to the robot. But there should have been some other fail-safe measure to sense non-robot objects in the work area and cancel operation (floor sensor, light sensor, etc.), thus avoiding the potential problem of accidentally switching the robot on while doing maintenance within the danger zone.

      The three laws of robotics will only apply when... well... when WE actually apply them, not the robots.
    • Amen to that. Referring to the summary only, the robot in question is a machine, that's all. The fact that it's a robot is not particularly material. The fact that it's an industrial machine that can kill you if you don't use it properly is what's important.

      -h-
    • The person seems to me to be a possible candidate for a Darwin Award.
      There's a reason there are safety fences around those machines.
    • Neither would this have happened if the maintenance tech had followed procedure and just switched the damned thing off. I don't see how this is any different from a normal industrial accident with something like a sheet metal press

      You are so right; the only difference is:

      1. A manufacturer slapped the "robot" label on a piece of industrial equipment.
      2. A worker fails to observe safety procedure and is killed in an industrial machine accident.
      3. Author, failing to grasp difference between "sci-fi robot" and modern, in
    • by iendedi ( 687301 ) on Monday June 12, 2006 @03:53AM (#15515548) Journal
      This gruesome industrial accident would not have happened in a world in which robot behavior was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer"

      Neither would this have happened if the maintenance tech had followed procedure and just switched the damned thing off. I don't see how this is any different from a normal industrial accident with something like a sheet metal press.
      Oh come on, the submitter is on to something here. The manufacturer of the robotic manufacturing equipment most definitely should have encoded the three laws into their manufacturing robots. It couldn't be too hard, right?

      Here, I'll show you... Where did I put my wrench?
    • Bad interface design. I will guarantee you that the reason he didn't switch it off was one of:
      • He needed to test something with the power on
      • He would have had to reboot the robot
      • Getting in through the safety gate was hard
      • Someone told him it was safe to enter (that the product feed was inactive)
      I've been smacked by robots a couple of times, and while the ones we have are simple lift-and-put robots that don't hit too hard, it still hurts, and if one hit you the wrong way it could probably do some serious damage. For the reasons why it's happened to me, see above.
    • This would never have happened if humans had their own personal First Law which was "Protect your own existence". Oh wait. I think we already do have that. Well, I guess this human was just programmed incorrectly.
  • Operator Error (Score:5, Insightful)

    by romanval ( 556418 ) on Sunday June 11, 2006 @10:59PM (#15514859)
    The robot didn't actively kill him; it just wasn't programmed to know whether a person was there or not. It's like stepping into a giant blender without turning it off. There isn't much morality to worry about.
    • Well, more like being smacked into a giant blender by a hydraulic arm that you were stupid enough not to turn off first. Anyway, I don't think we can really call something without AI a robot, and most automated processes aren't thinking on their own. Doing, yes, but not thinking.
      • Re:Operator Error (Score:3, Insightful)

        by x2A ( 858210 )
        "I don't think we can really call something without AI a robot"

        Well, it does fit the dictionary definition, although I do actually agree; to me this is just "a machine". The term 'robot' does have at least some kind of awareness-process-respond connotation in my mind and many other people's, and it would be nice to have some proper differentiation. But perhaps another word, as the roots behind the word 'robot' ("forced labor") hardly conjure the best images either.

  • Christ, not again. (Score:5, Insightful)

    by rk ( 6314 ) * on Sunday June 11, 2006 @10:59PM (#15514860) Journal

    Whenever robots come up, why do people trot out Asimov's Laws of Robotics like they're holy writ? He created those laws and then wrote a book's worth of short stories (read: FICTION) showing their pitfalls.

    For anyone who thinks they're a great idea, I'd also like to see your working prototype code and design docs.

    • Really, if only those stupid engineers had typed "thou shalt not kill" into the code.
    • by Aussie ( 10167 )
      Asimov's Laws of Robotics

      More accurately, John W. Campbell's [wikipedia.org] laws.

      "Asimov attributes the Three Laws to John W. Campbell from a conversation which took place on December 23, 1940. However, Campbell claims that Asimov had the Laws already in his mind, and they simply needed to be stated explicitly"
    • Same reason people always bring up Moore's Law whenever processor speeds come up.
    • Ignoring the distasteful nature of the question for a minute, suppose "robot" were a synonym for "slave". Would you trust even very intelligent humans to be able to follow the three laws as written, even if they actually desired to do so? They require nearly omniscient comprehension of the effects of one's actions -- how can you know that you have to refuse to drive to the mall and pick up three cans of tomato sauce, because if you don't you'll be in a car wreck with a little old lady and break rule 1?

      R

    • He created those laws and then wrote a book's worth of short stories (read: FICTION) showing their pitfalls.

      He could have saved much time and gone with the alternate version of the three laws as depicted in Short Circuit:

      1) Do not disassemble.
      2) Robots are alive and self-aware.
      3) Steve Guttenberg is not funny.
  • I for one (Score:5, Funny)

    by Digitus1337 ( 671442 ) <lk_digitus@h[ ]ail.com ['otm' in gap]> on Sunday June 11, 2006 @10:59PM (#15514861) Homepage
    ...am for guidelines to govern the actions of our new robot overlords.
  • Wrong kind of robots (Score:5, Interesting)

    by QuantumG ( 50515 ) <qg@biodome.org> on Sunday June 11, 2006 @11:00PM (#15514863) Homepage Journal
    Asimov's rules were always applied to intelligent robots. No-one (to my knowledge) has ever suggested that a hammer should have a sensor to recognise if it is hitting a nail or a thumb and refuse to obey the "command" of its operator if it is targeting the latter. The purpose of Asimov's three rules was to prevent himself from falling into the trap of writing yet another Frankenstein story. That said, I believe there are some proponents of handgun biometrics who believe guns should override the commands of their operators if the operator is not authorized to use them. In the future you may not be able to (legally) purchase a handgun that will fire on a human being.
    • Asimov's rules were always applied to intelligent robots. No-one (to my knowledge) has ever suggested that a hammer should have a sensor to recognise if it is hitting a nail or a thumb and refuse to obey the "command" of its operator if it is targeting the latter.

      Allow me to be the first to suggest it. The idea occurred to me earlier today, a few minutes after the throbbing pain in my thumb subsided.
    • In the future you may not be able to (legally) purchase a handgun that will fire on a human being.

      What use would handguns have then? Other than getting basketballs off the roof and turning off lights? :)

      Wow. Suddenly disturbing to think how many handguns are out there, and that the reason behind almost every purchase was "in case I need (want?) to shoot another person."

  • Aaargh (Score:2, Insightful)

    This gruesome industrial accident would not have happened in a world in which robot behavior was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer.

    First the robots would have to be able to understand Asimov's laws and have situational awareness in order to follow them.

    Even if that were possible today, how much do you think it would cost to implement in something like an industrial robot performing a single, repetitive task? Perhaps some simple safety sensors woul
  • Well, not really. The guy was known for his snide and sarcastic remarks towards the machine, and for using the wrong power supply to underpower the robot on purpose. If I were in the robot's... err... shoes, I'd do exactly the same, given the opportunity.
  • What moral issue (Score:5, Insightful)

    by JanneM ( 7445 ) on Sunday June 11, 2006 @11:02PM (#15514874) Homepage
    What's the moral issue with sex robots? It would be just another sex toy. Has there ever been a technology some inventive human has not adapted for self-gratification?

    I'd venture that it would in fact not even be all that good a sex toy; it would be limited to being human-like, with human-like capabilities, unlike the classic simple, cheap, but far more versatile toys sold today.

    • It could have many interchangeable parts, as well as being extremely flexible. Also, it could kill you while you slept with its cold metal hands.
    • What's the moral issue with sex robots? It would be just another sex toy.

      If the sex robot could pass the Turing Test, at least within the boundaries of its design, I would argue that it should be treated as human.

      • Re:What moral issue (Score:3, Interesting)

        by 1u3hr ( 530656 )
        If the sex robot could pass the Turing Test, at least within the boundaries of its design, I would argue that it should be treated as human.

        I'd be wary of a Turing sex test.

        In 1952, Alan Turing was convicted of acts of gross indecency after admitting to a sexual relationship with a man in Manchester. He was placed on probation and required to undergo hormone therapy. When he died in 1954, an inquest found that he had committed suicide by eating an apple laced with cyanide.

  • He was a dumbass. (Score:2, Insightful)

    by DAldredge ( 2353 )
    Not the robot's fault - the idiot didn't turn it off correctly. The same thing would happen if one were working on the pipes at a chemical factory without shutting them down first.

  • Am I the only one who feels sick every time some idiot reporter trots out those damned "three laws"? They're not laws of anything, and they're completely unimplementable in a real system. Asimov invented them to explore the consequences of what would seem like simple and obviously desirable rules for robots, but that had, in fact, disastrous consequences once the robots got capable enough to really apply them.

    In classic fiction, runaway robots are almost always analogies for runaway social constructs -- g
    • I know this is Slashdot, but please RTFA before whining too much. From the article: "So where does this leave Asimov's Three Laws of Robotics? They were a narrative device, and were never actually meant to work in the real world, says Dr Whitby. Quite apart from the fact that the laws require the robot to have some form of human-like intelligence, which robots still lack, the laws themselves don't actually work very well. Indeed, Asimov repeatedly knocked them down in his robot stories, showing time and ag
  • These machines are not 'robots' in the classical sense, but mere automated machines. A robot has some semblance of intelligence, and can adjust to the environment. These things take part A, and put it in slot B. A preprogrammed set of movements.

    Should there be some sensors to detect a foreign body, and stop if necessary? Sure.
    But in no way could they make a value judgement, as in "Save the human, and sacrifice the dog."
  • Talking about Asimov's laws, as the article even states, is crap. No robots work like that. Just making a robot recognize an object as a human is a major achievement, and forget about making it think like a human so that it can follow the laws. The Three Laws are anachronisms, like the Jungles of Venus, or headgear with radiator fins.

    What the Japanese generally do is fence off the robot's work area so that people can't just walk into its path. It's a simple solution that works. If a worker climbs over the s
  • by hcg50a ( 690062 ) on Sunday June 11, 2006 @11:09PM (#15514904) Journal

    This gruesome industrial accident would not have happened in a world in which robot behavior was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer.

    The machine that accidentally killed the person is not capable of following the 3 laws of robotics. It was like a train hitting someone on the tracks -- someone in the wrong place at the wrong time.

    The three laws require sophisticated sensors and very sophisticated processing, the likes of which I have not seen in any computer yet.
  • It's not really about safety, it's about stupidity. If a crazy kid goes swimming with a killer whale and gets eaten, it's stupidity; but some nutter ignores safety rules around heavy machinery and gets killed, and it's the machine's fault? WTF?

    There are no sentient robots capable of coping with, never mind adhering to, Asimov's Laws of Robotics.

    In the words of Mr. White: You can't fix stupid!
  • Did anyone else immediately think of this [amazon.com]?

    By the way, if you've never had to bear the pain of being required to read this book for a class, consider yourself lucky. If this book looks like interesting reading to you, might I recommend grinding your foot off with a Dremel instead? It will be less painful, and you'll sustain less permanent damage.
  • It was just trying to be helpful and protect the guy from The Terrible Secret of Space...


  • Is sex really such a big concern? I would rather that people who want to have sex with children have sex with robots.

    As to security, well, I have seen a man lose both of his hands to a paper-cutting press (the kind that is used to cut a foot-thick stack of paper). That press could not have 'known' that it was cutting someone's hands rather than paper. Are we going to put AI into all tools, so that our drills won't drill a human skull and staplers won't staple through human skin and electrical ba
    • Is sex really such a big concern? I would rather that people who want to have sex with children have sex with robots.

      Yup. I couldn't agree more.

      It's scary to think that people who claim that permitting people to have sex with a robot shaped like a child is an ethical issue are attempting to control the debate over robot ethics.

      Then again, many of us live in a country where people get jail time for drawing cartoons of sexualized children... so I suppose I shouldn't be surprised. There's no limit to p

  • "Security, safety and sex are the big concerns," says Henrik Christensen, chairman of the European Robotics Network at the Swedish Royal Institute of Technology in Stockholm, and one of the organisers of the new robo-ethics group. Should robots that are strong enough or heavy enough to crush people be allowed into homes?

    Not if they're sexbots! I know what you're thinking: "Fat robots need lovin' too. The Crushinator can stop by my place anytime." But there's a real health risk involved if you turn the wro
  • Yours Truly 2095 (Score:4, Interesting)

    by Leomania ( 137289 ) on Sunday June 11, 2006 @11:24PM (#15514959) Homepage
    and the moral problems posed by sexbots

    Whoa, transport me back to when E.L.O.'s "Time" album came out (Yikes! 1981) and the song "Yours Truly 2095":

    I met someone who looks a lot like you
    She does the things you do
    But she is an IBM.

    But I digress (before I was ever on topic)... there won't be any moral dilemma for this crowd. The first sexbots will be programmed for "No Geeks" which will only increase their allure for that very crowd. They'll be hacked to remove that restriction, and while they're at it they'll be programmed to hang out at retirement homes, PTA meetings and church services. That'll pretty much doom them to be recalled, pulled from the market, and there'll be only a few remaining examples in the Smithsonian and certain institutions of higher learning for, ummm, "research".

    Remember, you read it here first.
  • by IronicCheese ( 412484 ) on Sunday June 11, 2006 @11:32PM (#15514991)
    To adhere to Asimov's rules of robotics requires that the robot be capable of executing those instructions, and we're nowhere near having machines with the Artificial Intelligence necessary to do that.

    Manufacturing robots are sophisticated, but they're really more properly thought of as "Automatons" in this context, not robots in the Asimovian sense.

    Tragic that this fellow died, but no more of a failing than a farmhand who falls into a thresher.

    It does suggest that these industrial machines might have more safeties on them than they currently do, though.
    • ...but they're really more properly thought of as "Automatons"...

      We've already lost the war on "data", but I'm not going to give up on "automata" without a fight! Other than that, I'm with you.
  • Old Glory (Score:3, Funny)

    by Nick Driver ( 238034 ) on Sunday June 11, 2006 @11:34PM (#15514998)
    Looks like we'd better start preparing for the inevitable [robotcombat.com] and get some robot insurance.
  • by xtal ( 49134 ) on Monday June 12, 2006 @12:02AM (#15515088)
    The laws are a joke. Robots that kill people are here now, and they're only going to get smarter. The reason is simple: UAVs are nice, but they are always vulnerable to ECM jamming attacks, especially at close range against a moderately sophisticated enemy. The way you counter this is by letting the UAV make the final decision to attack or flee.

    You tell me which is more likely to happen: the UAV is never programmed to make the decision to attack, or the military accepts the possibility of some collateral losses.

    Hint: Some automated defense systems on ships already make these decisions without human intervention.

  • by Kell Bengal ( 711123 ) on Monday June 12, 2006 @12:10AM (#15515113)


    I'm a post-grad student working on a robot helicopter. It has extremely fast rotor blades and is a very real threat to humans if mishandled, so I can speak from personal experience in working on safety-critical robot systems. To me, robot safety is largely the same problem as machine safety in general, plus the problems particular to robots.

    Firstly, all potentially dangerous machines require correct operation to avoid injury. No one can stop an idiot from ignoring the safety railing of a machine, automatic or robotic. To expect safety after defeating barriers and interlocks is stupid for microwave ovens and toasters, let alone high-energy robotic systems. To expect robots to be safe outside their defined operating parameters is like expecting a car to be made of sponge, so that no matter how much you ignore the speed limit, you can't kill anyone.

    Secondly, robots seem to face a higher demand for intrinsic safety because of the expectation of robot cognition. The reality is, this is the area of robotics where the technology is least developed. How can people expect a robot to implement the three laws if it cannot flawlessly recognise a human as human? Furthermore, the three laws make no sense for a system that generally works far removed from humans. Putting the sensors and intelligence into a factory robot that should never encounter a human in its powered-up state is just stupid. A simple barrier or laser curtain is more than adequate as an interlock, but as we've seen, that doesn't keep humans out all the time. The best the industrial roboticist can practically do is build robot systems that are reliable and stay within their work envelopes.

    For mobile systems like my helicopter it becomes more difficult, since you can't control the workspace - cognition bites you in the arse once again. However, the reality of robot-human safety is that dangerous robots working around humans simply should not be autonomous without direct supervision. We are decades away from machines that are autonomously safe around humans. Software is brittle and easy to confuse no matter how well coded it is - you just can't capture all of the edge cases in the real world when you have millions of possible states. Don't imagine robot helicopters flying around people without a monkey in control - it just won't happen.
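    (To make the work-envelope point concrete, a minimal sketch. The envelope box, the supervisor link, and all names here are invented for illustration:)

        def flight_step(pos, envelope, supervisor_link_ok):
            """One watchdog iteration for a mobile robot: abort unless the vehicle
            is inside its approved box AND a human supervisor is in the loop."""
            x, y, z = pos
            x_max, y_max, z_max = envelope
            inside = 0 <= x <= x_max and 0 <= y <= y_max and 0 <= z <= z_max
            if not inside or not supervisor_link_ok:
                return "ABORT_AND_LAND"   # brittle software should fail toward safety
            return "CONTINUE"

        print(flight_step((2.0, 3.0, 1.5), (10.0, 10.0, 5.0), supervisor_link_ok=True))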

    It seems to me that people need to change their idea of robots away from R2-D2 and towards reality. Treat industrial robots like any piece of industrial equipment - with respect. The same idiots who jump the fence of a robot workcell are probably the same idiots who misuse power tools and ignore safety directives. You just can't stop idiots from earning Darwin Awards. Seriously, it's not hard to stay outside the yellow tape.

    Take your three laws and return them to science fiction, from which they came - they belong to the same realm of fantasy as FTL travel. Which is to say: maybe one day, but not for a long time.

  • Not at all true!! (Score:3, Insightful)

    by baudbarf ( 451398 ) on Monday June 12, 2006 @12:41AM (#15515199) Homepage
    If that "robot" had been programmed to do no harm to a human, it still would have killed him, because it was INCAPABLE of sensing his presence. I rule this to be involuntary (even unnoticed) manslaughter.
  • by Animats ( 122034 ) on Monday June 12, 2006 @02:44AM (#15515430) Homepage
    As the head of a DARPA Grand Challenge team [overbot.com] last time around, I was seriously worried about this. We had to field test the thing, which was a worrisome exercise. In the early phases, we operated entirely in a big fenced parking lot, totally isolated from anybody. But later we had to take the vehicle into more accessible areas. We had very conservative algorithms on the LIDAR processing (which is why our vehicle tended to stop and rescan too much at the Grand Challenge), a radar system as backup, and an industrial-grade radio emergency stop system. And liability insurance.

    The next DARPA Grand Challenge requires operating in congested areas, and that's going to require serious work on robot vehicle safety. The way this is going, those things are going to be rolling through small towns in hostile territory in a few years, and they'd better not be running over little kids.
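    (A loose sketch of that belt-and-braces arrangement: primary sensor, backup veto, and a hardware e-stop that overrides both. The names are invented; the real Overbot stack was far more involved:)

        def drive_command(lidar_clear, radar_clear, estop_pressed):
            if estop_pressed:
                return "STOP"              # the radio e-stop always wins
            if not (lidar_clear and radar_clear):
                return "STOP_AND_RESCAN"   # conservative: any sensor doubt halts
            return "GO"

        print(drive_command(lidar_clear=True, radar_clear=False, estop_pressed=False))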

  • by SomethingOrOther ( 521702 ) on Monday June 12, 2006 @04:16AM (#15515587) Homepage

    Yes, but this robot obeyed the zeroth law of robotics:

    A robot must not harm humanity or, through inaction, allow humanity to come to harm.

    By eliminating this fuckwit from the gene pool, the robot has truly done humanity a great service.

  • by karlandtanya ( 601084 ) on Monday June 12, 2006 @09:34AM (#15516424)
    The author is sloppy.
    (S)he's casually throwing together three separate fields of safety:
    industrial robotics, consumer product safety, and android morality (Asimov's robots are androids, not just robots).

    With respect to the particular incident reported, I suspect the synopsis in the article is as sloppy as the rest of the article.
    Did the engineer really violate safety procedure? Did his boss or the Japanese work ethic give him a choice? Google karoshi and guolaosi.

    If an engineer violates safety procedures and gets killed, publish his experience at the next safety meeting.
    Too f---ing bad. I will not cry for a guy who violates safety procedure and gets hurt. For his family, sure--it's not their fault Dad is an idiot.
    And if it was karoshi, then the hazard the employee was exposed to was the work culture. Compensation for families of karoshi victims is available today (but not in 1981).

    There are safety standards used to protect people from robots, and they work, but you have to follow them.
    Lockout/Tagout (really lockout; nobody uses tagout anymore)
    Avoidance of exposure--passive perimeter guarding (fences); active perimeter guarding (light screens, LASER fences, floor mats, etc.)
    Operator load interlocks--when the operator has to load a robot, you design so that only one (operator/robot) can be in the load station at a time.
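    (A toy model of that load interlock, purely for illustration: the operator's door and the robot's entry permission are mutually exclusive. Every name here is invented:)

        class LoadStation:
            def __init__(self):
                self.occupant = None                  # None, "operator", or "robot"

            def request_entry(self, who):
                if self.occupant is None:
                    self.occupant = who
                    return True
                return False                          # station busy: entry denied

            def leave(self):
                self.occupant = None

        station = LoadStation()
        assert station.request_entry("operator")      # operator admitted
        assert not station.request_entry("robot")     # robot locked out meanwhile
        station.leave()
        assert station.request_entry("robot")         # now the robot may enter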

    • I can give you a light screen around the robot and you can jumper it out.
    • I can build you a safety fence and you can climb over it.
    • I can put a roof over the safety fence (yes, it's been done!) and you'll just unbolt one of the fence sections.
    • I can give you a teach pendant with a deadman switch (sorry, "active motion enable device"), and you can hand it to the electrician while you ride the robot.
    If you're determined to kill yourself, I can't stop you.
    And if you do, your recent co-workers will all grimace when we see the pictures in next week's safety meeting.
    But we won't have any sympathy for you.

    This gruesome industrial accident would not have happened in a world in which robot behaviour was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer.
    That's not what the 3 laws are about. The three laws are moral values, not machine code.
    They have nothing to do with protecting a person from a machine and everything to do with implementing morality in a created race of sentient beings.
    If you haven't read Asimov's robot stories, you should know that most of them revolve around the unexpected consequences of the three laws and the danger of rigid legalistic interpretation of moral codes.

    Finally, you gotta love this one: "People are going to be having sex with robots in the next five years."
    The author needs to work on his verb tense. And that is better handled by consumer product safety procedures, not industrial robot safety protocols.
