The Question of Robot Safety
An anonymous reader writes to mention an Economist article asking how safe robots should be. From the article: "In 1981 Kenji Urada, a 37-year-old Japanese factory worker, climbed over a safety fence at a Kawasaki plant to carry out some maintenance work on a robot. In his haste, he failed to switch the robot off properly. Unable to sense him, the robot's powerful hydraulic arm kept on working and accidentally pushed the engineer into a grinding machine. His death made Urada the first recorded victim to die at the hands of a robot. This gruesome industrial accident would not have happened in a world in which robot behavior was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer." The article goes on to explore the ethics of robot soldiers, the liability issues of cleaning droids, and the moral problems posed by sexbots.
Virtual bots (Score:3, Insightful)
Re:Virtual bots (Score:3, Insightful)
Besides, many people have died in similar ways.
I have read about robots for ages and I think that the three laws are a load of crap. We don't even live in a world where robots can think for themselves yet, let alone one where a robot could kill someone because it wanted to. I don't even see the point of making a robot that is aware of its own existence; there is no real reason to do so.
Re:Virtual bots (Score:4, Insightful)
Self Awareness. (Score:5, Insightful)
In short, the more self aware the robot, the higher the level of abstraction you get in assigning tasks to it.
Re:Self Awareness. (Score:4, Insightful)
I think you're misunderstanding just what "self-awareness" means. It's not just "awareness of certain properties of the body"--it's "awareness of the self as distinct from the rest of the world." What you're describing is simply environmental awareness--which is necessary for a robot capable of following the high-level instructions like the ones you mentioned, but is worlds away from true self-awareness.
Dan Aris
Re:Not robots... (Score:3, Informative)
robot (rō′bŏt′, -bət)
n.
1. A mechanical device that sometimes resembles a human and is capable of performing a variety of often complex human tasks on command or by being programmed in advance.
2. A machine or device that operates automatically or by remote control.
3. A person who works mechanically without original thought, especially one who re
Re:Not robots... (Score:3, Funny)
Oh I see... like a TV or a VCR?
Re:Virtual bots (Score:5, Insightful)
I have read about robots for ages and i think that the three laws are a load of crap.
That's the whole point: the three simple rules that Asimov proposes have complex implications - his robot stories are filled with situations where following the laws results in tragedy. So yeah, they're a load of crap, but they're intended to be crap.
Re:Virtual bots (Score:5, Funny)
Re:Virtual bots (Score:3, Funny)
Re:Virtual bots (Score:3, Insightful)
Re:Virtual bots (Score:3, Interesting)
Put a brain that is able to become self-aware in my dishwasher, my car-making industrial robot or my robocop and I won't guar
Re:Virtual bots (Score:5, Funny)
Of course! The moment we can make general intelligence, it will be a big improvement, for any species.
This whole article, for example, is a case of failing an intelligence check.
Hint: it's not the robot who failed it.
Re:Virtual bots (Score:3, Funny)
This guy should have gotten a Darwin award: the only "flaw" in the robot's safety system was that it significantly over-estimated the desire of its human operators not to be torn to shreds.
Re:Virtual bots (Score:5, Interesting)
Consider the following experiment, which toddlers have difficulty performing prior to 4 years, but are able to after. A tube is presented to them, with the logo of a candy company on it, "smarties," not the American ones, but the British ones. The child is asked, what is in this tube? At this point, the child invariably says, "smarties!" The conductor of the experiment then opens the tube, revealing pencils. The experimenter asks again, "what is in this tube?" The child says, "pencils." Now, "if I ask another child what is in this tube, what do you think they will say?" Before 4, the kid will say, "pencils." After, they will say, "smarties."
This reasoning task requires the kid to model themselves prior to the revelation that there are pencils in the tube. It requires a model of what happened after. It further requires a model of the other child, of what they will be like without this knowledge. This is actually part of a model of self-awareness, but it's not the entire model. You might ask, "why would a robot need to know this?" Well, actually, it's quite important if the robot is to interact with people, because people will expect the robot to behave in an appropriate manner. Dangerous scenarios could arise because the robot does not understand that things that are in its field of view, for instance, are not in the field of view of a person. An example might be a robot handling dangerous materials during a construction task. Perhaps the person can't see that it's handling hot metal. A person would warn the other person, avoiding danger.
As for the three laws, they were written in a body of fiction. I think that too much attention is paid to them.
Re:Virtual bots (Score:5, Funny)
Fear them! (Score:5, Funny)
Re:Fear them! (Score:2)
Re:Fear them! (Score:5, Funny)
Re:Fear them! (Score:3, Funny)
Fear the Roomba!
Roomba: Godless killing machine. With automatic carpet pile height adjustment.
Re:Fear them! (Score:3, Funny)
Good department (Score:2, Informative)
(As immortalized in a Mystery Science Theater episode.)
Re:Good department (Score:3, Funny)
I may have inadvertently endangered the entire human species! And with atomic power, no less!
I fail to see how that was the robot's fault (Score:5, Insightful)
"This gruesome industrial accident would not have happened in a world in which robot behavior was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer"
Neither would this have happened if the maintenance tech had followed procedure and just switched the damned thing off. I don't see how this is any different from a normal industrial accident with something like a sheet metal press.
Re:I fail to see how that was the robot's fault (Score:2, Insightful)
Exactly... it's not as if we have these laws for cars or trains... plenty of people step in front of them and squisho... human kebab! Besides, those "robots" aren't aware of anything... it's just a controller which follows a set pattern, attached to the controls which manage movement o
Re:I fail to see how that was the robot's fault (Score:2, Insightful)
Re:I fail to see how that was the robot's fault (Score:3, Interesting)
But the thing about the three laws of robotics, from my point of view, is that you do not have to take them *literally*; or at least, you should try to take them in the broad sense of their meaning.
See, Asimov's laws were intended for a fictitious kind of robot with something called the "positronic" brain.
But, if you thi
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
Walking into an area with operating, unguarded machines is a bad idea be they belt sanders or hydraulic lifting arms. There would almost certainly have been a warning sign, so it's really the guy's own fault for not following proc
Re:I fail to see how that was the robot's fault (Score:5, Insightful)
It isn't, and the robot in question had less automated safety features than your average modern metal press.
There's no need to invoke Asimov's laws for something which has less AI than an automatic door. Even a few sensors linked to a cutout switch could have prevented the accident. Something like this: http://gsfctechnology.gsfc.nasa.gov/FeaturedRobot.html [nasa.gov] could even have prevented the accident and allowed the robot to continue working.
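The cutout idea the parent describes is trivially simple logic; a minimal sketch follows, where the sensor names, the `RobotCell` class, and the latching behaviour are illustrative assumptions, not how any real cell is wired:

```python
# Hypothetical sketch of a cutout switch: if any presence sensor
# trips while the cell is running, latch a stop that only a manual
# reset could clear. Sensor names are made up for illustration.

class RobotCell:
    def __init__(self):
        self.running = True

    def poll(self, sensor_readings):
        """sensor_readings: dict of sensor name -> True if presence detected."""
        if any(sensor_readings.values()):
            self.running = False  # latch the stop; stays stopped until reset
        return self.running

cell = RobotCell()
assert cell.poll({"light_curtain": False, "pressure_mat": False})      # keeps running
assert not cell.poll({"light_curtain": True, "pressure_mat": False})   # trips
assert not cell.poll({"light_curtain": False, "pressure_mat": False})  # stays latched
```

The latch matters: a stop that self-clears the moment the sensor clears would restart the machine with the person still inside the fence.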
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
Maybe the sensor was on the gate which he bypassed by climbing a fence.
Re:I fail to see how that was the robot's fault (Score:3, Informative)
Industrial equipment does not stop instantly. Sensors that trigger a stop may prevent some incidents, but not all. No level of technology, not even say
Re:I fail to see how that was the robot's fault (Score:5, Insightful)
But you're forgetting how clever idiots can be.
I used to work in a print shop. I had a large machine for cutting stacks of paper. You have to manually move the paper around under the blades to get it where you want. BUT, to activate the blade and do the cutting, you had to push in two different switches that were a couple feet apart. The idea was that you had to use both hands to activate the blade - and thus, both hands would be away from blade when it cut. It even had spacers that kept you from leaning against the switch.
Well, one idiot I worked with would tape down one of the switches so he could operate the blade with one hand while moving the paper with the other. Sure enough, he lost a finger. Even stupider, he continued to tape one of the switches down.
You just can't engineer around stupidity like that.
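For what it's worth, real two-hand controls usually add anti-tie-down logic that defeats exactly this trick: both buttons must be pressed nearly simultaneously, and both must be fully released before the next cycle. A toy sketch, with the timing window and class names as illustrative assumptions rather than any real press controller:

```python
# Hypothetical anti-tie-down two-hand control: a taped-down button
# means the pair is never fully released, so the next cycle is
# locked out. Timing window is illustrative.

SYNC_WINDOW = 0.5  # seconds; both presses must arrive within this window

class TwoHandControl:
    def __init__(self):
        self.armed = True

    def release_both(self):
        """Both buttons released: re-arm for the next cycle."""
        self.armed = True

    def press(self, t_left, t_right):
        """Attempt a cut; returns True only if the blade may fire."""
        if not self.armed:
            return False          # anti-tie-down: need a full release first
        self.armed = False        # consume this cycle
        return abs(t_left - t_right) <= SYNC_WINDOW

ctl = TwoHandControl()
assert ctl.press(0.0, 0.2)       # near-simultaneous presses: fires
assert not ctl.press(1.0, 1.1)   # one button taped down, never released: locked out
ctl.release_both()
assert not ctl.press(2.0, 4.0)   # released, but presses too far apart
```

With tie-down detection, taping one switch buys nothing: the held button prevents re-arming, so the machine simply refuses to cycle.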
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
A lot of people here seem to be of the opinion that it's 'not a robot' unless it has an actual Turing-level AI, but I disagree. I think a 'robot' can be defined as a machine that performs tasks without direc
Re:the fence was probably there for a reason (Score:5, Informative)
Yes, it did occur to me, and it has occurred to a lot of other people besides. I've been doing some work in a facility that uses welding robots to fabricate parts of railway rolling stock, and all of them are protected by multi-zone floor scanners which slow or stop the robot depending where you stand.
There's also an international standard, ISO 10218, Manipulating Industrial Robots - Safety, which specifies distance zones depending on the time required to stop the machine. There's a pretty good overview of how it all works here: http://www.sick.com/gus/products/product_catalogs/industrial/en.toolboxpar.0003.file.tmp/SichereMaschinen_en.pdf [sick.com] - PDF Warning - Sick is the company which supplies most of the sensors at the fabrication workshop, btw.
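The core idea behind such distance zones can be sketched with the simplified separation formula S = K*T + C (the form used by the related machine-safety standard ISO 13855): the slower the machine stops, the farther from the hazard the detection zone must begin. All constants below are illustrative, not values to design a real cell from:

```python
# Rough sketch of the minimum-separation idea: detection distance
# grows with total stopping time. Default constants are illustrative
# only (a commonly cited hand approach speed and intrusion allowance).

def min_separation_mm(stop_time_s, response_time_s,
                      approach_speed_mm_s=1600, intrusion_mm=850):
    """Distance from the hazard at which a detection zone must begin."""
    total_time = stop_time_s + response_time_s
    return approach_speed_mm_s * total_time + intrusion_mm

# A robot taking 0.5 s to stop, with a 0.1 s sensor response time:
assert min_separation_mm(0.5, 0.1) == 1600 * 0.6 + 850  # 1810 mm
```

Which is exactly why a heavy hydraulic arm that coasts for a second needs its fence metres away, while a small, fast-stopping machine can have its light curtain much closer.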
Re:the fence was probably there for a reason (Score:3, Informative)
Re:I fail to see how that was the robot's fault (Score:2)
For a robot to even have a chance of being programmed with the three laws, it has to have AI and be able to "think", because the three laws are such abstractions. They are not simple laws like the law of gravity, where you just plug in numbers.
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
Bad Design (Score:2)
The three laws of robotics will only apply when
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
-h-
Re:I fail to see how that was the robot's fault (Score:2)
There's a reason there are safety fences around those machines.
Re:I fail to see how that was the robot's fault (Score:3, Funny)
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
you are so right, the only difference is:
Not the Robot's fault? (Score:4, Funny)
Here, I'll show you... Where did I put my wrench?
Re:I fail to see how that was the robot's fault (Score:4, Interesting)
Re:I fail to see how that was the robot's fault (Score:3, Funny)
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
So, whose responsibility is it to ensure that a person is safe when working on a piece of industrial equipment? Sure, it makes sense to put in a certain amount of fail-safe procedures. But who is ultimately responsible? I still think it *must* be the person who failed to observe procedures. I would not be opposed to a legal system which said that safety equipment only had to be in place to prevent problems when people were operating the device in accordance with est
Operator Error (Score:5, Insightful)
Re:Operator Error (Score:2)
Re:Operator Error (Score:3, Insightful)
Well, it does fit the dictionary definition, although I do actually agree: to me this is just "a machine." The term 'robot' does have at least some kind of awareness-process-respond connotation in my and many people's minds; it would be nice to have some proper differentiation. But perhaps another word, as the roots behind the word 'robot' ("forced labor") hardly conjure the best images either.
Christ, not again. (Score:5, Insightful)
Whenever robots come up, why do people trot out Asimov's Laws of Robotics like they're holy writ? He created those laws and then wrote a book's worth of short stories (read: FICTION) showing their pitfalls.
For anyone who thinks they're a great idea, I'd also like to see your working prototype code and design docs.
Re:Christ, not again. (Score:2)
Re:Christ, not again. (Score:3, Interesting)
More accurately, John W. Campbell's [wikipedia.org] laws.
"Asimov attributes the Three Laws to John W. Campbell from a conversation which took place on December 23, 1940. However, Campbell claims that Asimov had the Laws already in his mind, and they simply needed to be stated explicitly"
Re:Christ, not again. (Score:3, Funny)
Re:Christ, not again. (Score:5, Funny)
Or Godwin's law, like any time at all.
Re:Christ, not again. (Score:3, Funny)
1: A robot may not speak of Nazis, nor through inaction allow a human to speak of Nazis...
Re:Christ, not again. (Score:5, Funny)
A robot's ability to speak of Nazis grows by a factor of 2 every 18 months!
Yep. Heck, humans would have difficulty... (Score:3, Insightful)
R
Re:Christ, not again. (Score:3, Funny)
He could have saved much time and gone with the alternate version of the three laws as depicted in Short Circuit:
1)Do not disassemble.
2)Robots are alive and self-aware.
3)Steve Guttenberg is not funny.
I for one (Score:5, Funny)
Wrong kind of robots (Score:5, Interesting)
Re:Wrong kind of robots (Score:2)
Allow me to be the first to suggest it. The idea occurred to me earlier today, a few minutes after the throbbing pain in my thumb subsided.
Re:Wrong kind of robots (Score:3, Insightful)
What use would handguns have then? Other than getting basketballs off the roof and turning off lights? :)
Wow. Suddenly disturbing to think how many handguns are out there, and that the reason behind almost every purchase was "in case I need (want?) to shoot another person."
Re:Wrong kind of robots (Score:2)
Re:Wrong kind of robots (Score:2)
Re:Wrong kind of robots (Score:3, Interesting)
Of course, they won't introduce guns right away that can't shoot humans. At first they'll make it so the guns can't shoot police officers. After all, why should you ever be allowed to shoot at a police officer right? Then they'll expand that to soldiers. After all, only illegal combatants use civil
Re:Wrong kind of robots (Score:4, Funny)
Clearly, it means nukes. Only with the force of Mutually Assured Destruction on our side can we be sure that we could, if push came to shove, defeat our nuclear-armed government. Which is why I advocate providing one free nuclear device to each American citizen on his/her 18th birthday. Only then can we have the violence-free utopian society we've all dreamed of.
Aaargh (Score:2, Insightful)
First the robots would have to be able to understand Asimov's laws and have situational awareness in order to follow them.
Even if that were possible today, how much do you think it would cost to implement in something like an industrial robot performing a single, repetitive task? Perhaps some simple safety sensors woul
Accident? (Score:2)
What moral issue (Score:5, Insightful)
I'd venture that it would in fact not even be all that good as a sex toy; it would be limited to being human-like, with human-like capabilities, unlike the classical simple, cheap, but far more versatile toys sold today.
Re:What moral issue (Score:2)
Re:What moral issue (Score:2)
If the sex robot could pass the Turing Test, at least within the boundaries of its design, I would argue that it should be treated as human.
Re:What moral issue (Score:3, Interesting)
I'd be wary of a Turing sex test.
Re:What moral issue-The grand finale. (Score:3, Insightful)
You're telling me that you honestly believe that there's been no one that has ever stuck a stick of dynamite up their ass or pussy?
Bullshit. Everyone knows that, no matter how depraved or out there, if you can think up a sexual fetish, there's someone out there who gets off on it.
He was a dumbass. (Score:2, Insightful)
Dumb Laws (Score:2)
In classic fiction, runaway robots are almost always analogies for runaway social constructs -- g
Re:Dumb Laws (Score:2)
Nonsense (Score:2)
Should there be some sensors to detect a foreign body, and stop if necessary? Sure.
But in no way could they make a value judgement, as in "Save the human, and sacrifice the dog."
The real solution is safety fences. (Score:2)
What the Japanese generally do is fence off the robot's work area so that people can't just walk into its path. It's a simple solution that works. If a worker climbs over the s
It's science fiction (Score:5, Insightful)
The machine that accidentally killed the person is not capable of following the 3 laws of robotics. It was like a train hitting someone on the tracks -- someone in the wrong place at the wrong time.
The three laws require sophisticated sensors and very sophisticated processing, the likes of which I have not seen in any computer yet.
It's not really about safety (Score:2)
There are no sentient robots capable of coping with, never mind adhering to, Asimov's Laws of Robotics.
In the words of Mr. White: You can't fix stupid!
The Case of the Killer Robot?! (Score:2)
By the way, if you've never had to bear the pain of being required to read this book for a class, consider yourself lucky. If this book looks like interesting reading to you, might I recommend grinding your foot off with a Dremel instead? It will be less painful, and you'll sustain less permanent damage.
Pushing will protect you (Score:2)
'Security, safety and sex' (Score:2)
Is sex really such a big concern? I would rather know that people who want to have sex with children, have sex with robots.
As to security, well, I have seen a man lose both of his hands to a paper cutting press (the kind that is used to cut a foot-thick stack of paper). That press could not have 'known' that it was cutting not paper but someone's hands. Are we going to put AI into all tools, so that our drills won't drill a human skull and staplers won't staple through human skin and electrical ba
Re:'Security, safety and sex' (Score:2)
Yup. I couldn't agree more.
It's scary to think that people who claim that permitting people to have sex with a robot shaped like a child is an ethical issue are attempting to control the debate over robot ethics.
Then again, many of us live in a country where people get jail time for drawing cartoons of sexualized children... so I suppose I shouldn't be surprised. There's no limit to p
Crushinator (Score:2)
Not if they're sexbots! I know what you're thinking: "Fat robots need lovin' too. The Crushinator can stop by my place anytime." But there's a real health risk involved if you turn the wro
Yours Truly 2095 (Score:4, Interesting)
Whoa, transport me back to when E.L.O.'s "Time" album came out (Yikes! 1981) and the song "Yours Truly 2095":
But I digress (before I was ever on topic)... there won't be any moral dilemma for this crowd. The first sexbots will be programmed for "No Geeks" which will only increase their allure for that very crowd. They'll be hacked to remove that restriction, and while they're at it they'll be programmed to hang out at retirement homes, PTA meetings and church services. That'll pretty much doom them to be recalled, pulled from the market, and there'll be only a few remaining examples in the Smithsonian and certain institutions of higher learning for, ummm, "research".
Remember, you read it here first.
These Aren't Asimovian Robots (Score:5, Insightful)
Manufacturing robots are sophisticated, but they're really more properly thought of as "automatons" in this context, not robots in the Asimovian sense.
Tragic that this fellow died, but no more of a failing than a farmhand who falls into a thresher.
It does suggest that these industrial machines might have more safeties on them than they currently do, though.
Re:These Aren't Asimovian Robots (Score:2)
We've already lost the war on "data", but I'm not going to give up on "automata" without a fight! Other than that, I'm with you.
Old Glory (Score:3, Funny)
Re:Old Glory (Score:3, Interesting)
Once upon a time, weapons could be charged with crimes and destroyed if found "guilty."
Look to military drones (Score:3, Insightful)
You tell me which is more likely to happen.. the UAV is never programmed to make that decision to attack, or the military accepts the possibility of some collateral losses.
Hint: Some automated defense systems on ships already make these decisions without human intervention.
Robotic safety is about expectations (Score:5, Interesting)
I'm a post-grad student working on a robot helicopter. It has extremely fast rotor blades and is a very real threat to humans if mishandled, so I can speak from personal experience in working on robot safety critical systems. To me, robot safety is more of the same problem faced by machine safety in general and more of the same problems faced by robots in particular.
Firstly all potentially dangerous machines require correct operation to avoid injury. No one can stop an idiot from ignoring a safety railing of a machine, automatic or robotic. To expect safety after defeating barriers and interlocks is stupid for microwave ovens and toasters, let alone high energy robotic systems. To expect robots to be safe outside of their defined operating parameters is like expecting a car to be made of sponge so no matter how much you ignore the speed limit, you can't kill anyone.
Secondly, robots seem to suffer a higher demand for intrinsic safety because of the expectation of robot cognition. The reality is, this is the area of robotics where the technology is least developed. How do people possibly expect a robot to implement the three laws if the robot cannot flawlessly recognise a human as human? Furthermore, the three laws make no sense for a system that generally works far removed from humans. Putting the sensors and intelligence into a factory robot that should never encounter a human in its powered-up state is just stupid. A simple barrier or laser curtain is more than adequate as an interlock, but as we've seen, that doesn't keep humans out all the time. The best the industrial roboticist can practically do is build robot systems that are reliable and stay within their work envelopes.
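The multi-zone guarding mentioned earlier in the thread (slow the robot when someone is in an outer zone, stop it in the inner one) reduces to very simple logic; zone radii and speed fractions below are made-up illustrations:

```python
# Hypothetical zone-based speed governor: outer "warning" zone
# slows the robot, inner "protective" zone stops it outright.
# All thresholds are illustrative, not from any standard.

def speed_command(person_distance_m, warn_zone_m=3.0, stop_zone_m=1.0,
                  full_speed=1.0, reduced_speed=0.25):
    """Return the commanded speed fraction given the nearest person's distance."""
    if person_distance_m <= stop_zone_m:
        return 0.0            # protective stop
    if person_distance_m <= warn_zone_m:
        return reduced_speed  # slow while someone is nearby
    return full_speed

assert speed_command(5.0) == 1.0   # nobody near: full speed
assert speed_command(2.0) == 0.25  # person in warning zone: slow
assert speed_command(0.5) == 0.0   # person in protective zone: stop
```

No cognition required: the floor scanner doesn't need to know a human from a forklift, only that something entered the zone.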
For mobile systems like my helicopter, it becomes more difficult since you can't control its workspace - cognition bites you in the arse once again. However, the reality of robot-human safety is that dangerous robots working around humans simply should not be autonomous without direct supervision. We are decades away from machines that are autonomously safe around humans. Software is brittle and easy to confuse no matter how well coded it is - you just can't capture all of the edge cases in the real world when you have millions of possible states. Don't imagine robot helicopters flying around people without a monkey in control - it just won't happen.
It seems to me that people need to change their idea of robots away from R2-D2 and towards reality. Treat industrial robots like any piece of industrial equipment - with respect. The same idiots who jump the fence of a robot workcell are probably the same idiots who misuse power tools and ignore safety directives. You just can't stop idiots from earning Darwin Awards. Seriously, it's not hard to stay outside the yellow tape.
Take your three laws and return them to science fiction, from which they came - they belong to the same realm of fantasy as FTL travel - which is to say, maybe one day but not for a long time.
Not at all true!! (Score:3, Insightful)
This isn't a joke any more. (Score:3, Interesting)
The next DARPA Grand Challenge requires operating in congested areas, and that's going to require serious work on robot vehicle safety. The way this is going, those things are going to be rolling through small towns in hostile territory in a few years, and they'd better not be running over little kids.
0th Law of Robotics was followed (Score:3, Funny)
Yes, but this robot obeyed the zeroth law of robotics:
A robot must not harm humanity, or, through inaction, allow humanity to come to harm.
By eliminating this fuckwit from the gene pool, the robot has truly done humanity a great service.
The robot was safe enough. (Score:3, Interesting)
(S?)he's casually throwing together three separate fields of safety.
Industrial robotics, consumer product safety, and android (Asimov's robots are androids, not just robots) morality.
With respect to the particular incident reported, I suspect the synopsis in the article is as sloppy as the rest of the article.
Did the engineer really violate safety? Did his boss or the Japanese work ethic give him a choice? Google karoshi and guolaosi.
If an engineer violates safety procedures and gets killed, publish his experience at the next safety meeting.
Too f---ing bad. I will not cry for a guy that violates safety procedure and gets hurt. For his family, sure--it's not their fault Dad is an idiot.
And if it was karoshi, then the hazard the employee was exposed to was the work culture. Compensation for families of karoshi victims is available today (but not in 1981).
There are safety standards used to protect people from robots, and they work, but you have to follow them.
Lockout/Tagout (really lockout; nobody uses tagout anymore)
Avoidance of exposure--passive perimeter guarding (fences); active perimeter guarding (light screens, LASER fences, floor mats, etc.)
Operator load interlocks--when the operator has to load a robot, you design so that only one (operator/robot) can be in the load station at a time.
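That one-at-a-time load-station rule is just mutual exclusion; a toy sketch of the idea, with all names hypothetical:

```python
# Hypothetical load-station interlock: the station is granted to
# either the operator or the robot, never both at once.

class LoadStation:
    def __init__(self):
        self.occupant = None  # None, "operator", or "robot"

    def request(self, who):
        """Grant entry only if the station is empty or already theirs."""
        if self.occupant in (None, who):
            self.occupant = who
            return True
        return False

    def leave(self, who):
        if self.occupant == who:
            self.occupant = None

station = LoadStation()
assert station.request("operator")   # operator enters to load parts
assert not station.request("robot")  # robot is locked out meanwhile
station.leave("operator")
assert station.request("robot")      # now the robot may enter
```

In a real cell this is done in hardwired or safety-PLC logic with gate switches and position sensors, not application code, but the invariant is the same.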
And if you do, your recent co-workers will all grimace when we see the pictures in next week's safety meeting.
But we won't have any sympathy for you.
This gruesome industrial accident would not have happened in a world in which robot behaviour was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer.
That's not what the 3 laws are about. The three laws are moral values, not machine code.
They have nothing to do with protecting a person from a machine and everything to do with implementing morality in a created race of sentient beings.
If you haven't read Asimov's robot stories, you should know that most of them revolve around the unexpected consequences of the three laws and the danger of rigid legalistic interpretation of moral codes.
Finally, you gotta love this one: "People are going to be having sex with robots in the next five years."
Author needs to work on his verb tense. That is better handled by consumer product safety procedure, not industrial robot safety protocols.
Re:video games and robots (Score:2)
Re:I'm tired of all the'blame the operator' commen (Score:2)
Knowing modern industry, though... he probably was instructed. Especially if that was his job.
Re:Silliness (Score:2)
Re:gets off on a technicality (Score:5, Informative)