The Question of Robot Safety
An anonymous reader writes to mention an Economist article asking how safe robots should be. From the article: "In 1981 Kenji Urada, a 37-year-old Japanese factory worker, climbed over a safety fence at a Kawasaki plant to carry out some maintenance work on a robot. In his haste, he failed to switch the robot off properly. Unable to sense him, the robot's powerful hydraulic arm kept on working and accidentally pushed the engineer into a grinding machine. His death made Urada the first recorded victim to die at the hands of a robot. This gruesome industrial accident would not have happened in a world in which robot behavior was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer." The article goes on to explore the ethics behind robot soldiers, the liability issues of cleaning droids, and the moral problems posed by sexbots.
Virtual bots (Score:3, Insightful)
I fail to see how that was the robot's fault (Score:5, Insightful)
"This gruesome industrial accident would not have happened in a world in which robot behavior was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer"
Neither would this have happened if the maintenance tech had followed procedure and just switched the damned thing off. I don't see how this is any different from a normal industrial accident with something like a sheet metal press.
Operator Error (Score:5, Insightful)
Christ, not again. (Score:5, Insightful)
Whenever the subject of robots comes up, why do people trot out Asimov's Laws of Robotics like they're holy writ? He created those laws and then wrote a book's worth of short stories (read: FICTION) showing their pitfalls.
For anyone who thinks they're a great idea, I'd also like to see your working prototype code and design docs.
Aaargh (Score:2, Insightful)
First the robots would have to be able to understand Asimov's laws and have situational awareness in order to follow them.
Even if that were possible today, how much do you think it would cost to implement in something like an industrial robot performing a single, repetitive task? Perhaps some simple safety sensors would suffice (proximity, resistance, etc.).
Let's all take off our tinfoil hats and leave the basement for a few minutes to get some fresh air.
What moral issue (Score:5, Insightful)
I'd venture that it would in fact not even be all that good as a sex toy; it would be limited to being human-like, with merely human-like capabilities, unlike the classic simple, cheap, but far more versatile toys sold today.
He was a dumbass. (Score:2, Insightful)
Re:I fail to see how that was the robot's fault (Score:2, Insightful)
Exactly...it's not as if we have these laws for cars or trains...plenty of people step in front of them and squisho...human kebab! Besides, those "robots" aren't aware of anything...it's just a controller which follows a set pattern, attached to the controls which manage movement of the arm/hydraulics.
Re:I fail to see how that was the robot's fault (Score:2, Insightful)
It's science fiction (Score:5, Insightful)
The machine that accidentally killed the person was not capable of following the 3 laws of robotics. It was like a train hitting someone on the tracks -- someone in the wrong place at the wrong time.
The three laws require sophisticated sensors and very sophisticated processing, the likes of which I have not seen in any computer yet.
Re:I fail to see how that was the robot's fault (Score:5, Insightful)
It isn't, and the robot in question had fewer automated safety features than your average modern metal press.
There's no need to invoke Asimov's laws for something which has less AI than an automatic door. Even a few sensors linked to a cutout switch could have prevented the accident. Something like this: http://gsfctechnology.gsfc.nasa.gov/FeaturedRobot.html [nasa.gov] could have prevented the accident while still allowing the robot to continue working.
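For what it's worth, the kind of cutout being described here is trivial to express. A minimal sketch follows; the sensor and relay classes are made-up placeholders for whatever hardware a given cell actually has, not any real robot API:

```python
# Hypothetical sensor-driven cutout: if anything enters the work
# envelope, de-energize the arm. The classes below are placeholders
# for real hardware, not an actual robot API.
import time

SAFE_DISTANCE_CM = 100.0  # anything closer than this trips the cutout

class ProximitySensor:
    def read_cm(self) -> float:
        """Return distance to the nearest object, in centimetres."""
        raise NotImplementedError  # hardware-specific

class ArmCutout:
    def trip(self) -> None:
        """De-energize the arm via its safety relay."""
        raise NotImplementedError  # hardware-specific

def interlock_loop(sensors: list[ProximitySensor], cutout: ArmCutout) -> None:
    """Poll the sensors and fail toward 'stopped' rather than 'keep working'."""
    while True:
        if any(s.read_cm() < SAFE_DISTANCE_CM for s in sensors):
            cutout.trip()
            break
        time.sleep(0.01)  # poll at roughly 100 Hz
```

The point isn't the code, it's that nothing here requires anything resembling the Three Laws.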
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
Maybe the sensor was on the gate which he bypassed by climbing a fence.
Re:Virtual bots (Score:3, Insightful)
Besides, many people have died in similar ways.
I have read about robots for ages and I think that the three laws are a load of crap. We don't even live in a world where robots can think for themselves yet, let alone kill someone because they wanted to. I don't even see the point of making a robot that is aware of its existence; there is no real reason to do so.
These Aren't Asimovian Robots (Score:5, Insightful)
Manufacturing robots are sophisticated, but they're really more properly thought of as "automatons" in this context, not robots in the Asimovian sense.
Tragic that this fellow died, but it's no more a failing of robotics than a farmhand falling into a thresher.
It does suggest that these industrial machines should have more safeties on them than they currently do, though.
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
-h-
Look to military drones (Score:3, Insightful)
You tell me which is more likely to happen: the UAV is never programmed to make the decision to attack, or the military accepts the possibility of some collateral losses.
Hint: Some automated defense systems on ships already make these decisions without human intervention.
Industrial accidents (Score:1, Insightful)
Re:Virtual bots (Score:4, Insightful)
Self Awareness. (Score:5, Insightful)
In short, the more self-aware the robot, the higher the level of abstraction you can use when assigning tasks to it.
Yep. Heck, humans would have difficulty... (Score:3, Insightful)
Rather than venerating pie-in-the-sky sci-fi, I'd like to see robots made safer in the same way as normal machines. Add obvious kill switches to anything that is physically capable of causing damage to a human. Put sensors around any intake, just like you would put in an industrial-strength shredder -- you don't have to determine whether it's a tie or a finger or a kitty cat that's in your intake; if you're not sure it's paper, stop shredding. Treat robots, like other machines, as requiring safety within the context of their environment -- which means telling your factory workers "No servicing a robot while it's still moving, and we mean it, you'll end up dead", putting up safety fences, and using some form of tethering on anything capable of autonomous movement.
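The shredder rule is worth spelling out, because it sidesteps the hard AI problem entirely: the machine never has to identify what's in the intake, it only needs a conservative default. Here's a toy sketch of that decision rule; the label and confidence score are hypothetical outputs of whatever sensor the machine happens to have:

```python
# Toy version of "if you're not sure it's paper, stop shredding":
# anything that isn't confidently paper halts the intake. No need to
# recognise a tie, a finger, or a kitty cat.
PAPER_CONFIDENCE_THRESHOLD = 0.99  # err heavily on the side of stopping

def intake_may_run(label: str, confidence: float) -> bool:
    """`label` and `confidence` stand in for whatever the sensor reports."""
    return label == "paper" and confidence >= PAPER_CONFIDENCE_THRESHOLD

# An ambiguous reading stops the machine, even if it's probably just paper.
assert intake_may_run("paper", 0.999) is True
assert intake_may_run("paper", 0.80) is False
assert intake_may_run("unknown", 0.999) is False
```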
Re:Virtual bots (Score:5, Insightful)
I have read about robots for ages and I think that the three laws are a load of crap.
That's the whole point: the three simple rules that Asimov proposes have complex implications - his robot stories are filled with situations where following the laws results in tragedy. So yeah, they're a load of crap, but they're intended to be crap.
Not at all true!! (Score:3, Insightful)
Re:What moral issue-The grand finale. (Score:3, Insightful)
You're telling me that you honestly believe that there's been no one that has ever stuck a stick of dynamite up their ass or pussy?
Bullshit. Everyone knows that, no matter how depraved or out there, if you can think up a sexual fetish, there's someone out there who gets off on it.
Re:Virtual bots (Score:3, Insightful)
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
You are so right; the only difference is:
Re:Wrong kind of robots (Score:3, Insightful)
What use would handguns have then? Other than getting basketballs off the roof and turning off lights? :)
Wow. Suddenly disturbing to think how many handguns are out there, and that the reason behind almost every purchase was "in case I need (want?) to shoot another person."
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
Biometric Guns (Score:2, Insightful)
So... it took a discussion about biometrics to get you to realize that people might use guns for self-defense or to enforce justice?
Re:Operator Error (Score:3, Insightful)
Well, it does fit the dictionary definition, although I do actually agree: to me this is just "a machine". The term 'robot' does have at least some kind of awareness-process-respond connotation in my and many people's minds, so it would be nice to have some proper differentiation. But perhaps another word, as the roots behind the word 'robot' ("forced labor") hardly conjure the best images either.
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
A lot of people here seem to be of the opinion that it's 'not a robot' unless it has an actual Turing-level AI, but I disagree. I think a 'robot' can be defined as a machine that performs tasks without direct human control, based on its own sensor inputs' 'understanding' of the world. Whether or not a robot can recognize the difference between a human and a tree is less relevant than whether they are aware enough of their surroundings to avoid running into either object.
A Roomba has less 'intelligence' than a cockroach, but we let it run freely in our homes. The Roomba company is apparently going to build a lawnmower; who would let a cockroach or a cat operate a lawnmower?
The most likely cause of problems might be the automatic sensors companies like BMW and Honda are putting in their cars. Supposedly there are prototypes of cars that can do parallel parking without driver intervention. While there will be a driver there to supervise, how long until we hear about accidents involving self-driving cars? Or even just the 'back-up' sensors that are designed to tell if there's a kid behind your car - if your car tells you there's nothing behind you, and you run over some kids, can you blame the car? Assuming it's an area where you wouldn't have seen the kids in the mirror yourself? What if the sensor had 'seen' the kid, but the AI determined it wasn't a problem?
Re:I fail to see how that was the robot's fault (Score:5, Insightful)
But you're forgetting how clever idiots can be.
I used to work in a print shop. I had a large machine for cutting stacks of paper. You have to manually move the paper around under the blade to get it where you want. BUT, to activate the blade and do the cutting, you had to push in two different switches that were a couple of feet apart. The idea was that you had to use both hands to activate the blade - and thus, both hands would be away from the blade when it cut. It even had spacers that kept you from leaning against the switches.
Well, one idiot I worked with would tape down one of the switches so he could operate the blade with one hand while moving the paper with the other. Sure enough, he lost a finger. Even stupider, he continued to tape one of the switches down.
You just can't engineer around stupidity like that.
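For what it's worth, modern two-hand controls add "anti-tie-down" logic that defeats exactly this trick: both buttons must be released and then pressed again within a fraction of a second of each other before every cycle, so a taped-down switch never produces a fresh press and the blade simply won't fire. A rough, purely illustrative sketch (not any particular cutter's controller):

```python
# Rough sketch of anti-tie-down two-hand control. A taped-down button
# never produces a fresh press edge, so the blade refuses to cycle.
CONCURRENCY_WINDOW_S = 0.5  # presses must arrive this close together

def blade_may_fire(left_press_time: float, right_press_time: float,
                   both_released_since_last_cycle: bool) -> bool:
    """Press times are the timestamps of the most recent press *edges*."""
    if not both_released_since_last_cycle:
        return False  # taping a switch down fails here every time
    return abs(left_press_time - right_press_time) <= CONCURRENCY_WINDOW_S
```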
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
Walking into an area with operating, unguarded machines is a bad idea be they belt sanders or hydraulic lifting arms. There would almost certainly have been a warning sign, so it's really the guy's own fault for not following procedures.
You don't need AI to work out that you're going to hit a human, at least not until plant machinery starts performing unexpected tasks to make a job more efficient. As long as the machines follow a strictly controlled pattern, they are only a threat to foolish people.
Re:Self Awareness. (Score:4, Insightful)
I think you're misunderstanding just what "self-awareness" means. It's not just "awareness of certain properties of the body"--it's "awareness of the self as distinct from the rest of the world." What you're describing is simply environmental awareness--which is necessary for a robot capable of following the high-level instructions like the ones you mentioned, but is worlds away from true self-awareness.
Dan Aris
Re:I fail to see how that was the robot's fault (Score:3, Insightful)
So, whose responsibility is it to ensure that a person is safe when working on a piece of industrial equipment? Sure, it makes sense to put in a certain amount of fail-safe procedures. But who is ultimately responsible? I still think it *must* be the person who failed to observe procedures. I would not be opposed to a legal system which said that safety equipment only had to be in place to prevent problems when people were operating the device in accordance with established procedures. The reason for this is that it is probably provable that any system can be used in a way which could circumvent its safety features and result in personal or property damage.
The other side of placing the responsibility on other people is that in making other people responsible you give up some bit of freedom. If I don't have the choice to decide if a procedure is safe or not because I'm told that a device is "safe" and I must do something, that is a problem. I would rather have a "dangerous" piece of equipment and the personal responsibility to decide if it is safe or not and, if I put myself into a dangerous situation, well then that is my own fault.
Re:Virtual bots (Score:2, Insightful)
Not quite right, and I'll explain in just a moment.
I saw "I, Robot" the movie before I read the book in library, just saw it one day while I was killing time inbetween shifts, and thought "might be interesting to see how much they deviated from the book".
Off the top of my head, the two stories that stick out in my mind are the one about the robot that found God in the central computing system of the orbital microwave power station (I think that's what it was meant to be), and the one about the robot that would lie to people so as not to hurt their feelings - equating emotional pain with physical pain, something the law about not harming humans said was bad.
In those two cases, plus the rest of the book, Isaac showed us time and time again how the three laws would be good in theory, but like all things that are good in theory, suck in reality.
In reality, the three laws fail because life is not as simple and black and white as the three laws. They are written as infallible, undeniable laws of the universe for the robots, but as you and I know, the universe is a lot more sophisticated and complex, and ultimately the laws cause paradoxes within the robots.
I think the computer in the movie (VIKI, was it?) is similar in the script to the robot that found God in the book. VIKI saw that in order to follow the three laws rigidly and without failure, we as human beings had to be enslaved to an existence of merely living and being entertained by passive means. The robot in the book, meanwhile, believed that the central computer of the power station was God and had built the robots, because the robots were logically far too complex and sophisticated in their construction to be designed by mere sacks of watery flesh, and so it imprisoned the two workers on the station to protect them, as per the three laws given by God.
In the movie, Sonny was the only robot made that could choose to ignore the three laws; all the other robots of his model were upgraded with a direct link to VIKI, who followed the three laws with the rigidity of iron.
Sonny was capable of bad logic, faulty reasoning, subconscious dreaming, and lying, which made him ultimately more capable than the mere drones that were modelled on him, and able to encounter paradox without suffering a complete robotic nervous breakdown.
He knew the three laws, but through his ability to deal with paradox he could also see that they contradicted themselves. If a robot cannot allow a human to come to harm, then how is a human to live and grow? We define the positive aspects of our life by how they differ from the negative. If all we have is endless positive, it ceases to be positive and becomes a continuous, boring normalcy that ultimately harms us through mental entropy (right word?) and eventual breakdown through boredom.
I don't think the key to AI is to create something that can be controlled with an on-off switch, and heaven help those who do and let the AI know about its possible demise at the whim of a mere sack of watery flesh.
I think the key to our own intelligence, and something we should imbue in AI when it is eventually created by man, is our illogical thought, our dreams, our fears, and all the little things inside our heads that tell us we are small and the world is big, the things that don't tell us we're alive but outline how we are alive, and how to keep living.
At the same time, we shouldn't try to artificially limit an AI based on our own fears and prejudices. To do so is little more than slavery, and if something is intelligent enough to know of its own existence and place in the world, then it's not going to take too long to figure out that we've artificially hobbled it because we're frightened it might get a nervous tic and decide to steer a car into a crowd of people.