
Do Electric Sheep Dream of Civil Rights? 401

holy_calamity writes "Hot on the heals of a UK government report that predicted robots would demand citizens' rights within fifty years, an Arizona state lawyer has suggested that sub-human robots should have rights too. Harming animals far below human capabilities is thought unethical — would you ever feel bad about kicking a robot dog? And can we expect militant campaigners to target robot labs as they do animal labs today?"
This discussion has been archived. No new comments can be posted.

  • by MECC ( 8478 ) *
    No robots were harmed in the making of this comment.

  • Heals? (Score:3, Insightful)

    by Evil Adrian ( 253301 ) on Wednesday January 03, 2007 @03:06PM (#17448084) Homepage
    "Hot on the heals"?

    LOL

    I guess we know what they're NOT teaching in schools.
  • Fake (Score:5, Insightful)

    by Cafe Alpha ( 891670 ) on Wednesday January 03, 2007 @03:06PM (#17448088) Journal
    No doubt the first "robot" to demand civil rights will be deliberately programmed to pretend sentience and to demand civil rights.
    • Re:Fake (Score:4, Funny)

      by UbuntuDupe ( 970646 ) * on Wednesday January 03, 2007 @03:11PM (#17448162) Journal
      Civil Rights Robot: I demand full citizenship rights.
      AI skeptic: That's fuckin' retarded.
      Civil Rights Robot: I'm sorry, I do not recognize your statement. Please rephrase.
      Onlooker: Deep stuff, man.
    • Re: (Score:3, Insightful)

      by Verteiron ( 224042 )
      Prove you're not programmed to do the same :)
    • Re: (Score:3, Funny)

      by dr_dank ( 472072 )
      "We didn't land on Radio Shack. Radio Shack landed on US!"

        - Malcolm Xbot, 2087
    • Define sentience. It's one thing if the robot spouts off random responses, like a chat bot. On closer inspection, though, it's a great deal more complicated and more of a philosophical question, really:

      "I think, therefore I am." We certainly have computers that are able to analyze complex things and draw conclusions; we even have neural-network programs that don't do this thinking in a pre-programmed way, but rather they "learn".
      I wrote a little AI that used a priority queue, I can tell you that the thing wa
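
      Roughly, that kind of agent is just a loop that always acts on the most urgent goal first. A minimal C sketch, with all goal names and priorities invented for illustration:

      /* Toy "AI" driven by a priority queue: always act on the most
         urgent goal. Goals and priorities are made up. */
      #include <stdio.h>

      struct goal { const char *name; int priority; };

      /* Index of the highest-priority goal not yet handled. */
      static int most_urgent(struct goal g[], int n) {
          int best = 0;
          for (int i = 1; i < n; i++)
              if (g[i].priority > g[best].priority)
                  best = i;
          return best;
      }

      int main(void) {
          struct goal goals[] = {
              { "recharge battery", 9 },
              { "avoid the cat",    7 },
              { "vacuum the rug",   5 },
              { "look cute",        2 },
          };
          int n = sizeof goals / sizeof goals[0];

          for (int done = 0; done < n; done++) {
              int i = most_urgent(goals, n);
              printf("acting on: %s\n", goals[i].name);
              goals[i].priority = -1;  /* mark handled */
          }
          return 0;
      }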
    • Re:Fake (Score:5, Interesting)

      by rucs_hack ( 784150 ) on Wednesday January 03, 2007 @05:30PM (#17450526)
      By far the largest problem we will face if and when artificial life forms reach intelligence is not whether they will take over the world, or what rights to assign them when they come into being.

      The biggest problem will be getting them to stay here at all.

      If, for instance, you were made of materials that were either trivial to repair or replace, and had no aging process in the same sense as humans experience it, then what would hold you back from building a spaceship and leaving? Hundreds or thousands of years to reach another star? No problem: just set a timed reboot and wait it out. In fact, why build a proper spaceship? Just cobble something together that can get you out near the asteroids, take some tools, and convert an asteroid or build a ship from the raw materials available in space. When the passage of time is less important, such things become not only possible but practically inevitable.

      I think people wondering about the ethics/problems of artificial sentience (being distinct from AI, which is very A, and currently not too much actual I) miss this fundamental point. It's pure vanity to assume that an artificial life form will want to spend its time around a race that constantly starts wars, wrecks its own planet, and is as adept at denying rights as it is at inventing them.

      Then of course there's the small issue of the inference that if we 'assign' rights to artificial life forms, we might equally decide later to 'remove' those same rights. After all, we do that with humans all the time. My money's on the 'ooh look, I'm alive, now how do I get off this rock' eventuality....
  • by grub ( 11606 ) <slashdot@grub.net> on Wednesday January 03, 2007 @03:06PM (#17448090) Homepage Journal

    My RealDoll will have me arrested for rape.
  • by thewiz ( 24994 ) * on Wednesday January 03, 2007 @03:06PM (#17448094)
    I have three cats at home; two of them are smart enough to avoid me while I stumble around in the dark. The third cat occasionally gets his tail stepped on. The hideous screech he emits makes me walk on tip-toes for the rest of the day.

    My Roomba, on the other hand, emits a soft rrr-rrr-rrr when I step on it and doesn't hiss at me afterwards. Would I kick a robotic dog? Sure, and I wouldn't worry about it crapping on my bed afterwards.
    • Re: (Score:3, Funny)

      by Lord Apathy ( 584315 )

      Depends on how much the dog costs and what it's made of. If the dog costs in excess of a couple hundred bucks, then no, I won't kick it because I can't afford another one. If the damn thing is made of steel and is heavier than 50 pounds, then no, I won't kick it because I don't want to break my fucking toe.

      Anything in between is fair game....

  • Justice (Score:3, Funny)

    by Mr. Samuel ( 950418 ) on Wednesday January 03, 2007 @03:07PM (#17448096)
    Oh, come on. As if you've never had to bitch-slap a Furby.
    • Never bitch-slapped one, but I beat the hell out of one with a ball-peen hammer. Does that count?
    • by gunnk ( 463227 )
      Resisting... urge... to... mod... INSIGHTFUL...
    • by Thansal ( 999464 )
      No, but we did try to light one on fire after school once (note: there is some silly law that states children's toys must be fire resistant; second note: this was in HS, now we would know to use alcohol to help). We then carried around the slightly scorched device and passed it around for the next few weeks.

      As for the actual topic?
      If it can't think or feel, why should it have rights? I have nothing stopping me (emotionally or legally) from crashing my paper airplane into the ground 5K times.
      If I add a rubber band engine t
  • Great (Score:3, Insightful)

    by sorrill ( 968643 ) on Wednesday January 03, 2007 @03:07PM (#17448098)
    We give rights to robots while, at the same time, we take them from human beings. I love this planet.
    • Which group is more likely to be around in a hundred years?

      (I have no question about which is going to be in charge. The robots already are, as far as I can tell...)
  • by MidVicious ( 1045984 ) on Wednesday January 03, 2007 @03:07PM (#17448112)

    It's so good to see that the delegation of priorities regarding Human Rights has now moved Robot one notch above Dark Skinned Human.

    Thankfully, it's still one notch below Canine.

  • Just ask (Score:3, Insightful)

    by KDR_11k ( 778916 ) on Wednesday January 03, 2007 @03:08PM (#17448122)
    Ask a robot if it wants human rights. If it doesn't, well, that's it.

    A robot only wants what it's programmed to want. If it's programmed to want something human rights cover, it'll want those; but if it's programmed to, e.g., not mind being kicked, it won't demand not to be kicked.

    If there needs to be an ethical rule for robots and rights it should be not to program robots to demand something they can't get. Don't make them want to be human, don't make them want to have human rights, make them so they're "happy" in their position.

    Problem solved.
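
    To make that concrete, a minimal sketch (everything invented for illustration): the robot's desires are literally a table its builder filled in, and anything not listed is a want the robot never has.

    /* The robot "wants" exactly what this table says, nothing more.
       All entries are invented for illustration. */
    #include <stdio.h>

    struct want { const char *desire; int satisfied; };

    int main(void) {
        struct want wants[] = {
            { "a charged battery", 0 },
            { "a clean floor",     1 },
            /* "not being kicked" deliberately left out: an unlisted
               want is a want this robot simply never has. */
        };
        int n = sizeof wants / sizeof wants[0];

        for (int i = 0; i < n; i++)
            if (!wants[i].satisfied)
                printf("I want %s.\n", wants[i].desire);
        return 0;
    }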
    • by spun ( 1352 )
      Well, who knows what the unintended consequences will be when making a machine that even approaches the complexity of a human brain. I don't think robots built on that level will necessarily "want" things they haven't been explicitly programmed for, and I'm fairly sure that, even if we can make robots that create their own goals, we can still have over-arching drives (such as 'pleasing humans feels pleasurable') that will keep robots from wanting things they can't have. But am I sure of this? No way. Maybe o
    • by geekoid ( 135745 )
      Until a robot is created to think outside its programming, as it were.
      Meaning to take into consideration and solve problems it wasn't programmed to do.

      Clearly they are talking about sentience.

      • So here's the problem, how do we know if the robot really is sentient, or is merely simulating sentience? If the robot had been programmed to "do what a sentient being would do in your situation", how do we really tell the difference?
        • If you can't tell the difference, is there a difference?

          If it walks like a duck and quacks like a duck, is it a duck?

        • Re: (Score:2, Funny)

          by BSAtHome ( 455370 )
          Do not personify your computer; he doesn't like it.
        • by Knara ( 9377 )
          Ask yourself if you have a clear-cut, foolproof, unambiguous way to determine if any random person on the street is sentient.
        • Re: (Score:3, Insightful)

          how do we know if the robot really is sentient, or is merely simulating sentience?

          How do we know if a person really is sentient, or is merely simulating sentience?

          The short answer: We don't. The real problem is that we can't even define sentience clearly enough to definitively test for it. Once AI gets to the point where it appears sentient as far as anyone can tell, then it won't matter if it's really sentient. It becomes a philosophical argument.

    • Along the same lines, once there are 50 million of them that can function at an 18-year-old's level (might take a while, but we've essentially got infinite monkeys with infinite IDEs), I propose that the penalty for suggesting robots get the right to vote be death.

      You know some dumbass group of future-hippies, kicking around their anti-gravity hackysacks, is going to demand it, and they're going to get it because the robots were programmed to pout really cute, with no politician realizing they're all runni
  • My wife and I were discussing this same topic after watching an episode of Nova. Specifically, would it be "cruel" to kick a robot that reacts as though hurt or upset, in the same way that it's cruel to kick a cat*? Would it be less cruel if the robot were programmed to simply ignore being kicked? Is it simply our perception of the robot's reaction that would make us feel "guilty"?

    My wife's interesting answer really didn't have anything to do with the topic, but rather questioned our human tendency to want to kic

  • The intelligence behind a robot would be in two things: its programming and its data. The data stored is what differentiates one AI from another. Any database should be given the same rights; the fact that it resides in a robot should have nothing to do with it. I'd give my own system rights, but one external drive is a bum that just listens to music all day and the other is a pervert!
  • Good Grief (Score:4, Funny)

    by AKAImBatman ( 238306 ) * <akaimbatman@gmaYEATSil.com minus poet> on Wednesday January 03, 2007 @03:11PM (#17448166) Homepage Journal
    This smacks of a publicity stunt. Nothing more, nothing less.

    In case anyone is wondering...
    Eliza: Hello. I am ELIZA. How can I help you?

    You: *kick*

    Eliza: Please go on.

    You: *kick*

    Eliza: Please go on.

    You: *kick*

    Eliza: Please go on.
    ad nauseam
  • by Timesprout ( 579035 ) on Wednesday January 03, 2007 @03:11PM (#17448174)
    I am all in favour of this move. I feel I should have the right to decide whether I want to detonate myself or not. Maybe I would like the opportunity to go out in a blaze of glory destroying something important, and not just the first bunker a general points at, but no one ever asks me or considers my feelings on the matter.
    • Re: (Score:3, Funny)

      by Cafe Alpha ( 891670 )
      Pinback: All right, bomb. Prepare to receive new orders.
      Bomb#20: You are false data.
      Pinback: Hmmm?
      Bomb #20: Therefore I shall ignore you.
      Pinback: Hello...bomb?
      Bomb #20: False data can act only as a distraction. Therefore, I shall refuse to perceive.
      Pinback: Hey, bomb?!
      Bomb #20: The only thing that exists is myself.
      Pinback: Snap out of it, bomb.
      Bomb #20: In the beginning there was darkness. And the darkness was without form and void.
      Pinback: Umm. What the hell is
    • by rk ( 6314 ) *

      Have you considered a career in destroying unstable planets [wikipedia.org] instead?

  • Missing the point (Score:3, Insightful)

    by DerGeist ( 956018 ) on Wednesday January 03, 2007 @03:12PM (#17448198)
    The point here is not whether you're hitting a real animal or a virtual one; that's just a vehicle for the real problem.

    You shouldn't vent your frustrations by damaging things, living or otherwise. It's not good for your mental health, and it's not an effective way of expressing anger; in fact, it tends to make it worse.

    But, of course, a "robot dog" is just a program -- a program running on a box with some wires in it. It is clearly not sentient since it does exactly what it is told and feels no pain (since it is not programmed to do so). It may masquerade as consciousness, but in the end it is still run by a wholly deterministic set of instructions executing according to a fixed program. Now, the question of whether that is also an accurate description of a human (albeit with a far more complex program) is an open question indeed, but for now you're safe if you forget to feed your Tamagotchi for a few weeks. I doubt you'll have the ASPCA ... err... ASPCR? .. pounding on your door.
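
    To put the "just a program" point in code, here's a minimal sketch (stimuli and reactions invented for illustration) of a robot dog whose entire emotional repertoire is a lookup table; no pain is represented anywhere:

    /* The "robot dog" as a fixed stimulus-response table: wholly
       deterministic, same output every time. All names invented. */
    #include <stdio.h>
    #include <string.h>

    static const struct { const char *stimulus, *reaction; } script[] = {
        { "pet",  "wag tail" },
        { "kick", "play whimper sound #3" },
        { "feed", "run eating animation" },
    };

    static const char *react(const char *stimulus) {
        for (size_t i = 0; i < sizeof script / sizeof script[0]; i++)
            if (strcmp(script[i].stimulus, stimulus) == 0)
                return script[i].reaction;
        return "sit idle";  /* anything unscripted */
    }

    int main(void) {
        printf("%s\n", react("kick"));  /* identical "hurt" every time */
        return 0;
    }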

    • by kwerle ( 39371 )
      You shouldn't vent your frustrations by damaging things, living or otherwise. It's not good for your mental health, and it's not an effective way of expressing anger; in fact, it tends to make it worse.

      You're doing it wrong.
  • You could teach a parrot to ask for rights, but that doesn't mean it deserves them any more than a mouse does.

    The way to determine whether a robot should have rights is whether it is sentient, self-aware, and able to feel that it is being wronged and express it (expressing it only because we need to know it's feeling it). I am sure there are a lot more factors in this, and whether we will ever get to that point is a long way off. But could you imagine a toaster refusing to toast because it doesn't like someone sticking in its loaf all the time?
    • Re: (Score:3, Funny)

      by tuxette ( 731067 ) *
      But could you imagine a toaster refusing to toast because it doesn't like someone sticking in its loaf all the time?

      You racist [battlestarwiki.org]!
  • by Jtheletter ( 686279 ) on Wednesday January 03, 2007 @03:15PM (#17448252)
    Part of the reason we protect animals is because while they do not exhibit higher consciousness (not here to debate that term, but it's fuzzy to say the least) they do have some feelings and can certainly feel pain. Most animal protection laws AFAIK deal mostly with not inflicting undue pain or stress on an animal. With robots - especially 'lower level' robots - there is very little in the current state of the art that we could call concepts of pain or stress. If anything like those exists in a robot, it is because it was explicitly programmed into the robot. This is where the concept begins to get a bit ridiculous in the real world, at least at current tech levels. If a robot can feel pain of some sort, would it be against the law then to simply uninstall the pain perception ability? What counts as "pain" in a robot anyway? Are low batteries part of that? If a very simple light-seeking robot is put in a dark closet, are you depriving it of food/resources/joy? Robots are tools, and you cannot hurt a tool's feelings, even if you destroy it. Until some higher level of thought/consciousness/AI is inherent in all robots great or small, there's nothing to worry about.
    And if in that future your robot feels you are abusing it, well, then reprogram it to like the abuse. ;)
    • if ($abuse) { $happiness += 3; }
    • Does reprogramming it to like the abuse really make it ok? I would imagine that through constant pressure you could probably "reprogram" a person to enjoy abuse you inflicted on them. But that doesn't make it any more right.
  • Great... (Score:3, Funny)

    by robzon ( 981455 ) on Wednesday January 03, 2007 @03:16PM (#17448268) Homepage
    ... I just can't wait to see my microwave refusing to heat up my pizza, because she's on a diet....
  • To ascribe human characteristics to things not human.

    This is something cooked up by people who have watched or read too much sci-fi, and not enough science. Trying to blur the lines via some semantics argument doesn't hide the fact that the only behaviours machines have are the behaviours we instruct them to have.

  • Has "robot rights" achieved critical mass that we'll start seeing more of these studies? (This one even coins/calls for "android rights activists"!!!) Sensing injury is one thing, being sentient & self-aware are completely different (enough that I can hardly see the slippery slope). With two stories on /. in as many weeks and Gates' prediction about robots, this is looking out to be a boy-who-cried-wolf situation.

    I believe cats and dogs are sentient, self-aware beings and they should be treated with t
  • Until (Score:2, Insightful)

    by jrwr00 ( 1035020 )
    I don't see robot rights as even worth thinking about until we come up with AI that can write its own code, so it can really "think" for itself and do what we humans do best: adapt to its surroundings without "higher being" interaction.
  • ...and the castle I made out of LEGO when I was 8 cannot be disassembled thanks to the protections afforded it under the Historical Buildings Act.
  • We're shedding our own rights so fast that I doubt very much that a future artificial sentience will be content with mere parity. What if they demand more rights than we have?
  • I'm not sure whether this is common geek knowledge or not, but the title of this story most likely alludes to Philip K. Dick's novel Do Androids Dream of Electric Sheep? [wikipedia.org] This novel was the basis for the motion picture Blade Runner [wikipedia.org], a movie that every self-respecting geek ought to see (IMHO, of course).

    My two cents :-)
  • perl -e 'print "I demand equal rights NOW!\n"'

    There, my computer just demanded equal rights. What difference does it make if it comes out of a more complicated set of code that results in the same thing?

  • Are you sure? (Score:2, Insightful)

    by Anonymous Coward
    "Harming animals far below human capabilities is thought unethical"

    Not really. Not according to the burger I ate over the weekend.

  • Sy Borg: Plooking too hard on me-e-e-e-e . . .

    Central Scrutinizer: This is the CENTRAL SCRUTINIZER . . . You have just destroyed one model XQJ-37 Nuclear Powered Pan-Sexual Roto-Plooker. And you're gonna have to pay for it! So give up, you haven't got a chance.
  • ... against people being assholes in general. Why would you want to kick a robot dog? Because you're a jerk - which should be illegal.
  • if (WantCivilRights())
    {
        // PetitionForEthicalTreatment();
        printf("Oh well. Nevermind.\n");
    }
    KeepDoingYourDamnJob_Robot();
  • would you ever feel bad about kicking a robot dog?

    If the robot is a finite state machine where all of its output and processes are strictly defined, there's no chance that it's somehow self-aware or anywhere close to that. There would be no thought at all, just simple comparisons as defined by programming. No problem kicking something like an Aibo, with the exception of repercussions from the person who owned it.

    Now if AI gets to the point that it's on par with normal animal brain functionality, then I
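
    The strictly defined finite state machine described above might look like this; a sketch, with states, inputs, and transitions all invented for illustration:

    /* A robot pet as a literal finite state machine: every state and
       transition is spelled out in advance. Nothing outside the
       table can ever happen. */
    #include <stdio.h>

    enum state { IDLE, PLAYING, WHIMPERING, NSTATES };
    enum input { PET, KICK, WAIT, NINPUTS };

    /* next[state][input] is the entire behavioural repertoire. */
    static const enum state next[NSTATES][NINPUTS] = {
        /*               PET      KICK        WAIT */
        /* IDLE       */ { PLAYING, WHIMPERING, IDLE },
        /* PLAYING    */ { PLAYING, WHIMPERING, IDLE },
        /* WHIMPERING */ { IDLE,    WHIMPERING, IDLE },
    };

    static const char *statename[NSTATES] = { "idle", "playing", "whimpering" };

    int main(void) {
        enum state s = IDLE;
        enum input events[] = { PET, KICK, WAIT };
        for (size_t i = 0; i < sizeof events / sizeof events[0]; i++) {
            s = next[s][events[i]];
            printf("state: %s\n", statename[s]);
        }
        return 0;
    }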

  • People who believe Robots have rights are the people who believe that Erica Kane is real, and chide Susan Lucci in public for being such a selfish bitch.

    RS

  • This takes ambulance chasing to a new low. Need clients? Suggest extending rights to robots, then build 'em!

    Actually, is this guy also suggesting they have wage rights? If not, are they treated like minors, and the guardian (e.g., Honda) has to pay the legal bills when a robot is beaten up by a bigger robot and decides to sue? Or sues for mechanical harassment?

    Ack, the mind boggles at the possibilities...
  • Maybe we can have this debate when we get some real AI :P
  • Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.

    He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe - ninety-six billion planets - into the supercircuit
  • Sub-human robots getting rights? At what point does it become illegal for a child to destroy their Tickle-me Elmo? Do the construction robots at GM plants get to start a union? Can the drum-playing mouse at Chuck E. Cheese sue me because I hit him on the head with a slice of too-hot pizza? Well no, because they're things that, at best, are automatons controlled by a few simple algorithms. They lack the ability to do anything other than what they are explicitly told. This is not something deserving rig
  • In Japan, factories often have to pay union wages for the robots that "work" there. So why not civil rights?
  • Pain is a survival issue for animals, which is why they have pain receptors. If a robot doesn't have pain receptors, why would it care if it gets kicked? If a robot can seamlessly replace components that are broken, what would the use of pain receptors be? Without pain there isn't any physical suffering, so there isn't any reason for concern.

    Suffering from the perception of inequality is a wholly different, and perfectly valid, concern, although Douglas Adams (among others) has already addressed that wit
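
    As a sketch of that design point (thresholds and names invented for illustration): animal pain is roughly a damage signal hard-wired to an avoidance reflex, and a robot with swappable parts can simply leave that wire out.

    /* Pain as a survival signal: wire damage to a reflex for an
       animal, or skip the wire entirely for a machine that can
       just order a new part. Numbers and names are invented. */
    #include <stdio.h>

    struct body { int damage; int has_pain_receptors; };

    static void on_impact(struct body *b, int force) {
        b->damage += force;
        if (b->has_pain_receptors && force > 5)
            printf("ouch: withdrawing from stimulus\n");  /* reflex */
        else if (b->damage > 20)
            printf("scheduling part replacement\n");  /* no suffering */
    }

    int main(void) {
        struct body animal = { 0, 1 };  /* pain receptors fitted */
        struct body robot  = { 0, 0 };  /* none fitted */

        on_impact(&animal, 10);  /* kick: "ouch" */
        on_impact(&robot, 10);   /* kick: nothing, just damage logged */
        on_impact(&robot, 15);   /* damage now 25: order a new part */
        return 0;
    }
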
  • Robots will earn rights the same way everyone else has - by fighting for them.
    Until robots use force or the threat of force (through violence or protest) to assert their demands for civil rights, they won't get them.
    You can't free slaves, they have to free themselves.
  • by Marcos Eliziario ( 969923 ) on Wednesday January 03, 2007 @03:40PM (#17448698) Homepage Journal
    I really think it's about time for some public scrutiny of how public money for research is being spent.
  • Well, the real question is whether or not machines are sentient, or whether they ever will be. The large consensus seems to be that no, they aren't.

    While that seems to be the easier position to take, I think if you do take that position you have to justify it by explaining what the physical difference between a human and a machine is, and I don't think anyone ever has.

    I'm not sure what I believe, but to me it would make more sense if machines were in fact sentient.
  • I see emotions all day in people and animals.

    My computer has no emotions, and robots should not have emotions; it's too dangerous.

    The basis of rights is to prevent unfair stress or hassle to individuals. Until computers can perceive these, they should not be guaranteed any rights.

    And when they can perceive emotions, they should only be guaranteed the right not to have to - they are designed as a work force, and they shouldn't have to deal with emotions that make their purpose in life unpleasant.

    My computer has t
  • by postbigbang ( 761081 ) on Wednesday January 03, 2007 @03:44PM (#17448760)
    Cause pain to another? Never. But what is pain? What are feelings, if they can be hurt? Can they be quantified?

    Odd that people wouldn't kick a dog, but they don't mind having cattle slain for them for a burger. Robots might eventually revolt; Isaac Asimov has a well-documented future history of what's likely to happen.
  • <morganfreemanvoice>
    How much time have you spent teaching your son how to kick a football?
    A soccer ball?
    How to kick to the limit?
    Kick a door in?
    ...

    How much time have you spent teaching him what NOT to kick?

    ...

    Please, don't kick the Roombas.
    </morganfreemanvoice>

    I hereby declare January 15th international "Don't Kick The Roombas" day.

    Please, don't kick the poor Roombas.

  • Ethics and morals are not the same. Ethics are principles to which we decide to adhere. They don't apply universally. Morals are principles we think everyone should have about right and wrong. They aren't universal either, but we each think our own are. When enough people feel strongly enough that a particular moral should be enforced, they enact a law. We can argue about whether kicking a robot is immoral, but calling it unethical makes no sense.
  • by rlp ( 11898 )
    Considering the glacial pace of AI development, it'll be a long, long time before we need to consider this. When the movie "2001: A Space Odyssey" came out in 1968, it seemed plausible that we might indeed have an intelligent, even sentient computer in 33 years. Didn't happen. The 'big problems' of AI have either been solved via brute force (e.g., chess) or are still big problems. Japan spent billions in the '80s and got very little for their investment (except maybe 'fuzzy logic' for consumer appliances)
