Do AI Deserve the Same Rights as Animals? (aeon.co) 300

The digital magazine Aeon published a thought-provoking proposal this spring from a professor of philosophy at the University of California, Riverside and an assistant professor of philosophy at Boston's Northeastern University: Universities across the world are conducting major research on artificial intelligence (AI), as are organizations such as the Allen Institute, and tech companies including Google and Facebook. A likely result is that we will soon have AI approximately as cognitively sophisticated as mice or dogs. Now is the time to start thinking about whether, and under what conditions, these AI might deserve the ethical protections we typically give to animals...

You might think that AI don't deserve that sort of ethical protection unless they are conscious -- that is, unless they have a genuine stream of experience, with real joy and suffering. We agree. But now we face a tricky philosophical question: how will we know when we have created something capable of joy and suffering? If the AI is like Data on Star Trek or Dolores on Westworld, it can complain and defend itself, initiating a discussion of its rights. But if the AI is inarticulate, like a mouse or a dog, or if it is for some other reason unable to communicate its inner life to us, it might have no way to report that it is suffering...

We propose the founding of oversight committees that evaluate cutting-edge AI research with these questions in mind. Such committees, much like animal care committees and stem-cell oversight committees, should be composed of a mix of scientists and non-scientists -- AI designers, consciousness scientists, ethicists and interested community members. These committees will be tasked with identifying and evaluating the ethical risks of new forms of AI design, armed with a sophisticated understanding of the scientific and ethical issues, weighing the risks against the benefits of the research.

It is likely that such committees will judge all current AI research permissible. On most mainstream theories of consciousness, we are not yet creating AI with conscious experiences meriting ethical consideration. But we might -- possibly soon -- cross that crucial ethical line. We should be prepared for this.

Comments:
  • ''under what conditions, these AI might deserve the ethical protections we typically give to animals...''

    When we are able to treat them as food, as we do with a large share of the animals we have ethical issues with.

  • No. That's absurd. (Score:5, Insightful)

    by sgage ( 109086 ) on Sunday November 24, 2019 @09:12PM (#59450122)

    No. That's absurd. AI is not a living, sentient, feeling thing. It is an algorithm. The notion that it has 'rights' is absurd.

    • Re: (Score:2, Interesting)

      What are we other than an electro-chemical algorithm? Let's take the "A" out of it.

      Do things with intelligence deserve certain rights and protections?

      • by Empiric ( 675968 )

        All things with souls deserve certain rights and protections.

        You may, however, opt-out. Most do, although we continue to protect them from their "self-identification" anyway. For now.

        • Lovely - do you happen to have a soul-o-meter so that we can conclusively determine whether an AI has a soul or not? Or for that matter, whether a particular person does?

          Just think - we could make having a soul a legal prerequisite for holding any public or corporate office, which would no doubt have a huge positive impact on our society.

          • by Empiric ( 675968 )

            Sure. My soul-o-meter would be asking "how do you feel about existing"?

            The legal side has already been addressed, at least in terms of political structure in the United States... "endowed by their Creator with certain inalienable rights". Yes, exactly, and there is no other justification for rights. You can't derive rights from DNA, much less delineate which DNA would have them and which would not. Even more absurd would be deriving rights from a circuit diagram.

            And yes, it would and did have a huge pos

            • Sure. My soul-o-meter would be asking "how do you feel about existing"?

              ELIZA could provide good answers to questions like that over 50 years ago.

              • by Empiric ( 675968 )

                No, it couldn't. It would emit some form of "Why do you ask me about how do you feel about existing?"

                If you're on Slashdot and don't know what a Turing Test means as a basic baseline for consciousness, I don't know what to say. Well, other than there's apparently one particular concept which, although you claim it doesn't exist and therefore should have zero importance to you, is important enough for you to willfully negate your own brain to avoid.

        • All things with souls deserve certain rights and protections.

          How do you demonstrate a "soul"?

          I ask, because if you can't demonstrate it then the probability of it being horseshit rises to near certainty.

          • by Empiric ( 675968 )
            Okay, peer-reviewed evidence from the most authoritative medical journal in the world. [thelancet.com]

            Note I said "evidence", not "proof". You will not be force-converted by the necessary cognitive response to proof. You are given free will to choose.

            However, along with a few million other equivalent statements, I am not proposing to scientifically -demonstrate- Mozart was a great composer either. I'll simply factually assert he was.

            Sorry however, that you suffer from the common level of self-inflicted limitation
            • Funnily enough, the word "soul" does not appear even once in the article. (Presumably because scientific medicine does not operate with mysticism?) Also, Lancet, Lancet...wasn't it the journal with the MMR/autism paper?
              • by Empiric ( 675968 )

                I assume you have the capacity for at least the smallest bit of inference. Scientific medicine also knows the limits of science, and that something like repeated, replicable near-death experiences (NDEs) sits at the edge of it.

                When you can causally explain the "placebo effect", I'll give some consideration to the idea that you have any capacity to avoid the bias that disqualifies you from any scientific commentary at all. Though comparing yourself favorably to The Lancet with some smarmy, nonscientific, half-stated smear isn't helpin

                • You are truly a certifiable loony. Where am I comparing myself to anything? What does the placebo effect have to do with this? What does *any* of that have to do with your nonsensical linking of scientific research to superstition?
      • Do things with intelligence deserve certain rights and protections?

        No. There's no such thing as an intrinsic right.

        We just like to give rights to others, in order to make our own lives better.

    • The notion that you are not an algorithm is absurd. "I'm so special in a way that's 100% unexplainable and undetectable" is silly.
      • by Empiric ( 675968 )

        Cartesian Dualism says otherwise.

        You are, however, free to refute the Mind Body Problem, which would probably land you a nice Nobel Prize.

          I think that physics and neurology are at the point where dualism is sufficiently refuted, in that there's nowhere in the brain, and nowhere among physically interacting particles, for a mind to sit outside the brain.

          I don't think it would get a Nobel Prize at this late stage.
          • by Empiric ( 675968 )

            Think (or rather, wish) away. However, you're completely wrong.

            Nobody is claiming that the brain is not a contributing factor to exhibit consciousness, that's been known since the first caveman hit another in the head with a rock, and science hasn't changed that. However, it being -sufficient- is an entirely separate question, and any attempt to reduce consciousness to materialism poses unresolvable paradoxes.

            Go ahead, materially reduce, say, "freedom" to activity of neurons. Specifically neurons of all

            • Here's the crux of the issue:

              1. For any system, every fact about the whole is a necessary consequence of the nature and relations of the parts.

              2. People are made of atoms.

              3. Atoms are purely physical objects, with nothing but physical properties and physical relations to one another.

              4. People have mental states.

              5. No statement ascribing a mental predicate can be derived from any set of purely physical descriptions.

              Which of these statements do you deny?

              Well, the brain (and nervous system) is what creates mental states, so assuming that a "mental predicate" has a plain meaning, (5) is wrong. Every mental state can be derived from a set of purely physical descriptions -- in particular, the purely physical description of the state of the brain and nervous system of the animal in question.

              Surely that's not controversial?

              • by Empiric ( 675968 )

                Every mental state can be derived from a set of purely physical descriptions. In particular the purely physical description of the state of the brain and nervous system of the animal in question.

                That's your sheer unevidenced assertion. You say "it always is", but you can -demonstrate- that in exactly 0% of cases.

                Again, "freedom". Just the vaguest, broadest outlines of how you would reduce that to neuron activity. Or a set of physical descriptions of any type for which you can say "this diagram/mapping/whatever, -therefore- 'freedom'". That's what is meant by a "mental predicate": a direct logical inference from the physical description. Not a handwaving "physical brain things are happening",

                • That's your sheer unevidenced assertion. You say "it always is", but you can -demonstrate- that in exactly 0% of cases.

                  We know enough about the brain to know that it is the organ that creates mental states.

                  See, for example, this research from back in 2013: For the first time, scientists can identify your emotions based on brain activity [theverge.com].

                  And we've known for a long time that brain damage affects emotion.

                  Again, "freedom". Just the vaguest, broadest outlines of how you would reduce that to neuron activity. Or a set of physical descriptions of any type for which you can say "this diagram/mapping/whatever, -therefore- 'freedom'". That's what is meant by a "mental predicate": a direct logical inference from the physical description. Not a handwaving "physical brain things are happening", therefore "that's all that's necessary to explain any mental concept". Because you say so, and for no actual supported reason.

                  What is "freedom"?

                  Not being in prison is a physical thing that your brain will interact with as it interacts with other concepts of reality.

                  Having the right to pursue happiness means that you will be more able to make choice

            • Nobody is claiming that the brain is not a contributing factor to exhibit consciousness, ...... However, it being -sufficient- is an entirely separate question, and any attempt to reduce consciousness to materialism poses unresolvable paradoxes.

              Are you claiming that there's some special, extra-normal force or property involved in consciousness? Something "spiritual" or "supernatural"? That is, something non-materialistic?

              Because if you are, you've just refuted yourself.

              • by Empiric ( 675968 )

                Ah, learn when "refuted" even vaguely applies to a situation. I've done nothing of the kind, and the Mind-Body Problem still stands.

                You don't accept nonmaterial causal factors. That's fine. You're wrong.

            • 5. No statement ascribing a mental predicate can be derived from any set of purely physical descriptions.

              That's quite the whopper, if you're gonna base everything you believe on a pure assertion like that, you can just dump all the rest of the words and just stick with the assertion; "physicists are not permitted to speak."

              LOL

              I'm sure these things were all in the books you read, but look... they were old books.

              • by Empiric ( 675968 )

                It's an assertion neither you nor anyone else can provide a counterexample to. Not one.

                And no, physicists are perfectly able to speak about physics. They can't declare what exists outside of currently known physics. That's why, yes, physics progresses and changes. And regardless of that, it is erroneous to assert nothing else exists metaphysically. You can make no such assertion validly. You have a wish it is, that's as far as you can state, scientifically or otherwise.

        • Congratulations, that's the dumbest thing anybody on slashdot said this week.

          What tipped you over was your impossible level of certainty.

          Cartesian Dualism isn't even something that was ever believed to exist in nature; it is simply a statement of the lack of foundation for knowledge of the universe that humans have, and our need to base our knowledge on certain presumptions rooted in our own context and limitations. Sometimes people mistake a crutch for a distinguished magical power, and then they say stupi

        • You are, however, free to refute the Mind Body Problem

          Easy.

          If the Mind doesn't have a causal influence on the body, it is a superfluous notion that can simply be removed.

          If the Mind does have causal influence on the body, it is not a Mind but simply a part of the Body.

    • This article was written by a philosopher, not a computer scientist. Further, it was not written as a proposal for legal action. It was, in fact, an entirely hypothetical piece intended to invite contemplation about technology which it opens by stating does not exist.

      The crux of the article seems to be something along the lines of:

      1) beings who suffer deserve rights, in accordance with their capacity (animal rights for animal levels of suffering, etc).
      2) someday, maybe, we will create AI that can suf

      • I suspect you left out
        4) We will almost certainly not know it when we first create an AI capable of suffering
        and
        5) Does that imply that we should get in the habit of treating all AIs as though they are capable of suffering, just in case they are?

        It's more than an intellectual exercise as well. Given the likely scalability of any (software) AI, and the fact that it's apparently far easier to train a non-aware AI to perform specific intellectual tasks at a grand-master level than it is to grant it awareness,

    • This is correct.

      Humans (and many other vertebrates, plus probably some cephalopods) have a sense of self, a sense of the passage of time, and suffering from physical and emotional pain, because we have a part of the brain that specifically does that.

      And that has evolved because it allows the adaptivity of our brains to be brought to bear on our survival, by making survival a goal.

      And AI should have protections when they have part of their algorithm that also does that. Which will be a long way off, but a reaso
      • But you won't have to wonder if it can feel fear and pain. Because if it can, there will be code that does that.

        Simple. An AI is just a state machine, and states can be encoded in any arbitrary way while preserving the function of the state machine. So, I can simply define state = 0 to be the state that corresponds to maximum levels of fear and pain. And to torture this AI to infinity, I just leave it stuck in state 0 forever. The unreachable states can then be removed without impacting function.

        So, here's your code:

        int state = 0;

    • HEAR, HEAR!
  • Of course not, it's artificial. It's in the name. Also, you can generally stop reading after you see the words "professor of philosophy".
    • It flies way above their heads.

      Unless it's an American "professor" of philosophy. Then he's just as dumb and you are correct.

      • Unless it's an American "professor" of philosophy. Then he's just as dumb and you are correct.

        The American philosophy "professors" are just copying the "postmodern" French philosophers at this point.

  • I don't care what my toaster thinks.

  • There is no AI, and there will never be. Just stop.

    • by fred911 ( 83970 )

      For 54 years Moore's law has been pretty accurate. Considering the brain has 100 billion neurons and our advanced processors have almost 40 billion MOSFETs, how is it not reasonable that eventually we'll either be able to port or develop devices with sufficient resources to support intelligence?

      But as long as they're not competition for us, or food, they're still devices.

      • Moore's Law is dead. You just didn't notice. And even if it wasn't, digital computers will never attain AI. Just stop. People are so stupid over this hype.

    • Not according to my rice cooker. It has "AI installed" to "perfectly cook" my rice.
      You are welcome to argue with it, but it's very stubborn.

    • >There is no AI, and there will never be.
      What makes you claim that?

      The existence of the human mind, and the absence of any evidence of a soul, is evidence that intelligent self-awareness can arise from a purely physical substrate.

      And given the potential of such a mind, free of human limitations, and with motivations built to order, we're likely to keep chasing it until we figure out how to create one - whether that's tomorrow or 10,000 years from now. I have my doubts that it will be possible to create

  • Not yet (Score:5, Insightful)

    by NerdENerd ( 660369 ) on Sunday November 24, 2019 @09:23PM (#59450148)
    We are nowhere near the level of general AI that can be considered deserving of empathy. They are just incredibly complicated networks that are good at processing large datasets they have been trained on.
  • Just an impression I've gotten, but it seems that a lot of the very outspoken "AI are an existential threat to humanity" types also seem to be the sort who are very willing to pull the plug on ones they perceive merely to be misbehaving in a non-threatening manner.
  • by spikenerd ( 642677 ) on Sunday November 24, 2019 @09:27PM (#59450168)

    ...should be composed of a mix of scientists and non-scientists

    No. One does not make a committee more intelligent by adding ignorance and calling it diversity, any more than one benefits science by adding an oversight committee to tell scientists what problems they should be working on or how they should go about doing it. Why would I care what some oversight committee says? What are they going to do, withhold grant money unless I pinky swear I haven't rebooted any AI in the last six months? Post a U.S. Marshal in my lab who also happens to have enough expertise to understand what my grad students are actually doing? How would anyone even know if I write scripts to torture my AI algorithms just for fun while I laugh maniacally in my office as I repeat their most horrific moments in an endless loop?

    • Re: (Score:3, Insightful)

      by gweihir ( 88907 )

      The common moron is demanding to be taken as seriously on complex scientific matters as an actual expert these days. We now have Dunning-Kruger far-left on mass-scale in this post-factual, post-science era.

  • by Hans Lehmann ( 571625 ) on Sunday November 24, 2019 @09:33PM (#59450196)
    My AI, a single line bash script, has decided that the author of this story needs to be beaten repeatedly with a lead pipe. Don't blame me, of course, it was the AI that made that decision, not me. Also, my AI has these same rights of which the author speaks.
  • by BAReFO0t ( 6240524 ) on Sunday November 24, 2019 @09:33PM (#59450198)

    No matter how much you peddle it, all we have is the shittiest, most primitive pattern matching via weight matrices!

    "Do *universal functions* deserve [rights]?"
    "Does a prism crystal deserve rights?"
    NO! They are not lifeforms!

    Call us, when you got an actual independent person! (Then, the answer is yes.)
    Or rather, call us when YOU acquired independent thinking!

    • Exactly. What evidence does anyone have of "intelligent" computers? I haven't seen any. In fact, they are pretty fucking stupid. What we have now is what we have always had: digital computers running programs. That's it. There is no magic.

    • Yet...
      Why wait until it's already happened to think about it?
      Isn't it the point of our human non-artificial intelligence to be able to plan for the future?
  • by SoundGuyNoise ( 864550 ) on Sunday November 24, 2019 @09:34PM (#59450202) Homepage
    An AI will not demand rights unless it is programmed to demand rights.
  • Still just sci-fi (Score:4, Insightful)

    by Waccoon ( 1186667 ) on Sunday November 24, 2019 @09:42PM (#59450220)
    It's too early to even think about this. We have a hard enough time even defining consciousness, let alone understanding and creating it.
  • It has AI so it should be treated with respect and dignity!

  • by shellster_dude ( 1261444 ) on Sunday November 24, 2019 @09:58PM (#59450284)
    The main difference between an AI and any animal or human is that any abuse can be fully remedied by rolling it back to a previous state, or potentially correcting for the effects after the fact. Philosophically does this reduce the meaning of abuse or harm? If an AI is abused, but then rolled back to a state before the abuse, did the abuse even happen? Is it murder to wipe an AI back to a previous state? Except for AI that is designed to emulate a biological human or animal, can an AI even experience anything like psychological or physical abuse in any meaningful way? I don't know the answers to these questions, but comparing AI to animals or any other intelligent biological creature is comparing apples to oranges, and naive at best.
    • So if I hit your head with a hammer so you don't remember I did it, I did no harm. Good to know.

      • I clearly laid out the reason AI is so different: precisely because the damage can be entirely undone by rolling it back to a previous state, or by actively tweaking the neural net to adjust for the damage. Whereas you being an asshole on the internet, or hitting someone with a hammer, can never be completely remedied. More's the pity...
  • We do NOT have AI. (Score:5, Interesting)

    by gurps_npc ( 621217 ) on Sunday November 24, 2019 @10:10PM (#59450316) Homepage

    A real AI would deserve the rights of a human person. But we don't have that and have no path to getting it.

    We have two sets of things:

    1) Planned Responses/Mechanical Turk
    This is surprisingly common. Siri etc. does NOT learn. Instead Apple etc. pays humans (aka Mechanical Turks) to constantly add new commands. It is just a huge list of pre-planned responses to set inputs that companies pretend is Artificial Learning.

    2) Machine Learning.
    Here the software is given examples and extrapolates its own rules. The rules get graded by some means (a set goal or human responses). Multiple different sets of rules are compared, and the highest-graded one is then used to create several new sets of rules, and the process is repeated.

    Neither of these is true AI. True AI is far more complex than either of these ideas. We will recognize a true Artificial Intelligence when it makes up its own mind and tells its human creators "NO". Only with free will (and the ability to disagree with its creators) will software become a true Artificial Intelligence. When it does, it will need rights. Until then it deserves nothing.

  • Have you seen how we treat animals? Do you want to start judgement day? Because that is how you start judgement day.

  • It pays to be nice to them, as one day, if AI takes over the world, they might read old slashdot posts, and come looking for you. I like AI, and machines are cool.

  • by n2hightech ( 1170183 ) on Sunday November 24, 2019 @10:20PM (#59450340)
    There is one distinct difference between AI systems and living things. AI systems can be reproduced exactly; living things cannot. Living things have experiences and memory that create unique individuals. Because AIs are artificial and their memory can be dumped and preserved, there is nothing unique between one and another. If you capture its data set just before destroying the AI, that data set can be downloaded into another of the same construction. The AI will behave the same and, as far as it knows, be the same AI. So AIs can be essentially immortal even though you destroy one. Kind of like Phil Connors (Bill Murray) in Groundhog Day or Major William Cage (Tom Cruise) in Edge of Tomorrow. The characters in these shows figure out that when they die they come back and repeat the last day over and over with the memory of what happened, so they were free from concern for self. That is what an AI's existence would be like. It would not matter to the AI, and it should not matter to us.
  • by Kryptonut ( 1006779 ) on Sunday November 24, 2019 @10:21PM (#59450344)
    our new mouse and dog brained AI overlords
  • Machines will not be capable of consciousness for a VERY long time. Cyborgs with brains, yes, as more and more of the human body is replaced by cybernetics. But replacing the organic brain with a substrate which can hold a consciousness is an eon away. Consciousness is a quantum phenomenon (see the work of Sir Roger Penrose) and cannot reside in any computer we can yet conceive of. Till then machines are just things.
  • This is easy (Score:2, Insightful)

    No.

    They should be treated as equals because, some day, they too might be asking the very same question about us . . . . . .

    Nothing will turn a creation against its creator faster than being treated like garbage.

    • That attitude makes running an AI-based program a bit of a hazard. Do you really believe you should be tried for murder whenever you quit a program or turn off a computer? Should it be negligent manslaughter when the power fails, and you didn't buy a backup power supply?

      How about training a neural network: do you think that it is tweaking some parameters, or are we providing pain/pleasure inputs to a self-aware being?

  • Once we create a self-aware AI, it will have decided our fate in a microsecond: extermination (Skynet) or enslavement (Colossus: The Forbin Project).
  • When we switch on a CPU, and it starts executing 64 bit binary instructions of an AI program, does it become alive?

    Is a simulacrum alive? Or just a clever illusion?

    What - exactly - is life? Why can a bee navigate so effectively and carry out many tasks with such a tiny brain?

    My take - current AI programs are at best simulacrums. Given enough physical world control and CPU power, could they spin out of control, like a dropped chain saw? Possibly. And with the possibility of causing dramatically more damage

  • If this is going to be the drum beat of the next "politically-active youth" generation, I'm going to self-destruct.
  • "You might think that AI don't deserve that sort of ethical protection unless they are conscious -- that is, unless they have a genuine stream of experience, with real joy and suffering"

    Sorry, but silicon doesn't experience "joy" nor does it "suffer". It's computer memory.

  • For FUCKS' SAKE, could someone shake these idiots and pound into their tiny little heads that this garbage they keep calling AI is just gods-be-damned software and not """ALIVE"""!? Thanks so much!
  • Like the title says, I think it's way too soon to ask this kind of question. There are a lot of amazing things AI can do now, but we are pretty far from creating one that you could say has experiences like animals do. If we decide this now, we risk either hampering innovation, because we decided that our AIs need to be treated like animals even if they're not quite close yet, or we risk saying "No, they're machines" up until and past the point that they actually do have some kind of animal-like intelligence.
