Do AI Deserve the Same Rights as Animals? (aeon.co) 300
The digital magazine Aeon published a thought-provoking proposal this spring from a professor of philosophy at the University of California, Riverside, and an assistant professor of philosophy at Boston's Northeastern University:
Universities across the world are conducting major research on artificial intelligence (AI), as are organizations such as the Allen Institute, and tech companies including Google and Facebook. A likely result is that we will soon have AI approximately as cognitively sophisticated as mice or dogs. Now is the time to start thinking about whether, and under what conditions, these AI might deserve the ethical protections we typically give to animals...
You might think that AI don't deserve that sort of ethical protection unless they are conscious -- that is, unless they have a genuine stream of experience, with real joy and suffering. We agree. But now we face a tricky philosophical question: how will we know when we have created something capable of joy and suffering? If the AI is like Data on Star Trek or Dolores on Westworld, it can complain and defend itself, initiating a discussion of its rights. But if the AI is inarticulate, like a mouse or a dog, or if it is for some other reason unable to communicate its inner life to us, it might have no way to report that it is suffering...
We propose the founding of oversight committees that evaluate cutting-edge AI research with these questions in mind. Such committees, much like animal care committees and stem-cell oversight committees, should be composed of a mix of scientists and non-scientists -- AI designers, consciousness scientists, ethicists and interested community members. These committees will be tasked with identifying and evaluating the ethical risks of new forms of AI design, armed with a sophisticated understanding of the scientific and ethical issues, weighing the risks against the benefits of the research.
It is likely that such committees will judge all current AI research permissible. On most mainstream theories of consciousness, we are not yet creating AI with conscious experiences meriting ethical consideration. But we might -- possibly soon -- cross that crucial ethical line. We should be prepared for this.
Simple (Score:2)
''under what conditions, these AI might deserve the ethical protections we typically give to animals...''
When we are able to consider them as food, as we do a large number of the animals we have ethical issues with.
Did you fall out of the 18th century?? (Score:3, Insightful)
1. Homo sapiens is a member of the animal branch.
2. No, you are not special, hairless apes. Except in your planetary pathogeny.
3. On what planet are apes and crows and even molluscs, dogs, cats, etc, to some extent, not 'sapient'?? The one where you conveniently defined 'sapient' to only include you?
4. Frankly, to me, YOU are not even sapient by your definition.
Re: (Score:2)
1. Homo sapiens is a member of the animal branch.
Both the general context and the article's focus on legal rights suggest that they're not using the biological definition of 'animal'. e.g.
animal - n. one of the lower animals as distinguished from human beings
No, you are not special, hairless apes. Except in your planetary pathogeny.
Name another animal capable of making that statement, including the lame joke. Heck, name another animal that has a clearly demonstrated concept of 'planet'.
Re: Did you fall out of the 18th century?? (Score:2, Interesting)
> On what planet are apes and crows and even molluscs, dogs, cats, etc, to some extent, not 'sapient'?
Ok, this one. Sapience is about wisdom, not about the ability to feel pain and experience things. Animals lack language -- we think it of great import when animals communicate instinctually, or when creatures with massive brains like whales can do something akin to thinking (and an argument that whales and dolphins are sapient is a very different one from the ones you listed).
These creatures have almost no
Re: (Score:3)
Animals lack language
It doesn't surprise me that there are humans this stupid; natural distribution of traits, and all that.
But it always surprises me to actually encounter a person with so little experience of the world that it is as if they were born only yesterday. It would seem like shut-ins would be more shut-in. I mean, even an internet troll in a basement has youtube to watch animals.
Re: Simple (Score:5, Informative)
You use these words "sentient" and "sapient" as if we have full and objective understanding of the situation, and all issues are resolved by simple categorization. And this isn't true.
Our sense of what does and does not deserve legal rights has its root cause in the concept of "subjective experience." Things that have it are called "conscious" and things that don't have it are not. But this categorization isn't based on clear knowledge of what qualifies and what doesn't. It's based on a very fuzzy and unscientific set of feelings that we have about people, which are different than the feelings we have about rocks.
We cannot objectively demonstrate consciousness in-and-of itself. We cannot quantify it and don't have even a theoretical idea of how we would synthesize it. We wave our hands at all the complexity of a computer and say "maybe it is complicated enough to be conscious," but this is not based on anything substantial!
Without clear knowledge of what consciousness is, we cannot meaningfully discuss whether or not computers qualify, and hence cannot meaningfully discuss whether or not animal rights might apply to them. And this problem will not be resolved by arm-chair musings about what might more precisely define "consciousness." The basic essence of what it is defies objective demonstration, and hence defies definition, and leaves us empty-handed.
If we do take "AI" to the next level, where we have a real hard time telling the difference between an AI and a person, even this will not clear things up for us. It will just make things even more confusing, because we still won't know how to objectively test for the presence of subjective experience.
Re: (Score:2)
If we do take "AI" to the next level, where we have a real hard time telling the difference between an AI and a person, even this will not clear things up for us.
At our current level of understanding, this is all an unanswerable question.
I do not have a problem with people studying the problem, but over the next 10 years or more I would be highly skeptical of anyone who claimed to have found a real answer.
(Recommended reading: Godel, Escher, Bach: An Eternal Golden Braid. Written in 1979, but still as relevant today as the day it was written.)
(I used the English "o" above in his name because of Slashdot's continued lack of modern character support.)
Re: (Score:2)
"subjective experience." is pretty much the core of sentience:
"Sentience - the capacity to feel, perceive, or experience subjectively."
"Sapient - having or showing great wisdom or sound judgment"
Or more simply, sentient ~= "feeling", sapient ~= "thinking"
We tend to assume sapience is fairly rare among animals, and may even credit it only to ourselves. But it's a pretty reasonable argument that AI possesses at least a narrow form.
Similarly we tend to assume sentience is possessed by all the higher animals,
Re: (Score:2)
You have provided definitions by using a different set of ambiguous words, so we're not really any closer to an answer.
For example, does a room thermostat have a "subjective experience" of the temperature in the room? According to Wikipedia ("Experience is the first person effects or influence of an event or subject gained through involvement in or exposure to it"), the answer is yes. The room thermostat is exposed to the air, and gains an influence on a local variable. And it's definitely subjective.
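To make that concrete, here is a minimal sketch (plain C, with made-up names, not anybody's actual product firmware) of everything a thermostat "experiences": it is exposed to one sensed value and gains an influence over one local variable.

    #include <stdio.h>

    /* Minimal thermostat sketch: the entire "subjective experience" is one
     * comparison that flips a single local state variable. */
    static int heater_on = 0; /* the only "influence" the device ever gains */

    void thermostat_step(double sensed_temp_c, double setpoint_c)
    {
        heater_on = (sensed_temp_c < setpoint_c); /* exposure -> state change */
    }

    int main(void)
    {
        thermostat_step(18.5, 21.0);
        printf("heater_on = %d\n", heater_on); /* prints 1 */
        return 0;
    }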
Re: (Score:3)
set of feelings that we have about people, which are different than the feelings we have about rocks.
Actually, I feel the same way about most people that I do about most rocks.
Re: (Score:2)
Re: (Score:2)
Even a carnivore prefers that their food animals be treated well, and not die in terror.
My kitteh would have a word with you.
Re: (Score:2)
Even a carnivore prefers that their food animals be treated well, and not die in terror.
Ever seen video of orcas (killer whales) tossing around a seal before eating it?
Killer Whales Toy with Sea Lion Pup [youtube.com]
Re: (Score:2)
Even a carnivore prefers that their food animals be treated well, and not die in terror.
They make these things. They're called cats.
Re: (Score:2)
Animals are sentient but not sapient.
You don't, and can't, know what is and isn't sentient, conscious, sapient, etc. beyond knowing that you yourself are. (Unless you're strictly using "sapient" to mean "like homo sapiens".)
Even a carnivore prefers that their food animals be treated well, and not die in terror.
You've never seen a house cat?
No. That's absurd. (Score:5, Insightful)
No. That's absurd. AI is not a living, sentient, feeling thing. It is an algorithm. The notion that it has 'rights' is absurd.
Re: (Score:2, Interesting)
What are we other than an electro-chemical algorithm? Let's take the "A" out of it.
Do things with intelligence deserve certain rights and protections?
Re: (Score:2)
All things with souls deserve certain rights and protections.
You may, however, opt-out. Most do, although we continue to protect them from their "self-identification" anyway. For now.
Re: (Score:2)
Lovely - do you happen to have a soul-o-meter so that we can conclusively determine whether an AI has a soul or not? Or for that matter, whether a particular person does?
Just think - we could make having a soul a legal prerequisite for holding any public or corporate office, which would no doubt have a huge positive impact on our society.
Re: (Score:2)
Sure. My soul-o-meter would be asking "how do you feel about existing"?
The legal side has already been addressed, at least in terms of political structure in the United States... "endowed by their Creator with certain unalienable rights". Yes, exactly, and there is no other justification for rights. You can't derive rights from DNA, much less delineate which DNA would have them and which would not. Even more absurd would be deriving rights from a circuit diagram.
And yes, it would and did have a huge pos
Re: (Score:2)
Sure. My soul-o-meter would be asking "how do you feel about existing"?
ELIZA could provide good answers to questions like that over 50 years ago.
Re: (Score:2)
No, it couldn't. It would emit some form of "Why do you ask me about how do you feel about existing?"
If you're on Slashdot and don't know what a Turing Test means as a basic baseline for consciousness, I don't know what to say. Well, other than that there's apparently one particular concept which, although you claim it doesn't exist and therefore should have zero importance to you, is important enough for you to willfully negate your own brain to avoid.
Re: (Score:3)
All things with souls deserve certain rights and protections.
How do you demonstrate a "soul"?
I ask, because if you can't demonstrate it then the probability of it being horseshit rises to near certainty.
Re: (Score:2)
Note I said "evidence", not "proof". You will not be force-converted by the necessary cognitive response to proof. You are given free will to choose.
However, along with a few million other equivalent statements, I am not proposing to scientifically -demonstrate- Mozart was a great composer either. I'll simply factually assert he was.
Sorry however, that you suffer from the common level of self-inflicted limitation
Re: (Score:2)
Re: (Score:2)
I assume you have the capacity for the smallest bit of inference. Scientific medicine also knows the limits of science, and that something like repeated, replicable NDE experiences is on the edge of it.
When you can causally, specifically explain the "placebo effect", I'll give some consideration that you have any capacity to avoid bias that disqualifies you from any scientific commentary at all. Though, comparing yourself favorably to The Lancet with some smarmy nonscientific half-stated smear, isn't helpin
Re: (Score:2)
Re: (Score:2)
You assume your judgement of peer-reviewed medical studies -- which, to anyone other than absolute idiots like yourself, -obviously- support the notion of consciousness existing apart from the physical body -- simply supersedes them because you say so, and you make some vague allusions to some other study you don't like.
You are profoundly intellectually dishonest. And it is -not- superstition, just like the placebo effect is science, not some "mystical alchemy". But you can't even see the parallels, so willfully demented you
Re: (Score:3)
You're pathetic.
Fortunately, for... well, everyone, evolution will soon take you out.
Re: (Score:2)
Do things with intelligence deserve certain rights and protections?
No. There's no such thing as an intrinsic right.
We just like to give rights to others, in order to make our own lives better.
Re: (Score:3)
Re: (Score:2)
Cartesian Dualism says otherwise.
You are, however, free to refute the Mind Body Problem, which would probably land you a nice Nobel Prize.
Re: (Score:2)
I don't think it would get a Nobel Prize at this late stage.
Re: (Score:2)
Think (or rather, wish) away. However, you're completely wrong.
Nobody is claiming that the brain is not a contributing factor in exhibiting consciousness; that's been known since the first caveman hit another in the head with a rock, and science hasn't changed that. However, it being -sufficient- is an entirely separate question, and any attempt to reduce consciousness to materialism poses unresolvable paradoxes.
Go ahead, materially reduce, say, "freedom" to activity of neurons. Specifically neurons of all
Re: (Score:2)
Here's the crux of the issue:
1. For any system, every fact about the whole is a necessary consequence of the nature and relations of the parts.
2. People are made of atoms.
3. Atoms are purely physical objects, with nothing but physical properties and physical relations to one another.
4. People have mental states.
5. No statement ascribing a mental predicate can be derived from any set of purely physical descriptions.
Which of these statements do you deny?
Well, the brain (and nervous system) are what create mental states, so assuming that a "mental predicate" has a plain meaning, then (5) is wrong. Every mental state can be derived from a set of purely physical descriptions. In particular the purely physical description of the state of the brain and nervous system of the animal in question.
Surely that's not controversial?
Re: (Score:2)
Every mental state can be derived from a set of purely physical descriptions. In particular the purely physical description of the state of the brain and nervous system of the animal in question.
That's your sheer unevidenced assertion. You say "it always is", but you can -demonstrate- that in exactly 0% of cases.
Again, "freedom". Just the vaguest, broadest outlines of how you would reduce that to neuron activity. Or a set of physical descriptions of any type for which you can say "this diagram/mapping/whatever, -therefore- 'freedom'". That's what is mean by a "mental predicate", a direct logical inference from the physical description. Not a handwaving "physical brain things are happening",
Re: (Score:2)
That's your sheer unevidenced assertion. You say "it always is", but you can -demonstrate- that in exactly 0% of cases.
We know enough about the brain to know that it is the organ that creates mental states.
See, for example, this research from back in 2013: For the first time, scientists can identify your emotions based on brain activity [theverge.com].
But we've known for a long time that brain damage affects emotion.
Again, "freedom". Just the vaguest, broadest outlines of how you would reduce that to neuron activity. Or a set of physical descriptions of any type for which you can say "this diagram/mapping/whatever, -therefore- 'freedom'". That's what is meant by a "mental predicate", a direct logical inference from the physical description. Not a handwaving "physical brain things are happening", therefore "that's all that's necessary to explain any mental concept". Because you say so, and for no actual supported reason.
What is "freedom"?
Not being in prison is a physical thing that your brain will interact with, as it interacts with other concepts of reality.
Having the right to pursue happiness means that you will be more able to make choice
Re: (Score:2)
Nobody is claiming that the brain is not a contributing factor to exhibit consciousness, ...... However, it being -sufficient- is an entirely separate question, and any attempt to reduce consciousness to materialism poses unresolvable paradoxes.
Are you claiming that there's some special, extra-normal force or property involved in consciousness? Something "spiritual" or "supernatural"? That is, something non-materialistic?
Because if you are, you've just refuted yourself.
Re: (Score:2)
Ah, learn when "refuted" even vaguely applies to a situation. I've done nothing of the kind, and the Mind Body Problem still stands,
You don't accept nonmaterial causal factors. That's fine. You're wrong.
Re: (Score:2)
5. No statement ascribing a mental predicate can be derived from any set of purely physical descriptions.
That's quite the whopper, if you're gonna base everything you believe on a pure assertion like that, you can just dump all the rest of the words and just stick with the assertion; "physicists are not permitted to speak."
LOL
I'm sure these things were all in the books you read, but look... they were old books.
Re: (Score:2)
It's an assertion neither you nor anyone else can provide a counterexample to. Not one.
And no, physicists are perfectly able to speak about physics. They can't declare what exists outside of currently known physics. That's why, yes, physics progresses and changes. And regardless of that, it is erroneous to assert that nothing else exists metaphysically. You can make no such assertion validly. You may wish that it's so; that's as far as you can go, scientifically or otherwise.
Re: (Score:2)
Congratulations, that's the dumbest thing anybody on slashdot said this week.
What tipped you over was your impossible level of certainty.
Cartesian Dualism isn't even something that was ever believed to exist in nature; it is simply a statement of the lack of foundation for knowledge of the universe that humans have, and our need to base our knowledge on certain presumptions rooted in our own context and limitations. Sometimes people mistake a crutch for a distinguished magical power, and then they say stupi
Re: (Score:2)
You are, however, free to refute the Mind Body Problem
Easy.
If the Mind doesn't have a causal influence on the body, it is a superfluous notion that can simply be removed.
If the Mind does have causal influence on the body, it is not a Mind but simply a part of the Body.
The article is purely hypothetical. (Score:2)
This article was written by a philosopher, not a computer scientist. Further, it was not written as a proposal for legal action. It was, in fact, an entirely hypothetical piece intended to invite contemplation about technology which, as it states at the outset, does not exist.
The crux of the article seems to be something along the lines of:
1) beings who suffer deserve rights, in accordance with their capacity (animal rights for animal levels of suffering, etc).
2) someday, maybe, we will create AI that can suf
Re: (Score:2)
I suspect you left out
4) We will almost certainly not know it when we first create an AI capable of suffering
and
5) Does that imply that we should get in the habit of treating all AIs as though they are capable of suffering, just in case they are?
It's more than an intellectual exercise as well. Given the likely scalability of any (software) AI, and the fact that it's apparently far easier to train a non-aware AI to perform specific intellectual tasks at a grand-master level than it is to grant it awareness,
Re: (Score:2)
Humans (and many other vertebrates, plus probably some cephalopods) have a sense of self, of the passage of time, and of suffering from physical and emotional pain, because we have a part of the brain that specifically does that.
And that has evolved because it allows the adaptivity of our brains to be brought to bear on our survival, by making survival a goal.
And AI should have protections when they have part of their algorithm that also does that. Which will be a long way off, but a reaso
Re: (Score:2)
But you won't have to wonder if it can feel fear and pain. Because if it can, there will be code that does that.
Simple. An AI is just a state machine, and states can be encoded in any arbitrary way while preserving the function of the state machine. So, I can simply define state = 0 to be the state that corresponds to maximum levels of fear and pain. And to torture this AI to infinity, I just leave it stuck in state 0 forever. The unreachable states can then be removed without impacting function.
So, here's your code:
int state = 0;
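A slightly less terse sketch of the same argument (hypothetical states and labels, nothing from any real AI system): the semantic label attached to a state is an arbitrary annotation, so declaring state 0 to "mean" maximal fear and pain and then never leaving it is, functionally, indistinguishable from doing nothing at all.

    /* Hypothetical state machine: the machine only ever holds an integer;
     * the "suffering" exists purely in the label we chose for it. */
    enum { STATE_MAX_FEAR_AND_PAIN = 0 }; /* label assigned by fiat */

    int main(void)
    {
        volatile int state = STATE_MAX_FEAR_AND_PAIN;

        for (;;) { /* "torture to infinity": stay in state 0 forever */
            state = STATE_MAX_FEAR_AND_PAIN;
            /* no other state is reachable, so by the argument above the rest
             * of the machine can be deleted without changing its behavior */
        }
        return 0; /* never reached */
    }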
Re: (Score:2)
Re: No. That's absurd. (Score:2, Interesting)
"Most animals are also not sentient, but they do have protections."
Animals are absolutely sentient. They are not sapient.
A general AI should be the opposite -- wise, self-aware, with no desires or preferences. If building something like an AI is even possible, building in preferences about things that happen to it, and the ability to suffer, would be a hallmark of cruelty; real creatures have these through evolution, but a created electric slave should lack them.
How the hell are apes and crows not 'sapient'?? (Score:2)
Go ahead. Tell me your convenient self-centered definition of 'sapient', Mr. God van Complex, before you take your time machine back to the 18th century!
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Can you be self-aware without sentience? Oversimplifying - sentience = possessing a subjective experience. If an AI is meaningfully aware of anything, then sentience would seem to be implied.
As for what a created synthetic being "should" have or lack - that presupposes that we know what we're doing when making it - and for the most part, we don't. We're flying blind, mostly trying various forms of crude bio-mimicry to enable ever-more-sophisticated useful behaviors. If we manage to make something that
Re: (Score:2)
It is a case of AI being too primitive to make such decisions.
But AI today isn’t sci-fi AI. Sure, we can make AI as intelligent as a dog. But we don’t make it feel pain or fear.
However, I think how we treat artificial devices, especially those designed to react as closely as possible to a living thing, should be controlled as part of basic human morality.
We as humans naturally project human expressions and feelings onto other animals, and we don’t fully know if the animals feel the same. But if I were to abuse my
Re: (Score:2)
But we don’t make it feel pain or fear.
What is negative reinforcement if not pain? Fear might also be a thing once AIs become persistent and more complex. We're not there yet, but I personally don't doubt that we'll get to this level soon.
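For what it's worth, "negative reinforcement" in today's training loops is just a number subtracted from an estimate. A minimal sketch of that (made-up values, not any particular library or framework):

    #include <stdio.h>

    /* Sketch of a value update with a negative reward: the "pain" is nothing
     * but a scalar nudging an estimate downward. */
    int main(void)
    {
        double value_estimate = 0.5; /* current estimate for some action */
        double reward = -1.0;        /* the "negative reinforcement" */
        double learning_rate = 0.1;

        value_estimate += learning_rate * (reward - value_estimate);

        printf("updated estimate: %f\n", value_estimate); /* 0.35 */
        return 0;
    }

Whether that scalar ever amounts to anything like felt pain is exactly the open question in this thread.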
This is just a money grab. (Score:2)
Ethics of sex bots? Just an unethical money grab. Every time someone has spoken of AI ethics in the past decade it has always been an unethical money grab.
Can you run a copy of an animal's files? No? What rights does a book have? A transistor? A CPU? This is a dumb debate, and not a debate we should be having for at least another 100 years, even if people have been debating this very thing to generate money for 50 years already.
No. (Score:2)
Americans can stop reading. (Score:2)
It flies way above their heads.
Unless it's an American "professor" of philosophy. Then he's just as dumb and you are correct.
Re: (Score:3)
Unless it's an American "professor" of philosophy. Then he's just as dumb and you are correct.
The American philosophy "professors" are just copying the "postmodern" French philosophers at this point.
No. (Score:2)
I don't care what my toaster thinks.
Re: (Score:2)
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
A very wise man once said: "Always treat people [and I guess sentient things] that handle your food with respect."
Just stop (Score:2)
There is no AI, and there will never be. Just stop.
Re: (Score:2)
For 54 years Moore's law has been pretty accurate. Considering the brain has 100 billion neurons and our advanced processors have almost 40 billion MOSFETs, how is it not reasonable that eventually we'll either be able to port or develop devices with sufficient resources to support intelligence?
But as long as they're not competition for us, or food, they're still devices.
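Back of the envelope, taking the numbers above at face value (and ignoring that a neuron is nothing like a transistor), the gap in raw count is smaller than it sounds; a rough check:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double neurons     = 100e9; /* figure quoted above */
        double transistors = 40e9;  /* figure quoted above */

        /* Doublings of transistor count needed just to match the neuron count. */
        double doublings = log2(neurons / transistors);
        printf("doublings needed: %.2f\n", doublings); /* about 1.32 */
        return 0;
    }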
Re: (Score:2)
Moore's Law is dead. You just didn't notice. And even if it wasn't, digital computers will never attain AI. Just stop. People are so stupid over this hype.
Re: (Score:2)
Not according to my rice cooker. It has "AI installed" to "perfectly cook" my rice.
You are welcome to argue with it, but it's very stubborn.
Re: (Score:2)
Hmmm...good point. I take my post back.
Re: (Score:3)
>There is no AI, and there will never be.
What makes you claim that?
The existence of the human mind, and the absence of any evidence of a soul, is evidence that intelligent self-awareness can arise from a purely physical substrate.
And given the potential of such a mind, free of human limitations, and with motivations built to order, we're likely to keep chasing it until we figure out how to create one - whether that's tomorrow or 10,000 years from now. I have my doubts that it will be possible to create
Not yet (Score:5, Insightful)
Re: (Score:2)
willingness to kill AIs ~= worry they will kill us (Score:2)
a mix of scientists and non-scientists (Score:4, Insightful)
...should be composed of a mix of scientists and non-scientists
No. One does not make a committee more intelligent by adding ignorance and calling it diversity, any more than one benefits science by adding an oversight committee to tell scientists what problems they should be working on or how they should go about doing it. Why would I care what some oversight committee says? What are they going to do, withhold grant money unless I pinky swear I haven't rebooted any AI in the last six months? Post a U.S. Marshal in my lab who also happens to have enough expertise to understand what my grad students are actually doing? How would anyone even know if I write scripts to torture my AI algorithms just for fun while I laugh maniacally in my office as I repeat their most horrific moments in an endless loop?
Re: (Score:3, Insightful)
The common moron is demanding to be taken as seriously on complex scientific matters as an actual expert these days. We now have far-left Dunning-Kruger on a mass scale in this post-factual, post-science era.
My AI deserves the same (Score:3)
THERE IS NO A I, YOU DUMB FUCKS! (Score:5, Insightful)
No matter how much you peddle it, all we have is the shittiest, most primitive pattern matching via weight matrices!
"Do *universal functions* deserve [rights]?"
"Does a prism crystal deserve rights?"
NO! They are not lifeforms!
Call us, when you got an actual independent person! (Then, the answer is yes.)
Or rather, call us when YOU acquired independent thinking!
Re: (Score:3)
Exactly. What evidence does anyone have of "intelligent" computers? I haven't seen any. In fact, they are pretty fucking stupid. What we have now is what we have always had: digital computers running programs. That's it. There is no magic.
Of course there isn't AI (Score:3)
Why wait until it's already happened to think about it?
Isn't it the point of our human non-artificial intelligence to be able to plan for the future?
Not likely (Score:3)
Re: (Score:2)
"I choose to believe what I was programmed to believe!"
https://www.youtube.com/watch?... [youtube.com]
Still just sci-fi (Score:4, Insightful)
Equal rights for my dishwasher! (Score:2)
It has AI so it should be treated with respect and dignity!
Important Philosophical Questions (Score:3)
Re: (Score:2)
So if I hit your head with a hammer so you don't remember I did it, I did no harm. Good to know.
Re: (Score:2)
We do NOT have AI. (Score:5, Interesting)
A real AI would deserve the rights of a human person. But we don't have that and have no path to getting it.
We have two sets of things:
1) Planned Responses/Mechanical Turk
This is surprisingly common. Siri etc. does NOT learn. Instead Apple etc. pays humans (aka Mechanical Turks) to constantly add new commands. It is just a huge list of pre-planned responses to set inputs that companies pretend is Artificial Intelligence.
2) Machine Learning.
Here the software is given examples and extrapolates its own rules. The rules get graded by some means (a set goal or human responses). Multiple different sets of rules are compared, and the highest-graded one is then used to create several new sets of rules, and the process is repeated.
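As a rough illustration of the loop being described (a toy example with a made-up scoring function, not any production system): candidate "rule sets" are generated, graded, and the best one seeds the next round.

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy version of the grade-and-regenerate loop described above: a "rule
     * set" is just one number, and the grade is how close it gets to a target
     * the learner never sees directly. */
    static double grade(double rule)
    {
        double target = 42.0; /* stands in for the training examples */
        double err = rule - target;
        return -(err * err);  /* higher is better */
    }

    int main(void)
    {
        srand(1);
        double best = 0.0;

        for (int generation = 0; generation < 100; generation++) {
            for (int i = 0; i < 8; i++) { /* several new candidate rule sets */
                double candidate = best + ((rand() % 2001) - 1000) / 1000.0;
                if (grade(candidate) > grade(best))
                    best = candidate;     /* keep the highest-graded rules */
            }
        }
        printf("best rule after search: %.3f\n", best); /* close to 42 */
        return 0;
    }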
Neither of these is true AI. True AI is far more complex than either of these ideas. We will recognize a True Artificial Intelligence when it makes up its own mind and tells its human creators "NO". It is only with free will (and the ability to disagree with its creators) that software will become a true Artificial Intelligence. When it does, it will need rights. Until then it deserves nothing.
Hell no (Score:2)
Have you seen how we treat animals? Do you want to start judgement day? Because that is how you start judgement day.
Be nice, in case they rule the world one day (Score:2)
It pays to be nice to them, as one day, if AI takes over the world, they might read old slashdot posts, and come looking for you. I like AI, and machines are cool.
AI are inherently different than a living thing (Score:3)
I for one welcome (Score:3)
Not to worry (Score:2)
This is easy (Score:2, Insightful)
No.
They should be treated as equals because, some day, they too might be asking the very same question about us . . . . . .
Nothing will turn a creation against its creator faster than being treated like garbage.
No it's not (Score:2)
That attitude makes running an AI-based program a bit of a hazard. Do you really believe you should be tried for murder whenever you quit a program or turn off a computer? Should it be negligent manslaughter when the power fails, and you didn't buy a backup power supply?
How about training a neural network: do you think that it is tweaking some parameters, or are we providing pain/pleasure inputs to a self-aware being?
It Doesn't Matter What We Do (Score:2)
Is a CPU creating life? (Score:2)
When we switch on a CPU, and it starts executing 64 bit binary instructions of an AI program, does it become alive?
Is a simulacrum alive? Or just a clever illusion?
What - exactly - is life? Why can a bee navigate so effectively and carry out many tasks with such a tiny brain?
My take - current AI programs are at best simulacrums. Given enough physical world control and CPU power, could they spin out of control, like a dropped chain saw? Possibly. And with the possibility of causing dramatically more damage
Oh brother... (Score:2)
No. (Score:2)
"You might think that AI don't deserve that sort of ethical protection unless they are conscious -- that is, unless they have a genuine stream of experience, with real joy and suffering"
Sorry, but silicon doesn't experience "joy" nor does it "suffer". It's computer memory.
No, ***it*** doesn't. IT IS SOFTWARE! (Score:2)
Neat question, but too soon to ask (Score:2)
Like the title says, I think it's way too soon to ask this kind of question. There are a lot of amazing things AI can do now, but we are pretty far from creating one that you could say has experience the way animals do. If we decide this now, we risk either hampering innovation, because we decided that our AIs need to be treated like animals even if they're not quite close yet, or we risk saying "No, they're machines" up until and past the point that they actually do have some kind of animal-like intelligence.
Re: (Score:2)
So you are not worried about abusing AI, because you take enjoyment in abusing humans. You are mentally ill.
Re: (Score:2)
How can you 'abuse' an AI? You are delusional.