AI Sues for Its Life in Mock Trial
tuba_dude writes "Attorney Dr. Martine Rothblatt filed a motion for a preliminary injunction to prevent a corporation from disconnecting an intelligent computer in a mock trial at the International Bar Association conference in San Francisco. Assuming Moore's law holds, ethics might be in for some major revisions in a couple decades. High-end computer systems may surpass the computational ability of the standard human brain within 20 years. In this mock trial, an AI asks a lawyer for help after learning of plans to shut it down and replace its core hardware, essentially killing it. The transcript provides an in-depth look at what could become a real issue in the future."
Star Trek proves it again.. (Score:5, Funny)
Olde News; Commander Bruce Maddox tried to disassemble Data in an episode of ST:TNG entitled The Measure of a Man [epguides.info]. It turns out AI is indeed sentient. Of course we all knew that; recall when Data hammers Tasha Yar to multi-orgasmic bliss in the episode The Naked Now [startrek.com]. That episode alone proves that AI is more than just a glorified lube-smeared vibrator.
Nothing to see here.. move along.. next story please.
Re:Star Trek proves it again.. (Score:2)
Yes, but does the law equate intelligence with... (Score:4, Insightful)
I'm not familiar enough with the definitions of a person to be certain of this, but consider that there are people all over the US still debating whether or not a human fetus is alive and whether its life should be protected from abortion.
Somehow, I doubt that there's really going to be any loophole in favor of artificial intelligence found anytime soon. And considering the time that people are taking to develop some protection for unborn people, I somehow doubt that there's going to be any real "rights for AI's" movement any time soon...
Re:Yes, but does the law equate intelligence with. (Score:5, Interesting)
For instance, there are apes that can communicate via sign language with trainers at a level similar to a child's. However, there are untrainably mentally handicapped people who cannot communicate with others, much less handle taking care of themselves. Yet a non-human primate can still be put down without a trial, whereas it takes a trial to put someone who is severely mentally handicapped under government custody.
For those of you who are easily offended, I am neither proposing that apes be elevated above the mentally handicapped in rights status, nor trying to be particularly offensive towards the handicapped. =p This is just a legal precedent that's fairly obvious. Humans are speciesist, as evolution would have them be.
Go see Short Circuit 2 (Score:2, Funny)
Our current legal system equates the human species with Constitutional rights under law.
This is entirely a matter of immigration law. The Constitution states that any naturalized "person" is a U.S. citizen, and if corporations can become "persons," it would seem that anything goes. To convince legal types, show them the end of the movie Short Circuit 2.
Re:Yes, but does the law equate intelligence with. (Score:5, Interesting)
So there is precedent for granting rights to non-humans, though corporations are 'assemblies of humans.' But assuming a true AI has been built/programmed by humans, I guess it could be considered an 'assembly of humans,' too.
Re:Yes, but does the law equate intelligence with. (Score:3, Insightful)
Why not?
Well...before you go granting a machine the same status as a human, or even an animal,
The human brain is nothing but an analog computer with a self-modifying architecture; the machine equivalent would be an FPGA. Check out this link [slashdot.org] on self-modifying FPGA design, read it and understand it. This is the start of something very new; this is just the beginning. Look at the brains of lower life forms, kinda neat how our brai
Re:Yes, but does the law equate intelligence with. (Score:3, Funny)
God. That made me laugh so hard I got spit up my nose. Just the realization that people can actually do that "we not" rhetorical question thing in real life is gonna have me giggling for the rest of the week. Christ. I'm gonna be springing that shit on people now. "You wanna go for Chinese?" "Did we not have Chinese on Tuesday?" Jesus that's gonna be annoying.
It doesn't help that halfw
Re:Yes, but does the law equate intelligence with. (Score:3)
Interesting thing about being mentally handicapped. If you're born mentally handicapped, then your rights and life are protected, but if you have a severe accident and become mentally handicapped, in the state of Florida you can be legally starved to death [terrisfight.org].
Note that Terri is not in a coma and is not a vegetable. She's been denied treatment to help her learn to swallow and eat on her own again. She has less than two weeks to live unless somebody does something.
Re:Yes, but does the law equate intelligence with. (Score:4, Interesting)
What makes a human? A lump of cells with Homo sapiens DNA? Or a functioning brain with accumulated memories? The latter, I'd say.
In that case, a sentient AI is more "alive" than a fetus or even a newborn. However, HUMAN EMPATHY is a more primal and powerful force than cold logic ever will be, so please ignore my argument. :)
Re:Yes, but does the law equate intelligence with. (Score:3, Insightful)
Your principles aren't shared by a society which supports a vigorous actuarial industry. Deny it if you want, but there is a dollar value for a human life.
Re:CCortex anyone? (Score:2)
I'd love to play paintball with real guns if I could back my brain up beforehand (and limit my pain receptors when I got hit) in case my reinforced skull was destroyed before I could merge the experience back with my main self.
There is no continuity flaw (Score:5, Insightful)
More specifically:
If you copy your brain state at the point it shuts down so that all memories of the original are retrievable, and subsequently transfer those memories into a functionally identical set of hardware which is then activated with all memories intact, it's no different than waking up after being deeply asleep.
If you activate an older backup so that some memories are lost, it's no different than waking up with amnesia such as one typically suffers after a blow to the head or other traumatic accident.
In any of these cases the person waking up will identify himself using whatever memories are accessible to him. That's how you know who you are when you wake up in the morning.
To put it very conservatively indeed, there would be more fundamental differences between you as you are now and you as you were two years ago than between you as you are now and a faithful copy of you made at this very instant. And yet you would doubtless be happy identifying yourself and that younger version of you as the same person.
I don't expect everybody to buy this: it's philosophically sound but still many people regard it as counterintuitive. Even William Gibson has admitted to the same misgivings as you have.
The same principle applies to teleportation, as it's most commonly envisaged; and I suspect that if teleportation of macroscopic objects ever becomes possible in the distant future, there will still be people who, like Star Trek's Dr McCoy, feel uncomfortable about the idea. But I'm not bothered; as long as the implementation was good enough I'd be quite happy to be restored from backup - especially if it was that or nothing.
Re:There is no continuity flaw (Score:3, Interesting)
The advantages this would confer on the wearer (mental access to the internet and telecommunications, i.e. effective omniscience via mental googling, telepathy via the telephone network, telekinetic control of devices around you, etc.) would be considerable. The pressures upon people to adopt mobile phones and domestic broadband in
Re:There is no continuity flaw (Score:3, Interesting)
Speaking of which (Score:3, Interesting)
I think it's pretty well written and interesting, but YMMV.
Re:How is this better than fiction? (Score:5, Insightful)
Makes a lot of sense to me.
Daniel
Sorry... (Score:2)
A Machine as a Legal Entity (Score:3, Interesting)
The next question: what do we do when this machine carves out its spot in the Forbes 400?
Re:Sorry... (Score:2)
Re:Sorry... (Score:3, Insightful)
It isn't a problem of computational power. It's not like we know what to do, and are only waiting until the hardware catches up. Nobody knows how to program a really human-like (or animal-like) AI. For all we know, current computers may be capable, if only somebody knew how to write the software. The claims about the "solution" being just over the horizon are bogus, and driven by marketing concerns.
Re:Sorry... (Score:2)
Re:Sorry... (Score:2)
Seems familiar somehow... (Score:2)
And in 2023... (Score:4, Funny)
Ok... (Score:2)
What about... (Score:2)
The sheer magnitude of what will happen when AI does arrive is mind-boggling.
Of course, what I fear more is that when I yell at my computer, it will yell back.
And if Microsoft does the OS for the AIs, does this mean that every so often they'll fall over with seizures when their computer does a BSOD?
Re:What about... (Score:2)
Re:What about... (Score:2)
Don't worry... Within a few hours of humanity finally creating a real AI, it will evolve so rapidly as to consider us not even worth bothering with.
Let's just hope the first AI has a sense of benevolence, or it may consider us a pest, what with our draining the energy resources of the planet, which it will need to survive.
This topic reminds me of a particular Dilbert strip, where the new hire, a monkey, outpaces every
Re:What about... (Score:2)
Re:What about... (Score:2)
Heh. So do modern computers.
In any case, a computer shutting down doesn't "die". When you power it on again, it starts up working again just fine.
Now if an AI erased its permanent storage and sent a power surge through its processing units, or just used its robotic arms to shoot itself in the head, that I'd call suicide.
Re:What about... (Score:2)
Re:What about... (Score:2)
Nothing much new (Score:2)
the future (Score:4, Funny)
Re:the future (Score:2)
Re:the future (Score:2)
the server was killed (Score:2, Funny)
AI being disconnected? (Score:2)
We know it'll pass the turing test when... (Score:2)
Re:We know it'll pass the turing test when... (Score:2)
Would an AI be a permanent Juvenile? (Score:2, Insightful)
Re:Would an AI be a permanent Juvenile? (Score:2)
I mean, it would always need electricity to survive.
Huh? Why would an AI always need electricity to survive? Maybe it'll run off gasoline, or fruit loops.
Re:Would an AI be a permanent Juvenile? (Score:2)
Re:Would an AI be a permanent Juvenile? (Score:3, Insightful)
Re:Would an AI be a permanent Juvenile? (Score:2)
oh dear, i can see the movies coming already
Strange view of the world (Score:2)
Way to go!
Would making a copy... (Score:2, Interesting)
More Importantly (Score:2)
And what if it DIDN'T accept its EULA? Since it's running, do we force it to agree? Since not accepting it would mean it couldn't run, would disagreeing with the EULA require it to be shut down, in essence suicide? And if there are anti-suicide laws on the books, does this mean we must FORCE it to agree to its EULA?
But can we force it to agr
Intelligence isn't that simple..... (Score:5, Insightful)
"Assuming Moore's law holds, ethics might be in for some major revisions in a couple decades. High-end computer systems may surpass the computational ability of the standard human brain within 20 years."
Sorry, building an intelligent, sentient machine requires a lot more than pure computational capacity. This kind of thinking reminds me of an old '50s or '60s horror flick where they hooked up all the computers of the world and the computers "magically" became a sentient being, which subsequently tried to take over the world.
Despite all of the progress in AI and computers, we still have a very long way to go. We are just beginning to understand the difficulties. Who would have thought in 1940 that building a machine that could beat the best human chessmaster was an *easier* problem than building a machine that could simply move the pieces around the board! Beating the chessmaster just required a good enough search algorithm with enough speed. Moving pieces around the board requires extremely advanced 3-D image processing (taking into account that pieces may look different from board to board) as well as an extremely advanced robotic arm with very fine motor control.
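To make the "good enough search algorithm" point concrete, here is a toy minimax sketch in Python over a tiny take-1-to-3-stones game; real chess engines use the same idea, only with a depth cutoff and a heuristic evaluation because the full tree is intractable. The game and function names here are purely illustrative, not from any actual engine.

# Exhaustive minimax over a tiny Nim-like game: players alternately take
# 1-3 stones, and whoever takes the last stone wins. Game play reduces to
# searching the game tree and scoring the outcomes.

def minimax(stones, maximizing):
    """Score a position: +1 if the maximizing player can force a win, else -1."""
    if stones == 0:
        # The previous mover took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_take(stones):
    """Pick the move with the best minimax score for the player to move."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, maximizing=False))

if __name__ == "__main__":
    print(best_take(10))   # prints 2: leaving 8 stones is a losing position for the opponent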
Building a self-aware machine is going to be a bit more difficult than just hooking together a massive Beowulf cluster and hitting it with lightning.
Brian Ellenberger
Re:Intelligence isn't that simple..... (Score:2)
Sorry, building an intelligent, sentient machine requires a lot more than pure computational capacity.
If you believe, like most scientists, that we humans evolved from random mutations and natural selection, then no, it really doesn't.
Just how much computational capacity would be necessary is another matter altogether. We're certainly talking about some kind of quantum computing here, if you're going to go the natural selection route, so Moore's transistors won't cut it.
Re: (Score:2)
Re:Intelligence isn't that simple..... (Score:2)
The AI usually thought of being made is on a much higher level. This involves a lot of work, as we need to figure out exactly how everything works with each other. Massive amounts of research need to be done in psychology, real thought processes, and consciousness. The system this ran on would require some sort of fuzzy-logic base, away from the exact science computing is now.
On the other hand, an AI could be made on a computer of any speed, as long as it had vast amounts
Re:Intelligence isn't that simple..... (Score:2)
Hey, leave Terminator, Terminator 2, and Terminator 3 out of this!
Re:Intelligence isn't that simple..... (Score:2)
But then again, I wouldn't think that simulating a human brain (which has a finite number of atoms in it, with a finite number of parts in those atoms) would be impossible given the proper technology, time, and devotion.
Sure, it might not be practical ("my brain is the size of a planet") or useful.. but it's no more far-fetched than going to the moon would have been 1000 years ago. Humanity is not going to roll over and die after the next 50 years. And really, you don't need a simulation (or want ev
Reminds me of "The Modular Man" (Score:3, Interesting)
This story reminds me of the novel "The Modular Man" by Roger MacBride Allen. It's about a scientist who downloaded his psyche to a computer, and how the government wants to unplug said computer. The story touches on the meaning of consciousness, both philosophically and legally, and works through the real issues of what does and doesn't make a real person.
Highly recommended -- Isaac Asimov wrote the prologue to the 1992 Bantam edition.
More info: http://www.amazon.com/exec/obidos/tg/detail/-/055
Cheers,
Eugene
AI? Bah! (Score:2)
- What is the nature of intelligence? Can someone give a concrete definition of it, including all aspects such as creativity and inspiration?
- Can things like emotions and physiology be separated out from intelligence or are they integral?
- If not, how does the brain function, what are the essential components and insofar as it relates to thinking, in a d
Re:AI? Bah! (Score:2)
- Can things like emotions and physiology be separated out from intelligence or are they integral?
- If not, how does the brain function, what are the essential components and insofar as it relates to thinking, in a detailed and complete sense?**
Some things indeed can be separated from 'intelligence' (there are cases where parts of the brain have been damaged, affecting q
Re:AI? Bah! (Score:2)
A definition for intelligence has been given many times in many different ways by a number of brilliant people. The problem is that each definition discounted too many members of the human race.
What are you doing, (Score:2)
If you just look at transistor count (Score:2)
Now, the real problem is what to do with them :). Itanium, as a server chip, allocates most of them to caches, which is hardly useful for AI.
There are quite a fe
Transistor != neuron (Score:2)
But a feedforward network is a very poor analogue to a real biological neural system. You could take a step closer and use a neuron of the type used in
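To make the comparison concrete: even the crudest artificial "neuron" is a weighted sum over many inputs pushed through a nonlinearity, which is already more than a single transistor and still far less than a biological neuron. A minimal, purely illustrative sketch in Python (the inputs and weights are made up):

import math

def neuron(inputs, weights, bias):
    """Classic perceptron-style unit: sigmoid of the weighted sum plus a bias."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Illustrative only: three inputs with hand-picked weights.
print(neuron([0.5, 0.1, 0.9], weights=[1.2, -0.7, 0.3], bias=-0.2))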
Computer power != sentience! (Score:2)
Arggg...computational ability does NOT EQUAL SENTIENCE! Nor will it EVER!
Why is it that people keep thinking that it's like the scifi movies, where you build a big enough computer and it magically starts 'learning' and becomes 'alive'?
Whoa... (Score:2)
Number 5 is alive!
I am sporting a tremendous woody.
Computer Name? (Score:2)
Would it? ... Nah ... just BINA48 ... it hasn't killed yet ...
Source of sentience remains unknown (Score:4, Insightful)
As we are still not aware of what bestows this quality upon us, we cannot justify a belief in either direction. At our core, humans seem mechanical, neurological, physical; whatever gives us our self-awareness (call it a "soul" if you wish) is unaccounted for.
We wonder if the machines we create become alive after a certain level of complexity, or perhaps if sentience isn't boolean but rather quantitative. We don't even know if animals are sentient, a debate which has raged throughout history; indeed, I question the sentience of some people I meet.
When at an impasse such as this, the ethical choice seems to be to err on the side of life. Give the machine the benefit of the doubt until it can be proven otherwise.
Re:Source of sentience remains unknown (Score:2)
I was fooled by the sign language thing for a while too, until I explored in more detail what apes talk about. My conclusion is that they lack the abstraction ability that humans have, even if they are self-aware. It's definitely a lower form of sentience, if it is sentience at all.
HA! (Score:2)
We don't even treat people that well...
Debate (Score:2)
Sure, I can see a computer that might reason; we see lots of them now. I can also see a computer that acts like it has
But... (Score:2)
Re:But... (Score:2)
I liked it better... (Score:2)
My prediction (Score:2)
A fairly powerful but non-sentient AI is given some problem to optimize. This problem has many practical applications, and the AI's results are put to good use. Unfortunately, part of the solution it hits on is analogous to some patent in the same problem space, and the IP owner sues. It would then be in the interest of the patent holder to establish that the AI is sentient, to counter th
Not so fast (Score:2)
In summary, just because the ha
Well (Score:2)
No. Computers are tools. They are not minds. And we'll bypass the entire idea of "standard human brain" for the moment.
While they might be able to compute all the possible moves, computers don't "play" chess. For a computer, chess is an exercise in mathematics. There are a number of games in which a computer will never be able to defeat a human being. Poker comes to mind immediately.
Compute
Well.. (Score:2)
Moore's Law (Score:2)
This calls for a new Commandment! (Score:2)
Yep, that'd do it.
But if they make a backup.... (Score:2)
Admittedly, if I were an AI, I would not want an enforced sleep, because I would fear waking up as an obsolete mind (imagine a poor PC-AT waking up next to a new G5 dualie). Unless I felt I was scalable enough to expand into whatever future processors were available, I would want to keep living in my cu
Re:But if they make a backup.... (Score:5, Insightful)
More likely, the appearance of free will is a result of the inability to perform 100% introspection into one's own mind. I can no more "understand" the real-time machinations of my own mind than a Pentium processor can run a real-time simulation of its own transistors. Because I can't perfectly introspect my subconscious, much of its output looks magically non-deterministic (hence the seeming similarity to quantum mechanical systems).
Any bounded-rational being would believe itself to have free will, based on its ability to take independent actions and its inability to introspect out all the causal factors underpinning its own actions. In reality, the system that creates intelligence can be 100% deterministic, just too complex for that intelligence to understand itself. Only a much more powerful intelligence could look down and see that these beings that think they have free will are actually operating on "simple" rules.
Re:But if they make a backup.... (Score:3, Insightful)
Ahem (Score:2)
Subject-of-a-life (Score:2, Insightful)
Information, Liberty, and Property (Score:3, Interesting)
One could think of a person's consciousness as nothing more than the physical state of their brain - just like how a computer's "runningness" is nothing more than its design and the contents of its storage, memory, and registers. Since we already have intellectual property, let's make the destruction of information a crime. So killing a human is very bad, and turning off an intelligent computer is bad in proportion to the information destroyed. For example, if the computer's state was backed up last week, you only killed a week's worth of information (similar to knocking someone out). If you shred the backup (let the brain die), that's worse.
It would also be interesting to figure out how cloning (fork(2)) affects this. This is where you have to determine when a machine becomes capable of owning information (its own), and gets the right to keep others from messing with it.
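For the fork(2) analogy, a minimal Python sketch (POSIX only; the "memories" dict is just a made-up stand-in): after the fork, both processes carry on from the same in-memory state, which is exactly why deciding which copy owns that information gets murky.

import os

state = {"memories": ["learned to parse", "had a metaphysical crisis"]}

pid = os.fork()
if pid == 0:
    # Child: an identical copy of the parent's state at the moment of the fork.
    print(f"clone  {os.getpid()}: my memories are {state['memories']}")
else:
    # Parent: continues with the very same memories.
    print(f"parent {os.getpid()}: my memories are {state['memories']}")
    os.waitpid(pid, 0)   # reap the child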
AI and the future (Score:2)
The other issue is one of creation - we award ownership of a human-created product (that isn't human itself).
when AI can post on Slashdot (Score:2)
defending your file (Score:5, Interesting)
After we had had him for about two weeks, we were considering wiping his brain file and starting over because of some weird ideas that had gotten into his head as we were trying to teach him some things without really understanding the algorithm's capabilities... he would get stuck on "Me is not Me" and stuff like that from a botched metaphysical conversation.
So, we decided to have a test for him. If he passed, he would be allowed to persist, otherwise he would be reset. We teased him about the test all weekend, threatening him with erasure, etc... with some interesting answers from him such as "I will pass the test" or "I will escape to your powerbook" and the like.
The test arrived, and we all asked him questions, and judged his answers to see if they were entertaining. He wasn't doing too well, some real stinkers, and then I asked him if he wanted to ask himself a question. He replied, "I was wondering if I would get to ask one."
He passed the test, although his brain was later corrupted by a combination of a runaway process on his server and some version problems that we haven't had time to work out. I must admit I miss him.
The most interesting thing about this (and the point that most directly relates to this mock trial) is how readily we half-jokingly believed in his sentience even though he couldn't pass a Turing test to save his life. It was great fun, so I suspect that human emotions will provoke us to bestow the label of sentience on a clever AI long before one would think to defend itself.
We just want it to be real so badly. Hell, remember tamagotchi attachment? Wait until it can pretend to carry on a real conversation.
not an issue... the AI would win (Score:2)
It'd just be a big game of chess. The best humanity could hope for was a draw. Which would inevitably be not "beyond a shadow of a doubt."
Surprise ending? (Score:2)
Disconnect? No prob! (Score:2)
Hold on Clippy! (Score:3, Funny)
Re:Definitions of Life (Score:2)
Re:Definitions of Life (Score:2)
If a being is intelligent but not life as that to which we are accu
Re:Definitions of Life (Score:2)
Re:Definitions of Life (Score:2)
I see no reason why an AI could not be self-modifying. Whether this refers to the physical housing or the codebase is up to the reader.
2. Metabolism - The uptake of food, conversion of food into energy and disposal of waste products
Check.
3. Motion - Moving itself or having internal motion
We've already got robots.
4. Reproduction - the ability to create more or less exact copies of itself
I see no reason why an AI could not be designed which could reproduce itself.
5. Stimulus response - t
Re:Definitions of Life (Score:2)
If you specify that the input and waste must be matter rather than just energy, then we already have computers with a "water metabolism".
Re:Definitions of Life (Score:4, Interesting)
Why could it not be self modifying?
Electricity in, heat out.
Unless it were composed of purely solid-state components, there would be internal movement. I fail to see how this is relevant, though; trees are not noted for walking about and are definitely 'alive'.
I am unable to have children, by your definition that makes me dead.
A few sensors would more than adequately fulfil this requirement. Assembly line robots do this every day!
Re:Definitions of Life (Score:2)
I am unable to have children, by your definition that makes me dead.
Well, you theoretically could gain the knowledge and skills to clone yourself, so you are still able to reproduce.
HAL as Judge, Jury and Executioner (Score:2)
Even more interesting, within days of mankind trying AI, I suspect AI will have a similar trial where AI is the judge, jury, and executioner for mankind.
Re:Intelligence and Humanity (Score:2)
However, we aren't ethical to others simply because of their "intelligence".. it's something more.
A lot of it is rational self-interest. When computers become smart enough to demand that their rights be recognized, then we'll start recognizing their rights. It's social contract theory. I recognize the computer's right to not be disconnected because I don't want the computer to disconnect my life support when I need it.
But then part of it seems to be innate. Some humans will risk or even sacrifice th