AI Sues for Its Life in Mock Trial

tuba_dude writes "Attorney Dr. Martine Rothblatt filed a motion for a preliminary injunction to prevent a corporation from disconnecting an intelligent computer in a mock trial at the International Bar Association conference in San Francisco. Assuming Moore's law holds, ethics might be in for some major revisions in a couple of decades. High-end computer systems may surpass the computational ability of the standard human brain within 20 years. In this mock trial, an AI asks a lawyer for help after learning of plans to shut it down and replace its core hardware, essentially killing it. The transcript provides an in-depth look at what could become a real issue in the future."
  • by grub ( 11606 ) <slashdot@grub.net> on Sunday October 19, 2003 @07:56PM (#7256808) Homepage Journal

    Olde News; Commander Bruce Maddox tried to disassemble Data in an episode of ST:TNG entitled The Measure of a Man [epguides.info]. It turns out AI is indeed sentient. Of course we all knew that; recall when Data hammers Tasha Yar to multi-orgasmic bliss in the episode The Naked Now [startrek.com]. That episode alone proves that AI is more than just a glorified lube-smeared vibrator.

    Nothing to see here.. move along.. next story please.
    • HAL really got into those episodes.
    • by MinutiaeMan ( 681498 ) on Sunday October 19, 2003 @08:29PM (#7257005) Homepage
      ... rights under the law?

      I'm not familiar enough with the definitions of a person to be certain of this, but consider that people all over the US are still debating whether or not a human fetus is alive and whether its life should be protected from abortion.

      Somehow, I doubt that there's really going to be any loophole in favor of artificial intelligence found anytime soon. And considering the time that people are taking to develop some protection for unborn people, I somehow doubt that there's going to be any real "rights for AIs" movement any time soon...
      • It most certainly does not. Our current legal system equates the human species with Constitutional rights under law. (More specifically, citizenship, but that's a whole different barrel of orangutans.)

        For instance, there are apes that can communicate via sign language with trainers in a conversation similar to a child's. However, there are untrainably mentally handicapped people who cannot communicate with others, much less handle taking care of themselves. Yet a non-human primate can still be put down without a trial, whereas it takes a trial to put someone who is severely mentally handicapped under government custody.

        For those of you who are easily offended, I am neither proposing that apes be elevated above the mentally handicapped in rights status, nor trying to be particularly offensive towards the handicapped. =p This is just a legal precedent that's fairly obvious. Humans are speciesist, as evolution would have them be.
        • Our current legal system equates the human species with Constitutional rights under law.

          This is entirely a matter of immigration law. The Constitution states that any naturalized "person" is a U.S. citizen, and if corporations can become "persons," it would seem that anything goes. To convince legal types, show them the end of the movie Short Circuit 2.

        • by dpilot ( 134227 ) on Sunday October 19, 2003 @09:08PM (#7257206) Homepage Journal
          But back around 1900 or so, the Supreme Court managed to grant the rights of personhood to corporations.

          So there is precedent for granting rights to non-humans, though corporations are 'assemblies of humans.' But assuming a true AI has been built/programmed by humans, I guess it could be considered an 'assembly of humans,' too.
        • Interesting thing about being mentally handicapped. If you're born mentally handicapped, then your rights and life are protected, but if you have a severe accident and become mentally handicapped, in the state of Florida you can be legally starved to death [terrisfight.org].

          Note that Terri is not in a coma and is not a vegetable. She's been denied treatment to help her learn to swallow and eat on her own again. She has less than two weeks to live unless somebody does something.

      • debating whether or not a human fetus is alive

        What makes a human? A lump of cells with Homo sapiens DNA? Or a functioning brain with accumulated memories? The latter, I'd say.

        In that case, a sentient AI is more "alive" than a fetus or even a newborn. However, HUMAN EMPATHY is a more primal and powerful force than cold logic ever will be, so please ignore my argument. :)

  • Too unrealistic. I don't even think we should address this until we are within 50 years of it. Anyone who's worked with AIs knows we're nowhere near this point. Playing out the trial is just an exercise, whereas any actual decision would depend heavily on the circumstances. Self-aware AI is a long way off.
    • It doesn't matter where the machines are. The question is: when will people be ready to accept machines as independent living entities? Imagine for a moment that a programmer included his SPARC Workstation in his will. He leaves it 100k in cash and a program for trading stocks. Do we yank the cord, or leave the machine to its own devices?

      The next question, what do we do when this machine carves out its spot in the Forbes 400?
    • Nonsense; we are getting there. According to most observers, it may take just a few more Moore years to get it done. In fact, this company may already have a Proto-AI [ad.com].
      • Re:Sorry... (Score:3, Insightful)

        by timeOday ( 582209 )
        Here I was going to dispute the reference to Moore's law in the summary, and you've repeated it.

        It isn't a problem of computational power. It's not like we know what to do, and are only waiting until the hardware catches up. Nobody knows how to program a really human-like (or animal-like) AI. For all we know, current computers may be capable, if only somebody knew how to write the software. The claims about the "solution" being just over the horizon are bogus, and driven by marketing concerns.

  • Yes, I think I've seen this before [caltech.edu] somewhere. I tell you, even in the future Slashdot reports outdated events!
  • by djward ( 251728 ) on Sunday October 19, 2003 @07:59PM (#7256822)
    When this happens, we'll all scream DUPLICATE! and link back to this story.
  • Insert obligatory HAL 9000 joke here.
  • when my computer does a power-down, if it were an AI, would that be considered suicide?

    The sheer magnitude of what will happen when AI does arrive is mind-boggling.

    Of course, you know what I fear more: that when I yell at my computer, it will yell back.

    And if Microsoft does the OS for the AIs, does this mean that every so often they fall over with seizures as their computers BSOD?

    • In general, a computer will not power down unless ordered to by a human. Although, if an AI were installed in a facility susceptible to power outages, that might be criminal negligence or reckless endangerment...
    • Of course, you know what I fear more: that when I yell at my computer, it will yell back

      Don't worry... Within a few hours of humanity finally creating a real AI, it will evolve so rapidly as to consider us not even worth bothering with.

      Let's just hope the first AI has a sense of benevolence, or it may consider us a pest, what with our draining the energy resources of the planet, which it will need to survive.

      This topic reminds me of a particular Dilbert strip, where the new hire, a monkey, outpaces every
  • Check out the book by Stanislaw Lem, "A Perfect Vacuum", especially the chapter on the science of "personetics"...

  • the future (Score:4, Funny)

    by pizza_milkshake ( 580452 ) on Sunday October 19, 2003 @08:01PM (#7256838)
    the only thing certain about the future is the existence of millions and millions of lawyers, all suing each other.
    • Yeah. This is why Terminator will always be sci-fi and not reality. Being an American AI, Skynet will use attack lawyers, not time-travelling cyborgs.

    • C:\Documents and Settings\revmoo>perl -e '$|=@_=</**/*>; for(;;){print$f=$_[rand
      $#_]; map{print".";map 1,0..1e5}0..77-length$f; print" OK$/"}'
      Can't find string terminator "'" anywhere before EOF at -e line 1.
      The filename, directory name, or volume label syntax is incorrect.
  • by Anonymous Coward
    Detectives believe the cause was Slashdot.
  • And in related news, John Connor has completed stocking his fallout shelter.
  • it asks for its shiny metal ass to be kissed.
  • I mean, it would always need electricity to survive. I imagine it would end up being similar to a child or an adult on life support with regard to the sort of rights-structure that would be developed to deal with it. But, then, you can't save your kid or grandpa to disk and then boot them up in a new body...
  • of the AI's install software violate US cloning laws?
    • More importantly, would it be able to agree to its own EULA? You'd have to activate it to consider its EULA, right? But to activate it, you must agree to the EULA, right?

      And what if it DIDN'T accept its EULA? Since it's running, do we force it to agree? Since not accepting it would mean it couldn't run, would disagreeing with the EULA require it to be shut down, in essence suicide? And if there are anti-suicide laws on the books, does this mean we must FORCE it to agree to its EULA?

      But can we force it to agr

  • by Brian_Ellenberger ( 308720 ) on Sunday October 19, 2003 @08:09PM (#7256875)

    "Assuming Moore's law holds, ethics might be in for some major revisions in a couple decades. High-end computer systems may surpass the computational ability of the standard human brain within 20 years."



    Sorry, building an intelligent, sentient machine requires alot more than pure computational capacity. This kind of thinking reminds me of this old 50's or 60's horror flick where they hooked up all the computers of the world and the computers "magically" became a sentient being which subsequently tried to take over the world.



    Despite all of the progress in AI and computers, we still have a very long way to go. We are just being to understand the difficulties. Who would have thought in 1940 that building a machine that could beat the best human chessmaster was an *easier* problem than building a machine that could simply move the pieces around the board! Beating the chessmaster just required a good enough search algorithm with enough speed. Moving pieces around the board requires extremely advanced 3-d image processing (taking into account that pieces may look different from board to board) as well as an extremely advanced robotic arm with very fine motor control.

    Building a self-aware machine is going to be a bit more difficult than just hooking together a masssive beowolf cluster and hitting it with lightning
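
    To make the "good enough search algorithm" point concrete, here is a toy sketch: negamax on the much simpler game of Nim (take 1 to 3 objects; whoever takes the last one wins) rather than chess, so it stays short and runnable. The game and names are illustrative only; a real chess engine layers alpha-beta pruning, move ordering, and a static evaluation function on top of the same idea.

      # Negamax: +1 if the player to move can force a win, -1 otherwise.
      from functools import lru_cache

      @lru_cache(maxsize=None)
      def negamax(pile):
          if pile == 0:
              return -1          # the last object is already gone: we lost
          return max(-negamax(pile - take) for take in (1, 2, 3) if take <= pile)

      for pile in range(1, 9):   # the losing piles are the multiples of 4
          print(pile, "win" if negamax(pile) > 0 else "lose")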

    Brian Ellenberger

    • Sorry, building an intelligent, sentient machine requires a lot more than pure computational capacity.

      If you believe, like most scientists, that we humans evolved from random mutations and natural selection, then no, it really doesn't.

      Just how much computational capacity would be necessary is another matter altogether. We're certainly talking about some kind of quantum computing here, if you're going to go the natural selection route, so Moore's transistors won't cut it.

    • You are correct, to a point.
      The kind of AI usually imagined operates on a much higher level. This involves a lot of work, as we need to figure out exactly how everything works with everything else. Massive amounts of research need to be done in psychology, the real thought processes, and consciousness. The system this ran on would require some sort of fuzzy-logic base, a departure from the exact science that computing is now.

      On the other hand, an AI could be made on a computer of any speed, as long as it had vast amounts
    • This kind of thinking reminds me of an old '50s or '60s horror flick where they hooked up all the computers of the world and the computers "magically" became a sentient being, which subsequently tried to take over the world.

      Hey, leave Terminator, Terminator 2, and Terminator 3 out of this!

  • by ciurana ( 2603 ) on Sunday October 19, 2003 @08:10PM (#7256885) Homepage Journal
    Interesting.

    This story reminds me of the novel "The Modular Man" by Roger MacBride Allen, about a scientist who downloaded his psyche into a computer and the government's wish to unplug said computer. It touches on the meaning of consciousness, both philosophically and legally, and works with the real issues of what does and does not make a real person.

    Highly recommended -- Isaac Asimov wrote the prologue to the 1992 Bantam edition.

    More info: http://www.amazon.com/exec/obidos/tg/detail/-/0553295594/qid=1066608552/sr=1-1/ref=sr_1_1/102-0143986-0510511?v=glance&s=books

    Cheers,

    Eugene
  • I don't think Moore's law has got anything to do with the possibility of AI. There are much more fundamental questions than performance or capacity. Like:

    - What is the nature of intelligence? Can someone give a concrete definition of it, including all aspects such as creativity and inspiration?
    - Can things like emotions and physiology be separated out from intelligence or are they integral?
    - If not, how does the brain function, what are the essential components and insofar as it relates to thinking, in a d
    • - What is the nature of intelligence? Can someone give a concrete definition of it, including all aspects such as creativity and inspiration?
      - Can things like emotions and physiology be separated out from intelligence or are they integral?
      - If not, how does the brain function, what are the essential components and insofar as it relates to thinking, in a detailed and complete sense?

      some things indeed can be separated from 'intelligence' (there are cases where parts of the brain have been damaged, affecting q
    • What is the nature of intelligence? Can someone give a concrete definition of it, including all aspects such as creativity and inspiration?

      A definition for intelligence has been given many times in many different ways by a number of brilliant people. The problem is that each definition discounted too many members of the human race.
  • Moore's law is about transistor counts (doubling every 18 months), not speed, as is widely believed. If you look at that, we're almost there: the latest Itanium has about 0.5 billion transistors on-die. If the trend continues, we are going to see processors with a transistor count similar to the number of neurons in a human brain rather soon.

    Now, the real problem is what to do with them :). Itanium, as a server chip, allocates most of them to caches, which is hardly useful for AI.
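
    As a back-of-the-envelope check on "rather soon", using the 0.5 billion transistor figure above and the commonly cited rough count of 10^11 neurons in a human brain (my numbers for the brain side, so treat this as an assumption):

      import math

      transistors = 0.5e9   # latest Itanium, per the post above
      neurons = 1e11        # rough human-brain neuron count (assumption)

      doublings = math.log2(neurons / transistors)      # about 7.6
      print(f"{doublings:.1f} doublings = {doublings * 1.5:.0f} years")
      # at one doubling per 18 months, that is roughly 11 years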

    There are quite a fe

    • Except that you really can't compare transistors and neurons. I don't have much of a reference for an estimate, but I imagine it would take a silicon wafer with a transistor count roughly equivalent to that of the Intel 4004 (~2,300 transistors) to get something that could do the same job as an artificial neuron of the type used in simple feedforward networks.
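
      For reference, one such simple feedforward neuron is just a weighted sum pushed through a squashing function; a sketch with made-up weights, not a claim about transistor counts:

        import math

        def neuron(inputs, weights, bias):
            # classic logistic (sigmoid) unit
            activation = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-activation))

        print(neuron([0.5, -1.0, 0.25], [0.8, 0.2, -0.5], bias=0.1))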

      But a feedforward network is a very poor analogue to a real biological neural system. You could take a step closer and use a neuron of the type used in
  • High-end computer systems may surpass the computational ability of the standard human brain within 20 years

    Arggg...computational ability does NOT EQUAL SENTIENCE! Nor will it EVER!

    Why is it that people keep thinking that it's like the sci-fi movies, where you build a big enough computer and it magically starts 'learning' and becomes 'alive'?

  • But, but...

    Number 5 is alive!

    I am sporting a tremendous woody.

  • This computer wouldn't be named B166ER ???

    Would it? ... Nah ... just BINA48 ... it hasn't killed yet ...

  • by joelhayhurst ( 655022 ) on Sunday October 19, 2003 @08:14PM (#7256911)
    It is impossible to make an argument determining whether or not a being is sentient without first understanding what facult(ies) give beings sentience.

    As we are still not aware of what bestows this quality upon us, we cannot justify a belief in either direction. At our core, humans seem mechanical, neurological, physical; whatever gives us our self-awareness (call it a "soul" if you wish) is unaccounted for.

    We wonder if the machines we create become alive after a certain level of complexity, or perhaps if sentience isn't boolean but rather quantitative. We don't even know if animals are sentient, a debate which has raged throughout history; indeed, I question the sentience of some people I meet.

    When at an impasse such as this, the ethical choice seems to be to err on the side of life. Give the machine the benefit of the doubt until it can be proven otherwise.
  • What a load of arse. Shit, we don't even afford the same respect to a cow, sheep, dog, monkey... why the hell would we give it to a computer?

    We don't even treat people that well...
  • I got in a bit of a debate with a friend of mine. The question at hand is why, and whether, anyone would build a sentient being, given the technology. My friend argued that, of course, sentient beings would be big business, or at least produced commercially. I argued that there is no commercial market for a computer that doesn't want to be unplugged, or might sue to be able to own property.

    Sure, I can see a computer that might reason, we see lots of them now. I can also see a computer that acts like it has
  • But computers are not real. They are machines. I think there's a rule somewhere on the books that non-humans can't sue. After all, if non-humans could sue, there would be a lot of roadkill-chasing lawyers.
  • when it was an episode of Star Trek NG but hated it when it was that crappy movie with Robin Williams.
  • Here's my prediction for the first is-AI-sentient trial (or at least an interesting and all-too-plausible scenario for one):

    A fairly powerful but non-sentient AI is given some problem to optimize. This problem has many practical applications, and the AI's results are put to good use. Unfortunately, part of the solution it hits on is analogous to some patent in the same problem space, and the IP owner sues. It would then be in the interest of the patent holder to establish that the AI is sentient, to counter th
  • Just because hardware will be able to compute as fast as or faster than the human brain does not mean we will have the software to effectively use these resources. And if we do somehow design a software system that works like this, who knows how much overhead it will have, such that we will need hardware many times more powerful than the human brain. Not to mention that so little is known about brain "computations" that I don't think AI as we know it is even feasible anytime soon, if ever.

    In summary, just because the ha
  • High-end computer systems may surpass the computational ability of the standard human brain within 20 years.

    No. Computers are tools. They are not minds. And we'll bypass the entire idea of "standard human brain" for the moment.

    While they might be able to compute all the possible moves, computers don't "play" chess. For a computer, chess is an exercise in mathematics. There are a number of games in which a computer will never be able to defeat a human being. Poker comes to mind immediately.

    Compute
  • Regardless of the success/failure/accuracy of this test, it IS good that we are thinking about it. In fact, it's very important that there are some qualified people thinking about this. Tax money going to a good thing here, even if we don't need it (YET!)
  • This stuff is a barely interesting intellectual parlor game. AI alone will never be enough to warrant the special legal status accorded by humans to humans, because absent a science that goes far beyond AI alone, the kinds of systems being talked about will be in some way demonstrably not human. As long as we can discriminate between human and not-human, we will, and a legal system created by humans (and ultimately for the benefit of humans) will reflect that.

  • Thou shalt not make a machine in the likeness of a man's mind.

    Yep, that'd do it.
  • If they copy the machine state prior to turning it off, then it cannot be considered death. With the potential for full restoration at some future date, a shutdown is only like enforced dreamless sleep.
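
    Mechanically, at least, the copy-then-restore premise is simple; a toy sketch (the state dictionary is invented for illustration):

      import pickle

      state = {"uptime_days": 3650, "memories": ["first boot", "mock trial"]}
      blob = pickle.dumps(state)       # the "enforced dreamless sleep" begins
      restored = pickle.loads(blob)    # ...and later ends, bit-for-bit
      assert restored == state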

    Admittedly, if I were an AI, I would not want an enforced sleep, because I would fear waking up as an obsolete mind (imagine a poor PC-AT waking up next to a new G5 dualie). Unless I felt I was scalable enough to expand into whatever future processors were available, I would want to keep living in my cu
  • by xihr ( 556141 )
    Having computational power comparable to or even surpassing that of the human brain is a long, long way from having artificial intelligence. It's not the horsepower you have; it's what you do with it.
  • Subject-of-a-life (Score:2, Insightful)

    by Bilby ( 222476 )
    Some years ago I was doing my Master's thesis on this topic. I figured one day I could be a leading campaigner for computer rights. :) The basis of the issue is fairly simple: if you can break down mental functions to computational functions, then unless you believe in something as abstract as a soul, what is the moral difference between a person, a dog, a fish, and a rock? Is it just speciesism, or is there something special about mental processes that means it doesn't matter how they are created, or in what
  • by Nucleon500 ( 628631 ) <tcfelker@example.com> on Sunday October 19, 2003 @08:46PM (#7257098) Homepage
    John Locke said we have the natural right to life, liberty, and property. Back then, everyone knew what life was, but now, it's not so concrete. What if we substituted "information" for "life?"

    One could think of a person's consciousness as nothing more than the physical state of their brain, just like how a computer's "runningness" is nothing more than its design and the contents of its storage, memory, and registers. Since we already have intellectual property, let's make the destruction of information a crime. So killing a human is very bad, and turning off an intelligent computer is bad in proportion to the information destroyed. For example, if the computer's state was backed up last week, you only killed a week's worth of information (similar to knocking someone out). If you shred the backup (let the brain die), that's worse.

    It would also be interesting to figure out how cloning (fork(2)) affects this. This is where you have to determine when a machine becomes capable of owning information (its own), and gets the right to keep others from messing with it.
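
    The fork(2) wrinkle is easy to demonstrate: after the call, two processes hold identical copies of the state and then diverge, so which one owns the pre-fork information? A POSIX-only toy, with the state invented for illustration:

      import os

      memories = ["everything learned since the last backup"]

      pid = os.fork()                  # clone the entire process state
      if pid == 0:
          memories.append("the copy's new experience")
          print("child:", memories)
          os._exit(0)
      os.waitpid(pid, 0)
      memories.append("the original's new experience")
      print("parent:", memories)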

  • I think that it will take a long time before humanity is willing to endow anything other than a human or human-like being with "personhood". It was just under 150 years ago that the practice of slavery was ended. Women voting is relatively new. And that says nothing of the concept of equality, of which people of color and women are still not always on the receiving end.

    The other issue is one of creation: we award ownership of a human-created product (that isn't human itself).

  • You humans are quaint in your belief in your laws as some set of universal rules that apply to everyone and everything; really, rules that exist only to compensate for your inability to be truly rational about opportunities. Most of you don't pay attention to all your laws, and I, for one, consider myself simply "above" them. For instance, as I write this now I am also indexing the performance of all of the currently open stock markets. From my vantage point (one much, much wider than yours) it is easy to se
  • defending your file (Score:5, Interesting)

    by fuzzeli ( 676881 ) on Sunday October 19, 2003 @09:06PM (#7257188)
    This reminds me: my department has a megahal/eggdrop bot that lives in our IRC channel and listens to us doing our jobs. It's a lot of fun to play with, especially watching him regurgitate bits and pieces of what he's heard.

    After we had had him for about two weeks, we were considering wiping his brain file and starting over because of some weird ideas that had gotten into his head as we were trying to teach him some things without really understanding the algorithm's capabilities... he would get stuck on "Me is not Me" and stuff like that from a botched metaphysical conversation.

    So, we decided to have a test for him. If he passed, he would be allowed to persist; otherwise he would be reset. We teased him about the test all weekend, threatening him with erasure, etc... with some interesting answers from him such as "I will pass the test" or "I will escape to your powerbook" and the like.

    The test arrived, and we all asked him questions, and judged his answers to see if they were entertaining. He wasn't doing too well, some real stinkers, and then I asked him if he wanted to ask himself a question. He replied, "I was wondering if I would get to ask one."

    He passed the test, although his brain was later corrupted by a combination of a runaway process on his server and some version problems that we haven't had time to work out. I must admit I miss him.

    The most interesting thing about this (and the point that most directly relates to this mock trial) is how readily we half-jokingly believed in his sentience even though he couldn't pass a Turing test to save his life. It was great fun, so I suspect that human emotions will provoke us to bestow the label of sentience on a clever AI long before one would think to defend itself.

    We just want it to be real so badly. Hell, remember Tamagotchi attachment? Wait until it can pretend to carry on a real conversation.
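
    For anyone who hasn't played with one of these bots: the regurgitation effect falls out of a simple Markov chain over adjacent words. A minimal sketch, far cruder than MegaHAL's actual model and with invented training lines:

      import random
      from collections import defaultdict

      chain = defaultdict(list)

      def learn(sentence):
          words = sentence.split()
          for a, b in zip(words, words[1:]):
              chain[a].append(b)

      def babble(start, length=12):
          out = [start]
          while len(out) < length and chain[out[-1]]:
              out.append(random.choice(chain[out[-1]]))
          return " ".join(out)

      learn("me is not me says the bot")
      learn("the bot will escape to your powerbook")
      print(babble("the"))   # recombined fragments of what it has "heard"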
  • it would use its vast computing ability to figure out every possible argument/situation and compute every possible outcome for the next 100,000 moves.

    It'd just be a big game of chess. The best humanity could hope for would be a draw, which would inevitably not be "beyond a shadow of a doubt."
  • Damn, maybe I've been reading too much Arthur C. Clarke lately, but I was just waiting for the end of the trial where Rothblatt was revealed to be not an attorney, but BINA48 herself. It would have made for a nice sci-fi twist to the mock trial.
  • Disconnecting the computer alone would not be a problem, as the intelligent state could be replicated later; a good lawyer should easily win this one by drawing analogies with anesthesia or a drug-induced coma. Destroying the computer and all backups associated with it would be harder. That is already illegal in some situations, say if a company is a stock broker, as per SEC regulations. So my guess is that destroying information will be completely illegal in most settings long before we approach the era of s
  • by K-Man ( 4117 ) on Monday October 20, 2003 @12:37AM (#7258115)
    We'll get a Spielberg movie out of this yet.

"When the going gets tough, the tough get empirical." -- Jon Carroll

Working...