Technology

Spiritual Robots Symposium

Chris Callison-Burch writes: "Douglas Hofstadter has organized a symposium at Stanford discussing whether in the next few decades computational technology will outstrip us intellectually and spiritually, and thereby wrench us from our self-appointed crown as 'the highest product of evolution.' Speakers include: Ray Kurzweil, Hans Moravec, and Bill Joy. Date: April 1, 2000. Free and open to the public."

This is really an all-star cast, and a hot-button issue. Before the question above is answered, though, aren't there even more fundamental ones to get at, like whether computers can achieve consciousness at all? Aibo, after all, is not Fido.

  • by Anonymous Coward
    We're the result of a stochastic process that has been taking place for millions of years

    There is no evidence to make me believe that we are merely the result of a random process. The current theories that exist today regarding our existence are simply incomplete. There is little explanation for what caused the big bang, how life arose from nonliving matter, or where our free will comes from. There are simply too many fundamental questions left unanswered for us to jump to the conclusion that life is the result of a random process.

    all we need to know is that we can be happy

    Ever read Brave New World?
  • by Anonymous Coward
    So the next question is, what is the minimum amount you need to add to a computational system in order to make it capable of consciousness? Maybe it's a trivial addition, like a hardware random-number generator.
  • After all, say we produce a device that can outthink and outproduce humans, but has no reason to do anything?

    Obviously, the ones without inherent drive won't go anywhere. But as soon as a self-replicating artificial intelligence appears that *has* an inherent drive (for whatever reason: design or accident), you'd better watch out.

  • Want to bet? Take it to The Foresight Exchange [ideosphere.com].
  • The fact of the matter is that we have yet to produce a machine that does anything other than what we have explicitly programmed it to do.

    This will always be the case. The question is whether we're any different. I don't think we are.

    Whether machines will ever be conscious or spiritual is another question entirely. But both consciousness and spirituality are completely subjective and are impossible to judge externally.

    No glimmers of free will or the existence of a mechanical soul have ever been observed in a human creation.

    How would you observe them? Have these things ever been observed in humans?

  • Well obviously they'd choose my religion, 'cause I'm right. :)
  • Why? Probably to do some good skiing.

  • Wrong 'not'. I was talking about the 'not' in the sentence "What I'm trying to say is that might it not be possible that if we were to create life ourselves, it could only be the product of all that the human race was up to that point." which I still can't seem to untangle.

    Okay - I think he means:

    What I'm trying to say is that
    might it not be possible that
    if we were to create life ourselves,

    it could only be the product of all that the human race was up to that point.

    i.e., he believes in the possibility of the truth of "if we were to create life ourselves, it could only be the product of all that the human race was up to that point." The 'not' is used to signify expectancy of a positive reply, as in e.g., "Won't you be in New York that weekend?"

    Hm? How do you define "specific" in this case?

    Playing chess is a good example of a specific task.

    Gödel's Incompleteness Theorem is only applicable to a very small domain. Generalize beyond this domain at your own peril.

    If by "a very small domain" you mean "any system which seeks to make claims about itself" then we agree on everything but the meaning of the word 'small'.

    Hamish

  • Except it's not "any system which seeks to make claims about itself", it's "any formal system which seeks to make claims about itself."

    You're quite right. However, this was what I meant when I said that I thought we wouldn't be able to create a peer for ourselves, but that we might stumble upon one by chance.

    And after all, that's the scary bit, isn't it? People are afraid of losing control, and we don't master systems that we don't understand.

    Hamish

  • Well, I can't untangle your first sentence (the 'not' throws me off. I can't tell what it negates.)

    "Not to say that I necessarily believe one way or the other" is a disclaimer. It is quite disjoint from the rest of the sentence. I can't see that this was a particularly difficult sentence to parse. Foo. Well, since you're not anonymous, I'll have to assume that you're not a troll. Perhaps this is a bad policy. I guess *someone* has to be the mark, and this time around it might as well be me. ;)

    The first ... seems to state that any tool crafted by humans is necessarily inferior to those humans, which is of course complete nonsense, given, among many other examples, chess playing programs that can consistently outplay their authors.

    To start with, you have completely ignored the distinction between humankind and a single human being. Secondly, 'A is something less than B' does not imply 'B cannot create anything which will outperform itself at a specific task'.

    I think that Godel's Incompleteness Theorem contains some clues as to why a particular intelligence may not be able to create a peer for itself. However, we might stumble upon one by some means other than analysis (evolutionary techniques, say, or chance). It is this prospect which I find scary.

    Hamish

  • You haven't thought this through. If everyone is poor, starving, and unemployed, then who is buying all these products and services that these robots are producing? And why would the owners produce goods that there are no customers for?

    I see us becoming more of a utopian society where all there is for people to do is engage in activities that humans will most likely always be highly suited for, like artistic endeavors of all kinds, fun stuff like windsurfing and football, and generally goofing off and having a good time.

    But then, I'm an incurable optimist.

  • I'm alcohol-free, so I won't have much use for that wine... but what the heck - I'll raise: a signed copy of The Art of Computer Programming...

    Believe it or not, that was my initial thought for an appropriate wager . . . I'll see your Knuth. Let's hope we're still around to collect - either way!

  • "Go ahead and call me names if you like, but at its most fundamental level, life looks designed."

    Hmm. In many people's minds this idea was discredited [duke.edu] decades ago.

    "Like an accumulation of genetic errors, or noise in analog duplication, each successive Bert Bert was less of an image of Quater."

    Clearly Bert Bert forgot to evolve by natural selection.

    Your premise that we cannot create something better than ourselves is disproved by the whole history of technology. A stone axe cuts better than my hand does. A bicycle travels faster than I can run. A computer may one day be more intelligent and more conscious than (even:) I am.

  • Very well put, but...

    They assume that we are capable of creating beings that have the ability to reason far better than us, yet we do not have the ability to give them morals.

    And you assume that the folks doing the work care about this. Judging from the ethics of most corporate entities, I find this a bit naive.
  • Penrose applies his arguments to infinite Turing machines. The problem is that you can't solve the halting problem on one of those.

    Now, give me a finite Turing machine, and I'll write you a program that will tell you every single time whether the program will stop. It's simple: if you've got a finite machine, then there's always a bigger machine which can simulate the smaller machine. You can essentially run any program in a debugger and either look for the halting condition, or look for a repetition of a previously entered machine state.

    Since everything in the world is a finite Turing machine, and Penrose talks about infinite Turing machines, his arguments don't really apply as well as he thinks they do.
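    As a toy illustration of that simulate-and-watch idea, here is a minimal Python sketch (the machine model, its step function, and the examples are hypothetical stand-ins, not anyone's actual debugger):

        # Decide halting for a deterministic finite-state process by
        # detecting a repeated state: with finitely many states, a run
        # must either halt or revisit a state, and a revisit means it
        # loops forever.
        def halts(initial_state, step, is_halted):
            seen = set()
            state = initial_state
            while not is_halted(state):
                if state in seen:
                    return False              # repeated state: infinite loop
                seen.add(state)
                state = step(state)
            return True                       # reached a halting state

        # A counter stepping by 2 mod 7 reaches 3; stepping mod 8 it never does.
        print(halts(0, lambda s: (s + 2) % 7, lambda s: s == 3))   # True
        print(halts(0, lambda s: (s + 2) % 8, lambda s: s == 3))   # False

    Of course, "always a bigger machine" is the catch: the watcher needs enough memory to record every state the watched machine might visit.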
  • Relax, we're not talking about the Spanish Inquisition here :)

  • Evolution is bullshit..... Since evolution is bullshit, there are by definition no "masters" of evolution and therefore this entire article is moot.
    Damn, I love lateral leaps in logic. They make everyone's day. Since they make everyone's day, everyone should think laterally.
  • "10 years later (in 2040), he expects a cheap computer to be 30 times smarter
    than an average human."

    Oh, he may very well be right. The average human seems to get more stupid by the year... If that trend continues, I doubt the 'average human' will be able to tie his/her shoelaces.
  • You've assumed evolution to be true and then said that AI follows. Your logic is impeccable, but I disagree with your premises. I work from a design premise, and conclude that complete general AI is not possible. We would have to be better than ourselves in order to do it, which is self-contradictory.
  • The statement "this statement is true" is self referential but non-paradoxical, as a trivial example (although it still has some odd properties as the result of its self-reference). Yes, I'm no Gödel, you don't need to remind me. I suspect you're not either. So is general AI one of those pesky meta problems, or is it one of the benign variety? Like I said, I can't prove it, but I lean towards suspecting that it is an insoluble problem until such time as it is proved otherwise. Analogy with Gödel is no proof, but in the absence of sufficient quantities of genius and a lifetime of study, it's the best I can offer.
  • I'd say that the reason we exist is just because we can. To say that humans have an explicit purpose for existing is rather silly. We're the result of a stochastic process that has been taking place for millions of years, so there isn't any particular reason for us being around. As for justifying our existence to ourselves, all we need to know is that we can be happy. That in itself is reason enough for sticking around.
  • This raises the question in my mind as to whether it would be possible to cause AI to evolve somehow. Start off with some simple base, the ability to replicate, a selection agent, a mutagen, and time, and you've got the primordial soup. Of course, if it were that easy, it would have been done to some degree. We would just need a hell of a lot of computing power to get it all the way.
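    A minimal sketch of that recipe in Python (the fitness function, mutation rate, and population sizes are arbitrary stand-ins for the "selection agent" and "mutagen", not a claim about real primordial soup):

        import random
        random.seed(1)
        GENOME_LEN, POP, GENS = 20, 30, 60

        def fitness(genome):                 # selection agent: favor 1-bits
            return sum(genome)

        def mutate(genome, rate=0.05):       # mutagen: random bit flips
            return [bit ^ (random.random() < rate) for bit in genome]

        # the simple base: a soup of random replicators
        pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
               for _ in range(POP)]
        for _ in range(GENS):                # time
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:POP // 2]                        # selection
            pop = survivors + [mutate(g) for g in survivors]  # replication
        print(max(fitness(g) for g in pop))  # climbs toward GENOME_LEN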
  • It seems unlikely that quantum computers will be able to answer the halting problem. I believe that with non-deterministic Turing Machines, you can pretty much model the behavior of quantum computers. What I know for certain is that non-deterministic TMs and deterministic TMs are equivalent: they can compute exactly the same kinds of things (usually an NTM can compute it in fewer steps than a TM...) The main consequence of this is that if a problem is undecidable for regular Turing Machines (like the halting problem), it is also necessarily undecidable for non-deterministic Turing Machines.

    While I'm not entirely sure about this, I do think it ought to be possible to create a Turing Machine that did what a quantum computer did; it would just take exponential time to run where the quantum computer took only linear time.
  • Penrose's argument using Goedel's theorem may be consistent in a context where all machines can be described by Turing machines. However, there is one important feature that human brains have and some machines may emulate, but which is not part of Turing's machine theory: true parallelism. This ability allows the thinking entity to reflect on itself and its own thinking process, thus avoiding (not solving) the halting problem. Furthermore, Penrose's alternative description of computation (using quantum mechanics) is absurd, to say the least.
  • Also, computers are (now) lacking aura, I mean the thing we can look at with 'kirlian effect'. Usually, it indicates life...

    You mean somebody still believes in that absurd chunk of pseudoscience? Wow.
  • The old "evolution violates the 2nd law of thermodynamics" concept had been shown to be false time and again. The relevant links have been given right here on Slashdot.


    Is it me, or has Slashdot's SNR been dropping? I've always had the philosophy of setting my threshold to -1, but if the various trollers and immature ACs keep it up, I may have to go to a threshold of 1. I'd rather not.

  • Why would a "truly intelligent machine" understand exploitation (in whatever sense) to be an unfair thing? Why wouldn't "truly intelligent machines[s]" understand their work as a happy, fulfilling endeavor?

    The problem with most theories on "intelligent machines" is that they suffer from extreme anthropomorphism.

    MJP
  • To consider a computer to truly be conscious, I would expect it to be able to be self-aware, in the sense that it could independently evolve its own programming. Deep Blue was programmed to find chess moves using algorithms that humans have never fully analyzed (because we can't - to examine every permutation of a chess-solving algorithm by hand would take more time than this universe has left). It did the job very well - but could it extend the move selection and prediction algorithms it used, say to make them more efficient or more versatile?

    When a program begins to show self-awareness, then it has broken through into consciousness. I know we have written programs that can modify their own programming to a very limited, controlled extent. Take that ability and extend it to the point where the program can apply everything it learns to itself... that is what I meant about a program being able to do more than what it was programmed to do.
  • I can not prove one way or another that I have free will. I simply go by the assumption that I do because the alternative depresses me.

    It would be difficult for humans to be finite state machines because of the theoretical limitations of these machines. As another poster noted, all Turing machines are constrained by the Turing halting problem, and all FSMs can be proven logically equivalent to a Turing machine. The halting problem basically states that a machine of any class is not powerful enough to determine whether an arbitrary program on that machine will halt on a given input. It takes a more powerful machine to solve the halting problem for any given class of machine. Yet, the human brain can solve the halting problem for finite machines, indicating that it is more powerful than a FSM.

    Of course this is neither concretely proven nor disproven, and if you can point me to any reading that argues that the human brain is a FSM I would be very interested in seeing it. I have read GEB, as well as some of Hofstadter's other works, but I also like to see other people's perspectives since nobody has all the answers yet.
  • by quux26 ( 27287 )
    If there is a god, I wonder if he ever worried about this.

    If a machine ever becomes self-aware, what will we be to it except gods? And how will it feel about the creators after it surpasses us?

    My .02
    Quux26

  • For a less philosophical look at present robot mechanisms, check if your local PBS [pbs.org] station is showing the "Robot Wars" contest this week.
  • Actually, it was found several years ago that the human brain does get new cells. Someone discovered that a chemical which stains new neurons was used during human chemotherapy, and he got permission to get brain samples when patients died. New cells were found. Other studies found that neuron stem cells migrate through the brain and make new connections. So our brains are constantly getting rewired.

    A Metacrawler [metacrawler.com] search for neuron stem cells human shows an assortment of papers.

    There's another discussion here with related chatter. Slashdot: Brain Cell Rejuvenation [slashdot.org]

  • Correct. We're not the top of some evolutionary peak, we're just a thread of a web with many holes.

    We happen to have one more layer of brain cells than the other primates, and can reason and communicate more than they can. Other mammals have all of our emotions.

    Dogs, cats, cows, chickens, wheat, rice, and corn all have greatly succeeded in making us increase their numbers. What is an evolutionary success?

    A new flu pandemic could kill most humans, and a flu virus doesn't have to be particularly intelligent to reduce our numbers. A machine doesn't have to be intelligent to replace us...with something. Reproduction is all that it takes to be an evolutionary success, as long as the population increases.

  • Not to say that I necessarily believe one way or the other, but don't you think that arguments like this don't take into account the possibility that the human race is something less than its theoretical 'divine creator'?

    What I'm trying to say is that might it not be possible that if we were to create life ourselves, it could only be the product of all that the human race was up to that point. But this being that created us may forever be greater than what we could evolve to(spiritually or whatever). This implies that the human race has more potential than anything we could create at any given point.

    Just a thought.

    dan
  • then who is buying all these products and services that these robots are producing?

    The same 5% of the population that makes up 50% of the spending now. Only in the future they'll have even BIGGER SUV's!

  • Don't forget that you need to program it to step above that emotional state when necessary, and wallow in it at times. And also that its current actions will have future repercussions. And that sometimes there can be stuff that feels good, but only feels good and really "isn't" good. If you do it the way you're planning, you'll have Bender from Futurama. Hmm, drinking makes me feel good, loop, repeat.

  • Try it with an open game like Go, chess has a much more rigid environment, and as such is more suited to computing power. Not to mention it gets less complex after a certain point.

    And until Deep Blue gets as frustrated as Kasparov did, we've got a long way to go toward AI.

  • by Anonymous Coward
    >I envision a future in which our AI children will live much better lives than we do, they will have hopes and dreams, personal tragedies, perhaps loves, hates, and will be able to run things much better than we do, as they will not have millions of years of evolutionary baggage to drag around.

    But, hopes and dreams, loves and hates, ARE our evolutionary baggage. Love springs from our mating urges, our presentient need to nurture our offspring, to form relationships with our siblings, as a mechanism to aid our survival. Hate comes from the fear of those unlike us, who we compete with for resources, and who would gladly eat us. There are reasons why we find things beautiful, why we enjoy the sound of birds singing or the green colors of springtime. These things were around long before people became self-aware.

    I wonder if sentient machines, either designed or computer-evolved, would be able to have hopes or dreams beyond those we program deep in their cores, such as "explore the seas of Europa" or "dig a 300 km trench from point A to point B" or "keep your program running as long as possible". Desire has to come from somewhere.

    >We are, basically, animals forced by systems of our own creation, into civilization. We have ugly sides, we murder, cheat, steal, all because we are not very adapted to our environment.

    No, we cheat and steal because we ARE adapted to our environment. People who cheat or steal and don't get caught get more resources than those who don't. I don't expect sentient robots to be more ethical than us unless we hardwire it into them, and make sure they can't change that programming (an impossible task?) I'm sure a runaway nanobot would consume (murder?) every living thing on earth, if its only goal were to make more copies of itself, using what raw materials it could find.

    Just a thought.

  • Yes, these people may indeed be well known for their views on AI and spiritual computers.

    BUT... There are a couple of things aside from the date that make me wonder why more people haven't raised the "April Fool" alarm.

    1) "This just in.... Robot Monks take over New York City.... Mayor blames Moore's law...."

    Is this really the way serious philosophical symposia are announced?

    2) "There are no plans to webcast the event, but it will be videotaped - please e-mail Chris after the 1st about obtaining a copy."

    I suspect that Chris has just set up anakin@leland.stanford.edu with an autoresponder to reply after 1st April with "YHBT HAND" (or something similar :)

    I would be more inclined to believe that this is not a joke if the page had links to abstracts of papers to be presented at this event.

    I think it is a (less than elaborate) April Fool's Gag.

  • In terms of internal state, it might be almost that simple. What if you did keep a small array of numbers representing the degree of various emotions (happiness, fear, boredom, etc), and let this internal state interact with the "rational" part of your program? If you already had something capable of "rational thought", it shouldn't be that hard to bias its responses based on its internal "emotional" state.

    Human emotions are not necessarily complicated. Something as fundamental as "happiness" can be influenced by simple chemicals like Prozac or alcohol, which suggests to me that such emotions are the result of a fairly simple internal state in the brain.

    Ultimately, convincing people that your AI "feels" anything is just part of the Turing test. I don't think it will be a big deal, despite what Star Trek might say.
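    For what it's worth, the "small array of numbers" version really is only a few lines (a hypothetical Python sketch; the emotion names, weights, and options are invented):

        # An internal "emotional" state that biases which response the
        # (stubbed-out) rational part of the program picks.
        emotions = {"happiness": 0.5, "fear": 0.1, "boredom": 0.3}

        def feel(effects):
            # nudge the state, clamping each value to [0, 1]
            for name, delta in effects.items():
                emotions[name] = min(1.0, max(0.0, emotions[name] + delta))

        def choose(options):
            # pick the option whose appeal best matches the current state
            return max(options, key=lambda o: sum(emotions[e] * w
                                                  for e, w in o["appeal"].items()))

        options = [{"act": "explore", "appeal": {"boredom": 1.0, "fear": -1.0}},
                   {"act": "hide",    "appeal": {"fear": 1.0}}]
        feel({"fear": 0.6})               # something scary just happened
        print(choose(options)["act"])     # -> "hide"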
  • Have you heard of Stanley Miller's classic experiment? He mixed several simple molecules (hydrogen, water, methane, and ammonia) and exposed the mixture to an electrical spark discharge. This had the result of producing complex molecules including some of the amino acids which are part of living systems. The only "agent" required was the energy supplying the electrical discharge. More details here [pbs.org]. There are many other examples of physical and chemical systems exhibiting self-organizing behaviour, with no violations of the thermodynamic laws.
  • If you want to take the Judeo-Christian belief that humans are special because they are endowed with a "spirit", then there is something else that you need to take into account.

    According to J-C beliefs, humans are created in the image of the creator. Thus, doesn't it follow that we ourselves are creators, if only of a lesser order? Taking this view, creation becomes recursive. The Creator (God) creates a race of intelligent beings, imbued with a spirit. Why do we have a spirit and animals don't? Because we are formed in the image of the creator, and they aren't.

    Humankind then labors to produce an intelligence which can reason as they do. Is this not creating a being in our own image? Are we not god to this intelligence that we have created? "Spirit" and "intelligence" are terms that don't have satisfactory explanations. They fill in for a category of things that we don't fully understand, but that somehow we feel separate us from other, less blessed animals. What is there to convince us that they are separate at all? If they aren't, then *any* true intelligence (including true AI) by definition has a soul.

    You claim that this intelligence will know from day-one that it is a machine. Why is that? We are still in continuous debate over what "humans" really are. Some say we are merely meat, some say we are purely spirit and that no physical world really exists. Most beliefs lie in between. What is right? How would a computer intelligence truly understand what it is if we can't understand what we are?

    As for doctorates on a chip, I have read no scientific articles that lead me to believe that this will be possible in the near future, if ever. This sounds like utter science fiction. Do you have links to reference here? From what we can tell, our brains don't operate in binary, so how could a digital circuit be integrated into our brains? We don't understand how thoughts are formed, so how on earth can we implant them?

    Discussing spirit is fine for a religious debate, but isn't very useful in any sort of technical debate. Unfortunately, many of the questions that arise in AI are non-technical. Oh how confusing things get when we start mixing religion and philosophy with computing!

    As humans we really don't understand intelligence well enough to expect true AI anytime soon, so much of this debate is moot. However, I do see, in humanity's drive to create, shades of the Creator. Why do we create? We create because creation is joyful. And how can we possibly be closer to the Creator than when we, ourselves, are creating?

    --Lenny
  • Penrose suggests that because humans can always solve halting problems (will a given program terminate?) and Turing machines can't, the human brain is doing more than mere computation.

    Well apparently Penrose has never dealt with any government offices. Maybe he should read Kafka. Or try to apply for a Green Card.

  • I had the chance to see him "live" at a conference in Grenoble. He's one of those few people who manage to communicate the most abstract concepts in the most entertaining way. Even to the layman -- you know, those who don't even know that there is actually research being done in mathematics (most people, actually, as strange as it might seem to us geeks).

    When will we have a Slashdot interview?

  • As am I. (Disclosure, I studied math.)

    Every logician who I have ever seen discuss the topic says that Penrose completely misunderstands the contents of Goedel's theorem. And furthermore his attempted application of it to AI is misguided. Here is a short explanation [santafe.edu] of Goedel.

    Strong words?

    Let me give you a quick sample of why Goedel does not apply to us. Goedel merely puts a limit on what absolutely consistent reasoning can determine. However our reasoning is inconsistent - we make mistakes. (Mistakes which historically have often taken years to discover.) And so Goedel says nothing about us at all!

    Cheers,
    Ben
  • I am another person who believes that AI is coming. Why have we not seen it yet? Because, by any attempt to measure it, computers have nowhere near the computational power that we take for granted when doing something as simple as making out speech or recognizing someone! However, we can estimate when computers will hit something in that order of magnitude of power. Estimates are in the range 2020-2030.

    What happens then?

    Every technological advance until now has shown a pattern where humans are displaced, find new jobs, and everyone is better off. The reason that humans can find new jobs is that there are always jobs that are easier to fill with general-purpose humans than with specialized machines.

    With artificial intelligence and robots I see the advent of general purpose machines that can more cheaply do anything that a human can do.

    What then?

    Humans get displaced - and there is no job that is not better filled by a computer. The basic equation of capitalism is that there is something you can do for someone else with money that is valuable enough that they are willing to pay you for it. This is called work. If they pay you money and you don't provide value back it is called charity. The majority of people work for someone else.

    What happens when most people have nothing they can do that those with money find valuable? Unless we give up capitalism they go on welfare or starve. I don't see us giving up capitalism, and I don't see the welfare system expanding like that - technology will just move to countries without welfare.

    What then?

    Will we see mass starvation before 2050?

    It is one thing to say rationally that we will become irrelevant and be replaced evolutionarily. It is quite another to view with equanimity a real prospect of widespread death inside of a century!

    Cheers,
    Ben
  • The third possible (and actually accurate) resolution is that Penrose does do a good job explaining one of the many possible proofs for a lay audience, but fails himself to understand the exact meaning of the theorem and horribly misapplies it to artificial intelligence.

    Regards,
    Ben
  • Option 3 is (as you admit) implausible.

    Option 2 is evolutionarily unlikely. Even if only a small percentage of people is biologically predisposed to prefer the real thing, babies and all, no matter what the external world offers (global population decreases say that this is not everyone), they are the only ones who matter evolutionarily, and they will dominate in the long run.

    Option 1 is possible, but I don't think that the catastrophe will be externally induced when it comes...

    Cheers,
    Ben
  • OK then, I'll ante up a bottle of Penfolds Grange, Australia's most famous (and expensive) wine. I also promise to make it an original and not nanotech-engineered!

    Your turn . . .

  • Incidentally, it occurred to me a month or two ago that the following problem is equivalent to the halting problem: "Construct a program of a size no greater than N, which runs as long or longer than any other programs of equal or lesser size (except those that run forever, of course.)"

    This problem (or a close variation of it) is known as the "busy beaver problem", and is undecidable (I've forgotten the exact details of the proof, but basically it's a reduction proof - if you can solve this problem, you can solve the halting problem, and since you can't solve the halting problem, you can't solve this problem).
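    For the curious, the tiny end of the problem can be brute-forced (a Python sketch using the usual 2-state, 2-symbol conventions; the step cap is exactly the catch, since no fixed cap is provably sufficient in general):

        from itertools import product
        from collections import defaultdict

        STEP_CAP = 100
        # each table entry: (symbol to write, move left/right, next state)
        entries = list(product([0, 1], [-1, 1], ["A", "B", "HALT"]))

        def run(table):
            tape, pos, state = defaultdict(int), 0, "A"
            for steps in range(1, STEP_CAP + 1):
                write, move, state = table[(state, tape[pos])]
                tape[pos] = write
                pos += move
                if state == "HALT":
                    return steps
            return None                        # hit the cap: inconclusive

        best = 0
        for t in product(entries, repeat=4):   # one entry per (state, symbol)
            table = dict(zip([("A", 0), ("A", 1), ("B", 0), ("B", 1)], t))
            steps = run(table)
            if steps:
                best = max(best, steps)
        print(best)   # should print 6, the known 2-state busy beaver score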

  • Quickified version - imagine an anglophone man at a computer terminal, with a giant book telling him exactly what to type in response to messages sent to him in Chinese. This man does not understand what he is doing at all, and yet this hypothetical manual he is following allows him to exactly simulate an intelligent response. If this man passes a Chinese Turing test this way, do we claim that he understands Chinese??

    He may not understand Chinese, but I would argue that the system does understand Chinese. How do we know that you understand English and aren't just doing a convincing simulation? :)
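    The room itself is almost embarrassingly easy to sketch, which is part of the argument's pull (a toy Python version; the two rule-book entries are invented):

        # The "giant book" is a lookup table; the man is the loop around it.
        # Neither the table nor the function "knows" Chinese; whether the
        # system as a whole does is the question above.
        RULE_BOOK = {
            "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thanks."
            "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
        }

        def man_in_the_room(message):
            return RULE_BOOK.get(message, "请再说一遍。")   # "Say that again."

        print(man_in_the_room("你好吗？"))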

  • I have done AI-related research, and spent quite a deal of time learning about the history of AI. One thing that is abundantly clear is that, over the years, we have been horribly over-optimistic about the progress we will be able to make.

    AI is a very hard problem, and there have been very few fundamentally new techniques discovered in the last 20 years. While I don't buy Penrose's argument that intelligence is noncomputable, I do suspect that a much better understanding of how the brain does what it does is required before we can build human-like intelligences. In the meantime, what we can and have been doing is attacking specific problems and building systems to solve those. We can do a lot of very useful things without solving the "general intelligence problem".

    However, the premise that smarter-than-human AI is right around the corner and we should be preparing for it is bunk. In fact, I'm prepared to bet we will have a permanent settlement on Mars before we develop an AI system with capabilities equivalent to an average six-year-old. Any takers?

  • In many people's minds this idea was discredited decades ago.

    Yes, and that can be a bit of a problem, because it means that they won't listen to your arguments or even read the link [origins.org] that one provides as a supporting argument. From a simple epistemological perspective, you can be presented with as much evidence for design as you like and still deny that it is actually design, but merely something that has the appearance of design. This is Dawkins to a tee. But hell, even Dawkins tells us that stuff looks designed, and wasn't that what I said originally?

    The point of my argument is that I don't believe we will be overtaken by machines because I believe in actual design rather than inevitable onward upward evolution which merely seems like design. In all cases of actual design, there is a lossy effect. You cannot design and build something which has more intelligence than you, because you had to use your intelligence in order to design and build it. Where is the extra intelligence going to come from in order to get a smarter end product?

    Clearly Bert Bert forgot to evolve by natural selection.

    Natural selection won't work. Let's assume that Bert Bert makes lots of replicas of himself and then picks the best one to take over the job. This will minimise the rate at which the system degrades. In order for actual improvement to take place, there has to be an improvement over the original. In the real world, this improvement is supposed to happen by chance, which seems a bit far-fetched, given that even relatively trivial things can't happen by chance [nutters.org]. Without the supposed benefit of random changes, we are back to Bert Bert making a better design under his own steam, and if he can do that (I say that he can't, but if he could) then he doesn't need natural selection.

    A stone axe cuts better than my hand does.

    And so on. Yes, pretty much any example of a tool, right down to Ugg the Caveman walloping someone with a club-shaped lump of wood, disproves my theory if your assertion is relevant. Fortunately, it isn't.

    Even the earliest computers could perform mathematics faster and more reliably than a room full of accountants. That's why they were useful at all. But someone has to tell them what math to perform. And even where they make decisions about what math to perform, someone had to tell them how to make those decisions. And if we get programs to figure out how to make decisions on their own, then someone will have to have told them how to do that. See a pattern forming?

    The only threat here is if a lower grade of intellect can be overcompensated by increased speed -- assuming that computers even would be able to out-do us in think-speed were they performing the same abstract mental tasks. It's not like we know enough about our own thought processes to tell how much CPU power they'd need. People tend to make the simplistic assumption that because computers can add numbers billions of times faster than they could, that the speed increase will scale up with the problem of general intelligence. Or maybe people just think that brains are the product of an undirected random process, and we can do better with electronics -- ironic, given that it's that very same randomly-evolved brain which thinks it can do better.

    Like I say, if we face a threat from technology, it will probably be because we invent something lethal to ourselves or wreck the environment or stuff the gene pool or blow ourselves up. It will not be from producing the next step in evolution. That stuff is good for science fiction writing based on a theme of hubris, but it is not realistic.

  • If humans couldn't create things more "intelligent" than themselves, how can you explain the countless chess programs capable of beating their own creators?

    Never mind chess. When I did "Knowledge Systems" at university, we had to write a program to play a variation on tic-tac-toe requiring five in a row on an eight by eight grid. I wrote a program to do this, and I had to play very carefully to force a draw. My program wasn't perfect, and I managed to beat it once by exploiting a flaw in its logic about which I knew. Often I would be sloppy in my game, and it would beat me.

    So on average my own computer program beats me at this connect-five game. Does that make it a better player than me? Well, yes; the win ratio is the usual metric for measuring a player's quality. Does that make it more intelligent than me? Of course not. I devised the rules by which it is playing, and I could improve on them if I tried.

    The important thing is that I not only know the rules, I implemented them in the AI. If I wanted to, I could follow the rules myself and be exactly as good a player as it. I'd be a lot slower, because I'm not optimised for that kind of approach -- I'd have to emulate a computer, and I do that badly.

    The main problem with AI is that it is a "meta" problem. "Meta" problems are full of gotchas. Gödel's incompleteness theorem is an example of the kind of problem I'm talking about here. Mathematicians thought they could prove a system of mathematics using the system itself (a "meta" problem if there ever was one), but Gödel came up with this amazing (and hard to appreciate) proof that this kind of bootstrap-lifting just isn't possible: you can have a system that's complete, or consistent, but not both. Well, dang, because both was what we wanted.

    Bearing the spirit of Gödel's theorem in mind, shift back over to AI for a moment. Let's suppose we want to construct a precise model of our own intelligence so that we can improve on it. (If we're going to make improvements, we need the baseline model to work on.) This is a meta-problem. This is you, as an intellectual being, trying to construct a complete and consistent model of your own intellect. This model will necessarily contain your own ability to analyze and determine the full extent of your own intellect and construct a model thereof. You can see that your model has to contain itself, and although I lack the mathematical prowess to turn this into a snazzy Gödelesque theorem, I think you can see that this looks like one of those "meta" problems that give us so many headaches.

    I don't believe we can possibly even have a complete understanding of our own intellect, let alone improve on it. Thus the inevitable "Bert Bert" line of degradation from creator to creation. Note that the "meta" problem only remains a problem when we attempt to model our complete intellect: we should be able to model particular subsets of it (which are known to contain inaccuracies, but are useful none the less) without any theoretical difficulty, and I see plenty of hope for improvement in areas like speech recognition, pattern recognition and such like. We won't create "evolutionary replacements" this way, though, just more tools like stone axes and chess computers which do particular tasks better than we do. Also, I'm not saying that some creature better than humans cannot exist, but rather that if such a thing can exist, then we won't be the ones who make it.

  • Oops, I misspoke; it's not that we can always solve halting problems. I forget the trick Penrose uses, but it mimics Goedel's proof (which I've all but forgotten). It's incredible how much you forget in the real world.

    a good description [washington.edu] of the halting problem

    And here's a review of SOM [uniandes.edu.co]

  • I'll take that bet, thank you. And I'll also make a meta-bet that I'll be alive for as long as it takes for either event to take place (pessimistic estimate, 2100; optimistic estimate, 2020 or less!). Now, how much money will you put where your mouth is? (Will money even exist when all this takes place? When/if nanotech comes along, who will care about dead trees painted green anyway?)

    (Disclaimer: I am not a sucker. While I recognise that we tend to be overly optimistic about the short term, I also recognise that we tend to be overly pessimistic about the long term. Also, I don't believe that human exploration of the solar system has a big future - believing in uploading as I do. So there you go.)
  • Uhrm. I'm alcohol-free, so I won't have much use for that wine... but what the heck - I'll raise: a signed copy of The Art of Computer Programming... all five volumes (that is, assuming that Knuth has already finished it by then), leather cover (synthetic, of course... because nobody kills cows anymore in the 21st century :]).

  • I can't directly comment on Penrose's book, but humans certainly CAN'T always solve the halting problem. Most of the programs you are familiar with are 'intelligently' constructed, where the programmer is implicitly using various halting invariants to guarantee that the program halts. But I'll bet I can give you a randomly generated 8-state Turing machine and you'll never guess whether or not it terminates. If you're curious, look around on the web for the best known 6-state Busy Beaver Turing machine and see how long it takes to halt.

    Incidentally, although the halting problem is certainly related to Goedel's Incompleteness Theorem, it's still not the same thing. I think Penrose was saying that, since humans are not confined to "thinking within a formal system", we are capable of deciding the validity of assertions that a computer cannot. Again, I don't buy this assertion either, for similar reasons; in fact, I sometimes wonder if my Palm Pilot might be more intelligent than even Roger Penrose. :)
  • Far too many of us seem to worry too much about more or less irrelevant stuff (like sex)

    ah, now I see why you would say

    I would not be all that sad to see AI replace human beings.

    If you put THE way that humans evolve, reproduce, and express ultimate emotion under "irrelevant" status, it's easy to see why you'd consider a machine to be much the same as yourself.

    We can make the computer's sole goal be to maximize the total happiness of all the world's inhabitants.

    Here's a crazy idea, why not do that with the inhabitants we already have? Forcing people is the WRONG way to do it, forcing computers to force us is even worse. The road to Hell is paved with good intentions.

    because we know that it has no other goal then to help us.

    YOU WILL STUDY THIS, IT WILL HELP YOU APPRECIATE BEAUTY. *patient gets up, computer breaks his legs to "help" him concentrate* YOU WILL STUDY THIS, IT WILL HELP YOU APPRECIATE BEAUTY.

    Why not use this god-like ability (to create an AI better than us) to alleviate the strains of scarcity that make this world a place you don't want to live in?


  • What is your definition of "doing something other than what the machine was explicitly programmed to do"? Deep Blue was programmed to play chess very well, and towards that end it has/had made many chess moves that at first baffled onlooking grandmasters (referring to the rematch with Kasparov). It's a far cry from chess (which is now mostly computable) to other games (like Go), much less anything approaching AI, but the idea that you can set up a handful of principles and watch them followed to their [il]logical conclusions doesn't seem too far-fetched.
  • As I understand it, Turing's machine is purely deterministic.
    In some forms, yes. But you can construct nondeterministic Turing machines too, and it turns out that everything that a nondeterministic Turing machine can do can also be done by a deterministic Turing machine. Think about it... All a deterministic machine would have to do is follow each of the possible paths in turn. There will always be a finite number of possible paths, because of the "digital" nature of symbol systems.

    Incidentally, I should have referred to the conjecture that no computer is more powerful than a Turing machine as "Church's thesis", although it comes out of Turing's work just as easily.
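    Here is that follow-each-path idea as a Python sketch, shown on a nondeterministic finite machine for brevity (a full Turing machine simulation would also have to copy tapes; the states, transition relation, and example input are invented):

        from collections import deque

        def accepts(delta, start, symbols):
            # a deterministic breadth-first walk over every branch of the
            # nondeterministic computation
            frontier, seen = deque([(start, 0)]), set()
            while frontier:
                state, i = frontier.popleft()
                if (state, i) in seen:
                    continue
                seen.add((state, i))
                if state == "accept" and i == len(symbols):
                    return True
                if i < len(symbols):
                    for nxt in delta.get((state, symbols[i]), ()):
                        frontier.append((nxt, i + 1))
            return False

        # nondeterministically guess where the substring "ab" begins
        delta = {("q", "a"): {"q", "saw_a"}, ("q", "b"): {"q"},
                 ("saw_a", "b"): {"accept"},
                 ("accept", "a"): {"accept"}, ("accept", "b"): {"accept"}}
        print(accepts(delta, "q", "aab"))   # True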
  • What if the human conscious mind is no more than a massively massively massively parallel computer and that consciousness is simply the emergent property of a complex system? And that any large system where several billion nodes simultaneously pass information to other nodes embodies a consciousness of some sort?
    Good idea, but it doesn't refute Penrose's particular argument. The Godel theorem / Church-Turing thesis applies to any computer, present or future, with any architecture, any amount of RAM, any number of CPUs, etc. (Along with computers so exotic that they have no concept of "RAM" or "CPU")

    It's a pretty neat trick. Basically, the moment you specify a system, i.e. write out the source code or draft blueprints, you guarantee that it is incomplete (or inconsistent.) So hypothetically, the only algorithms that might be able to solve the halting problem are those that can't be written down. If you find any algorithms that can't be written down, please be sure to send me a copy. ;-)

    Having said all that, Penrose is probably still wrong, because he hasn't proven to my satisfaction that the human mind is not subject to the limitations of the Godel theorem. (And of course, any such proof also couldn't be written down...)
  • But I'll bet I can give you a randomly generated 8-state Turing machine and you'll never guess whether or not it terminates.
    I doubt that. If I watched it play out on a simulator for a few weeks, I'm pretty sure I'd figure it out eventually. (That's assuming it doesn't halt, of course... we're talking about a semi-decidable problem here.)

    Incidentally, it occurred to me a month or two ago that the following problem is equivalent to the halting problem: "Construct a program of a size no greater than N, which runs as long or longer than any other programs of equal or lesser size (except those that run forever, of course.)" Personally, I think it's even harder to believe that the human mind can't solve all problems of this form... but if brains are equivalent to computers, that must be the case.

    I think Penrose is wrong. But I also think that he's got the most compelling argument against strong AI that's out there. (In fact, his is the only objection that hasn't been settled to my satisfaction.)
  • (Say I built a machine that was conscious. How would I know I had succeeded?)
    Ironically, one of the nice things about Penrose's argument is that it answers that question. Just set up a second machine that successively feeds undecidable problems to the first machine, and see if the combined operation halts. If Penrose is right, that should be no problem for us humans, right?
  • Just set up a second machine that successively feeds undecidable problems to the first machine, and see if the combined operation halts.
    Damn, I've already posted four replies to this article, but I really should clarify this: I'm assuming the operation would halt if and when the second machine generates a problem that the first machine could not solve, thereby proving that it is not conscious (according to Penrose.) If the operation does not halt, then the machine is conscious (according to Penrose.)
  • Believe it or not, according to Turing, nothing can make a computer more "powerful," past a certain point. RAM and MIPS and parallelism are nice, but the most advanced computer the world has ever seen has exactly the same limitations as the simple Turing machine. (It just runs faster.)

    Unfortunately, like so much of the work in this area, Turing's conjecture can't be proven.
  • That's actually quite a leap, even from the limb that Penrose is out on. First of all, don't conflate the halting problem with the ability to determine whether a specific program halts. To prove that a machine solves the halting problem, you need to prove that it can determine whether any program you feed it will halt.
    I was speaking somewhat tongue-in-cheek, so I may not have put my proposal clearly enough. You're right, I would need to determine whether there exists any instance of the halting problem that your machine cannot solve, out of an infinite number of possible runs. Answering that question is itself an instance of the halting problem... but since this whole thought experiment is based on Penrose's premise, I'm assuming that humans should be able to solve this particular halting problem, thereby proving that the machine is conscious.

    If Penrose is wrong, then this method would not work... but then it would be moot, because the test wouldn't be valid in the first place. I only bring this up to point out that in some respects, it would be handy if Penrose were right.

    Furthermore, Penrose argues that human minds are capable of deciding undecidable problems (which TMs are not) and therefore that human minds are not TMs. He does not claim that a machine that could decide undecidable problems would necessarily be conscious.
    Absolutely right, that was sloppy of me. However, if machines exist that can overcome the halting problem, that would be very compelling evidence that those machines are conscious, even though it would not be proof. There aren't that many different kinds of computers in our universe; perhaps three or four, in current thinking. If we were to discover a fifth kind, the law of parsimony suggests that it would explain consciousness.

    Having dabbled in mathematics, I am equally certain that I cannot determine whether arbitrary mathematical statements are true or false. Thus, by your statement of Penrose's thesis, I am not conscious. I claim that I am conscious, but by your argument I am not. Now prove I am not.
    Brings to mind an interesting question... What if any particular human might not be able to solve all undecidable problems, but the human race as a whole can? Certainly there are people out there who would never even be willing to make the attempt. Do we need to talk about every single human, or just a hypothetical "average taxpayer" case?

    At any rate, how can you prove that your hardware is incapable of solving all instances of the halting problem, in the right situation, with the right insights, and the right motivations? You can't, just as Penrose can't prove that you're wrong.

    PS: Yes, I know you are taking a devil's advocate position.
    PS: Absolutely. I just think that Penrose's argument can't be dismissed that easily. (Particularly since it cannot be proven wrong... unless it is correct. Dagnabbit...)
  • The "Halting Problem" is a misnomer, it depends on the requirement of Turing machines having infinite memory. No computational machine has infinite memory, so no real (as opposed to abstract) machine suffers from the halting problem.
    You're making a subtle but important mistake. Turing machines don't have infinite memory, they have unlimited memory. Computers with unlimited memory can be built (although they usually aren't.) Your computer has a diskette drive, right? If you wanted to perfectly simulate a Turing machine with your computer, all you have to do is keep feeding it disks whenever it needs to swap out data. Sure, there are only so many floppy disks in the universe (even counting AOL disks,) but that doesn't make the computation any less valid.

    That doesn't help with the halting problem, though. The problem is that there is no way to predict how much memory you will need to compute an operation until you actually try it... in the example I gave above, you could keep feeding your computer fresh disks until kingdom come (literally), and you still couldn't prove that the operation does not halt.

    As far as real machines, it is true that they are not strictly Turing machines. (In fact, they're not even push-down automata, they're finite state machines.) But the question is, is my PC a perfect FSM, or an imperfect Turing machine? I'm inclined to think that it's the latter. Notice that modern computer programs treat the computer as if it has an infinite supply of memory; if they run out of memory, that's an exception, which typically causes the program to crash. It may sound a little like voodoo, but I think that the intent of the programmers (who program as though for a Turing machine) actually makes the computer a Turing machine, regardless of its physical design.
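    The "keep feeding it disks" picture is easy to make concrete (a minimal Python sketch; cells spring into existence only when touched, so memory is unlimited in the sense that matters even though only finitely much is ever in use):

        from collections import defaultdict

        class Tape:
            def __init__(self):
                self.cells = defaultdict(int)     # blank (0) everywhere by default
            def read(self, pos):
                return self.cells[pos]
            def write(self, pos, symbol):
                self.cells[pos] = symbol          # a fresh "disk" on demand

        tape = Tape()
        tape.write(10**9, 1)                      # a cell a billion squares away
        print(tape.read(10**9), tape.read(-5))    # -> 1 0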
  • Penrose is probably still wrong, because he hasn't proven to my satisfaction that the human mind is not subject to the limitations of the Godel theorem.

    Yes, Godel's theorem and the halting problem only say that these unanswerable questions exist, not that they really keep you from doing anything specific, but Penrose makes a bigger mistake than this! A mistake that a physicist should not make! Godel's theorem and the halting problem say something about the mathematical method, not about what happens when you give the machine the ability to interact with the real world, i.e. the good old-fashioned scientific method. Biological evolution can be seen as a really slow and sloppy way to practice the scientific method, so it seems reasonable to suspect that we can produce AI from our current programming-driven slugs (computers) in less time than it took Mother Nature (say, 1 million years). It's not a really useful upper bound, but it's a lot safer than the 30 years they were guessing 30 years ago.. :)
  • We are, basically, animals forced by systems of our own creation, into civilization. We have ugly sides, we murder, cheat, steal, all because we are not very adapted to our environment. All of the ugliness of the human spirit is because it would be fundamentally different were it not cast into what it is.
    AI, OTOH, would be designed in civilization, for civilization. They will be civilization, not its end. They will much better reflect the ideal human spirit than the human animal ever could.

    I fully agree with your point of view. I think I should add several things that are not totally clear from this post:

    We might not be replaced by AI; it's very possible that we will become AI. Look at today's medicine - every advance in technology is used for "fixing" our bodies. For now it is mostly used for reconstruction after some kind of damage, artificial or not: a car accident, aging, illness. But the time will come when technology is so advanced that we will be able to upgrade our bodies with more advanced "parts" than their biological equivalents.

    Lately there have been a lot of advances in creating nerve-to-computer links. First it's going to be artificial sight and hearing. After that we should be able to create chips that reside on our brain and help us think - first it will be just memory chips to store information, after that logic chips that replicate functions of other parts of our brain. Who would not like to have a speedy math processor available at a mere thought?

    The conclusion is that BRAIN-COMPUTER links will be created, both inside our bodies for mobile usage and to big supercomputers (or rather, future desktops); we will not have to rely on ancient secondary interfaces such as monitors and keyboards. The deeper this integration goes, the more the boundary between our intellect and the machine will diminish. The time might come when we will not have to rely on our biological circuits at all - all our thinking will be done in the machine.

    One might argue that that is not possible, and if that happens we die replaced by something- machine only replicating our thoughts. This reminds me of teleportation problem- you get killed in one place and your exact copy is assembled in another place. Is that death? I think we should leave this argument to philosophers.

    I only hope that technology develops fast enough because... I dream of immortality.


    P.S. to Slashdot operators - I think there's a bug in the reply-posting mechanism. When I previewed my post in "HTML Formatted" everything was OK, but I did not notice that the bottom of the page did not load with the form which lets you choose text formatting, so when I submitted the reply all my formatting was gone. Apparently there's some default mode, but there should not be one - if a document gets submitted with no information about which formatting to apply, an exception should be raised, IMHO.

  • It's all crap. The only argument against AI is Searle's Chinese Room argument, and it is based entirely on the prejudice of humans who regard themselves as the sole owners of "consciousness". The moment we ask if something else has consciousness or a "soul", we start asking questions like "prove it" and "show me where". WTF???!!?? WE ARE NOTHING BUT PHYSICAL MATTER, folks. If you believe otherwise, that is your BELIEF, nothing else. If you can prove to me you are conscious, then maybe we can start talking. Until then, every argument you put forward against the possibility of AI is nothing but emotional prejudice. Maybe someday we'll need another civil rights movement for machine intelligences because of people who think like this.
  • There seems to be a process wherein a "genius" in one field of study is automatically elevated to the status of expert in all other fields (cf. Einstein on politics and religion, Pauling on nutrition). I think Penrose's books on AI serve as an excellent warning to those who would avail themselves of this promotion. I have read his books and even attended one of his lectures, and I have great respect for his work in mathematics, but I must say that the theories put forth in ENM and SotM are unsound.

    Full disclosure: I am an AI researcher, and I suspect that human-level intelligence is achievable on a Turing-equivalent computer, though I don't expect that to be an easy feat and I am willing to entertain the possibility that I'm wrong.

    Among my colleagues (CS PhDs, and not just AI researchers), I don't know a single person who takes Penrose's arguments seriously. There are good arguments that can be raised against AI, but Penrose's are not among them. IMO, all Penrose has succeeded in doing is tarnishing his own reputation.

    That said, I don't think anyone has a handle on consciousness (including Dennett, despite having written a book with the bold title "Consciousness Explained"). The inherently subjective nature of consciousness seems to defy scientific investigation. (Say I built a machine that was conscious. How would I know I had succeeded?)

  • I'm assuming the operation would halt if and when the second machine generates a problem that the first machine could not solve, thereby proving that it is not conscious (according to Penrose.) If the operation does not halt, then the machine is conscious (according to Penrose.)
    That's actually quite a leap, even from the limb that Penrose is out on. First of all, don't conflate the halting problem with the ability to determine whether a specific program halts. To prove that a machine solves the halting problem, you need to prove that it can determine whether any program you feed it will halt.

    Furthermore, Penrose argues that human minds are capable of deciding undecidable problems (which TMs are not) and therefore that human minds are not TMs. He does not claim that a machine that could decide undecidable problems would necessarily be conscious. Even if you produced such an impossible machine, there would still be no way of proving that it was conscious, only that it is capable of something TMs are not.

    Penrose claims to possess mathematical insights that go beyond computability. I seriously doubt that is the case, but one thing I can state for certain is that I do not. I consider myself a decent programmer but, having been faced with the problem of understanding someone else's spaghetti code (which is quite benign compared to what exists in the space of all possible programs), I am confident that I cannot solve the halting problem in the general case. Having dabbled in mathematics, I am equally certain that I cannot determine whether arbitrary mathematical statements are true or false. Thus, by your statement of Penrose's thesis, I am not conscious. I claim that I am conscious, but by your argument I am not. Now prove I am not.

    PS: Yes, I know you are taking a devil's advocate position.

  • Ever read Asimov's short story, "The Last Question"? It poses a really interesting twist on this whole entropy-god-computers-spirituality thing.
  • 1. This assumes there is one correct answer to the spiritual questions of life. There isn't.

    2. Chances are, it would take the logical points of view: atheism and such. Which is nice, but when one thinks about it, the infallibility of logic is not just a quality of logic... it is a presupposition. Large aspects of reality are discarded when one limits oneself that way. Accepting as fact only that which can be logically explained will obviously lead you only to conclusions which support that presupposition.

    3. I think that creatures inherit souls when they become sentient, so a machine would be an intelligence if it understood intuitively what "I" is. This knowledge should be somewhere beyond the scope of being taught.

    Daniel
  • Kurzweil has always written stuff like that, but it seems like Bill Joy has a lot of spare time on his hands in Aspen recently. After the Wired article, he has instantly been transformed into an expert on the subject.
    --

    BluetoothCentral.com [bluetoothcentral.com]
    A site for everything Bluetooth. Coming soon.
  • I don't believe he should be shunned or treated as anything other than what he is: an outstanding researcher and one of the forefathers of the field. (Were it not for Minsky, maybe none of the connectionist guys would be working on AI at all!)

    I've actually developed neural image processing systems on DataCube hardware and presented at conferences -- and I don't know a single person in the field of artificial neural systems that would agree with your assessment.

    The book in question, "Perceptrons", was, for all its mathematical rigor, in the end a work of academic advocacy. It is made all the worse by its failure to admit that advocacy role in the literature, by its professionally self-serving nature, by the zero-sum manner in which that service was rendered within the field of AI, and, when all the rest of the smoke has cleared, by its profoundly erroneous "scientific" conclusion, taken seriously because of the authors' fashionability with government funding sources.

    But Minsky didn't remain idle during the years prior to 1985, when people finally realized his mistake -- and realized that backpropagation had roots in an area of mathematics from the early 1950s called "stochastic approximation". No, Minsky was busy getting the one big chance for a long-term AI project -- the Austin-based MCC -- off chasing knowledge representation in his unscalable frames-based system. This caused nearly 10 years of delay in the Cyc project before Ramanathan Guha and Keith Goolsbey managed the herculean task of recasting the entire edifice on a predicate logic engine, which rendered the project scalable. The recovery is nearly complete, as one can see at www.e-cyc.com [e-cyc.com], but in the intervening 17 years the long-awaited synthesis between connectionist and predicate systems has been pushed somewhere out into the future.

    I find it ironic that Bill Joy isn't lobbying hard to get Minsky and that much less destructive Kaczynski fellow in as Plenary and Keynote speakers respectively.

  • At least they didn't inflict Minsky on the crowd the way DARPA did, as the plenary session speaker at the 1988 International Joint Conference on Artificial Neural Systems.

    Some of the non-DARPA people were actually hissing at him during his address. The folks I was sitting with had to get up and walk out -- they found it too nauseating. Since I hadn't been working in a field suppressed for years the way they had, I couldn't really fathom what was going on until they explained it to me.

    Just because you are popular doesn't mean you aren't disease-ridden -- ask any major AIDS vector.

    But on second thought, given Bill Joy's recent technophobia, shouldn't he be begging to have Minsky there? After all, I can't think of anyone who has set the field of artificial neural networks back more than Minsky -- almost 18 years were lost and these days, that is almost an eternity.

  • Goedel's argument is that any sufficiently powerful logical system is either inconsistent or incomplete. Humans, however, are not logical beasts, but rather emotional beasts. Our reasoning is both inconsistent *and* incomplete, but sometimes we manage to eke out some logical statements (that's why math geeks are a *rarity* in society).

    There are two ways in which an AI machine can be built: either symbolically (with an understanding of intelligence) or through simulation (by simulating human neurons, possibly with no understanding of what's going on). Probably there will be some kind of mix of the two.

    The biggest problem that AI faces isn't any kind of philosophical barrier, but rather SPEED. There are a lot of NP-complete and NP-hard problems in AI, some of which don't have good approximation algorithms. Take simulation as an example: modern neural net simulations (I'm not talking cheesy backprop nets, but rather cool biologically-inspired stuff) have only been able to simulate a small fraction of the neurons and a small fraction of the interconnects in the human brain.

    What happens when we're able to simulate *all* the neurons in a human brain in real time? It's just a matter of getting the structure right, and letting it learn like a small child. There will be nothing in the human mind that isn't in this "Turing Machine" (except possibly some chaotic effects... the precision of TMs may be more limited). And that says nothing at all of symbolic methods, which will also benefit from increases in processor speed.
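    (To make "simulating neurons" concrete, here is a minimal sketch of a single leaky integrate-and-fire unit -- one of the simplest biologically-inspired models, with all constants invented for illustration. Multiply the inner loop by ~10^10 neurons with ~10^4 synapses each and the speed problem is obvious.)

```python
# A toy leaky integrate-and-fire neuron, stepped by Euler integration.
# All constants are illustrative, not fitted to biology.
tau = 20.0                                        # membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # potentials (mV)
dt, drive = 0.1, 18.0                             # time step (ms), input drive (mV)

v, spikes = v_rest, []
for step in range(10000):                     # 1 second of simulated time
    v += dt * (-(v - v_rest) + drive) / tau   # leak toward rest, plus input
    if v >= v_thresh:                         # threshold crossed: spike, reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in 1 s of simulated time")
```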

    It'll be a "virtuous cycle". Once processor speeds are good enough, AI techniques will be adopted by the private sector, who will be able to *sell* systems and spend many billions of dollars on development. Sad but true, I think the first AI systems will be privately owned.
  • Jerry Falwell is a cyborg, the little old nun you see on late-night cable is actually an Aibo in a brown sheet, and George Bush is a terminator from T2.
    _______________
  • If computers became independent and gained self-knowledge, I wonder what they would do about religion/spiritual things. Would they succumb to the pressures of our modern-day religions? Would they create their own? Would they worship humans, seeing that we created their first ones? I think it's an interesting thought...

    Mike Roberto (roberto@soul.apk.net [mailto]) - AOL IM: MicroBerto
  • Agreed. Allow me to continue this idea, since I have been thinking about it a lot lately.

    I would not be all that sad to see AI replace human beings. The reason is that human nature has many, many flaws. We have been programmed by evolution to worry about ourselves before we consider others. Many of us enjoy competition just for the sake of competition. Far too many of us seem to worry too much about more or less irrelevant stuff (like sex) rather than worrying about making the world a better place. These are just some of our flaws.

    When we write AI, however, we can fix all of that. How? Emotions. It is a total myth that AI would not have emotions; I do not believe it would be possible to create an emotionless intelligence. Emotions give us goals. In the case of AI, we can specify how their emotions work. We can make it happy when it helps others. We can make it sad when it hurts someone. We can make the computer's sole goal be to maximize the total happiness of all the world's inhabitants. Then we follow the AI's lead and do what it recommends, because we know that it has no other goal than to help us.

    The idea that AI would hurt us is ludicrous.

    ------

  • I think we humans have a hard time imagining that an AI could feel emotions because we have a hard time describing how emotions feel to us. You think, "When I am happy, it feels good. How can a computer FEEL good?" In fact, it is little more than setting happy = 1, sad = 0. The AI will then do whatever it thinks will increase its happiness level the most and decrease its sadness level the most. Also, its actions and rational thoughts need to be influenced by its current emotional state. Voila, you have emotions.
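    (A toy sketch of that idea -- every name and number below is invented for illustration, not a claim about how real emotions work:)

```python
# Emotions as scalars: the agent simply picks whichever action it
# predicts will raise its happiness the most.
def choose_action(state, actions, predict_happiness):
    # predict_happiness(state, action) -> expected happiness in [0, 1]
    return max(actions, key=lambda a: predict_happiness(state, a))

# A two-action toy world where helping is predicted to feel better.
predict = lambda state, action: {"help": 0.9, "ignore": 0.1}[action]
print(choose_action({}, ["help", "ignore"], predict))   # -> help
```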

    I don't expect anyone to believe me until I implement it. :) Check back in ten years.

    ------

  • Nah, I'm not arguing that carbon is more magical than silicon ... heck I was just bringing up something I've read to see how people take it. Actually, I had thought of the counterargument as well, that the system would be what is conscious.

    If I really felt like arguing what I believe... well, I wouldn't, because it's not so much an arguable position as simply saying, "Prove it." I personally doubt that human existence can be cloned by copying a silicon neuron for each carbon-based neuron - I like to hold on to the (possibly naive) idea that the human soul is more than just really complex mushy grey stuff.

    Now, please don't try to throw a dozen counterarguments in my face for admitting this ... I admit that my current position is far less secure when and if an electronic mind is actually created, or a human one is duplicated. That's why I brought up my first points, before the Chinese room point. I'm not unaware of the developments both AI researchers and neurologists have made. I simply think that their work, while fascinating and valuable, will not result in a complete understanding of what makes up the human mind/soul.

    If life ends up proving me wrong, then I shall have to rethink my beliefs, of course... I've already been giving thought to how this would affect my spiritual stance. Could God (yes, I'm a somewhat average Christian) intend to resurrect our neural-net selves to exist after this life? If so, would this really change anything? If my thoughts are merely complex but reproducible calculations based on my mind's genetics and external inputs, can I reconcile this with my spirituality?

    Be aware though, that those who don't feel secure enough to ask these questions will quite easily write off a computerized 'soul' as nothing more than an elaborate and insulting sham. Who knows, maybe that's what I'll conclude in the end. For now, I'll just enjoy the ride, follow the developments and keep my 'wait-and-see' position.

    btw, I will take note of the names you've mentioned, I've got less time to study such things these days but hopefully I'll get around to it before I've been uploaded. ;)

  • The idea of developing conscious, computational AI is an interesting one, but even if such an AI were developed you could NOT avoid the philosophical arguments. The reason is simple : if you don't believe that computational AI can be conscious, then any AI you interact with (Turing test grad or not) can still be written off as simply a very complex simulation. I know, many would say that if the simulation is indistinguishable, then it is equivalent - but not everyone will agree, and there is still philosophical ground enough for disbelief.

    For example, an article by John R. Searle (in the text Twenty Questions, compiled by Bowie, Michaels & Solomon - Harcourt Brace) describes how a Turing-passing system can still lack understanding in the way we view it in humans. Quickified version - imagine an anglophone man at a computer terminal, with a giant book telling him exactly what to type in response to messages sent to him in Chinese. This man does not understand what he is doing at all, and yet this hypothetical manual he is following allows him to exactly simulate an intelligent response. If this man passes a Chinese Turing test this way, do we claim that he understands Chinese?? (The essay has many further points than this as well, but this is the central illustration he uses.)
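    (For concreteness: in the limit, the room is just a lookup table. A toy sketch with a made-up two-entry rulebook -- the point being that the lookup involves no understanding:)

```python
# A toy "Chinese Room": canned responses are looked up mechanically.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "Fine, thanks."
    "你会说中文吗?": "当然会.",       # "Do you speak Chinese?" -> "Of course."
}

def room(message: str) -> str:
    # The operator matches symbols to symbols; he understands neither.
    return RULEBOOK.get(message, "请再说一遍.")   # "Please repeat that."

print(room("你好吗?"))
```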

    Where this will get REALLY ugly for conscious-AI disbelievers (like myself) is if a human mind is replicated by an electronic duplication of each neuron. It's a lot harder to write off electronic consciousness if the machine you're talking to speaks, worships, and even creates art exactly like a 'lost' loved one or friend. Even at this point though, the philosophical debate can still exist - it simply gets a LOT more complicated emotionally, and would force us to reconsider what we believe artful expression and spirituality to mean.

    (Read one of the many W. Gibson stories which bring this up... the best example is the short story with the depths-of-hell's-despair artist whose art is drawn straight from her mind, and whose mind is stored/immortalized in a computer. I'm sure hundreds of you remember the title, but it escapes me.)

    Disclaimer: all above is IMHO, I've only briefly studied philosophy and I have yet to dive into a good study of current AI (next school year the fun really begins).

  • To consider a computer to truly be conscious, I would expect it to be able to be self-aware, in the sense that it could independently evolve its own programming.

    There are methods of evolving computer programs e.g. genetic programming. In fact, the inventor of genetic algorithms, John Holland, will be speaking, and so will John Koza, the inventor of genetic programming.
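    (For the curious, the whole evolutionary loop fits in a few lines. A genetic-algorithm sketch on a toy bit-string problem -- the simpler cousin of genetic programming, with made-up parameters:)

```python
import random

TARGET = [1] * 20                  # the (arbitrary) fitness peak
POP, GENS, MUT = 50, 100, 0.02     # population, generations, mutation rate

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET))

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                    # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(20)              # one-point crossover
        child = [bit ^ (random.random() < MUT)  # bit-flip mutation
                 for bit in a[:cut] + b[cut:]]
        children.append(child)
    pop = parents + children

print(fitness(max(pop, key=fitness)), "of", len(TARGET))
```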
  • OK, since you asked, let me describe a calculation that is very, very biased towards the possibility of protein molecules forming by chance.

    Let's take a protein consisting of a sequence of 500 amino acids. For simplicity, let's say that 20 different amino acids are used in this chain. Now, let's make assumptions very favorable to the chance formation of this protein molecule. Say there is a pool containing all the amino acids we will need to build our molecule. Assume all conditions are favorable for the forming of amino acid chains, etc. Further, assume that a chain of 500 amino acids is formed per second by a chemical reaction (I leave it to you to decide how likely this is). The question is: what is the time needed for this pool to produce just ONE molecule of our protein?

    We're talking about 500-long chains here, with one of 20 acids at each position. So this makes 20^500 possible combinations -- roughly 10^650. Since we have a 500-chain forming every second, we would need on the order of 20^500 seconds before it is likely that the exact sequence of our protein is produced. There are about 3.15*10^7 seconds per year, so that works out to roughly 10^643 years before our protein molecule is likely to be formed. The age of our universe is only in the billions of years, and even if it were in the trillions (10^12), that is still practically nil compared to 10^643.

    OK, let's make our setup more favorable. Let's assume that every year, one planet favorable to life is formed in the universe, and each planet contains our magic, protein-producing pool. Say we mark the age of the universe at 10^12 years, just to be a bit on the generous side. This means 10^12 pools, each forming a chain of 500 acids every second. Even so, the time needed for our protein to be formed by chance is still around 10^631 years. (BTW, I'm generously assuming that each planet exists for the duration of the universe, with favorable conditions throughout that lifetime.) Practically, this probability is nil. Mind you, we're talking about just ONE protein molecule here. We haven't even gotten into the probability of a self-reproducing group of molecules forming, even under these very, very generous assumptions.
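    (A quick back-of-the-envelope check of those exponents, sketched in Python; the inputs are just the assumptions stated above:)

```python
# Sanity-check the magnitudes above using logarithms.
from math import log10

log_combos = 500 * log10(20)        # log10 of 20**500 possible chains
secs_per_year = 3.15e7              # ~seconds in one year

one_pool = log_combos - log10(secs_per_year)   # years at 1 chain/sec
many_pools = one_pool - 12                     # 10**12 pools in parallel

print(f"20^500      ~ 10^{log_combos:.0f} sequences")   # ~10^650
print(f"1 pool      ~ 10^{one_pool:.0f} years")         # ~10^643
print(f"10^12 pools ~ 10^{many_pools:.0f} years")       # ~10^631
```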

    I don't know about you, but choosing a theory that requires such strange odds over the simple admission of the existence of a Creator seems to me more a stubborn denial of God than a scientific conclusion. Science is about examining different hypotheses to explain observed phenomena. Unfortunately, people seem to think that the existence of God is not a plausible hypothesis, while a theory like evolution, which involves such strange coincidences, is.

    IMHO, I think this simply betrays the fact that people are not willing to admit that God exists, and would rather choose an alternative explanation even when that alternative has ridiculously low probability of being true. Of course, in this society, everyone is free to have his own opinions. But they will have to face the consequences of their own decisions, and IMHO, insistently denying the existence of God in the face of such odds seems to be a rather myopic decision.

  • Hmm, very good point! :-)

    Although my personal views would tend to agree with this, I wonder if it's actually possible for quantum computation to surpass the halting-problem barrier.

    My reasoning is this: the reason the halting problem is unsolvable is the infinity problem -- you must check *every* possible execution path (and there are infinitely many) before you can decide whether a program halts. But remember that quantum mechanics has this "try out all possible paths" property? (For the quantum theory impaired: the amplitude for a particle X at position A to end up at position B is a sum over all the possible paths it could take between A and B. Or something like that :-P)

    Now suppose we figure out a way (similar to the way prime factorization is "easily" cracked by quantum computation) using the uncertainty principle that allows a quantum computer to "check all possible paths" a given program would ever execute -- this would mean that we can solve the halting problem!!

    But IMHO, even if this were possible, we still would not have reached the level of "human consciousness", because, as anyone well-versed in computability theory knows, the halting problem is still recursively enumerable. There are other problems that are even "harder" than the halting problem -- problems that are not even recursively enumerable. In English, these problems are on the order of checking an infinite number of halting problems (each of which is itself infinite). The fact that human beings (more specifically, mathematicians) are able to grasp such problems seems to indicate that human intelligence is way beyond computation. A computational "intelligence" can only see things at the level it's programmed for; there is no algorithm for the "leap of intuition" that mathematicians routinely make in their work. In fact, even mathematicians cannot adequately define what their minds do when they make such generalizations, which seems to indicate that perhaps we will never find out what makes us think the way we do, and therefore will never be able to program it into a machine.

  • None of them explain how evolution violates the Second Law of Thermodynamics.

    Oh. Can you? I'd be interested to see this.

    Incidentally, creation violates the First and Second laws:

    And God said, Let there be light: and there was light.

    So much for conservation of energy (First Law).

    And God said, Let the waters under the heaven be gathered together unto one place, and let the dry land appear: and it was so.

    Hmm, order out of disorder. Entropy seems to have decreased. There goes the Second Law.

    Note that I'm not claiming that creation is wrong, only that it seems to be inconsistent with the laws of physics. So it doesn't seem valid for you to argue your position on the basis of both.

    Have a nice day!

  • Evolution violates the second law of thermodynamics because it claims that intelligent, higher-ordered structures came into being from structures with less order. This obviously violates the 2LOT.

    The Second Law applies only to closed systems. Therefore, for more-ordered structures to proceed from less-ordered violates the Second Law only if no additional disorder was created in the process. But I don't think anyone claims that. A highly ordered creature like a mammal produces a lot of waste products, which are highly disordered and must be a part of the system.

    It is perfectly possible for entropy to decrease locally, as long as it increases globally. Consider this example: An iron ore deposit is disordered. A steel skyscraper is highly ordered. Nevertheless, it is possible to produce one from the other. In the process, additional disorder is created in the form of wastes, it just isn't intermingled with the steel anymore.
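    In symbols, the standard statement (nothing specific to this thread) is

    $$\Delta S_{\text{universe}} = \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \ge 0,$$

    so $\Delta S_{\text{system}} < 0$ is perfectly consistent with the Second Law whenever the surroundings gain at least as much entropy as the system sheds.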

    As far as your comment about Creation violating the laws of thermodynamics, don't make me laugh. If you honestly think that God is constrained by these laws, you are denser than a fence post. He can do whatever he wants. He's God.

    Of course. But if God can violate them, then they aren't really laws. A law by definition is completely free of violation. And your statement "Evolution violates the Second Law, therefore evolution is impossible" is valid only if the Second Law is never violated. So your arguments here contradict each other; you cannot validly argue both positions at once. Choose one.

    And may we dispense with the ad hominem attacks?

  • Perhaps I suffer lack of imagination, given the quantity of SF writing (good and bad) which mines the subject of computers surpassing humans in their intellectual prowess, but I just can't convince myself that it's a credible scenario. Then again, there's a terribly strong parallel between the concept of an "artificial intelligence" outstripping the intelligence of its creator and the theory of evolution in general. Indeed, if one considers that life evolved -- whether by chance or by some unknown guiding cosmic force -- then it is only natural to be looking over one's shoulder for the new kid on the block to arrive.

    But I don't subscribe to this view. Go ahead and call me names if you like, but at its most fundamental level, life looks designed [origins.org]. And this is a view that fits in well with normal experience: when you create something, then it is necessarily a subset of your total ability. If I were able to create a computer which was smarter than I, then presumably it would also be able to create a machine that was smarter than it, and so on. Where is all this additional information coming from? Out of thin air, apparently. I can't help but feel that this notion belongs in the same category as perpetual motion or pulling oneself up by the bootstraps.

    Did you ever see the game The Neverhood? There's a huge wall of text in that game which is a sort of parody of the biblical style of narrative. Some of it is very funny, and some of it is just bizarre. If I recall correctly, the story of Bert Bert is relevant to this discussion. You see, Bert Bert was created by Quater in his own image, which means that Bert Bert also thought he was effectively Quater, and so Bert Bert created Bert Bert in his own image, and so on. The regress was not infinite, however. Like an accumulation of genetic errors, or noise in analog duplication, each successive Bert Bert was less of an image of Quater. After a few generations, the name was no longer Bert Bert but itself started to mutate and drift in an interesting variety of ways. Eventually, there was an end to the regress, as the final Bert Bert (whatever his name was) found himself unable to create a living replica of himself.

    In short, if we are really clever, we may be able to create something that approximates ourselves fairly closely. I think that the act of creating something essentially proves the creator to be greater than the creation. If we are going to wipe ourselves out with technology, it won't be because it out-evolved us.

    The Famous Brett Watson

  • Humans can _always_ solve halting problems? I'm not convinced of this one... I'd ask you to prove this to me, but really, if you were able to prove something like that statement, you'd be able to solve the halting problem... I'd agree that it's often true that humans can solve the halting problem. Then again, it's often true that Turing Machines can solve some instances of the halting problem... Is there some clever argument that I'm missing here?
  • by SnatMandu ( 15204 ) on Sunday March 26, 2000 @02:24PM (#1169508) Homepage
    I've been fairly convinced by Penrose's arguments in "Shadows of the Mind" that computational consciousness is impossible - or at least that the human mind is non-computational. I studied CS and Philosophy at university, and did quite a bit of thinking and reading on the subject. I suggest anyone who's interested pick up a copy of "Shadows" - it's not a light read, but it's worth it. Be prepared to do some thought experiments using Turing machines.

    The basic argument against computational intelligence (IIRC) is based on Goedel's Incompleteness Theorem. Penrose suggests that because humans can always solve halting problems (will a given program terminate?) and Turing machines can't, the human brain must be doing more than mere computation.

    Consider a machine M1, which is fed as input the description (program + data) of another machine M2. Let it be the case that M1 stops processing if and only if M2 does not. Consider what happens when M2 = M1.

    That has something to do with it, maybe I'll do a short paper on this sometime - need to re-read the book.
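    (A sketch of that diagonal construction, with Python standing in for Turing machines; halts() is a hypothetical oracle, and the construction shows why no real one can exist:)

```python
# Assume, for contradiction, a total oracle: halts(prog, arg) returns
# True iff prog(arg) would halt. This stub only marks the assumption.
def halts(prog, arg):
    raise NotImplementedError("no total, correct halts() can exist")

def m1(m2):
    """M1 halts iff M2 does NOT halt when fed its own description."""
    if halts(m2, m2):
        while True:       # M2 halts on itself -> M1 loops forever
            pass
    # M2 loops on itself -> M1 halts immediately

# Diagonalize: consider m1(m1). If halts(m1, m1) is True, m1(m1) loops;
# if False, m1(m1) halts. Either way the oracle is wrong about m1(m1),
# so the assumed halts() cannot exist.
```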

    Anyhow, I thought the reference might be useful.

  • by Kaufmann ( 16976 ) <rnedal&olimpo,com,br> on Sunday March 26, 2000 @03:11PM (#1169509) Homepage
    That's a non-argument. It assumes that all humans and Turing machines can do with a program is try to execute it, and that the human mind gets around this by some magical, as-yet-undiscovered mechanism. This in itself contradicts all research done so far, which strongly indicates that human high-level thought is entirely computational, although heavily parallel. Humans merely have an edge in that they are able to analyze a given problem before they try to compute it; a Turing machine would be able to do the same if it were so programmed - that's the whole premise behind GPS-type symbolic AI programs. That does not in any way show that the human mind is special.

    Most of Penrose's arguments from "The Emperor's New Mind" have been debunked in a thousand different ways ever since publication; his crusade to prove that humans are "special" (shades of creationism?) and AI is impossible has so far produced no results.
  • Wow, that's nasty. Weren't we "computer guys" supposed to be rational? (OTOH, Slashdot is the perfect counterexample... :]) Minsky may have been wrong, but we're all wrong one time or the other; I don't believe he should be shunned or treated as anything other than what he is: an outstanding researcher and one of the forefathers of the field. (Were it not for Minsky, maybe none of the connectionist guys would be working on AI at all!)

    For a long time, Einstein opposed the Copenhagen interpretation of quantum mechanics ("God doesn't play dice" and all that). Still I doubt that, say, Feynman's generation would have booed and hissed him, were he to speak at a conference on QM at the time. See my point?

    (For the uninformed reader: A Long Time Ago, Marvin Minsky, then one of the most preeminent AI researchers, pretty much declared that research on artificial neural networks - "connectionist AI" - was a dead end. Most people took his word for it, and research ground to a halt, turning exclusively to Minsky's favoured symbolic paradigm for the next couple of decades or so. (Caveat: this is all IIRC-status. I haven't even thought about this for a long time, so I may have gotten something wrong. Let me know.))
  • by Syn.Terra ( 96398 ) on Sunday March 26, 2000 @02:48PM (#1169511) Homepage Journal

    If you can think of a better reason for humans existing, I would love to hear it. This symposium is discussing my most heartfelt belief: that human beings exist not to evolve into the next "higher" species, but to build it.

    Everything we do seeks to augment humans, improving our communication abilities (language being our greatest asset) and essentially making us better, through machines, genetics, and drugs. This, ladies and gents, is what we've been doing ever since we developed fire, and it is a Very Good Thing.

    The main discussion seems to be the fear that we will create robots that will destroy and eliminate us, which isn't the case. It's rather like we are building the latest and greatest computer and, in the process, rendering those old Commodore 64's and Apple ]['s obsolete.

    So sit back and enjoy this. We're finally beginning to recognize our purpose in existence. Anyone who's afraid of this, well, talk to me and I'll push the theory of Spherical Logic deeper into you.


    ------------
  • by rnd() ( 118781 ) on Sunday March 26, 2000 @02:58PM (#1169512) Homepage
    "There there...," I say to my
    computer. The year is 2034, and
    due to advances in everything from
    artificial life to nanotech, the blue
    plastic cube with rounded corners on my
    desk emits a soft, melancholy glow.
    "We need to get some work done," I explain.
    Without warning, the image of an oriental garden
    appears on the screen, and the sound of soft
    rain flows from the speakers. Suddenly, words
    appear on the screen. They are rendered in a
    font that is so pleasing to my eye that it could
    only have been created through some kind of
    evolutionary algorithm which must have observed
    the dilation of my pupils as I read thousands of
    existing fonts. The words form the most
    beautiful haiku I have ever seen. I am mesmerized
    and begin to cry.

    I find myself curled up beneath a blanket in an
    easy chair beside my computer. The melancholy
    glow is finally gone. It seems the machine just
    needed to bond. My therapist insists this is normal,
    and has recently recommended a few books
    written late last century by Ray Kurzweil and Hans
    Moravec.

  • by YAH00 ( 132835 ) on Sunday March 26, 2000 @02:47PM (#1169513)
    As far as I see it, technology is not competing with us in the most-evolved-entity race. It is enhancing and complementing our state of evolution.

    We stopped evolving sometime after the emergence of Homo sapiens sapiens (yes, two sapiens). In our day and age, survival of the fittest no longer means survival of the fittest 'physically'.

    Our medical skills have made it possible for even people with highly 'deficient' genes to survive and reproduce. Genetic physical afflictions are no longer rooted out by natural selection except in the most drastic cases (the Romanov family).

    No, we are no longer evolving physically. We are evolving technologically. Consider laser eye surgery, pacemakers, even artificial hearts, Viagra (sorry, couldn't resist :)). Technology is only contributing to our being better-evolved creatures. And with recent advances in cloning, if scientists could control the division and differentiation of cells, then instead of creating whole humans we could just create hearts, livers, lungs, even eyes and hands and legs.

    Just think about it. You could practically live forever, until your brain gave way. And with advances in neural cell division (currently neural cells do not divide -- once they die, they are not replaced), we could even get around that limitation for a while. In fact, these advances are already being tried experimentally, regenerating damaged spinal columns to treat paralysis.

    So technology is not going to take away our crown. It is only going to polish it for us. At least for the foreseeable future.
  • by Kerry Berry ( 24319 ) on Sunday March 26, 2000 @02:28PM (#1169514)
    The idea that computers will outstrip humans cognitively and spiritually is fascinating... but what concrete evidence do we have that this is even on the horizon? Superintelligent robots and rebellious AIs have been a staple of science fiction for decades, but we are not any closer to realizing these visions than we were in the 1950's.

    Yes, we have machines that can process information at very high clock rates. However, we still have only dim guesses at what causes consciousness. We do know that a simple finite state machine doesn't cut it, though. If anything, our studies of the past several decades have shown us how hard it will be to achieve consciousness with our current computer architectures. The fact of the matter is that we have yet to produce a machine that does anything other than what we have explicitly programmed it to do. No glimmers of free will or the existence of a mechanical soul have ever been observed in a human creation.

    I too would love to attend this conference... even though I think that if we look back on it 30 years from now, we will marvel at how far off the mark we are today. If we sent our current computing technology back to the 1950's, scientists of the time would be astonished at what we have accomplished, and they would also be astonished that we are no closer to creating intelligent machines than they were, since they thought that all that was necessary was a fast enough processor and enough memory.

    Similarly, I think that scientists of today fundamentally misunderstand what is involved in creating consciousness and spirituality. Speculating on whether computers will soon outstrip us in these areas is fun, and will hopefully further the development of our current technology. The reality of what we discover and achieve will be so far from our speculations, though, that there is little point in taking the speculation too far. I would really like to see a conference that approaches this issue from a technological standpoint, concentrating on what we can actually do today and what we think will actually be possible in the next 10 years. That way the moral debate will stay somewhat grounded in reality, rather than flying into realms of science fantasy that have yielded no fruit in half a century.
  • by jorbettis ( 113413 ) on Sunday March 26, 2000 @03:09PM (#1169515) Homepage

    I'll most likely be moderated into oblivion for this because what I am about to say usually makes people feel very uncomfortable.

    Consider, for a second, what would be wrong with AI beings becoming our evolutionary successors? There are only two things the human animal can do: it will either die out, or it will evolve. We are not somehow at the end of evolution here.

    Now, humans are in a very precarious position. We most likely will not survive a global catastrophe, so I don't think the human strand of evolution has too much longer left. I think it is safe to say that the human species, or its biological children, will not be the last thing to die on Earth; we will die out significantly before the total destruction of the planet.

    AI are much more robust: they can live long enough for interstellar voyages, they can be adapted more easily to other ecosystems, and they can use fewer resources. Given that, AI could be expected to outlive any biological counterpart, so wouldn't they be much better successors?

    I think that the fear of AI stems from the inherent biological fear of new and unusual things, which has been played up in the media (with movies like The Matrix and Terminator, for example). These movies show AI out of control; they show them as heartless computers with cold disregard for all that we hold dear.

    They pretend that compassion is a biological trait, not a trait that exists because of our communal nature, amplified by our civilization. They assume that we are capable of creating beings that have the ability to reason far better than us, yet we do not have the ability to give them morals.

    I believe, rather, that we will have more control over them than we do over our biological offspring, as we can write their code as well as control their environment. We will also have a much better idea of how to control their environment (as we will know more about which inputs affect their development).

    I envision a future in which our AI children will live much better lives than we do: they will have hopes and dreams, personal tragedies, perhaps loves and hates, and they will be able to run things much better than we can, as they will not have millions of years of evolutionary baggage to drag around. We are, basically, animals forced by systems of our own creation into civilization. We have ugly sides; we murder, cheat, and steal, all because we are not well adapted to our environment. All of the ugliness of the human spirit exists because we were cast into an environment we were never made for; it would be fundamentally different otherwise.

    AI, OTOH, would be designed in civilization, for civilization. They will be civilization, not its end. They will reflect the ideal human spirit far better than the human animal ever could.
