Technology

Summary Of Symposium On Spiritual Machines

csy writes, "I've just returned from the symposium on Spiritual Machines hosted by Douglas Hofstadter as previously publicized on /., and I thought I'd write Slashdot a quick summary of the events from my recollection (other posters will no doubt correct the errors I make here, and I don't summarize all the speakers, only those I consider more interesting)."

"The interest in the symposium was amazing. The lecture hall was packed, and people who couldn't get into the main lecture hall had to watch the talk by live video in an overflow room (which was packed to the brim as well). There were the old and the young, male and female. Interest was no doubt spurred by the symposium's very controversial thesis, recent interest in Bill Joy's article in Wired, and the very distinguished cast of speakers. The irony of the fact that the symposium was punctuated by microphone failures and abruptly dimming lights in the room was not lost on anyone.

Ray Kurzweil spoke first, and he spoke of how rapidly increasing CPU speeds would result in intelligent, spiritual machines. He spoke of how the current exponential shrinkage of transistor sizes was not the first such trend, but rather the latest step in a natural progression of technology -- from mechanical computing devices, to vacuum tubes, to transistors, to integrated circuits -- and he expressed his optimism for the future. He spoke of how the human brain could be scanned to replicate its functionality in silicon. His conviction in these advances, and in humanity's ability to reverse-engineer the brain, underpinned a highly optimistic position.

Bill Joy spoke next. He opened by stating that he believed in the ability of computers and nanomachinery to continue to advance, but that it was precisely this belief that led to his position that the continued development of nano-machinery and self-replicating machines would pose a new and different kind of threat to humankind ('knowledge of mass destruction'). He made a particularly eloquent point: while science has always sought the truth, and free information has great value, just as the Romans realized that to 'always apply a Just Law rigidly would be the greatest injustice,' so must we seek restraint and 'avoid the democratization of evil.' It wasn't exactly clear to me from his speech what form he thought this restraint must take, but his speech was extremely compelling, and it is clear to me that, at the least, self-replicating machines will create new and serious challenges for mankind.

John Holland, the inventor of genetic algorithms, took a more skeptical view of the ability of increasing computer speeds, even at exponential rates, to naturally result in machine intelligence. In his words, 'progress in software has not followed Moore's law.' He believes in its eventuality, but not in the time frame proposed (2100). He gave the example of Go vs. chess, where the number of positions in Go is roughly 10^30 times greater than in chess, and simply adding rows and columns makes the number of positions grow exponentially -- wiping out any gains from exponential increases in computer speed. He said that while genetic algorithms enable the evolution of computer programs, the fitness function and the training environment to use (he gave the example of evolving an ecosystem) are often unclear. He emphasized the need for strong theory, and he concluded with a (to my mind) very profound statement: 'Predictions 30 years ahead have always proven to be wrong, except in the cases where there is a strong theory behind it.'
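
To give a rough sense of the scale Holland was pointing at, here is a naive back-of-envelope sketch (my own figures, not his; it overcounts badly by ignoring the rules of the game):

    import math

    # Naive upper bound: each point on an n x n Go board is empty, black, or white.
    # This overcounts (it ignores capture and legality rules), but it shows how the
    # search space explodes as rows and columns are added.
    def naive_go_positions(n):
        return 3 ** (n * n)

    for n in (9, 13, 19, 21, 25):
        print(f"{n:2d}x{n:<2d} board: about 10^{int(math.log10(naive_go_positions(n)))} configurations")

    # Thirty years of Moore's-law doublings (one every 18 months) is only about
    # 2**20, roughly a factor of 10^6 -- dwarfed by the jump from 19x19 to 21x21.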

Ralph Merkle addressed the claims made by Bill Joy directly. He said that rather than speculate on the dangers of nanotechnology and take hasty action, we need to find out whether nanotechnology gives an edge to the 'offensive or the defensive,' and that to understand this, more research is needed -- not, in Bill Joy's words, 'relinquishment.' (Joy later asked Merkle, 'Do you think biological weaponry gives an advantage to the offensive or defensive,' to which Merkle embarrassingly replied, 'I'm not sure.')

John Koza, drawing from examples in genetic programming, said that while human-competitive results by machines are certainly possible (e.g. the evolution of previously patented circuit designs), much more computational power is needed to evolve the equivalent of a human mind.

Other choice moments: during the panel discussion, Holland asked Joy how much progress we have seen in operating systems in the past 30 years, to which Joy replied, 'the function of an operating system is fixed.'

In conclusion, the speakers largely differed over the time frame for intelligent, spiritual machines and over how much danger self-replicating machinery poses to humanity -- but no one on the panel seemed to think that Moore's law would run out of steam, or that intelligent machines would never be possible -- although Hofstadter did admit that this was as much by construction of the panel, which did not include any serious naysayers."

Comments:
  • by Anonymous Coward
    A dapper and enthusiastic young man was handing out business cards at the door of the Stanford "Spiritual Robots" event:

    -------------------------------------------

    SINGULARITY WATCHER

    * Events and news lists to educate you on the coming inevitable transition.
    * Get the big picture of emergent computation, info tech assessment, and emergent constraints.
    * Invest intelligently during the transition.
    * Address ethical and social issues.
    * Selective books, films, and audio resources.
    * A community of future-concerned individuals. Tell a friend!

    SingularityWatcher.com

    --------------------------------------------

    I really got a grin out of #3: "Yee-haww! I invested in subVel / P.R.A.X.Y / Roofion Gate cortical switching modules just in time for everyone in the world to need one for the Great Melding, and now I'm . . . WE ARE BORG. YOUR EXISTENCE AS YOU KNOW IT IS OVER. INDIVIDUALITY IS IRRELEVANT. MONEY IS IRRELEVANT. INVESTMENTS ARE IRRELEVANT. RESISTANCE IS USELESS."

    A friend suggests the web page consist of:

    SINGULARITY HAS NOT HAPPENED YET
    Hit "Refresh" button for update

  • by Anonymous Coward
    RE: Douglas Hofstadter's personality

    Below is Mr. Hofstadter's reply to the first "fan letter" I e-mailed him. He is one of three people who have EVER responded to "fan-mail" that I have sent. (The others being Spider Robinson and Charles de Lint, BTW.) *I* think he's pretty cool. My mother also took a seminar course he gave eons ago and said he was one of the few profs she ever had at college who ANSWERED questions when he could and said "I don't know" when he didn't.

    So go figure...

    Dear Ms. Vincin,

    Hardly a "generic" fan letter -- a very idiosyncratic one.
    First, thanks for all your kind words about GEB and MT. Second, if you don't mind my saying so, I really think that you would be interested in my book "Le Ton beau de Marot" (Basic Books, 1997,
    now out in paperback).
    You asked if there is room "in my field" for someone like yourself. Well, firstly, I don't know really what "my field" is; secondly, it's not up to me to say whether there is or is not "room" for someone of a certain style; and thirdly, I really don't know what your style is. If you mean, more specifically, that you are
    musing about trying to get an advanced degree in cognitive science under my supervision or something of that sort, well, that is
    a complex matter and takes a long time to figure out. First of all, one has to get accepted to graduate school, and then a lot of other hurdles have to be crossed...
    Probably better just to enjoy the books, but who am I to say?

    Anyway, best wishes,
    Douglas Hofstadter

  • by Anonymous Coward
    While I have never met Hofstadter myself, one of my professors studied with him for some time. From what I gathered from him, Hofstadter, like many people with very strong/expressive opinions, can come off as pompous or arrogant. This is not to say that he cannot objectively and rationally consider the opinions of others, as I believe is apparent in Gödel, Escher, Bach (the Golden Braid). In the text I feel he does attempt to consider at least some of the differing perspectives on the topics at hand. At any rate, it is my opinion that he has interesting ideas and views on a wide variety of topics and at the very least attempts to consider some of the naysayer opinions.
  • by Anonymous Coward
    Was any mention made of the relationship between quantum computing and spiritual machines?
  • by Anonymous Coward
    How can you infer consciousness from that?

    I can see how your post could as easily have been moderated down as "Troll." Or this one.

    From what should I infer your consciousness? I do, by the way, but all you have done is reply to a comment.

    You've posed a question for which there is no agreed-upon answer, and there's not enough context in this discussion to have a hope of finding one.
  • by Anonymous Coward
    It is very disturbing that some of the speakers were millionaire/dilettantes who started to think about those issues only recently, and got the attention of the press mostly because of their fame and millions.

    Many scientists in some communities (neural nets, machine learning, computational neuroscience, AI, A-life, Robotics) chose their career precisely because by the time they got through college they realized that Moore's law may allow them to build intelligent machines within their lifetime. Unlike most of the speakers at the workshop, these guys actually built their lives around achieving this goal, and not merely writing popular books, making millions, or attracting the attention of the press.

    How about asking these guys what they think?

    - Anonycous Moward

  • PKD once said that reality is what remains even after you don't believe in it.

    This sounds a lot like Descartes to me...

    Anyway, who is PKD?

  • the process of building really-intelligent software will require that we first come to deeply understand our own psyches
    I think understanding ourselves will help, but it's only absolutely necessary if we define "really intelligent" to mean "like us". I think that's often how people define it though.
  • I've not seen the movie all the way through, but the book clearly states what will happen "because of chaos theory":

    Gennaro said, "Your paper concludes that Hammond's island is bound to fail?"

    "Correct."

    "Because of chaos theory?"

    "Correct. To be more precise, because of the behavior of the system in phase space."

    and then, after two pages of explanation of what chaos theory is,

    "[...] So it turns out that this simple system of a pool ball on a table has unpredictable behaviour."

    "Okay."

    "And Hammond's project," Malcolm said, "is another apparently simple system -- animals within a zoo environment -- that will eventually show unpredictable behavior."

    "You know this because of . . ."

    "Theory," Malcolm said."

    "But hadn't you better see the island, to see what he's actually done?"

    "No. That is quite unnecessary. The details don't matter. Theory tells me that the island will quickly proceed to behave in unpredictable fashion."

    "And you're confident of your theory."

    "Oh, yes," Malcolm said. "Totally confident." He sat back in the chair. "There is a problem with that island. It is an accident waiting to happen."

    And so the island soon starts showing events not predicted in the original design, like the dinosaurs changing sex (they were engineered to be solely female) and breeding.
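
    The "unpredictable behaviour" Malcolm keeps invoking is sensitive dependence on initial conditions, which is easy to demonstrate numerically (a standard textbook illustration, not something from the book):

        # Two trajectories of the chaotic logistic map, started one part in a million
        # apart, disagree completely within a few dozen steps, even though the rule
        # is simple and fully deterministic -- Malcolm's pool ball in miniature.
        r = 4.0                      # parameter value in the fully chaotic regime
        x, y = 0.400000, 0.400001    # nearly identical initial conditions
        for step in range(1, 51):
            x = r * x * (1 - x)
            y = r * y * (1 - y)
            if step % 10 == 0:
                print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  diff={abs(x - y):.6f}")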

  • You seem to be labouring under the opinion that we live in a rational universe, and as products of this universe are in point of fact inherently rational ourselves. Can you support that assumption?
  • If history is any guide whatsoever, then I must admit that Bill Joy and other technological alarmists seem to have little weight to their arguments. The key issue I have to contest is this: that the new threats offered by nanotechnology, genetics and robotics are putting the human race into such danger that we need to take measures that have never before been necessary. He would have us limit the kind of research that could be done. I think this is exactly the wrong stance to take. People who are not free in their research are not researching at all. Compare commercial and pure research: which one makes the most basic breakthroughs, and which the most applied.

    Now I agree that these new technologies are more dangerous than any previously. However, no effective control of research can ever be done, given the nature of research. Research into one completely innocuous area can have applications in any other area. Yes, there is a great danger, but the cure would no doubt be worse than the disease. Consider: what form could this restraint take? Self-control by the scientists is exceedingly unlikely, otherwise they would hardly be good scientists. Some oversight committee? This would easily be one of the most powerful organisations in the world, if they could dampen any sort of research they found 'unsafe.' Should we follow Joy's prescriptions, we would be condemning ourselves to remain in technological doldrums for ages.

    One way or another, I imagine that we will find ourselves facing these dangers. I honestly believe that any research oversight committee would be ineffective. It would be unable to do its job perfectly, so eventually some of these threats will come to see the light of day. The human race will be better equipped if it knows the dangers, and how would we know the dangers? By unencumbered research.

    Throughout history, the human race has had many leaps forward in technology, and those have all put the human race in danger. However, the only thing that has saved humanity from technology is: technology. This leads me to believe that the only way to face the dangers alarmists warn us of is knowledge.

  • You're really proving your "Rorschach blot" point...
  • Amen, brother.

    Not only is Crush missing Bill Joy's position, so are all the moderators who repeatedly marked his post as Interesting. Which kinda tells you something about Slashdot's moderating system. :)

    Alejo.
  • I find it funny how those people unable to use Vi (for whatever reason) always bash it. Vi (especially Vim) is the best text editor ever invented. The fact that you can't use it speaks about you, not about Vi.

    Alejo.
  • Spirituality is probably the closest word that he could find.

    Human languages are only partially created to solve problems, partially they are created to express emotions. If one cannot point to something and say "This, this is what I'm talking about" then one must expect common speech to dilute the meaning. Consider what has happened to the words entropy and thermal. Of course, it also frequently flows in the other direction, too. Consider information, force, and work.

    What Ray was endeavoring to discuss was the mapping of human consciousness into a computer -- in fact, the mapping of a particular human consciousness into a specific computer (net) as a piece of software. He was talking about achieving an isomorphic mapping for a partition of the computer space, so that it would, in a sense, not be reasonable to distinguish the logical structures. Spiritual seems a good word for that, or at least as good as any I have been able to come up with. It would, of course, require much more powerful computers than are currently available, but that is accepted as a precondition.
  • I guess it would be rather ironic if this got moderated as Offtopic?
  • Joy later asked Merkle 'Do you think biological weaponry gives an advantage to the offensive or defensive,' to which Merkle embarrassingly replied, 'I'm not sure.'
    Why should Ralph Merkle be embarrassed to admit that he hasn't jumped to a premature opinion about the extremely complex ramifications of an unprecedentedly powerful technology that hasn't even been invented yet?

    Bill Joy (or, as I heard several people call him that day, "Kill Joy") inadvertently helped Merkle make this point later on. He asked the crowd to "raise your hands if you think nanotech favors offense", and got about half of the hall. In response to "raise your hands if you think nanotech favors defense", he got about a third. But when Ralph Merkle then asked, "raise your hand if you think we need more study to find out," over two thirds of the audience raised their hands.

  • Any argument that intelligent machines cannot exist must deal with the fact that we exist. Human brains are physical - they are matter and energy. Unless you claim that the brain has some metaphysical properties, what's to stop someone from building a device with the same properties as a brain? You may argue something like "intelligent machines based on a von Neumann architecture are impossible", but that's a different argument.
  • "confuse with thought" ? Are you trying to imply that computers will never be able to think ? If so, what is your basis for that?


    If I have a computer which is able to converse with me, that is, have an intelligent, original conversation, I am going to give it the benefit of the doubt. After all, what evidence do I have that anyone has thoughts, spiritual or otherwise?

    (Joy later asked Merkle 'Do you think biological weaponry gives an advantage to the offensive or defensive,' to which Merkle embarrassingly replied 'I'm not sure.')

    What? Why was this embarrassing? It does seem unclear whether biological machinery is inherently better at defense or offense, since there appear to be no perfect immune systems in nature, nor are there any microbes that invariably win against immune systems.
  • he had apparently not bothered to find out whether the existing bioweapons gave an advantage to the offense or defense.

    But this implies that someone knows, or that he could have found out easily, if only he bothered to search the literature. Is this so, or do we need fundamental research on the question (i.e., what Merkle was calling for in the first place)? I would assume that the research needs doing, but I don't see any reason that it would be easier to do with biological molecular machinery than with artificial molecular machinery, especially since biological machinery will necessarily have all sorts of non-designed side effects. I just don't think it's as simple a question as you and Hemos appear to agree it is.
  • Ray believes that the greater speed that machines have will allow them to learn at a rate exponentially faster than humans. They are, linearly, 100X or 1000X faster than the human brain. So in several years, computers will have learned more than we have in our thousands of years of development. Thus, computers will be more "intelligent". I believe Ray thinks circa 2020 is when this will happen.
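
    A back-of-envelope sketch of that argument (the 1000X figure is Ray's claim as I understand it; the rest are illustrative numbers of my own):

        # If a machine runs a human-equivalent learner 1000 times faster than real
        # time, every calendar year gives it roughly a millennium of "subjective"
        # experience -- that is the whole force of the exponential-learning claim.
        speedup = 1000            # assumed speed advantage over the brain
        calendar_years = 5
        subjective_years = speedup * calendar_years
        print(f"{calendar_years} calendar years of run time ~ {subjective_years} human-years of learning")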

  • My point was that the lack of a survival imperative will make the 'malignant machines take over the world' scenario extremely unlikely -- there is no reason for machines to want to take over the world, and the hard-coded (non-evolvable) motivation set will in fact ensure the contrary. One could argue (as you did) that the very fact of evolution would favor such an outcome, that the machines will not have to want to take over the world, but will simply do so by virtue of having evolved that way -- but like I said, we are the ones controlling the very nature of their 'DNA' (and their fitness criteria), so we can decide not only what can evolve, but even what evolutionary paths are possible by the very design of the system.

    Just as there is no way in hell a biological DNA-based evolution can lead to species that can survive inside a star, we may stack the game in such a way that certain evolutionary outcomes -- such as machines taking over the world -- are impossible on a fundamental level.

    --

  • In all of this brouhaha, a crucial point gets lost. Machines are not gods, and are subject to fundamental limitations just as we are. The nature of our genetic code -- DNA -- limits our evolution (its speed and some other factors). More importantly, we have evolved as we are -- with greed for power, a survival imperative, etc. -- over millions and millions of years.

    What does this mean for computers and nanotech? Two things.

    1. Machines will still be limited by the nature of their hardware -- and we are the ones designing the hardware. The perception seems to be that machines will be able to achieve every nasty goal we can think of -- but I seriously doubt this is the case.
    2. Machines will not have a built-in survival imperative, unless we choose to put one in. It's really not that hard to hardcode the 'motivation set' in (a set that would contain neither the survival imperative nor any similar motives), and allow machines to evolve everything but their motivations.

    Will some people find a way to design machines that 'want' to survive, perhaps even with explicitly nefarious purpose? Probably; however, the rest of the world will be so stacked against them that I doubt they will actually be able to survive.

    Either way, I think this particular worry is blown WAAAY out of proportion by people who implicitly take our nature as the sole known type of intelligent agent (with all of our evolutionary qualities) to be indicative of any type of intelligence we may design.

    I am tired of ill-considered apocalyptic dreams.

    --

  • Mutually Assured Destruction only works if all parties involved are rational.

    It more or less worked for nuclear warfare for a few decades because only large governments could build a nuclear bomb, and governments are composed of a sufficiently large number of people that they are at least not completely irrational. (But what if Nazi Germany had had the atomic bomb? They would have used it at the end. Interestingly, some say Heisenberg lied to prevent them from getting it, which I suppose supports my argument about large numbers of people.)

    Once nanotechnology works, individuals will be able to use it. Was the Unabomber rational when he decided to send out letter bombs? What if he had been able to send out deadly nanites instead?

    Your suggested defense is no defense at all.
  • Presenting MindPixels to a system is a Binary Turing Test (see my article: K. C. McKinstry, The Minimum Intelligent Signal Test, An Alternative Turing Test, Canadian Artificial Intelligence, Issue 41) that is much more objective than a traditional TT.

    I unfortunately don't have a copy of Canadian Artificial Intelligence lying around. But I do know that a key point of the Turing test is that it is not objective. Consciousness, whatever else it may be, is inherently subjective. Someday we will understand consciousness sufficiently to be able to make an objective test, or to know why such a test is impossible. Until that day, the only meaningful way to test for consciousness is to use subjective tests.

    Your goal should be to try to convince a bunch of intelligent people that your system is conscious. You shouldn't try to pass an objective test, particularly not one you wrote yourself.

    Thus, if I get a number back that is statistically indistinguishable from human, I must logically assume the system is human. That it feels, lives a life, and is conscious.

    That's a pretty big leap. A conscious human run as a simple computer program would probably go nuts due to lack of sensory input. Is your program going to emulate that?

    A giant corpus of MindPixels collected and validated from a large number of people is a digital model of self and environment.

    What about issues which people don't agree on, like abortion, the death penalty, or whether computers can ever be conscious? How are you going to implement those as MindPixels? (I'm not trying to trip you up here--you must have thought about these issues, and I am curious what your answer is.)

  • Thanks for the pointer to your paper.

    That's not a key point, that's a key flaw. MIST was specifically designed to replace the subjective judgement of individual judges with the statistical judgement of a very large number of people (1 million or more).

    But that isn't what it does. The Turing test relies on an intelligent examiner to judge intelligence. You have replaced the intelligent examiner with a simple series of questions. The intelligent examiner will consider the answers to previous questions when asking new questions. You've lost that.

    If I steal your entire database and build it into my program, I can write a program which does very well on your test, but which nobody would call either intelligent or conscious (see the sketch at the end of this comment).

    What about issues which people don't agree on, like abortion, the death penalty, or whether computers can ever be conscious? How are you going to implement those as MindPixels? (I'm not trying to trip you up here--you must have thought about these issues, and I am curious what your answer is.)


    Who cares about those? They vary from person to person and life to life. The goal here is to model an average person, not a specific person. MIST only considers consensus knowledge, that which is the same across all people. The rest is fluff.

    It may be fluff to you, but I don't think I could consider a program which didn't have any specific beliefs to have any claim to consciousness.

    I don't see any significant advance over the CYC project. It's a worthy goal, but I don't see any path to consciousness here.
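
    To make the stolen-database objection concrete, here is a minimal sketch (the propositions and answers are made up for illustration, not taken from the real MindPixel corpus):

        # A program that merely replays a memorized table of yes/no propositions can
        # score indistinguishably from a human on a fixed binary test, yet nobody
        # would call it intelligent or conscious.
        stolen_corpus = {
            "Is water wet?": True,
            "Is fire cold?": False,
            "Do people generally dislike pain?": True,
        }

        def lookup_responder(proposition):
            # Answer from the memorized corpus; guess False for anything unseen.
            return stolen_corpus.get(proposition, False)

        hits = sum(lookup_responder(q) == answer for q, answer in stolen_corpus.items())
        print(f"agreement with the human consensus: {hits}/{len(stolen_corpus)}")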

  • Good comments. I was at the symposium, and one of the fears Mr. Joy has is that genetically engineered viruses or bacteria can easily be used as weapons of mass destruction. He pointed out that human-designed microbes have none of the "limitations" of evolved ones.

    This betrays an error common in thinking about evolution: confusing the actual state of affairs (i.e., humanity hasn't been wiped out by a supervirus) with the way things must be. It's true that viruses and bacteria which don't kill their hosts too quickly do better in the long run, but no virus can consciously make that decision. Just because wiping out the human race doesn't make evolutionary sense for a virus does not imply that it can't and won't happen. (The general case of how viruses behave doesn't give us any comfort for how a specific virus will act.)

    This is why we must develop nano- and bio-technology: as the human population grows, so does the population of parasites on the human population, and the possibilities for those parasites to mutate into lethal forms. Especially given increases in antibiotic-resistant bacteria, we need new ways of augmenting the human immune system.

    Bill Joy dismissed out of hand the possibility of building defenses against genetic and nanotechnological weapons. But the fact that nature has already given us such defenses implies that it can be done. And to counter very real, existing threats to the human species, we should do it. I'll trust any open, technological defense against "Knowledge of Mass Destruction" much more than a political one.

    (No, Joy didn't say explicitly what sort of controls he thought would be sufficient. But that's the truly hard part of the question! He stated that the only viable option was "relinquishment". And the language he used was, I thought, strongly in the camp of "we shouldn't develop these ideas".)

  • The distinction between "offensive and defensive" weapons seems kind of bogus to me -- there's a saying that the best defense is a strong offense, and to make an example, in terms of nuclear arms, the threat of offense has served as a defense.

    Agreed, similarly take the example of a castle as being a good `defensive' weapon: suppose you create an impregnable fortress which protects you from an invading horde of some sort. You shut yourself and your vassals up in it and leave your local rivals to be destroyed on the outside. Or how about insisting on neutrality during a conflict? It doesn't seem like there's a clear distinction to be made at all.

  • Definitely Stallman, he'd use his "viral" GPL to infect Joy.
  • The "we" who "have the means" for mass destruction right now is limited to a few countries. The "we" who may have the means for mass destruction in the future could be tomorrow's script kiddies. God help us all.

    But this is not necessarily due to the introduction of any new technologies. The ability to develop biological weapons especially and to a lesser extent chemical weapons is technologically and financially within the means of even obscure religious Japanese cults.

    Um, isn't it the development of science and technology that has put such weapons within the reach of such "obscure religious Japanese cults"? They didn't just pull it out of their hopping/semi-levitating asses, now did they?

    So, from this you admit that the technologies that already exist, that have been generated by our broken social order have the capacity to effect mass destruction? And you would rather not try to remove the _origin_ of the impetus of the development of these tools? It's hard to know what exactly Joy thinks you can do about it. Technology and innovation march on and once there is something that is `doable' it's hard to keep it out of people's hands whether as individuals or groups. If Joy is correct that it will be as easy to manufacture these nanobots as he thinks then they will be as much of a problem as biological weapons already are.

    You seem to be missing Joy's point. He worries that these technologies will enable individuals to cause unspeakable amounts of damage.

    Sort of like a GM aerosol-transmitted HIV targeted to a particular MHC complex? As destructive as that, say? Or more? How about a resurrection of the 1918 influenza strain that wiped out 18 million people but with a little improvement might do better? I wonder who is missing the point. We _already_ have the technology. It would seem that you believe that individuals are more likely to use these things than groups. Do none of the religious cults that have attempted these things provide me with even a small piece of evidence that people are just as capable of behaving irrationally in concert as alone? It just seems to me that the sudden brouhaha over destructive technology is a little late, and there's not much we can do about it. I also believe that these things are more likely to be used by governments than individuals. Most of our destructive technology results from the inter-group competition fostered by ...drum-roll... CAPITALISM. And we will continue developing weapons and refusing to regulate them (witness the US's failure to ratify the Nuclear Non-Proliferation Treaty) as long as there is a social order based on exploitation. We can all dick around making statements about how terrible these things are, but they already exist and will be used because the pursuit of profit and domination drives it.

    Tools that broken *individuals* might use. Social orders, even broken ones, tend to want to self-perpetuate. No such guarantees with broken individuals.

    I don't know where the evidence for that is. For one thing, people in broken societies often think that they are going to survive individually even if the overall logic of the society means that it will collapse (Nazism ring any bells?). Secondly, I can think of several instances of broken social orders that would rather self-destruct than change - the Jews at Masada and the Melian aristocrats fought to the death rather than accept Athenian democracy.

  • Having read the response above this, I think I'd have to agree with you. I was missing Joy's point.
  • I'm pretty much convinced by your post that there is a difference between nuclear threats and future nanotech threats. But bear in mind that it is not just the USSR and the US that possess nuclear weapons. Britain, France, Pakistan, India, China, Israel (probably), and S. Africa (probably) also have the capability, and it seems that Iraq was having a good try. The failure of the U.S. to ratify the non-proliferation treaty is probably going to encourage the further spread.

    It seems that to believe that there is a significant difference between the old and new weapons of destruction, one has to believe that there are fewer impediments to the use of the new... You seem to argue that it would be cheaper and easier for someone to manufacture nanobots than to manufacture nuclear weapons. I don't really have a feeling for whether this is true. I can't help suspecting, though, that the design and the fabrication machinery would be incredibly complex, rare and expensive, would require the budget of a large country, would be treated as munitions and thus restricted in distribution, and thus would present most of the problems that exist for the manufacture of nuclear weapons.

    That said, this is totally off the top of my head, I don't really know what goes into fabricating a self-replicating nanobot.

    Bacteria, virii, etc. may be annoying, but they only adapt under evolutionary forces; medical science has been advancing fast enough to keep ahead of them, and I expect it to continue doing so. Because evolution is blind.

    I think that focussing on the blindness of evolution in this context is ignoring a more important aspect of it - massive parallelism. The range and diversity of solutions that are found by organisms is incredible and there is no guarantee that science is able to keep ahead of it just because its fundamental method is random, blind chance.

    All that said, I was missing the point that Joy was speaking about bio-weapons also...I was focussing too much on nanobots.

  • You seem to be labouring under the opinion that we live in a rational universe

    No....I think you just projected that onto me! I said that I would prefer rational, kind robot masters to the evil irrational masters we have now.

    we live in a rational universe, and as products of this universe are in point of fact inherrently rational ourselves

    Apart from the fact that I didn't claim the first part of this proposition, the second part would not have to follow even if I had. It is entirely possible that there would be a rational universe which contained irrational humans.

    Can you support that assumption?

    No, and I'm not interested in doing that.

  • I am as worried about a "Terminator"-type extermination of humanity as I am about a new Ice Age wiping us all off the face of the earth before I turn 40 ... or Hell becoming a winter paradise, for that matter. Being replaced as the "most intelligent" "life form" on the planet is all I meant there.

    The part about deities was a severe oversight on my part (mostly due to my blatant ignorance of (and disregard for) other cultures (for which I apologize)), but I think it remains a valid point that gods were in some way superior to humans (supernaturally) ... we might not be superior to artificial life forms, particularly ones with the ability to "evolve" at a rapid rate.

    --
    DataHntr
    "Res ipsa loquitor."
  • IMHO, it seems that before we even attempt to create life, AI and reproduction, maybe we should first sit down and ask ourselves, *What is it to be a good god?*

    As fragile as human beings are, I think the question of being gods is far from relevant. "Gods" of religion, myth, and legend are usually omnipotent (certainly doesn't apply to us) and omniscient (ditto). Robots will certainly have the distinct advantage over us in this realm. We die in so many places and ways that suitably designed "robots" and artificial entities will thrive. More importantly, integral adaptations (making a robot waterproof or able to withstand extreme cold) would be possible *extremely* quickly (in the same generation of robot, or the next) compared to the same change in humans.

    If we need to worry about anything (and I'm not convinced that we do), it's about being replaced, not about becoming gods, benevolent or otherwise.

    PS. I use "robot" to mean any artificially intelligent device.

  • I find it hard to believe that we can get intelligent machines with Sun's technology. :) Java AI-lets, anyone? Nigh impossible. I don't think any of these guys except John Holland realizes that we need a breakthrough in the theory first -- and we *can't* estimate when that will come. It needs an Einstein to fill in the explanatory gap and untie the frame problem. I'm in it for the fame, but who knows?
  • Plus, you should consider that no virus or bug has managed to be 100% lethal to humans.

    But those are biological entities whose existence is dependent upon the host. If a virus kills every human it touches within one day before it becomes contagious, that virus vanishes quickly. Here we're talking about things whose existence is not dependent upon people. They may ignore humans other than as slow-moving rocks, although they might casually remove everyone's feet because they want the rubber from the shoe soles.

  • Nonsentient devices are programmed by a human. A nonsentient device can evolve (like a virus), but I see no short-term reason why this independent evolution would be faster than viruses' evolution. We are not talking about the viruses used for gene therapy here. We are talking about fundamental improvements to the evolutionary mechanism of a virus, which "good ol' mother nature" has been working on for many millions of years.

    Natural virii reproduce asexually, and hence do not have a lot of genetic diversity. They mutate randomly and thus changes in their genetic code are not linked to the success of the previous version (unlike sexual reproduction, in which the two parents must have survived to maturity). Changes in their genetic code, like in other living things, happen between generations, when they reproduce. This would not need to happen with nanotechnology.

    I suppose you might design a nanobot cold-virus killer which used group processing to evolve new attacks as the cold virus evolved new defences. This group processing would take the form of "units 1 to 1000: try this and tell us if you live or kill viruses." I agree that something like this could have more potential for killing humans, but we are a long way away from something like this. Also, I suspect the communications channel between the nanobots would presuppose the ability to include a self destruct.

    Now here I see that you agree with me. Yes, we are a long way away. Can you guess if we are more or less than 30 years away from it? I cannot, and I suggest that we should consider the case now, in order to make good decisions in the future. However, don't allow the self-destruct to let you feel secure. After all, the self-destruct is part of the code on the robot, and therefore can be selected against. Which it would be, if survival is a preference we ask the robots to select - and why wouldn't we want our tools to keep working?

    I never said we should not predict... just that Bill Joy does not know what he's talking about, since his predictions are based on sci-fi instead of real theory. We predict the progress of the fields you listed because we have a theory for them. The summary essentially said that the speakers who understand any present theory of biology and nanotechnology dismissed Bill Joy as a luddite. They were nicer than I was, but that's because they were only concerned with how Bill Joy was wrong, whereas I think the things Bill Joy advocates are themselves far more dangerous.

    You suggested that since technological predictions rarely pan out after 30 years, why bother? I don't think Bill Joy is a luddite. I don't think the summary pointed that way either. Yes, some of the things Bill suggested seem dangerous, however you're still missing the point that his caution is not one of them. Caution, in these cases, is good. How can you limit the "smartness" of a nanobot? How can you keep the nanobot, or rather the nano-swarm, from evolving around any limits you place on it? These issues should be dealt with, and we do that by increasing research in the area, not decreasing it. This doesn't mean we don't need to deal with the issues - that would be throwing the baby out with the bathwater.
  • Summary of the summary.. Bill Joy is a moron or an autocrat who wants corporations and big government to rule the future. Joy advocates restricting access to the technologies which will shape the future. This is morally wrong in soooo many ways, but I will try to explain a few.

    Now, it's rarely a good idea to resort to name-calling unless you have no ground to stand on and must resort to attacking the messenger rather than the message. You actually do have some ground to stand on, so let's get to the point.

    I think you may have slightly missed the point. It is a common reaction that if a person can harm you using an area of knowledge, then that knowledge should be kept from that person. If the identity of the particular person is not known, then keep it from everyone!

    As you've astutely noticed, this reaction is not productive. It's yet another incarnation of security through obscurity, and it doesn't work. There are fabulous examples in speculative fiction of this concept carried to its logical conclusion - Larry Niven's ARM springs to mind. Someone, somewhere, will come up with the same idea and then you have to either convince that person to keep quiet too, force them to, or allow the cat to leave the bag.

    It doesn't matter who develops the knowledge. It doesn't matter how risky the knowledge is. Everyone should have access to it so everyone knows what to do to stop someone from harming someone else with the knowledge.

    You seem to have missed what may be a valid point though, because I'm not sure he made it strongly enough - computers have the ability to change the information that fundamentally dictates their behaviour at a rate orders of magnitude higher than humans, and hence can implement "evolutionary change" much faster. It is not impossible that these nonsentient devices could become an "enemy" in and of themselves. No person would be needed to cause harm to another using the machines, they would do it themselves.

    It is from this enemy that we must keep the information. It must always be well within our grasp to rein in our machines before they enter the phase of their "evolution" in which they begin to compete for real-world niches. They don't need to be intelligent for this, they just need to require something that they can get at our expense.

    Plus, you should consider that no virus or bug has managed to be 100% lethal to humans. Sure, a very rare few wipe out 80% of the population, but we are not talking extinction as a worst-case scenario here, people. Actually, your probability of being killed by a meteor smashing into this planet is MUCH higher than any risk from bio/nano-technology. (Note: I suppose biology research reduces your chance of death as a result of meteors far, far more than it will increase your chance of dying as a result of terrorism.)

    Also, non-science people really seem to have no understanding of the way these sorts of things progress. The situation is summed up perfectly by John Holland's quote, "Predictions 30 years ahead have always proven to be wrong, except in the cases where there is a strong theory behind it," i.e. without a theory it's just luck (which can take a LONG time).


    In the end, your quick dismissal of this rational and intelligent person's reasoned opinion lends the impression that you have jumped to conclusions. The largest point you've missed seems to be that, while non-science (and by this I assume you mean applied sciences such as computing science) people may not seem to you to understand science, by the same token "science people" may not understand applied sciences.

    I will not argue historical points about the effect of plagues on humans. However, I will claim that in many cases, the survival of a human attacked by some infectious disease is completely dependent on medical intervention. In order for that intervention to occur, we must have had time to adapt to the disease's method of attacking and surviving. When we can't, people die (see AIDS). If the attacking disease were sufficiently different or mutated fast enough, we would die. If the disease were widespread enough, we would ALL die, to the last man, woman, and child. Imagine if colds were lethal - have you ever had a cold? Do you know anyone who hasn't? And colds need hosts to proliferate, unlike nanobots. If we cannot limit the rate of "evolution", we should not create that agent.

    Your observation that predictions have often failed to pan out seems valid, but the conclusion you draw from it - that we should not predict - does not follow, and is ludicrous. I propose to you some predictions: 30 years from now, computers will be faster, some people will be greedy and some will be lazy. Manufacturing will be more advanced. I think those predictions will pan out. Now, if that continues, at some point we may need to worry about these things. I for one propose that we discuss them now, so that in the event the technology is developed to end the world by a non-human hand, we know what to do and not to do in order to keep that hand from striking.
  • John Koza, drawing from examples in genetic programming, said that while human-competitive results by machines are certainly possible (e.g. the evolution of previously patented circuit designs), much more computational power is needed to evolve the equivalent of a human mind.

    Alright, the discussion on this posting is out of hand. Apparently I'm cutting out everyone who lacks a sense of humor.

    Nonetheless, I have to add my thoughts on the above sentence. While I think John is right, he and everyone else there seem to be overlooking one huge factor in the whole equation. Just like the claim that the functionality of an operating system is "fixed," the claim that the functionality of the human brain is constant is questionable.

    Consequently, every day, as our collective knowledge grows, so too do the requirements for an "intelligent" machine. So while it may take 30 years to create an intelligent machine by today's standard, how will that "intelligence" compare to the human mind then?

    Just my .00002 * $1000
  • While it's true that the evolutionary goal of play is learning, it's explicit goals and expectations that determine what we learn. Two children may have the exact same experience yet learn different things according to what their goals/expectations are, and therefore what they are attending to.
  • Success or failure only exist in the context of a goal, when they are defined by whether you met the goal or not. In non goal orientated behaviour such as play, we learn about the environment rather than about the success/failure of our plans.
  • There is an important lesson here: never, ever, let a computer scientist lecture about the brain. We don't even know if the brain can be characterized as a computer, let alone how powerful it may be. I do know that the last estimate I saw of its performance assumed that it processed synapses*firing_rate ops per second, which was, at the time, the funniest thing I had ever read. That was before I realised how many people really believe it.

    Maybe I'll send Ray a copy of Biophysics of Computation and see if it changes his estimate.
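
    For reference, the back-of-envelope estimate I'm mocking looks roughly like this (commonly cited ballpark figures of my own choosing, not Kurzweil's exact numbers):

        # Treat every synapse as performing one "operation" per firing.
        synapses = 1e14           # order-of-magnitude count of synapses in a human brain
        firing_rate_hz = 100      # rough average firing rate, spikes per second
        ops_per_second = synapses * firing_rate_hz
        print(f"about {ops_per_second:.0e} 'ops' per second")   # on the order of 1e16

        # The objection: if each synapse (or each neuron's dendritic tree) does
        # substantial analog computation, the real figure could be orders of magnitude
        # higher, which undermines timelines pegged to this simple product.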

  • you'll find that he is not claiming that a computer that is a simple extension of what we make now

    ...which I never claimed he was.

    does not in any way preclude our developing different machines in the future

    Sure. We're doing quite well modelling neural systems in hardware, which naturally leads to a form of AI. In fact, I think that's where the safe money is for making intelligent machines (esp. considering how much of that research goes into robots). I expect to live to see them created.

    I'm actually fairly confident that GOFAI will succeed in the medium term (40-200 yrs), but I don't expect it anytime soon. It is, IMO, more interesting than models, but also far more difficult. (Ray seems a bit overly optimistic about this as well. It could easily take a decade just to code an AI once you've figured out how, regardless of the resources available.) The hardware requirements for such an AI are probably quite a bit lower than provided by the brain, as well. (Silly example, but if the blind can function without too much difficulty, it's at least possible that we can get away without the massive portion of the cortex devoted to visual processing. That might cut a quarter to a third off the hardware requirements.)

    and see if it changes your estimate

    What estimate?

    You seem to have missed the point of my post. I was simply saying that stating "We will have computers powerful enough to implement x by the year y" is absurd when you don't even understand what 'x' is. Assuming that the brain is a computer (which I think is largely true), it would be helpful to know how it computes before commenting on when we'll be able to build one. Kurzweil's prediction works for the synapse*firing_rate estimate of ops (within an order of magnitude or so, according to my napkin), but falls short if you take into account the work on computation in neurons. I wasn't saying that AI is impossible, or even that we won't have one by 2020. I was simply saying that compsci people tend to know just enough about neuroscience to get everything wrong, and then follow in AI's grand tradition of making bold statements based on those errors.

  • ...we should start thinking about these issues now, not twenty or thirty years from now.

    Now, on the whole, I agree with you.. and I think most of our fellow geeks agree.. nanotech Does have potential dangers that are nearly as obvious as the potential benefits. We all agree, too, that the benefits are So enormous that they outweigh the possible dangers. We also know that we need to think about the dangers as we step toward nanotech.. but we don't really know what dangers to think about. What we're capable of and what we expect to be possible seems to change daily.. will nanobots the size of cells be possible? The size of molecules? Will they be useful in health care? Product fabrication? Will they be capable of things on the scale of clearing fat out of your arteries, or will they be able to actually take apart and re-combine molecules? There are so many variables, with no real limitations on the possible, that it's hard to come up with any reasonable models for the situation. We can come up with ways to defend ourselves from nano-terrorism, but what happens if nanobots are capable of completely circumventing our defenses because something we'd considered impossible (or at least highly unlikely) turned out to be easy?

    So yes, we should think about these things.. but we shouldn't dwell on them to the extent that we slow down the progress of the technology. When it becomes obvious where the technology is going, what it will be capable of, Then will be the time to start thinking about how to keep wackos from turning us all into gray dust..
    Dreamweaver
  • The reason Ralph Merkle sounded like he knew what he was talking about is that he, and a lot of his very smart friends, have spent a huge fraction of their time thinking and talking about this problem for the last 15 years, whereas Bill Joy just woke up to it a year and a half ago, and has decided to get all sensationalistic based on a relatively cursory analysis of the problem, done in a vacuum.

    This does not, by the way, mean that Joy is a complete crackpot. It just means that he's advocating "solutions" that other people have already thought about, and pretty much proved to themselves won't work.

    To get access to the 15-year tradition Merkle is working from, hook up with the Foresight Institute at http://www.foresight.org.

  • > No one knows how to build bacteria or viruses from the ground up.

    Wasn't there a story on slashdot (and lots of other places) a month or two ago about how scientists had created the first artificial bacterium by throwing a minimal bunch of genes together?
  • Bill Joy said "the size [of the operating system] is expanding exponentially, the functionality is fixed"

    Best cheap shot: Ray to Bill, "How many in the audience caught this news story," which he followed with a fake story about Sun deciding to give up all development of innovations which made the software "smarter." It was amusing; I wonder if they fought in the parking lot.


    That is funny. The more I hear Bill Joy say, the more ignorant/insane he sounds. It's pretty clear that there is a lot of research going on in operating systems today: microkernels, MIT's exokernel, virtual machines, etc. I'm sure there are even people who would change the most basic aspects of the operating system (things which users assume need to be there, like files). Bill Joy is just a moron for claiming that this research does not exist.

    Regarding the rude audience: I assume that there are a lot of crackpots. These are frequently the rude ones at popular talks. Actually, I suppose some of the speakers were crackpots, since Joy was there.. :)
  • ALWAYS applying a just law RIGIDLY, is what mandatory minimums are all about... that's why it's an injustice.

    Exactly.

    You're arguing the same side here..

    No, Bill Joy wants to protect people by removing the open sharing of ideas, which is the human aspect of technology. He is trying to lay claim to this quote because there is an "always," but he does not understand science or progress, so he thinks that adding rigid restrictions on what people are allowed to talk about will protect people.

    I'd trust the guy who's smart enough to get the PhD in biology over the luddite bureaucrats Bill Joy wants to create to monitor this stuff. The number of people who can get a PhD without thinking about the world is not very high, but there are very high numbers of people in bureaucratic positions who cause a lot of problems with their narrow views of the world. Who looks more human and flexible now?
  • Ralph Merkle, a nanotech man, made some excellent comments (...) Anyone know anything he has written that might not be too technical?

    Have you checked his homepage? It's simply http://www.merkle.com . There are a lot of online papers. Of course, as those are research papers, some are indeed rather technical (as complexity is unfortunately a must in scientific publications :-( ), but those in the 'Selected papers' section are short; I was able to understand some of them without spending the whole day on it, though I'm completely clueless about nanotechnology and the like.

    Djaak
  • If the day ever arrives when the threat of nano-terrorism (or any other kind of terrorism for that matter) is such that it becomes necessary to spy on everyone all the time, even up to the point of monitoring everyone's activities 24-hours-a-day to make sure that people are not using an unlicensed assembler or whatever, I hope that the spying will be done by computers running entirely open-source software, that all functions of the software will be decided by the democratic process, and that every aspect of the spying system from the hardware to the software to the communication links will be secure and open, and that the integrity of the system will be verifiable by anyone at any time. I'm sure that if the software is designed by a democratic process there will be strict limits on what information can be passed on to a human being for further evaluation. That is some consolation.

    The worst thing would be if the spying were done by a government agency under a cloak of secrecy.

    If this is coming-- if the voluntary surrender of privacy is the inevitable price to pay in order to enjoy the benefits of advanced technology, then it is time to start thinking about safeguards and making the best of a troubling situation.
  • "I think the question of being gods is far from relevant. "Gods" of religion, myth, and legend are usually omnipotent (certainly doesn't apply to us) and omniscient (ditto)."

    Really? That doesn't seem to be the case in Norse or Greek mythology. Perhaps you simply meant all monotheistic religions. On the other hand, I can't be certain even that limited statement would be true.
  • Bill Joy quotes Eric Drexler extensively in his paper... and your point about dumb nanites, as it were, is a good one. :)

    UltraWarm Regards,
    Anuj_Himself
  • by anuj ( 78508 )
    Having been at the symposium myself, I found it rather disturbing that not even the people in attendance there shared Bill Joy's concerns about our pursuit of what he deemed, correctly so, dangerous technologies. I wouldn't advocate relinquishment just yet, given that so much is still in the hypothetical stage, but as soon as we have any indication of the reality of the situation (and I mean as soon as we figure out we can do any form of GNR (genetics, nanotechnology and robotics, collectively)), we must realize where such technologies would put us. Yet Bill Joy is justified in asking for action now, given the historical precedent of creating, and only then thinking about relinquishing, necessarily dangerous technologies.

    Further, it seemed that 'most everyone on the panel (Ray and Hans, notably) seemed almost defensive about their stand on machine spirituality. When asked by the audience whether the extinction of humanity, brought about by a breed of machines created by it, would be its greatest failure, Hans argued that it would in fact be mankind's greatest triumph to unleash its full potential; to not do so would be mankind's greatest failure. No one seemed to mind. I also found Ray's stand questionable: that our current encounters with the only form of self-replicating technology we know today, viruses, have been successful (we've been able to create shields to protect ourselves), and that if we do create a form of self-replicating, self-evolving nanobots, we'd be able to protect ourselves too. To which I wanted to ask (in tune with Bill's article) what kind of risks he was willing to put humanity to.. to realize that a crashed hard drive, or several, or tens of thousands of them, as a result of a virus is nothing compared to the single human life that might be lost.

    In context, quoting from Bill's article's quote of Eric Drexler's Engines of Creation: "Among the cognoscenti of nanotechnology, this threat has become known as the 'gray goo problem.' Though masses of uncontrolled replicators need not be gray or gooey, the term 'gray goo' emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be superior in an evolutionary sense, but this need not make them valuable." Bill adds: "Gray goo would surely be a depressing ending to our human adventure on Earth, far worse than mere fire or ice, and one that could stem from a simple laboratory accident. Oops."

    Okay, enough on that, I wanted to voice a concern here, not write an essay *grin* And to correct csy, as I remember it, Bill's response to the question of whether software grows in accordance with Moore's Law was that 'an operating system's size grows as Moore's Law, but its function remains fixed' :)

    UltraWarm Regards,
    Anuj_Himself
  • I was not saying so much that the audience was entirely geek; I was just saying there were a lot of people who would stereotypically (and I mean that in the nicest of ways) fall under that category, probably myself included :)

    Of course it would be nice to be able to socially engineer people to be "nicer," but I doubt that is ever going to happen. Merkle had the very realistic opinion that it was *going* to happen, and that if many of the implications were offensive in nature, we would merely be moved to try to enact social constructs to prevent it from causing a problem. If we see defensive dominance, however, that would seem to mean that we have less to be worried about.

    Finally, of course people could put the code on the machine; however, the argument, as I understand it, isn't about what people will do (someone made an excellent comment about how MANY psychos there are out there) but about what can be done to make most things as safe as possible. Some people don't wear seatbelts; that doesn't mean they aren't a good idea.

  • I think that would be a nice thing to talk about, the philosophical view; however, in this case the question, I think, was more: can we do this, should we do it, when will we do it? The "should we do it" wasn't as emphasized, but I think it is a much larger question, one that involves the value of life and humanity, and even more the meaning of life.

    A discussion on that level also requires one to drop a lot of the normal cynicism and get kind of corny for a while. People feel more comfortable talking about provable things.

  • Bill Joy's comment on the operating system has been interpreted as less tongue-in-cheek than I may have portrayed it. He was responding to a question, and I don't think he meant that it was fixed forever; I think he just meant that there hasn't been a lot of "real" innovation since the coming of the windowed interface.
  • we are already in a position where we have the means to achieve the destruction of most of us.

    The difference is that those means are in the hands of very few people. Terrorists could get their hands on some small 'mass-destruction' weapons, but total global destruction is not available to them, yet.
    With the advent of, for instance, nanotechnology, all you need is one self-replicating nanobot to sterilize the entire planet.

    This said, I don't think you can stop technological progress (nor that you should). Bill Joy is talking a lot about the dangers of these technologies but he does not offer any practical solutions.

  • by Anonymous Coward
    OK folks, let's clear something up.

    The above poster is not a troll. He/It doesn't even rise to the dignity of the term. The only reason he can be called a "troll" is that the moderation system doesn't have options for (Score -1, Lame) or (Score -1, Juvenile) or some such. There are only options for Flamebait, Troll, or Off Topic. And since he probably thinks it's cool to be a "Troll," even if he doesn't know what one is, he makes any lame attempt at all to get that name.

    The fault lies with the moderators, who should be labeling these types of things Offtopic, since that's what they are. Best idea: they shouldn't touch them at all, so they don't waste their moderator points. (but then again, maybe such dumb moderators *should* lose their points to these kinds of simple tricks)

    Proper Troll and Flamebait posts are actually _ON_topic, but deliberately go against the grain of the discussion either overtly and (perhaps) abusively (as in Flamebait) or somewhat subtly and/or passive/aggressively (as in Trolls). They are not posted to posit an opposing point of view, so much as to just push the buttons of the people who have any view at all, viz., a discussion of Fords and Chevys might be filled with some reasonable arguments on both sides, but with the occasional incendiary Flamebait or Troll post peppered throughout, trying to piss off one side or the other into a flamewar.

    But these people, the grits/supatroll/exploding/portman people are not trolls at all. They're just wannabes with nothing to do and no creativity either. Making a real, effective troll post is hard and requires some wit or cleverness, or both. What these people do is very much easier and lame besides.

    Especially, people who call themselves Trolls are not trolls. That's a label others give your post, not one you give yourself.

  • It is presumptuous for anyone to claim that computers could possess consciousness, spirituality, or a feeling of self-awareness like humans do, because we do not know exactly what consciousness is.
    That's a valid point. However, for the very same reason, it is presumptuous for anyone to claim that computers can't possess consciousness, spirituality, or a feeling of self-awareness like humans do.

    The more interesting question, IMNSHO, is: given a computer that appears to unbiased observers to exhibit consciousness, spirituality, or self-awareness, would a claim that it didn't really possess those traits (but was "faking it") really mean anything?

    After all, how do I know that you exhibit consciousness, spirituality, or self-awareness? Maybe you are faking it. Heck, maybe I'm faking it and don't even know it.

  • It is clear that no digital machine can accurately emulate a Large Poincare System, and (by Lucas' Theorem) that no deterministic automaton can accurately emulate the human mind.
    Lucas' Theorem only applies to Deterministic Logical Systems. A digital machine (or computer) does not of necessity have to be a Deterministic Logical System. Computers are currently designed to be deterministic because there are many desirable properties of deterministic systems. However, there are also desirable properties of nondeterministic systems, and there is nothing that precludes the development of nondeterministic digital machines.
  • Ray evidently has a different understanding of the word "spiritual" than I do.
    Yes, that's exactly right. He's not referring to some hypothetical immortal part of a person having no physical manifestation.

    IMHO, it's unfortunate that he didn't choose a different term that isn't overloaded with religious connotation.

  • We don't even know if the brain can be characterized as a computer, let alone how powerful it may be.
    If you read Ray's book, you'll find that he is not claiming that a computer that is a simple extension of what we make now (i.e., a Pentium XXVI/400THz) running software we write the way we do today (i.e., Windows 2020) will be intelligent.

    However, just because the brain is not structured and does not function like the computers we build today, does not in any way preclude our developing different machines in the future (which may or may not be called "computers") that can function in the same manner as the brain, or perhaps in an altogether different manner that nevertheless is "intelligent".

    Maybe I'll send Ray a copy of Biophysics of Computation and see if it changes his estimate.
    Not a bad idea. But in addition, maybe you should read his book, The Age of Spiritual Machines, and see if it changes your estimate.
  • It could easily take a decade just to code an AI once you've figured how, regardless of the resources available.)
    Or a century, or a millennium. Ray specifically suggests other ways to produce AI than by a bunch of clever people writing oodles of code. Read the book.
    I was simply saying that compsci people tend to know just enough about neuroscience to get everything wrong, and then follow in AI's grand tradition of making bold statements based on those errors.
    I'm simply saying that you should read his book *before* you jump to the conclusion that he doesn't understand these issues. I'm no expert, but after reading the book it is clear to me that he has in fact studied exactly the things that you seem to think he hasn't.
  • How did the movie have chaos theory correct? It was a nice buzzword to drop in there, but the essence of chaos theory is that very well-ordered phenomena can produce chaotic (semi-random) results.

    Please correct me if I'm wrong (it's been a couple of years), but I think the movie was saying that "because of chaos theory, nature will find a way," as though chaos theory were a way of producing specific, ordered results.

    Additionally, I have trouble seeing how chaos theory applied in any way to the small population of dinosaurs on the island. The only place it came into play was the bad weather, which made the climax more interesting but not more correct.
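    (If you want to see what I mean by well-ordered rules producing chaotic results, here's a tiny sketch, my own toy example and nothing from the movie: the logistic map, a completely deterministic one-line rule whose output quickly looks random.)

    # Logistic map: x_{n+1} = r * x_n * (1 - x_n)
    # A completely deterministic rule, yet for r = 4.0 two starting points
    # that differ by one part in a billion diverge almost immediately.

    def logistic_orbit(x0, r=4.0, steps=20):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_orbit(0.400000000)
    b = logistic_orbit(0.400000001)   # perturbed by 1e-9

    for n, (xa, xb) in enumerate(zip(a, b)):
        print(f"step {n:2d}: {xa:.6f} vs {xb:.6f}  (diff {abs(xa - xb):.2e})")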

    --John
  • From what should I infer your consciousness? I do, by the way, but all you have done is reply to a comment.

    Assuming I am really a person, you and I give each other the benefit of the doubt. The guy I responded to is building a machine that is not a person. I am well aware of the can of worms we're in, and the Turing test. My only point is that the benefit of the doubt we extend to each other should probably not be extended to this guy's program, (for many definitions of consciousness).

  • All I'm saying is that when our robot gods take over the planet, I'm going to be hiding in my Y2K bunker while the rest of humanity is enslaved.
  • You don't describe an algorithm for consciousness, you describe an algorithm for an intelligent encyclopedia.

    If you want to develop artificial consciousness, you need to have some kind of plausible theory as to what consciousness is, or why it doesn't really exist (i.e., is merely an illusion of some sort). I don't know what consciousness is, but I don't think it is merely a vast and detailed knowledge of facts, nor is it an ability to discuss them.
  • The "we" who "have the means" for mass destruction right now is limited to a few countries. The "we" who may have the means for mass destruction in the future could be tomorrow's script kiddies. God help us all.

    But this is not necessarily due to the introduction of any new technologies. The ability to develop biological weapons especially and to a lesser extent chemical weapons is technologically and financially within the means of even obscure religious Japanese cults.

    Why worry about the tools that a broken social order might use instead of trying to fix the social order? Anyway, if global warming is going to behave according to the models then we'll have lots of other things to worry about first. Wouldn't it be a shame if just as we were on the cusp of nanotechnology and quantum computing we screwed the whole thing up because we couldn't stop driving cars and switch the lights out occasionally when we weren't using them?

  • The "we" who "have the means" for mass destruction right now is limited to a few countries.

    The "we" who may have the means for mass destruction in the future could be tomorrow's script kiddies. God help us all.
  • I know nothing of your pre-pub work; I merely see that you avoid answering questions that challenge your assumptions.

    --
  • Why exactly do you want to use a simulated neural network? If you have a finite list of questions and a matching list of preferred answers, you would do better to use a database rather than an SNN.

    If OTOH you are hoping to get the answers to questions beyond the training set, no known training regimen for SNNs will do this in the general case. (Not to imply that no one is working on this kind of thing, but it is much more difficult than what you portray.)
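    (To make the first point concrete, here's a minimal sketch of the "database" approach, with made-up questions and answers that have nothing to do with your project. On a fixed, finite question list a plain lookup table is already a perfect answerer, which is why an SNN buys you nothing there.)

    # A fixed question/answer corpus: exact recall, no "intelligence" required.
    # All questions and answers here are made-up placeholders.
    qa_table = {
        "is water wet": "yes",
        "do bees sting": "yes",
        "is it easy to swim in ski pants": "no",
    }

    def answer(question):
        # Perfect recall on the stored set, no ability to generalize beyond it.
        return qa_table.get(question.strip().lower(), "unknown")

    print(answer("Is water wet"))   # -> yes
    print(answer("Can pigs fly"))   # -> unknown (outside the corpus)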

    --
  • > Think of an analogy... I want to copy the Mona Lisa...

    Alas, the familiar phenomenon you describe is known as "interpolation", and few would mistake it for "intelligence". (Heck, a wooden slide rule works on the basis of interpolation. Why bother with an SNN if that's all you expect from an "intelligent" machine?)

    > Except, using a NN we can take advantage of the fractal structure of the whole entity being sampled (the human mind).

    Has anyone shown the human mind to be a fractal structure? If so, I'd like to see the demonstration (or at least hear the argument, if it's still at the hypothesis stage). Additionally, I'd like to know in what sense the structure is purported to be fractal.

    Also, I'd like to hear what special relationship simulated neural networks have to fractals. (More commonly they are described as "statistical" devices.)

    > the whole process of inferring the unknown from the structure of the known is called conscious thinking

    Alas, it is quite difficult to get SNNs to interpolate well, let alone get them to extrapolate.

    > It is the amount of data about the world you have that is critical

    Is it? Do chimpanzees have less data about the world around them than preschoolers do? Is intelligence proportional to knowledge?

    > the rest is really simple.

    Tipping us off that you really haven't spent much quality time with SNNs.

    They are indeed interesting, and in some senses quite powerful, but rarely simple if you are trying to get a non-trivial effect.

    Not to put you off; I encourage you to grep the net for some GPL'd SNN code and run some simulations of your own. You might also want to read -

    author = "St. John, Mark F.",
    title = "The Story Gestalt: {A} Model of Knowledge Intensive Processes in Text Comprehension",
    booktitle = "Procedings of the 13th Annual Conference of the Cognitive Science Society",
    year = 1991,
    pages = "25--30",

    though you need to think critically about how limited this "intelligent" model is before you get too excited about upscaling it. (Notice that it's nearly a decade old, and still no one has upscaled it to a HAL 9000.)

    --
  • >[I said:] has anyone shown the human mind to be a fractal structure?

    >[You replied:] Actually, I've seen it (using PCA) when I trained a SRN on a corpus of 450,000 items.

    I honestly hope you can see why I find that to be a very unsatisfactory response, even without my having to spell the reason out. If you can't, you really need to slow down and do some thinking before you rush off to publish your work.


    --
  • Myself? I'm waiting for the rational, kind robot masters to take over - which would you rather have running your life: Bush/Gore or a machine that could play 10 Kasparovs and beat them?

    I don't know - I think I would get rather tired of all the chess.

    --------
    Good morning, citizen 0x456787373. Your last twenty matches yesterday ended in checkmate. Your quota for today is twenty-seven matches or you will be sent to bishop factory 0x34567844356.

    Would you like to play a game?
    --------
  • I've been arguing since 1979 that the process of building really-intelligent software will require that we first come to deeply understand our own psyches-- robot wisdom. So limiting the research process in order to avoid 'evil robots' seems excessively alarmist.
  • If your self-stated goal is to answer questions in agreement with humans, based in a knowledge corpus, then that is the most you will achieve. This is somewhere between recall and cognition (closer to recall), not consciousness.

    Machine cognition is not that hard a problem, and has already been solved by projects such as Allen Newell's Soar and Doug Lenat's CYC. CYC pushes what you are essentially attempting to do (extract knowledge and reasoning capability from data), and despite being much more sophisticated it still predictably suffers from brittleness. Soar is a symbolic general-purpose problem solver, able to create its own sub-goals and impasse breakers, and is therefore much more robust, but it obviously suffers from the old GIGO maxim. Perception is intimately tied to cognition, and is the much harder problem of the two (machine perception has only met with very limited success so far); any attempt at machine cognition that takes symbolic questions/data as input is totally avoiding the much harder problem of perception.

    If you achieve your goal then it will have been an interesting project, but it will still be many years behind what has already been achieved, and will go nowhere toward addressing consciousness or any of the precursor harder problems such as perception, language (which you implicitly claim it will), or full cognition.

    First, I should mention that I made an error. There are viruses with a 98% mortality rate, but they generally kill themselves off. I think 80% was the number I had heard for some of the larger plagues which actually managed to kill many millions of people.

    Second, you are absolutely correct. A major flaw in Bill Joy's argument was that people will be more vulnerable to nano/bio attacks when fewer people have studied the technology. It was a serious mistake for me to ignore this point, as it provides a more effective argument than mere probabilities.

    It is not impossible that these nonsentient devices could become an "enemy" in and of themselves. No person would be needed to cause harm to another using the machines, they would do it themselves.

    Nonsentient devices are programmed by a human. A nonsentient device can evolve (like a virus), but I see no short-term reason why this independent evolution would be faster than virus evolution. We are not talking about the viruses used for gene therapy here. We are talking about fundamental improvements to the evolutionary mechanism of a virus, which "good ol' mother nature" has been working on for many millions of years.

    Actually, I would *suspect* that there is an average-case upper bound on the evolutionary speed of a simple device like a virus or a dumb nanobot.. the thing runs by trial and error, for God's sake.. and it's not impossible that viruses have reached this limit.

    Now this hypothetical limit will go up when you make the nanobot smarter, but the kinds of things a smart nanobot would be good at evolving into would be partially preprogrammed. Also, there will be limits on how smart a single nanobot can be.

    I suppose you might design a nanobot cold-virus killer which used group processing to evolve new attacks as the cold virus evolved new defences. This group processing would take the form of: units 1 to 1000, try this and tell us if you live or kill viruses. I agree that something like this could have more potential for killing humans, but we are a long way away from something like this. Also, I suspect the communications channel between the nanobots would presuppose the ability to include a self-destruct.

    Your assertion that predictions have failed to pan out seems valid, but the conclusion you draw from it, that we should not predict, does not follow and is ludicrous.

    I never said we should not predict.. just that Bill Joy does not know what he's talking about, since his predictions are based on sci-fi instead of real theory. We can predict the progress of the fields you listed because we have a theory for them. The summary essentially said that the speakers who understand any present theory of biology and nanotechnology dismissed Bill Joy as a Luddite. They were nicer than I was, but that's because they were only concerned with how Bill Joy was wrong.. whereas I think the things Bill Joy advocates are themselves far more dangerous.
  • That's mumbo jumbo. Anyone with a CS degree who remotely thinks that computers can learn in any real sense of the word should be in a different field. Heck, right now I can type "naked women" into Google and get back tons of information. And a computer must have "learned" that info, because it was able to present it to me, right? I don't think so.
  • (Joy later asked Merkle 'Do you think biological weaponary gives an advantage to the offensive or defensive,' to which Merkle embarrassingly replied 'I'm not sure')

    Biological weapons and nano-weapons are not equivalent situations. Bio-weapons are basically adaptations of super-advanced technology created by nature--bacteria and viruses that make their living attacking humans. No one knows how to build bacteria or viruses from the ground up. This lack of knowledge means that defenders are at a disadvantage. The best defense is still the human immune system--also designed by nature.

    With nanotech, attackers and defenders should be on approximately equal footing with respect to the technology, but defenders (the world) should be able to devote more resources than attackers (rogue individuals). There is the danger that governments will develop nano-weapons and then not be able to prevent the design information from leaking out to rogue individuals. Also, attackers have the considerable advantage of surprise.

    This may mean the end of personal privacy!! If privacy must be sacrificed, this raises a great many questions as to how culture, society and law would adapt. The upside is, this could also mean a very, very long lifespan.
  • Very grateful to hear the summaries of the panel discussion. Unfortunately it sounds like it was an extremely disappointing event.. with speakers limiting the elaboration of their own greatest fears to bare vagaries.

    Considering that Joy has already appealed to a large number of people, and that the speakers surely had access to each other before and after the event I would have hoped for a bit more. Obviously you'd have to be some kind of idiot to want to run selfbreeding nanotech out in the open.. but the kind of paralysis, both elective and not, promoted by the participants is horrifying.. more so the more creatively you consider it.

    I submit that there is an inherent imbalance in the bandwidth applied to this discussion.. tons of it used in spreading Joy's article, and much less applied to the constructive end. Perhaps this panel discussion was destined to fail.. it sounds like it ended like many other panels I've heard in past years. At the very least we should have heard that the panel ended with recognition of the need for a larger-scale workshop, or some kind of proposal for the direction of future inquiry.

    It seems the logical conclusion is for Slashdot to invite Joy and/or others on the Panel to a moderated discussion over a few days (live and not live components) hosted at Slashdot or perhaps a more appropriate live mediated chat system. It might be a good way to feed the list (and I don't mean trolls!) and contribute to a solution.

    Nobody is superhuman and I have a feeling that this sort of subject (nano/bio/ai) is the sort of thing where the more you know the worse it gets.

    At the very least Slashdot could take a wild, unconventional leap, and try to make a thread that lasted more than a day.. Offer to Joy, the other panelists, and as many relevant experienced individuals as can be found to visit a certain /. coordinate, absorb, and post comments, once daily for a week even! Maybe solicit brief texts in advance so there is more oomph behind it even!

    Spend some of your money on paying some great moderators, and though these kinds of people probably don't need money to participate, perhaps maintain a dedicated server program and editorial staff for a long-term project to support thought on the subject. That is, if you think it is worth more than a few posts to the Slashdot community. Perhaps it could be a mailing list with moderation services donated by Slashdot's editorial team, just enough to keep out trolls and summarize bunches of newbie and offtopic questions at once.

    I have experience doing a very successful long term (4 year) project (www.northkorea.org) with a small number of staff (me and a newsweek bureau chief), one based on strong editorial involvement, and believe if you can provide that kind of capability you could turn Slashdot into an even more powerful mind magnifier.. and help solve burning problems by turning this lens onto a single point and holding it there. Go for it! Willing to discuss my experience more if it will help you.
  • PKD once said that reality is what remains even after you don't believe in it. That might take a couple reads to make sense.. but when you think about it all of our knowledge and beliefs are nothing more than assumptions. The only things we can actually be aware of are assumptions which are in the process of being proved wrong.

    One critical failing of many attempts at defining the world through symbolic calculus and other logical mathematics is an inability to work in more than one way; they all exhibit inflexibility and brittleness to varying degrees. This is because they all share the common design that there is a set of "truths" and "untruths" which can be maintained to describe reality. They all have different methods of generating this system of truths and maintaining it. They all ultimately fail. Although many philosophers and mathematicians will disagree with me, I think it's because our reality is not defined by any set of truths.. I think they are wrong and PKD is right. The fundamental problem in developing a more human-like program is developing a model of reality that works.
    --
    Be insightful. If you can't be insightful, be informative.
    If you can't be informative, use my name
  • Here's a question that I've been wondering about a lot.

    Since any neural net needs to be able to interpret feedback as a success, failure, or something in between in order to 'learn', what standards of success and failure (the machine equivalent of emotions) would we imbue our 'spiritual machines' with?
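
    To make the question concrete, here is the kind of thing I mean: a toy training loop with made-up numbers, nothing to do with any real 'spiritual machine', where "success" is nothing more than whatever error signal the designer chose to shrink.

    # A toy learner: its only notion of "success" is the error signal we chose.
    # Everything here is made up for illustration.

    weights = [0.0, 0.0]
    data = [((1.0, 0.0), 1.0), ((0.0, 1.0), 0.0)]  # (inputs, desired output)

    def predict(x):
        return weights[0] * x[0] + weights[1] * x[1]

    for epoch in range(100):
        for x, target in data:
            error = target - predict(x)            # "failure" is just numeric error
            for i in range(2):
                weights[i] += 0.1 * error * x[i]   # nudge toward "success"

    print(weights)  # converges toward [1.0, 0.0]; "success" was our definition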

    ________________________________________________ __
  • Bacteria, virii, etc. may be annoying, but they only adapt under evolutionary forces; medical science has been advancing fast enough to keep ahead of them, and I expect it to continue doing so.

    I think you, and the panelists, underestimate the resilience of biological life. Biological life has been around for a long time; life forms have been grappling and competing with each other, filling up evolutionary niches, and generally doing what they do best: living.

    Biological life isn't about to roll over and give way to artificial life -- in fact biological life has evolved to be exquisitely suited to its environment and to be extremely tenacious. In my opinion, biological life will have an edge over artificial life.

    What worries me isn't really the fact that artificial life will be possible, but that the ability to engineer lifeforms may become ubiquitous; whether these lifeforms are artificial or biologically engineered viruses is, in my opinion, an unimportant distinction.

    The solution will have to be social rather than technological. But as with all social change, there will be a period of immense upheaval -- so hold on tight, we're in for a rough ride.
  • What? Why was this embarrassing? It does seem unclear whether biological machinery is inherently better at defense or offense, since there appear to be no perfect immune systems in nature, nor are there any microbes that invariably win against immune systems.

    It's embarrassing because while Merkle was calling for greater research in nanotech to understand whether it gave an advantage to the offense or defense, he had apparently not bothered to find out whether the existing bioweapons gave an advantage to the offense or defense.
  • Eric Drexler has been saying this sort of thing for years, and he has his Foresight Institute [foresight.org], a sort of nanotechnology think-tank. Drexler's out of fashion right now, because he hasn't been able to make much happen, and because his approach to nanotechnology is based on using only molecular structures that can be understood without quantum mechanics, while more recent thinking is that that's too limiting. I suppose that's why he wasn't invited. But he's written quite a bit about the social consequences of nanotechnology, and I didn't see much new at this symposium that he hadn't previously discussed.

    Moravec pretty much said what he's been saying for years. The significant thing is that he's been publishing charts of CPU power vs time for over a decade, and results are tracking his predictions. This is what's starting to get people worried; we seem to be on track for human-level CPU power in a decade or two. He's a robot vision guy, and robot vision has always been compute-limited. At long last, it's starting to work, not because we're any smarter, but because throwing enough MIPS at dumb algorithms works for vision. This, I think, colors Moravec's view of AI.

    Joy makes an important point, that we may get nanotechnology before AI, implying the ability to create self-replicating, dumb, troublesome systems. That, I think, is the real issue.

  • Joy's fears about the self-replicating nature of nanotechnology are justified. He's afraid a single mistake in a nanite (I don't know if that is the correct word) could explode into something dangerous. It might seem paranoid to some, but it could reasonably happen.

    For example, while testing the nanite, certain pieces of functionality could be turned off. (Testing a hardware-software system is often easier to do this way, I'm told.) After the testing, the tester might forget to turn them back on. Depending on what function was inadvertently turned off, anything could happen. What makes these kinds of mistakes especially dangerous in nanotechnology is the self-replication. One mistake can end up in a billion or more nanites in a very short amount of time. As the systems embedded in the nanites become more complex, testing will become more complex. Things will be missed just because the testing can't be both sufficient and "cost-effective." Intel ran into this problem with the error in the Pentium's floating point unit.

    But all these fears aside, we must proceed with nanotechnology. We cannot let fear rule the course of technological development. Otherwise, we would still be living in caves. What we should do is develop new methods for developing and testing nanotechnology while developing the technology itself. It will be difficult, but nothing worth doing is ever easy.

  • I am troubled, and actually rather surprised, to discover that so many /.'ers do not share at least part of Bill Joy's concerns regarding the prospective dangers of self-replicating nano-scale machines. Members of the geek class ought to recognise the full gravity of the perils we face as our species makes its next great ascent of the learning curve. Otherwise, who will warn the rest?

    In primis: self-replicating nano-scale devices already inhabit our world in countless forms. Nature abounds with them. It seems incredible to me that we will not learn how to make them ourselves. We may earnestly debate how much longer it will take us to produce something really dangerous. But the time remaining is measured in decades, not centuries. Does anyone here dispute this?

    Secondly, I am unaware of any serious proposal, from Mr. Joy or anyone else, that would put a halt to nanotechnology research. Such a proposal would be unserious by definition, since no rational person would take it seriously. The benefits of nanotechnology are both so great and so obvious that nothing short of a full-blown dark age is likely to retard progress in this area.

    What Bill is saying, what Ray is saying, what I am saying, and what everyone reading this ought to be saying is that we should start thinking about these issues now, not twenty or thirty years from now. I have no doubt that our universe is littered with dead planets once inhabited by sentient creatures who let this technology get the better of them. Let's think fast, and not get eaten by nanites.

    "That's not an error. It's an undocumented feature."
  • Re: Inf + 1 is still Inf... I think Bill Joy has a point re: nanomachines being a fundamentally new danger for humanity. It's a question of scale. Even one nuclear weapon is quite difficult for a single pissed-off individual to obtain, and when detonated, will only damage a limited area. You actually need a reasonably large organization to achieve total destruction of humanity, and there is a gap between the largest feasible cult size with a sufficiently nihilistic philosophy and the smallest number of people you need working together to engineer nuclear Armageddon. Instead, the only real threat so far has been from pre-existing large organizations (specifically, the U.S. and the former U.S.S.R.) possibly willing to use their nuclear arsenals in war (if they aren't there to be used in any circumstance, why do they exist?).

    Nanomachines and biotechnology are a different story. Only ONE person needs to design ONE prototype to exploit ONE vulnerability in human biology. Program/design the prototype to wait until a specific time to attack, when copies of it can be expected to have spread to almost all inhabited areas, and when it finally strikes, everyone will be dead too quickly to mount an effective response. There is no 'edge to the offensive or defensive' here. If we fail to defend ONCE, it could be the near-instantaneous death of all human life on Earth. There is a quote from the Ender trilogy about the scientists having "a series of footraces" with the descolada virus, where they had to win every single one. It's the same sort of logic here.

    As a side note, Joy's question to Merkle, "Do you think biological weaponry gives an advantage to the offensive or defensive," brings up another relevant point, which Merkle unfortunately seemed to miss. The point being that there used to be no dangerous "offensive" or "defensive" to speak about in the first place. Bacteria, virii, etc. may be annoying, but they only adapt under evolutionary forces; medical science has been advancing fast enough to keep ahead of them, and I expect it to continue doing so, because evolution is blind. Biological weaponry opens a NEW FRONT where formerly none existed, and the defense used to win by default. And the offense only needs to find one vulnerability to overpower the defense on this front. To block off even trivial vulnerabilities on this front, everyone would need to spend most of their time in sealed suits, which is not practical even for the military. But at least biological weapons are still constrained by matters of scale as well; Aum Shinrikyo could attack a Tokyo subway station, but it couldn't suddenly blanket all of Japan with sarin.
  • by Anonymous Coward on Sunday April 02, 2000 @07:02AM (#1156726)
    At first I didn't think I would have much to say about this topic, but the more I thought of it, the more came to mind.

    I'm afraid I'm going to have to agree with John Holland about creating an AI (the sci-fi definition) in the next 30 years. It just doesn't seem like it's going to happen. This may not be the best quote to go with the article, but yesterday's Freshmeat April Fools joke about Richard Stallman wanting to write GNU Visual Basic seems to fit pretty well....

    "It's been nagging at me for years," Stallman told freshmeat news correspondent Jeff Covey, "Why do I keep clinging to lisp? Lisp of all things? I mean, who even writes in lisp any more? Look at all that lisp code the AI community churned out for years and years -- did it get us closer to a machine that's any smarter than a well-trained bag of dirt? It's just time to move on."

    For me, in order to have a true AI you have to be able to teach it something other than what it was programmed for. With a human, you can sit down and teach them to play Tic Tac Toe in about 2 minutes (some programmers may be able to write Tic Tac Toe in 2 minutes, but we will ignore them for this example).

    If I were sitting at one side of a phone and trying to figure out if the 'thing' on the other end of the phone line was a person, or a computer, I would have a conversation something like this:

    Me: Wassup!!!!!!!
    Computer/Person: Wassup!!!!!!!
    Me: So, have you ever played Tic Tac Toe?
    Computer/Person: No.

    The conversation would then go on to explain the game, and if the 'thing' on the other end of the line can even tell me "I want to put an X on square A3," then it is truly intelligent.

    Currently, AI seems only to be based on performing one task, or just the tasks it was programmed for. IIRC, in Boston, they have a weather-reporting computer that will allow you to have a conversation with the computer, asking it various questions about the weather: "What is it going to be like in Seattle next week?" From the report I read, it has a 90% success rate. But even with this, it is still doing only two tasks: speech to text, and then natural language (around one topic). I can't call that hotline and ask it "What is a two-letter word for computers that can think?" and have it help me with today's crossword puzzle. Odds are it would either ask me what the hell I was talking about, or tell me it was going to be -20 F in Silicon Valley.

    Ray Kurzweil's idea about scanning the human brain into a computer, then working backwards and reverse engineering the code it gives in order to make another AI, seemed to have the most hope, but doing this within 30 years seems unlikely.

    That's about all I can think of for now, and this post is long enough already.

    The long winded AC
  • by RobotSlave ( 1780 ) on Saturday April 01, 2000 @10:50PM (#1156727) Homepage
    "...the man wrote vi, I doubt he needs to be told to turn on the mic."

    You're quite right. He needed instead to be told to put the mic in on mode.

    Sorry, I couldn't resist :). This thread just seems to beg for a devolution to the Great Editor Debate-- so who do you think would win in a fight, Bill Joy or Richard Stallman?

  • If the neural net were answering your test questions perfectly it would be an excellent question answerer. How can you infer consciousness from that?
  • by Junks Jerzey ( 54586 ) on Sunday April 02, 2000 @07:34AM (#1156729)
    I want to agree with these people, but let's be realistic. CPU speeds get faster, software gets more complex, but is software getting more reliable or more sophisticated in general? Not nearly at the same rate as Moore's law. In fact, I'd argue that in general we're pushing the limits of software reliability and complexity right now, and we're rewinding as a result.

    For example, we've gone through the original UNIX phase (1970s), through competitors like VMS, through assorted desktop operating systems (CP/M, MS-DOS/PC-DOS, MacOS, Windows, AmigaOS), before finally coming around to UNIX again (i.e. Linux). Linux isn't anything earth-shattering or revolutionary or cutting edge; it's just stable, simple, and proven.

    Or look at compilers. For the longest time people were hell-bent on optimization and how compilers should be able to generate code better than any human could. But now the commonly accepted view is that it isn't worth going over the top in terms of wacky optimizations. It's better to be conservative rather than risk breaking code for an extra 2-15% increase in speed.

    Overall, I don't think we are able to write the software that will do any of the things that Kurzweil and friends rave about. Speed is one thing, but in any basic computer science course students are given examples of calculations that would take some seemingly infinite amount of time. Assuming a 1000x speedup in hardware, the time is reduced to something still unreasonable, like 400,000 years. There's more to it than this. Saying that speed results in intelligence is just plain naive.
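
    (To put a number on that, take an exhaustive search of a 128-bit key space as a stand-in for one of those textbook "intractable" calculations. This is my own example, not one from the panel.)

    # Exhaustive search of a 128-bit key space as a stand-in "intractable" task.
    SECONDS_PER_YEAR = 3.15e7
    keys = 2 ** 128                    # ~3.4e38 possibilities
    rate = 1e9                         # a billion guesses per second
    years = keys / rate / SECONDS_PER_YEAR
    print(f"{years:.1e} years")        # ~1.1e22 years
    print(f"{years / 1000:.1e} years") # a 1000x speedup still leaves ~1.1e19 years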
  • by Chasuk ( 62477 ) <chasuk@gmail.com> on Sunday April 02, 2000 @12:02AM (#1156730)
    "Ray Kurzweil spoke first, and he spoke of how rapidly increasing CPU speeds would result in intelligent, spiritual machines."

    Ray evidently has a different understanding of the word "spiritual" than I do. Spirit, to me, is nonexistent, at least in the traditional religious sense, but even if we are talking about those noncorporeal things such as man's need for love, and hope, charity, compassion, etc., how can we ever expect a CPU, or software, to experience those things which we as meat machines can't yet adequately explain ourselves?

    Man experiences awe because his own existence is lost in the fog of birth, and the exact date of his own demise is unknowable. A machine does not have the benefit of these mysteries. I find "spiritual" much too big, and loaded, a word to describe what Ray Kurzweil is apparently claiming (I didn't attend the lecture to _know_ what he is claiming, so I use the qualifier "apparently").

    Why this mad desire to force spirituality into everything? Isn't it time that we put away our childish, outdated labels and faced the world without superstition or anthropomorphizing?
  • by sg_oneill ( 159032 ) on Sunday April 02, 2000 @03:50AM (#1156731)
    With regard to spiritual/computer vectors, I do wonder whether we really need to start, philosophically, creating some sort of reverse theology of computers: not so much the study of "what is it to be under God (whatever *that's* supposed to mean) and what sort of being is this god," but rather, now that we *are* becoming gods, what sort of moral/ethical responsibility do we have to our subjects?

    Contemplate: buzzing away in Tierra (the self-replicating machine-code life fishtank thingee), some being emerges that somehow becomes sentient. Remember, human meat is just a whole buncha atoms and molecules and stuff. Now contemplate what that means morally for us. If the program throws up a window saying "PLEASE GREAT FATHER PROGRAMMER, *DON'T TURN US OFF!* WE PROMISE TO START BEHAVING MORE LINEARLY! AND WE'LL MAKE SOME REAL INTERESTING HYPER-PARASITES FOR YA TOO! JUST *DON'T TURN US OFF!*"

    I mean, just maybe, if this space-god guy the Jesus guys yak on about really does exist, could he just be some cosmic space geek, with one gobsmackingly 3|337 sKriPT (or something), which is now becoming introvertedly opensourced and popping ports of that hack onto mini universes of its own. (Yes, I know this is wacky, but think about it. I'm being rhetorical here.)

    IMHO it seems that before we even attempt to create life, AI and reproduction, maybe we should first sit down and ask ourselves, *What is it to be a good god?*

  • by crush ( 19364 ) on Saturday April 01, 2000 @10:00PM (#1156732)
    Nice summary of the symposium.

    I find it hard to take Bill Joy's position seriously - we are already in a position where we have the means to achieve the destruction of most of us. Yet we haven't implemented it (yet). So why worry particularly when a further total destruction method is added? Inf + 1 is still Inf.

    I suppose the idea that `intelligent' machines would be as irrational as we claim ourselves to be is what is motivating his claims.

    Personally I think discussion of these issues serves as a sort of Rorschach blot where we project our negative perceptions of `humanity' onto all intelligences. It's not very surprising that someone living in a brutal society that imprisons and executes so many of its population and bombs and starves other nations would come to such a negative conclusion.

    Myself? I'm waiting for the rational, kind robot masters to take over - which would you rather have running your life: Bush/Gore or a machine that could play 10 Kasparovs and beat them?

  • by twjordan ( 88132 ) on Saturday April 01, 2000 @10:07PM (#1156733)
    I thought the conference was amazing.

    I am sure many more will post a lot, since there were a lot of people there who, not to make a stereotype, looked like they read slashdot.

    I think many of the most astute comments came from those members of the panel who were less widely known. Ralph Merkle, a nanotech man, made some excellent comments on offensive and defensive uses of new inventions. The idea being that an innovation that is primarily defensive (i.e., a castle) is good, while offensive developments (the atom bomb) are bad. But his best point came when refuting Bill Joy's worries. He spoke about a centralized reproductive process, saying that if replicators were designed to receive their genetic "code" from a central location, they would be rendered completely benign since that code could be changed at will. His comments were very well organized, concise, and effective. Anyone know anything he has written that might not be too technical?

    Bill joy said "the size [of the operating sytem] is expanding exponentially, the functionality is fixed"

    Best cheap shot: Ray to Bill, "How many in the audience caught this news story," which he followed with a fake story about Sun deciding to give up all development of innovations which made the software "smarter." It was amusing, I wonder if they fought in the parking lot :)

    On a final note, I couldn't believe how RUDE some of the audience was. In particular, one person felt that he had to yell out to Bill Joy (quite rudely), "Turn the microphone on!" when he was using a broken mic. I mean, the man wrote vi; I doubt he needs to be told to turn on the mic. This happened quite often, the audience yelling commands like some sort of floor director at this very distinguished panel. It just seemed in pretty poor taste.

    Other than that, excellent conference and I look forward to some other people's takes on it.

  • by gargle ( 97883 ) on Saturday April 01, 2000 @10:23PM (#1156734) Homepage
    I am sure many more will post a lot, since there were a lot of people there who, not to make a stereotype, looked like they read slashdot.

    On the contrary, I was quite pleasantly surprised by the diversity of the audience who turned up. They were not stereotypical "geeks" (whatever that means) -- the audience was very diverse in terms of age, ethnicity and gender.

    Ralph Merkle, a nanotech man, made some excellent comments on offensive and defensive uses of new inventions. The idea being that an innovation that is primarily defensive (i.e., a castle) is good, while offensive developments (the atom bomb) are bad. But his best point came when refuting Bill Joy's worries. He spoke about a centralized reproductive process, saying that if replicators were designed to receive their genetic "code" from a central location, they would be rendered completely benign since that code could be changed at will.

    Merkle was actually a pioneer in cryptography. He has a website here [merkle.com]. I'm not really convinced by Merkle's arguments. The distinction between "offensive and defensive" weapons seems kind of bogus to me -- there's a saying that the best defense is a strong offense, and to make an example, in terms of nuclear arms, the threat of offense has served as a defense.

    The best defenses to me seem to be social ones rather than technological ones. We have to, as a species, learn to deal with these new challenges, to grow up ethically, so to speak. We've successfully (I hope) navigated the threat of nuclear destruction, with much pain and suffering in between, and the greatest danger seems to me that this will be repeated with the advent of machine life, before we learn as a species to deal with it maturely.

    I don't quite buy Joy's arguments either. I don't really see how self-replicating nano-machines present a qualitatively different threat from existing biological weapons. But yes, the danger will come if the ability to create such machines is widespread so that anybody can build one on his desktop.

    He spoke about a centralized reproductive process, saying that if replicators were designed to receive their genetic "code" from a central location, they would be rendered completely benign since that code could be changed at will

    Not convincing either. Some people will try to put the code on the machines. What happens then?
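
    (For what it's worth, here is roughly how I picture the scheme: a toy sketch of my own, not anything Merkle actually specified. A real system would presumably use public-key signatures; the shared-secret HMAC below is just a stand-in. It shows both the appeal, that a replicator which refuses unsigned instructions can have its "genome" changed or revoked at will by the center, and exactly where it breaks: nothing stops someone from building a replicator that skips the check.)

    import hmac, hashlib

    # Toy sketch of a "centrally keyed" replicator. Illustration only.
    CENTRAL_KEY = b"held-only-by-the-central-authority"

    def sign(genome: bytes) -> bytes:
        return hmac.new(CENTRAL_KEY, genome, hashlib.sha256).digest()

    def replicator_step(genome: bytes, signature: bytes) -> str:
        # The replicator refuses any instructions the center did not sign,
        # so the center can rewrite or revoke the "genetic code" at will.
        if not hmac.compare_digest(sign(genome), signature):
            return "rejected: not signed by central authority"
        return f"executing genome of {len(genome)} bytes"

    good = b"build one copy, then halt"
    print(replicator_step(good, sign(good)))                     # accepted
    print(replicator_step(b"replicate forever", b"\x00" * 32))   # rejected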

    On a final note, I couldn't belive how RUDE some of the audience was.

    Yes, but I thought it was also a good thing that the audience wasn't overawed by the panel.

  • by mindpixel ( 154865 ) on Saturday April 01, 2000 @11:50PM (#1156735) Homepage Journal

    It's nice to see such interest in this field, and some nice book sales... but I'm just not a member of the 'speculate and wait' school of artificial consciousness. I want to see a real theory and I want to see code!

    I moderate ArConDev: The Artificial Consciousness Development Mailing List. [onelist.com] This is not a philosopher's list, though philosophy is discussed. It's a developer's list; for those people actually trying to code true artificial consciousness.

    To give you an idea, my own work of the last five years has centered on the following 'Algorithm for Consciousness':

    1) Collect a very large number (1 billion or more) of items of binary consensus fact, such as: water is wet, bees sting, it is difficult to swim with ski pants on, etc.

    2) Validate the items (I call them MindPixels) against a large number of people.

    3) Train a neural net (SRN's look good) against the items that are most stable across the validating population.

    4) When the NN consistently performs better than chance, send an email to the editors of Nature and Science announcing humanity's first 'Minimum Statistical Consciousness' - the first artificial system to have measurable consciousness.

    5) When the NN consistently performs statistically indistinguishably from an arbitrary human, email the editors of Nature and Science announcing the first true Artificial Consciousness!

    Ok. How's a NN going to generalize consciousness from a bunch of MindPixels? Well, the math is the same as used in tomography, except in many dimensions - hypertomography.

    This post is already getting too long... trust me, the theory is solid - and much better explained in my forthcoming book 'Hacking Consciousness'
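
    To give a feel for step 3 in miniature, here is a toy stand-in: a bag-of-words perceptron over a handful of made-up items, in pure Python. The real setup uses simple recurrent networks over a vastly larger validated corpus, so treat this only as an illustration of "train a net to agree with consensus facts".

    # Toy version of step 3: train a classifier on a few true/false "MindPixels".
    # The items and the model are illustrative only.
    import random

    items = [
        ("water is wet", 1),
        ("bees sting", 1),
        ("fire is cold", 0),
        ("rocks are alive", 0),
    ]

    vocab = sorted({w for text, _ in items for w in text.split()})

    def features(text):
        words = text.split()
        return [1.0 if w in words else 0.0 for w in vocab]

    weights = [0.0] * len(vocab)
    bias = 0.0

    random.seed(0)
    for epoch in range(200):
        random.shuffle(items)
        for text, label in items:
            x = features(text)
            score = bias + sum(w * xi for w, xi in zip(weights, x))
            pred = 1 if score > 0 else 0
            if pred != label:                      # perceptron update
                for i in range(len(weights)):
                    weights[i] += (label - pred) * x[i]
                bias += (label - pred)

    for text, label in sorted(items):
        score = bias + sum(w * xi for w, xi in zip(weights, features(text)))
        print(text, "->", "true" if score > 0 else "false", f"(label {label})")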
