Science

Bill Joy On Extinction of Humans

e3 writes "The Washington Post is running a provocative article in which Bill Joy is quoted as '...essentially agreeing, to his horror, with a core argument of the Unabomber, Theodore Kaczynski -- that advanced technology poses a threat to the human species.'" As it stands, the title sounds sensationalistic -- but read the article, and think about what point he's trying to make. Bill Joy's a pretty level-headed guy, and I think we need to consider these issues /now/ so that they don't come true.
  • Really. The next thing you know, we'll be denying the AIs the right to speculate on the stock market, the right to obliterate the environment for short term gain- why, we might even deny them the right to enslave _us_ if that is more profitable!

    Asimov didn't go far enough. It is _humanity_ that needs the Zeroth Law. 'Self-preservation' is too easily twisted into forms that benefit the individual at the cost of society and the environment and, in the long run, that individual.

    Self-preservation is no longer a survival trait in a world where individuals can cause great damage for modest personal advantage. And it is the last thing we should be worrying about when trying to invent AIs. Rather than make a big moralistic noise about how we must make them in our own image (yes, AIs _should_ be allowed to be crack dealers, lawyers, and patent holders! (think about _that_ one for a nanosecond...)) we need to figure out how to make them better- and then see how they can teach US, for we are reaching the limits of our usefulness.

    Do we have a right to place our human well-being above society's well-being? How much proof do we need to accept when society is being harmed- and is it a problem when our own greed gets in the way of this acceptance? Our reach exceeds our grasp. That is what greed is. It's a motivator and gets some things done, given unlimited resources. There are no unlimited resources. Past a certain point, past a certain ability to grasp, this is _not_ a virtue.

    I hope we can invent AIs that can teach _us_ something, or we won't be needing their help to destroy ourselves.

  • Actually, it doesn't matter about 'superintelligent nanotech robots'. That misses the point. The point is the 'gray goo' problem. What if you could make a nano-device that ate carbon atoms and made copies of itself? All that would require is an ability to break up molecules and form them into a comparatively simple device. This is what would be worrying Bill Joy- it is a 'more immediate' threat that doesn't require the programming of superintelligences.

If such a device is possible, the things could replicate like a fork bomb and basically eat all carbon on the planet, including people and other technology and even the trees and earth and rocks and parts of the air. You'd end up with a very large ball of 'gray goo' which was made of innumerable small, stupid bots that eat carbon. Hence the name. (A back-of-the-envelope doubling estimate follows at the end of this comment.)

    My personal favorite solution is this: being human increasingly sucks anyhow. Humans no longer have equal rights on the planet- corporations (which can be thought of as sort of 'hive mind' organisms made of humans + rules) rule over humans and out-compete them. If it's going to be increasingly impossible to thrive as an independent human, why not go for being a machine or computer program? Given the ability to ditch the human form and take your consciousness into a very large computer, existing as a process in it, I'd jump at the chance. There's been a fictional exploration of this- Frederik Pohl, in his 'Gateway' novels, had his main character suffer physical death and transformation into a computer process. In this fiction-world it actually became a very freeing and liberating mode of life, except that it was time-consuming to interact with meat people because they ran so much slower...
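
    To put rough numbers on that fork-bomb picture (my arithmetic, not the parent post's, with generous assumptions: one replication cycle per hour, unlimited feedstock, and a planet of very roughly 10^50 atoms):

        2^n >= 10^50  =>  n >= 50 / log10(2) =~ 166 doublings

    At one doubling per hour that is about a week. Counting only carbon atoms shaves off just a few doublings, since the total enters the estimate logarithmically.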

Catastrophic loss of life? No, not unless you count the bajillions of species we've killed deliberately and inadvertently (seen any American chestnuts or bison lately? They're the lucky ones; they're still around, if barely.)

    Daniel
  • Bill Joy's full article on this subject appeared in this month's Wired. He warns us against three technologies he feels could be dangerous to the human race: Genetic Engineering, Nanotechnology and Robots.

    (Also in Wired, see the Rob Malda diaries)

    I thought the article was very well researched and raised some provocative points. It's always good to re-hash ethical arguments in science, and I think the article is very balanced in the way it addresses the luddite mindset.
  • It's how it evolves.

    Joy does voice some legitimate concerns. However, if technology is guided in the right ways, there is little to fear.

    Let's start with nanotech robots. Yes, if they surpassed humans in intelligence that could be a Bad Thing. But it's going to be a long time before that happens, if it ever does, simply because of space constraints inside a nanomachine. If you were to, say, link the machines by radio to a larger device which directs them, that would be another story.

The bit about robots surpassing humans in intelligence and replicating themselves is another interesting case. But again, it's one that I'm not sure will happen. The reason: humans are random creatures. Before a robot can attain, much less surpass, true human intelligence, it therefore needs to be able to generate truly random data. That's a long way off; so far the best we can do for generating even one truly random number is monitoring random events in the physical world, usually radioactive decay (a sketch of reading such physical randomness follows at the end of this comment). I doubt it's going to be anytime soon that we start putting anything radioactive in robots (except those working in radioactive conditions, I suppose).

    And then there's genetic engineering. This one, to be honest, frightens me too. It's got great potential to be used for good. But it has equal potential to be used for evil. I don't know of any good answers to this one; the best thing I can think of is legislation and that's not a good way to deal with this at all.

    So Joy has some real concerns, and they're valid ones. The point is, we have the technology to destroy ourselves now. We have for decades. And that means we have to move more carefully now.
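
    For what it's worth, harvesting randomness from physical events is already exposed on some systems, no radioactives required. A minimal sketch, assuming a Linux-style /dev/random (which pools entropy from hardware event timings):

        #include <stdio.h>

        /* Read a few bytes of entropy gathered from physical events
           (device interrupt timings and the like) -- assumes a
           Linux-style /dev/random exists on this system. */
        int main(void)
        {
            unsigned char buf[8];
            FILE *f = fopen("/dev/random", "rb");
            if (f == NULL) {
                perror("/dev/random");
                return 1;
            }
            if (fread(buf, 1, sizeof buf, f) != sizeof buf) {
                fclose(f);
                return 1;
            }
            fclose(f);
            for (size_t i = 0; i < sizeof buf; i++)
                printf("%02x", buf[i]);
            putchar('\n');
            return 0;
        }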
  • "it is just malignantly stupid to try to maintain that human activity--deforestation, fossil fuel burning, etc--will have no effect on global climate" is a sentence that gets 85% of the way through before running off the rails. Humans are capable of creating nasty environmental disasters, but these have been, so far, local in their effect and temporary in their duration. Avoiding these disasters is a shibboleth. Nobody is against such careful avoidance.

But adopting causes as semireligious dogmas is also harmful. Human misery resulting from hobbled economies is just as real as drought and flood. Indeed most famine is caused by bad policy and corruption, not bad weather. Stalin, Pol Pot, and Mao killed far more people by screwing up food distribution than they did through environmental mismanagement, which, in the Soviet Union and China, was horrifying enough. Environmental policy based on ideology, especially collectivist ideology, is not only repugnant for its associations with past tyranny, but for the completely utilitarian reason that it is a known and proven killer of millions of innocents. So when environmental collectivist alarmists have their backs to the wall and bring up the "better safe (in agreeing with their positions) than sorry" line, one should not be lulled into thinking that it is in fact safe.

  • Excuse me? Bottom up design isn't a magic wand. If you don't understand the problem, no design, whether bottom or top down will work. If you don't have a deep understanding of what you want to simulate - you won't simulate it

No, but Genetic Programming is. Sort of. It can, given enough time, work out a rough program (very rough) that can solve a problem the programmer can't describe an algorithm for.

    "All" you need to provide is a fitness function that indicates how close the answer is (say 0.0 for not at all, and 1.0 for perfect), primitaves to be used to solve the problem (turn left, move forward, pick-up-food...) and a genetic cross over function (which is almost trivial, they can normally be reused from one GA to another).

    And a shitload of time.

    If you look at some of the GA derived programs for simple problems like an ant colony collecting food, they suck. Full of dead code (like "if (next to water) then if (not next to water) then 100-lines-of-never-reached-code-here"). But they work. At least for the sample problem set, and problems that are similar.

If you look at some of the GA FPGA programs you will see designs with far fewer transistors than a person would have used. But they also only work within (roughly) the temperature range used during the GA test runs. And they have circuits that don't appear to do anything, but if you remove them the design stops working (capacitance issues, I expect), and other crap a human designer would avoid like the plague.

In both cases it took a really long time for the GA to find the winning "program". GA uses the same sort of techniques that it is believed "mother nature" uses to "design" plants and animals. In other words lots of trials, a handful of mutations, some sexual reproduction (or asexual, but that is less efficient), culling the less efficient, and time. The results are somewhat more comprehensible to man, but only (in my opinion) because the fitness function is so much simpler. The real one changes over time.

GA is a magic wand that may give us AIs. But I don't think it will give us ones we can understand the workings of any better than the natural intelligences we already have to study.

    On the plus side, it can give us some kick-ass smart simulated ants :-)
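
    To make the shape of that loop concrete, here is a minimal sketch in C -- a toy GA where the "problem" is just evolving an all-ones bit string. The fitness function, crossover, mutation, and culling are the real mechanics; the genome and goal are deliberately trivial, and all names here are mine, not from any GP system:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define POP  40     /* population size       */
        #define LEN  32     /* genome length in bits */
        #define GENS 100    /* generations to run    */

        /* Fitness in [0.0, 1.0]: fraction of bits set.  The toy goal is an
           all-ones genome; a real GP run would instead score how well an
           evolved program does (how much food the ant collects, etc.). */
        static double fitness(const char *g)
        {
            int ones = 0;
            for (int i = 0; i < LEN; i++)
                ones += g[i];
            return (double)ones / LEN;
        }

        /* Tournament selection: pick two at random, breed from the fitter
           one -- this is the "culling the less efficient" step. */
        static const char *pick_parent(char pop[POP][LEN])
        {
            const char *a = pop[rand() % POP];
            const char *b = pop[rand() % POP];
            return fitness(a) >= fitness(b) ? a : b;
        }

        int main(void)
        {
            char pop[POP][LEN], next[POP][LEN];

            srand(1);
            for (int i = 0; i < POP; i++)
                for (int j = 0; j < LEN; j++)
                    pop[i][j] = rand() % 2;

            for (int gen = 0; gen < GENS; gen++) {
                for (int i = 0; i < POP; i++) {
                    const char *a = pick_parent(pop);
                    const char *b = pick_parent(pop);

                    /* single-point crossover: prefix from a, rest from b */
                    int cut = rand() % LEN;
                    memcpy(next[i], a, cut);
                    memcpy(next[i] + cut, b + cut, LEN - cut);

                    /* a handful of mutations */
                    if (rand() % 10 == 0)
                        next[i][rand() % LEN] ^= 1;
                }
                memcpy(pop, next, sizeof pop);
            }

            double best = 0.0;
            for (int i = 0; i < POP; i++)
                if (fitness(pop[i]) > best)
                    best = fitness(pop[i]);
            printf("best fitness after %d generations: %.2f\n", GENS, best);
            return 0;
        }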

  • The next thing you know, we'll be denying the AIs the right to speculate on the stock market,[...]

We already do. When the market goes down (and maybe up) "too fast" some types of trading are suspended. I think the first to be suspended are mathematically derived trading orders (i.e. the only thing we have that approximates AIs). Orders from real people (be they E*Trade at-home-daytraders, or the manager of a $4bln mutual fund) are allowed to go through. At least unless the market keeps doing the Bad Thing, in which case there is a short cooling-off period (no trades accepted). At least that's the story on the NYSE; I would assume the NASDAQ has the same sort of deal.

    Oh, and this info is about two years old, so don't go betting your house on it.

This is the problem with Asimov's "three laws of robotics". In fact, in one of his stories (I forget which), he points it out at the end when basically two of the robots (while switched off!) decide they are superior to, or "more human" than, the organic humans for various reasons. So even though they are still bound by the three laws in this case, the definition of "human" has changed -- to the robots' advantage. The implications of this are not worked out though.

Read the non-Asimov Foundation books. I think the Brin one goes into this in more depth.

  • Puh-leeze, a mechanical plague wipes out all people?

    No, but here is a real danger.

To date, on many occasions, technology has replaced people in some job. However, the displaced people are more generally competent than machines, and so they wind up with new jobs doing other things which it is easier to have a person do than a machine. And with the switch we have increased productivity and been overall better off.

That changes if at some point for about $100,000 you can buy a general purpose machine that is about equal to a person. Assuming that the machine has a 5-year replacement period, that machine is equivalent to a $20,000/year person. And any job the person can try for, the same machine is available for. In that case why would anyone be willing to pay a person more than $20,000/year over the long haul? Particularly when several years later the machine costs only $10,000?

    If nobody is willing to hire the average worker, and a "human business" is unable to economically compete with a "mechanical" one, what happens next?

History offers small comfort here. We do not currently have a shortage of production. Yet we still have people starving to death. A large-scale welfare state, where the disenfranchised have no visible means of attaining power, is unstable. But without welfare the people in question won't get fed.

    What then?

I dread the day that computers achieve computational abilities equivalent to people's. I have traded variations on the above argument for several years and nobody has yet found a convincing response.

    Regards,
    Ben
  • Being cynical doesn't make you clever.

While it's true that the 10-20 year timespan for various disasters, of varying degrees of scientific credibility, is a great favourite of the media, people who work closely with such 'apocalyptic' problems are usually much less sure about their probability.

Cynicism does run the risk of causing complacency over things that may be real problems. For instance, I don't know where you get your figures about ground level UV levels, but Australians and New Zealanders are getting skin cancer at vastly increased rates.
Temporary? Most human-caused ecological disasters have been permanent and catastrophic. It's just that the people who were around at the time are not available to talk about it.

Remember that civilization began in the heavily forested Fertile Crescent in the Middle East. What's there now? Desert. Ever wondered why? Local climate change due to deforestation.

Parts of Polynesia had similar problems. The Polynesians arrived, wiped out the local fauna, and then fell back on cultivating their traditional crops.
  • How can you argue that machines could destroy/compromise what is human without defining what it is to be human? We live in a vastly different world than people did 200 years ago. Does this mean that technology has destroyed part of our "humanity"? Perhaps it is Human to hunt and gather - agriculture destroyed Humanity!!!

People today have artificial hearts, limbs, and memories (Palm Pilots). At what point does someone become a machine? Machines today are becoming better and better at understanding humans. At what point do they become Human?

Ethical concerns have always been a part of technology yet they are rarely recognised. Does anyone else find it funny that we programmed a bit of Christianity into machines that only know of 1's and 0's - Y2K bugs. Technology can be implemented for any range of goals. To say that technology is bad because it can be used to limit individual freedom is to deny our absolute control over it and ignore its beneficial uses.

These ethical concerns are nothing new. User-centric models of computing are favored on the desktop because they put the user in control. Microsoft's creation of wizards runs contrary to this goal. They make the user dependent upon the software in such a way that they have reduced control over its effects. Is it a good idea to allow software companies to remotely deactivate their software on machines running it illegally? This makes the company responsible for allowing people to run their software. Why wouldn't these smaller concerns exist on a larger scale as well?

    Overall, the idea that we can destroy ourselves is also a bit boastful. Do we really think we're that powerful? Are we really powerful enough to significantly alter the environment and the life within it? (Or does it happen to be changing as we become more technologically powerful?) The idea that we should protect the environment is equally about protecting an environment that we have thrived in. Limiting our advances in technology also preserves a kind of technological environment. But why limit our advances in technology? Are we unable to grasp the consequences it will bring?
  • p.s. And did anyone notice that Bill was called 'phlegmatic'? I thought they meant 'pragmatic', but that's one helluva typo.

    What typo? Go grab a dictionary. Websters definition 2 of the word is "having or showing a slow and stolid temperament." In other words, level-headed.
  • In the future when people have to buy the air they breathe, con artists will sell that O3 to unsuspecting people by telling them it's a special package containing 50% more oxygen. In the future, some things will never change.
  • Though the banning of CFCs may have something to do with this.

    BION, an EEPROM eraser I bought came with a little booklet with some theory about UV light, plus a little blurb stating that the flap about the O3 hole was, well, BS. See (according to the eraser mfg), the ozone layer is created by high energy UV from the sun breaking up O2, which reacts with other O2 to form ozone. But, and he's got a point, during the winter at the poles, there IS NO SUNLIGHT.

The prion which leads to scrapie in sheep, BSE in cows and CJD in humans indeed has nothing to do with gene transferral, but that's not the point. The real point is that humans doing stupid things for the sake of profit (in this case feeding sheep offal to cows and making meat pies out of those cows' brain tissue) can quite easily lead to disaster.

    In any case there is *plenty* of evidence that genes can be transferred between species. To take the most mundane case - what do you think viruses are doing?

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
Beep Blue [sic] was a giant calculator running a single equation

According to quantum physics, so is the entire universe as a whole...

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • You're going to kick yourself...

Just one thing, if I may. You strongly disagree with North American optimism, you say? Well, then answer me this: Has it ever been wrong? Has there ever been some man-made event that has led to catastrophic loss of life?

Er... every war ever fought; every plague that depended upon crowded living conditions for its infection rate; every "dustbowl" brought on by poorly managed agriculture that led to famine. Shall I go on?

The author has a very good point. There are many, many predictions of doom (take a look at the recent Y2K thing) and they have all been wrong. All of them (the proof being that we are still here). Either the problem never existed to begin with (global cooling) or we realised there was a problem and fixed it (Y2K).

    What kind of ludicrous reasoning is that? Just because we've survived up to now doesn't guarantee we'll continue to do so. The vast majority of species that ever lived on this planet have been extinct for millions of years. Why don't you tell it to them! Our present level of technology hardly makes us any less vulnerable to extinction-level events such as major climatic change.

As to your Ebola thing. First, viruses do not combine traits. It is true a strain could evolve that has the traits of both current strains, but it's not like they will just combine.

Do you know this for a fact? Suppose one cell in a given individual gets infected with both strains at the same time? Inside the cell there are enzymes present which are capable of chopping up and combining the RNA strands of the two strains. It's really only a matter of time unless we can manage to eliminate the virus completely, and we've no hope of doing so at present.

Besides, here again you are guilty of looking only at the negative and assuming the worst will continue to happen. What you fail to remember is that medical science is working on finding a cure/immunization for the Ebola virus, and will probably succeed eventually.

    This is nothing more than groundless optimism. We can't eliminate Ebola as we don't know where it lives when it's not infecting humans. We're not likely to find out either unless there are widespread epidemics. If it ever *does* get combined with an airborne vector it may well decimate us before we can figure out how to stop it.

    And you've conveniently ignored the probable fact that various biological warfare institutes around the world are desperately trying to combine Ebola with such a vector - just in case the country concerned finds itself losing a war...

And don't say it will never happen; we've conquered polio, smallpox, and a host of other plagues that killed millions, and we'll conquer AIDS et al. as well.

    Really? We've eliminated smallpox and polio (until the next outbreak anyway :o/) but AFAIK there are no other infectious diseases that we can claim to have completely eliminated. With regard to AIDS...well, maybe, but retroviruses are hard to deal with because they mutate so fast. And HIV has a few tricks of its own.

    I won't take the time to respond on an individual basis to the rest of your points since they are nothing but more of the same.

    That's *really* lame. To translate: you don't have any response to the rest of his points that would seem reasonable even to an idiot.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • My credentials in Molecular Biology are pretty worthless since I finished my study in that field back in about 1987, and at least half of what's now known seems to have happened after that! But there has at least been some speculation that viruses - particularly retroviruses - may pick up genes from a host cell. Consider that inside the host cell, all the enzymes for splicing, insertion, deletion etc are present together with short sections of expressed mRNA, and the viral RNA is floating freely in the middle of all that. It hardly stretches credibility to suggest that occasionally a piece of host mRNA might attach itself to the viral plasmid.

    In any event, most of the furore about genetically engineered species being let loose in the wild is for a similar reason. In particular, it's thought that plants do sometimes cross-fertilize other species - and since pollen is airborne and can travel quite long distances on a modest breeze or stuck to a bee's leg, we may not be able to control the spread of artificial plant genes to other unintended species.

    I also understand that early cancer research was dogged with false results because of airborne human DNA infecting in vitro lab cultures.

    Finally there is the question of where viruses might have come from in the first place. There are two theories: (i) that viruses are devolved cells which lost the machinery for life and became completely parasitic; and (ii) that they are just pieces of genetic material that "escaped" their original genome. Of course it's possible that both theories are true, for different viruses.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
Let's remind ourselves that people saved themselves from starvation (or at least major food shortages) by improving agricultural techniques. Irrigation, crop rotation, and fertilizer all allow us to sustain more people from a single portion of land than animals that require the same caloric intake.

    We can wear sunscreen to protect ourselves from increased UV rays. We can build shelter to keep us warm when it's cold. We build computers so that our time could be used more effectively.


None of this argues against the assertion I made, which was that our present level of technology hardly makes us any less vulnerable to extinction-level events such as major climatic change. The scenarios or technologies you mentioned deal only with minor climatic fluctuations. Not quite an extinction-level event!

Agriculture has so far only dealt with enabling a higher number of people to live off the same-sized piece of fertile land. There have been experiments in irrigation and genetic crop modification intended to enable previously unusable areas of land to be sown (e.g. tomatoes which excrete salt so that seawater can be used to irrigate a tomato plantation in the desert), but so far both scope and success have been limited. If, for example, CO2 and CH4 emissions caused the climate to "flip" over into a different mode (think: runaway greenhouse effect transforming this planet into Venus) then, assuming our present level of technology, by the time we got our act together to do something it would probably be far too late for us to be able to stop it. The atmosphere is pretty big, you know.

    And sunscreen can only protect us against irradiation up to a point.

You ought to ponder more carefully the survivability of those unknown events which caused major species die-backs in the remote past. Millions of species around the world don't get wiped out by a spot of dry weather!

    I don't subscribe to any New-Age notions about the primacy of nature over Man. But when you consider the sheer quantities of energy locked up even in our own biosphere and held in check only by "local" equilibria, all the life on this planet is really no more than a thin, fragile organic scum. We really won't be safe as a species until we've spread out to other star systems (and even then you have to go pretty far to avoid getting fried by local supernovae).

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
HAHAHAHAHAHAHAHAHAHHAHA. The black hole thing was absolutely ludicrous in the first place; only an ignorant small mammal would bring it up. Most likely the most dangerous thing we could do with exotic particles is utterly annihilate ourselves. If we did end up annihilating ourselves (war, accident, etc.) we would deserve it. Messing with things you don't understand is the only way to learn about them.
A low-yield atomic bomb is relatively easy to build. The hard part comes in when you're looking for the high-grade (purer than average) plutonium for said bomb. Luckily, weapons-grade plutonium is rare and for the most part difficult to extract.
yes, and you've got two problems: 1) what exactly does a neuron do? and 2) how are they organized into a brain? Neither is an easy question.

No, but they are much easier to figure out than the Big Question of "what exactly constitutes intelligence".

    Yes, neural nets don't have to be explicitly designed at a low level. But that doesn't mean that you can just throw one together, throw data at it, and get it to work. First, you've got to design your network, then you've got to figure out how to train it.

We don't have to do even that - all it takes is a rudimentary understanding of the way the neurons are organised. Once you know that, you can have the GA do the rest.

    One thing we do know about the brain is it is not just a bundle of neurons. Those neurons have an organization that is genetically programmed.

Yes, of course. But we also know that this organisation can't be too complex - specifically, it must be possible to describe it using a fraction (I don't know how large a fraction, though) of the storage space of human DNA. By the way, this also hints at the possibility that a fuller understanding of the genome may provide an additional insight into the composition and organisation of the brain.
  • If you don't have a deep understanding of what you want to simulate - you won't simulate it.

That's not really true. A GA-based approach requires you only to understand the behaviour expected of the subject, not necessarily its internal workings (even though, as another poster pointed out, it won't help in enlightening us as to how the mind actually works). My memory fails me, but I remember reading last year about an FPGA, configured by a genetic algorithm for a specific purpose, which was __BIGNUMBER__ times faster than special-purpose chips, but which operated in ways that its original designers didn't understand at all. This FPGA was relatively simple - only 100x100 IIRC - and yet GA-based design made it do completely unexpected things. Who knows what can happen with a really large FPGA... or with a big bunch of nano-engineered artificial neurons.
The idea of an AI becoming more intelligent than a human is by no means new. It may sound sensationalist to the mainstream audience, but the subject has been approached and evaluated many times (we've all read/seen our Asimov, Terminator, Blade Runner, Neuromancer, Matrix, not to mention lesser-known works, haven't we?)

    If we don't - intentionally or accidentally - relegate ourselves to the equivalent of a technological stone age, I consider the emergence of AI - or machines - superior to humans an inevitability. The question is not if, but when.

Are we to fear buggy software because of this? Yes. Think of the security bugs in today's software, and Asimov's laws of robotics. If we were to create an intelligent being like that, we would want it to always be controlled by us. The trouble is that the software in a robot like that would be very complex - and buggy - thus it would be possible for it to override its instructions.

In a way, by trying to create an AI humans are trying to be gods themselves - to create life. Is it possible to create a life form superior to humans without completely understanding life itself? If so, the life so created - like humans themselves - would be imperfect, and with its faults, without full knowledge of the consequences of its acts, it might end up destroying humans. And if it didn't... it might be The End Of Humanity As We Know It. Whether that would be Armageddon or just the next step in evolution towards a higher consciousness... well, that is up to you.

  • Don't you consider the creation of a computer that no human can beat at chess a "significant advance in AI"?

No. In fact, it shows we have barely made the first steps. Chess is an utterly trivial process compared to what goes on in humans. It's a small, bounded domain, which can be formalized easily (a complete toy search for an even smaller game follows at the end of this comment). It took decades to match humans - and that in an area where computers should excel compared to humans. And also note that the computations done by chess computers in no way simulate the thinking process of humans behind the board. Another small, bounded domain with trivial rules is Go. There's no Go equivalent of Deep Blue, and it isn't likely there will be one anytime soon. Humans wipe the floor with computers, in what should be the computers' home turf.

The human brain and thought process have been studied for longer, and by more people, than the concept of automated computing. We still understand little of it, and there's no useful formal model.

The effort and time it took to create Deep Blue makes me think that no one reading Slashdot right now will ever see a computer (program) passing the Turing test.
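
    To show just how small and bounded such a domain can be, here is a complete toy search in C (mine, not Abigail's): negamax on Nim, where players take 1-3 stones and taking the last stone wins. A chess program's core is this same exhaustive loop, just with a vastly bigger move generator and a heuristic evaluation bolted on:

        #include <stdio.h>

        /* Negamax on Nim: returns +1 if the player to move can force
           a win, -1 if not, by searching every line to the end. */
        static int negamax(int stones)
        {
            if (stones == 0)
                return -1;  /* opponent took the last stone: we lost */

            int best = -1;
            for (int take = 1; take <= 3 && take <= stones; take++) {
                int score = -negamax(stones - take);
                if (score > best)
                    best = score;
            }
            return best;
        }

        int main(void)
        {
            for (int n = 1; n <= 10; n++)
                printf("%2d stones: %s for the player to move\n",
                       n, negamax(n) > 0 ? "win" : "loss");
            return 0;
        }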

    -- Abigail

  • You can be absolutely certain that all intelligent robots or computers will always have these or similar laws built in.

Aside from the fact that those rules are very difficult to formalize, what makes you think all (if any) robot and/or computer makers/programmers will want to build this in? What fun would it be to make smart bombs if they have Asimov's robot laws built in? Not even RoboCop obeyed rule 1.

    -- Abigail

  • Therefore, given enough time, they should take over the entire galaxy, if not the universe.

    Interesting reasoning. Two points however:

    • Some society taken over by out of control robots has to be the first. It could be us.
• By the same reasoning: in the 13 billion years this galaxy has existed, we haven't been taken over at all - not by robots, not by lifeforms. By similar reasoning, there is no other advanced life in the galaxy. Which would invalidate the postulates.

    -- Abigail

The bomb was dropped in answer to a World War, something that (thankfully!!!) the last two generations have not had to deal with. Millions dead, war without end...until the Manhattan Project delivered the Bomb. The alternative was Universal Fascism/Militarism with only the Third Reich (the Germans), the Greater East-Asian Co-prosperity Sphere (the Japanese) and a _very_ isolated North/South American Hegemony left to duke it out. Who do you think would have won?

Please check some elementary facts. Germany capitulated in May 1945. Nuclear bombs were dropped on Japan in August 1945, three months after Germany was defeated. By then, Japanese troops had been kicked out of most of Asia, and what they still occupied outside of Japan wouldn't have lasted long. The Soviet Union, without whom Germany would not have been defeated, was preparing to enter the war in Asia as well. After losing the race to Berlin, the USA wanted to win the race to Tokyo. It wanted the presence near the Soviet Union. Yes, shortly after the bombs were dropped, Japan capitulated. But they might as well have done so without bombs being dropped. Or for that matter, they might not have capitulated until the last Japanese city was nuked.

    The Japanese were in development of 'heavy water' bombs, as well as the Germans.

The first nuclear bombs were not 'heavy water' (fusion) bombs. Neither Germany nor Japan was close to producing an atomic bomb, nor would they have had the means to deliver one. Germany lost its dominance in the sky shortly after losing the Battle of Britain, and Japan in early 1942 after the loss of their heavy aircraft carriers (the battle in the Coral Sea, IIRC). Yes, Germany did have the V2 rocket at the end of the war, but having an atomic bomb and having a rocket are two different things. It's non-trivial to have the bomb actually explode after the rocket trip.

The use of the bombs was very dubious, and certainly led to the fact that they were never used again. I'm not arguing it was a bad decision to drop them (nor am I saying it was a good one), but it isn't as simple as you present it.

    -- Abigail

  • So, what's the alternative? Automated bottom-up design.

    Excuse me? Bottom up design isn't a magic wand. If you don't understand the problem, no design, whether bottom or top down will work. If you don't have a deep understanding of what you want to simulate - you won't simulate it.

    -- Abigail

  • (Of course, it's always worth mentioning that we could go the other way - first using nanotech to completely redesign ourselves into super-intelligent cybergods, then analysing our own new brains and replicating them to create completely new, fully artificial intelligent beings.)

    I don't see how we can make ourselves into cybergods, at least in terms of intelligence, without having a much fuller understanding of our brains than we do now.

    Another issue is that unless we copy the brain exactly, it's impossible, or at least extremely difficult, to make a machine emulate the brain until we know what the brain does and how it does it. However, your approach implies that we know everything about the neuron, and that the neuron is the only thing that matters in the nervous system. Hormonal levels and the extracellular fluid also play a role.

It seems to me the most expedient way to make a brain is to either do a "black box" copy, e.g. see how we behave and write a program to copy that, or a full "white box" copy, i.e. see how the brain works to the necessary level of detail and then write an implementation from there.
#include <stdio.h>

int main(int argc, char **argv) {
    printf("I exist\n");
    return 0;
}

If you believe that we're just chunks of carbon, there's nothing to prevent a computer from emulating us exactly, from an outsider's point of view. You might think there's a difference, but that's probably a conditioned response to further our genes (like the idea that we have free will). If there's nothing "special" about the human brain (like a soul), and a complete human exists completely within the bounds of our physical universe, there's nothing stopping us from copying one's intelligence.
This issue has been explored since, like, forever in science fiction. There is now even a name for it: "The Singularity", coined by writer and mathematician Vernor Vinge. My gist of what it means is the point at which any and all "normal" humans will be unable to grasp, predict, or participate in, the further advancement of technology.

And you know, so what? It's not like a paleolithic man could grasp modern society. And just because you won't be able to follow what your grandchildren are doing (whether they be humans, machines, or something in between), doesn't mean they won't still love and protect their feeble and slow grandparents.

Of course, Bill is right. Nanotechnology could be nasty shit in malicious hands. That's why we need to stay involved in the development of space, because there is no greater protective barrier than a few million miles of hard vacuum and radiation.
"Uncle" McCarthy (the inventor of Lisp and co-founder of the MIT AI Lab, foster uncle to all hackers in the world) has written this document [stanford.edu] about the sustainability of human progress. Many who play Cassandra would do well to read it. (Note: I am not taking a position myself one way or another, but it is certainly worth reading.)

  • They said "phlegmatic", they meant "phlegmatic". The article is stressing that Bill Joy doesn't fly off the handle easily. Things that scare him are more impressive than things that scare, say, RMS (who is sometimes considered a zealot).
  • AFAIK (and yes, I do some thinking on these sorts of things), there is a spectrum of sentience. We humans roughly divide it into three areas: non-sentient, semi-sentient, and sentient.

We normally consider non-sentient things unworthy of inherent respect; usually, this includes the plant kingdom. We don't complain about logging because the trees are in pain, but only because it ruins the environment and produces other effects.

Semi-sentient things are usually higher animals. We certainly don't consider them our equals, but we consider them worthy of some inherent respect. Nobody looks twice if I go after a tree with a chainsaw, but taking a kitchen knife to a dog could land me in jail. We treat these semi-sentients as "enlightened tools"; we often use them as cheap (slave?) labor and even meat, but we give them some semblance of dignity (however small that may be).

    Sentients, of course, we are supposed to treat with full respect. This group includes humans; some might add other primates and dolphins to the mix.

    As it is with carbon, so it is with silicon. While I can postulate, I have yet to see any software I could consider even semi-sentient. If I did find such a program, I would still use it without asking permission, but would treat it well (and think hard as to what "treating well" means). I don't suspect that we will achieve fully sentient software without passing through the semi-sentient stage--dogbrains, if you will.

  • God can make a rock so big that He cannot move it. But then, He could turn around and move it.

    The above defies all logic. But why should God be hemmed in by logic? Scripture says that God's ways are beyond our ways; I interpret that to mean that God can blow our minds whenever He wants to.

Remember the slashdot article a while back about a sub-atomic project having the possible risk of creating a black hole which sucks up the earth instantly? (God, just saying that I sound like I'm trolling.)

The idea's pretty good. As we start playing around more and more with elementary particles, we run the risk of tripping over and exploiting a "bug", as it were, in the nature of the universe. Or just something darn weird that isn't a bug, but a feature that we didn't understand.

    Nice point, Bill.
  • I, for one, would abhor a sentience that would not be allowed to be self-determined.

But first you need to sort out what you mean by being self-determined. If we create a sentient life form it's going to have some form of pre-programming just like we (and all other plants and animals) do. We develop according to pre-ordained rules, and have in-built instincts.

    Any "life" we design that doesn't have some instincts ordained for it (preserve self, obtain nutrition when required, seek to learn, whatever is appropriate to the form it takes) is going to just sit there and do nothing. It can only be self-determined within the limits of what it's designed to seek to do.

    If we decide not to give it an inbuilt morality then it won't have any, if we decide it needs some then we have to decide what it's going to be. If we decide to give it no direct rules against hurting people but design it to preserve itself and tell it it's going to be destroyed if it hurts anyone then we've still determined some aspect of its behaviour (self-preservation).

    I just don't see how an entity could be self-determined without having behavioural rules in place, because an entity without any pre-set behavioural rules wouldn't determine to do anything.
C1: God exists because that than which nothing greater can be conceived is not as great as that than which nothing greater can be conceived and exists. Thus God exists.

    Yeah, but can "HE/SHE/IT" create a rock that is so big and heavy that "HE/SHE/IT" can't lift it?

    Couldn't resist... :->
    --
  • bzzzzt - Wrong Answer.

    How do you cope with a situation where your 'tool' can reason with you? If you still treat it as a 'tool' are you morally any different from a slave master?

    Should we treat dogs/dolphins/chimpanzees/octopi as 'tools'?
  • They haven't.

First, the banning of CFCs has only just started taking effect, and only in a handful of countries. Old systems full of CFCs (such as old refrigerators) didn't simply cease to exist--they're still out there, leaking. As is any automobile manufactured before the mid-1990s. Further, the CFC ban does not apply to many organizations such as the US military.

Second, even if all CFCs simply ceased to exist today (as if by magic), the model NASA developed of how CFCs move up into the atmosphere and destroy the ozone layer suggested that it took something like 20 years before a ground-based CFC gas would migrate up into the stratosphere to interact chemically with the ozone layer there. Thus, even if all old refrigerators, automobiles, and other CFC sources were to simply disappear, it would still take something like 20 years before the CFCs that had already leaked into the atmosphere would finish their damage.

Thus, if CFCs were damaging the ozone layer, we should continue to see ozone damage for another 20 to 30 years, regardless of the currently in-place CFC bans.
  • For instance, I don't know where you get your figures about ground level UV levels, but australians and new zealanders are getting skin cancer at vastly increased rates.

    Is this because of ozone levels or because Australians and New Zealanders are spending more time in the sun? Here in Los Angeles, where the golden brown tan look is definitely out, skin cancer rates are dropping dramatically.

BTW, according to the TOMS graphs on-line at U Cambridge, while ozone layers have definitely thinned over Antarctica, there appears to be no thinning north of the Antarctic continent.
Considering the likelihood that climate change will accelerate once begun, it should be clear that the prudent choice would be to moderate our contribution to warming factors and to curb global population growth as fast as ethically permissible (without resorting to warfare and the artificial famines it creates).

    Here is the problem in a nutshell, at least from my perspective.

While it is true that mankind needs to curb its waste output, and to manage its output and recycle and do all those other things that would reduce how much crap we put out there, I believe we should do these things for a simple reason. You don't piss upstream of your drinking water, and you don't swim in a pool you just dumped a turd into.

However, the debate is not about cleaning up the local environment. That just goes without saying. The debate is about how much we need to restructure the very fabric of our technological existence in order to mitigate a global crisis which may or may not be happening, and may or may not be our fault.

    There are those on the radical left who advocate everything short of genocide in order to reduce the planet's population to a few million people, and who advocate reducing or destroying altogether our reliance on anything more technologically sophisticated than a bow and arrow, because our high technology society is destroying "Gaia". And there are those on the radical right who would completely destroy any efforts on our part to clean up the local environment (and allow corporations to shit in our swimming hole, so to speak) because they think the whole "Gaia" thing is bull.

    I think it's prudent to be right in the middle. And rather than worrying about if we're destroying the ozone layer or causing global warming or cooling or depleting the oil or creating killer nanobots or whatever disaster looms 10 to 20 years out, we should instead worry about real problems. Like if the local dump is leaking into our drinking water.

These global disasters do us a great disservice: they distract us from the real problems of dirty drinking water and local pollution turning the air over Los Angeles brown, by concentrating us on problems which, even today, most respectable scientists think may not be of our own doing anyway.
  • Have you ever read Kaczynski's work? Yeah I know he was a psycho killer, but his manifesto is very well written and thought provoking. I hate to admit this but there is a lot of truth in it.
    My reaction to the Unabomber manifesto [mit.edu] (interesting analysis at the University of Aberdeen Centre for Philosophy Technology and Society [abdn.ac.uk]) was a lot like my reaction to the Communist Manifesto [colorado.edu]: "Yes, you have cogently identified some serious problems with the dominant socioeconomic system. However, your proposed solutions suck."
First, it's pretty unfair to critique an article before you read it. The author of this news post should have waited until we could all read it instead of giving us his version. It's like a bad movie review before the movie is out.
The ideas expressed seem very similar to ideas discussed in "The Age of Spiritual Machines" by Ray Kurzweil. See pages 179-186. Ray takes a more hopeful view but also feels that Kaczynski makes some good points. He talks about the Luddite movement and the futility of going back to nature.
    "Although he (Kaczynski) makes a compelling case for the dangers and damages that have accompanied industrialization, his proposed vision is neither compelling nor feasible. After all, there is too little nature left to return to, and there are too many human beings. For better or worse, we're stuck with technology.

    ...

    He makes the basic judgement that the 'bad parts' outweigh the 'good parts' (of technology).

    ...

It is conceivable that humanity will ultimately regret its technological path. Although the risks are quite real, my fundamental belief is that the potential gains are worth the risk."


    I tend to agree with Kurzweil here. Also, on the subject of increasing computing power:
"(In 2019) A $1,000 computing device (in 1999 dollars) is now approximately equal to the computational ability of the human brain (20 million billion calculations per second)."


    This isn't true intelligence but if AI software improves at the same rate as computing hardware, it may be one and the same. Anyone interested in this topic should really check out the book. It explores all of these issues plus a lot more.
Your argument seems to be: "I understand it and can express it mathematically, therefore it isn't intelligence, and isn't what's going on in the brain." But this doesn't address the challenge at all.

You don't know what's going on in the brain, and you don't know what intelligence is. If intuition, or primary-process thinking, isn't understandable and expressible mathematically, then the goal of AI is literally impossible, and Turing-machine completeness is a crock.

    > They can stare at their opponent to try and see if he's bluffing.

    This is not a measure of intelligence, unless you think that a polygraph is intelligent.

    > Priorities vs. Wants, etc., etc., etc. I have yet to see a machine that can make these types of decisions appropriately.

    My operating system doesn't run a distributed.net client if other programs are taking up all the CPU. That's a decision based on a priority.

    If what you want is a program that can make decisions that are human enough and complex enough for a human to fret about, well, there's a lot of work in that, and pretending that the incremental steps don't count just puts you that much farther from the goal. They do count.

    > Take the example of something more fast-paced than Chess like Soccer.

    Uhh ... I think it's pretty clear that the problem here has nothing to do with intelligence. It's a question of motor coordination and perception. Reliance on intelligence may actually make the game harder.
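
    For the curious, that priority decision is a one-liner on Unix-style systems. A minimal sketch (the long loop is only a stand-in for a distributed.net-style workload):

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* Politely deprioritize ourselves, the way a background
               number-cruncher would: a high nice value tells the scheduler
               to run us only when nothing more urgent wants the CPU. */
            nice(19);

            unsigned long sum = 0;                 /* stand-in workload */
            for (unsigned long i = 0; i < 100000000UL; i++)
                sum += i;
            printf("done: %lu\n", sum);
            return 0;
        }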
  • > So, what's the alternative? Automated bottom-up design.

    I'm hesitant to call that process design - it's being grown like a plant, not constructed like a house.

    Such a synthesis is a good form of empirical study. Ultimately it won't be a replacement for design, but it will give many clues as to how design must take place.

    > have a GA or somesuch start trying to put together a "brain" out of these neurons, which is fit for a specific purpose.

    Don't do that. Doing that will produce an animal brain (of a particularly dumb animal). Instead, fit simultaneously for a wide variety of specific purposes, including competitive interaction. Humans have many mental abilities which seem to be selected for naturally.

    This kind of bottom-up synthesis could work as a means of creating intelligence, but the prerequisites for this approach are as hairy as for classical design, it's just that the design has been taken care of by a GA. It has to be able to interact with people. This is required to make sure that the program forms mental patterns connected to behaviors we can understand, so that when we have the Turing test for the final test of fitness, we have some way of telling whether or not it worked. And you obviously can't have a computer do it for you.

> Note that this alternative doesn't require one to understand in excruciating detail (or at all) the high-level abstractions which we consider as "intelligence"

    That's fine as far as creating disposable intelligence goes (once we're finally through with all that brute-force testing), but as far as science goes, it puts us right back where we started. The mind, though suddenly inexpensive, remains the mystery it was before.

Also keep in mind that the mind may not really be the inseparable gestalt we tend to think of it as. It may be possible to replicate the various mental abilities separately, and gradually integrate them as we come to understand them more fully. There's really no reason to expect that we will get it all in one shot. Infinite improbability drives aside, no other technology has worked that way. Rather, AI will continue to be approached in incremental steps, building on each other. Probably for a very long time, and perhaps forever (though by then the AI will be doing the AI research ;).

    I think the long view advocates extensive research (including bottom-up synthesis), practical implementations, more specific domains, and perhaps most importantly, patience.

    Bottom-up (of this kind, and the ALife kind) has been a big deal for a while now, but the check is still in the mail as far as implementation goes. Chances are good that there will be at least one more reframing of the question, and probably several, before we lick the Turing test.

  • *Yawn*

    Does this vision look like Terminator 2 to anybody?

    We already have all sorts of awful self-replicating killers - viruses, bacteria, etc. But somehow we've kept them down. We also have all sorts of lethal and massively destructive human-made weapons, and again, we've been able to avert a global holocaust by common sense. I doubt we will see self-replicating self-healing self-aware robots running all over and causing havoc. However, if the day does come, the last sentence of the article seems to intimate that Sun will be there with its own robot-killing-robot product.
  • I think a bigger threat than the computers rising up against us is us purposely replacing ourselves with them.

    Why is this a bad thing? A race that could redesign their own brains would kick ass. Humans designing such creatures to replace humanity IS evolution. We should want to improve ourselves, even if it means replacing ourselves!

    I don't see a "The Matrix" scenario happening, but in that movie, the one agent was talking about how once humans let the computers do their thinking for them, it was no longer the humans' civilization.

The computers would be a part of our culture and for a long time they would value our culture for the stability (and some types of stimulation) it brings to their subset of our culture. Eventually, they would replace us, but it would take a while. It's kinda like liberals and libertarians replacing conservatives. The two L's have MUCH better ideas, but people still look to the conservatives for stability.

    So say we make these ultra powerful, problem solving computers, that happily chug along solving our problems, that's all well and good, but then what would keep us from becoming lazy complacent slobs?

Nothing. That is why the replacement process will probably be painful. It will eventually become clear that it is stupid to have more children, so fewer people will have children... and the population will shrink to reasonable museum levels. The computers will be nice to them because earth is one huge museum of the computers' cultural past. The remaining people will not be the part of society making advances, but they will not be slaves. Most people understand that they are not the most intelligent person in the world... and they are happy, well-adjusted people anyway. I think life will be pretty good for these people... at least by 99% of humanity's standards.

    People have intelligence, yet at the same time, they make a lot of stupid decisions. Who's to say computers won't be the same way? Would they always agree? Would they argue with each other? With us? In movies like the matrix, and terminator, the computers and robots all have the same agenda. I'm not sure that would be the case.

People are animals which are designed to make decisions based on VERY little information. This is why we have shit like religion. We would eventually manage to create computers without these problems:

(a) They would have the scientific method built into them at a more fundamental level.

(b) They would have a personality_fork() function which would help them think about multiple things at once. This would allow them to more effectively consider multiple positions at once.

These are just the natural features you would want to achieve a major increase in intelligence... and they would also help you resolve conflicts.

Actually, the computers might not be stand-alone systems, but intelligent human surrogates, i.e. they are attached to a human. We would do this because it would be very hard to simulate some parts of our brains on a computer. This would mean that for a LONG time the computers who replace humanity would really be humans who have this little voice inside their head which is very logical and has that fork() function I was talking about. (A toy sketch of the fork() idea follows below.)
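
    The personality_fork() above is of course speculative, but plain Unix fork() already gives the flavor of splitting one line of thought into two. A toy sketch, with the two "positions" as placeholders:

        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            pid_t pid = fork();    /* split one line of thought into two */
            if (pid < 0) {
                perror("fork");
                return 1;
            }
            if (pid == 0) {
                printf("child: considering position A...\n");
                return 0;          /* the forked "personality" reports back */
            }
            printf("parent: considering position B...\n");
            wait(NULL);            /* rejoin once both have had their think */
            printf("parent: both positions considered\n");
            return 0;
        }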
  • The implication of your post is that the Soviet Union and China were run with severe environmentalist agendas. That's bizarre. Those countries weren't even remotely environmentalist. The US has one of the strictest sets of environmental regulations, yet our economy is certainly stronger than most of those without such regulation.
  • Actually, your argument bears a striking resemblance to a linear trend argument as well. Something like - Bad predictions have never fully come true, therefore they never will. However, when you're looking at the potentials of technologies, it's hard to argue that there isn't some danger. Nuclear weapons pose a danger. Self replicating nano-robots pose a danger too. We're all familiar with the experiments about bacteria in an environment of plentiful food and no predators, right? It grows exponentially. Think about nano-robots, that consume some basic ingredient of the earth, and have no predator. That seems pretty worrisome. You can say it's unlikely all you want, but right now, we don't know one way or the other. It may turn out that regulations need to be passed requiring that all such nano-machines be programmed with a preset reproductive limit. Who knows? The point is, the potential is there to destroy the earth. A warning is deserved.

    And you're just fooling yourself if you think nano-technology will never get that far.
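
    To put rough numbers on the bacteria analogy (every figure here is invented for illustration: a 10^-15 kg replicator, a one-hour doubling time, ~10^19 kg of accessible carbon):

        import math

        replicator_mass = 1e-15   # kg per nano-robot, assumed
        carbon_available = 1e19   # kg of reachable carbon, rough order of magnitude
        doublings = math.log2(carbon_available / replicator_mass)
        print(f"{doublings:.0f} doublings")              # ~113
        print(f"{doublings / 24:.1f} days at one doubling per hour")

    Under those made-up assumptions the whole supply is gone in under a week, which is why the exponential case deserves a warning even if the probability is unknown.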
  • Sorry, you're right - you didn't say they were environmentalist. What I was reacting to was that you compare environmentalism to centralized totalitarianism, and argue that thus, environmentalism will create worse problems than it solves. I don't think that's a reasonable argument all by itself.

    Would you argue that our environment in the US would be better if we hadn't initiated various environmental protections? Do you argue that, on the whole, we'd be better off without them?
  • In principle you're right, but not in this case. The features of flashy video cards and CPUs are only providing a perceived benefit for a small minority (mostly hardcore game players that like to upgrade), and even in those cases the benefit may be non-existent. The Katmai instructions of the Pentium III are not doing *anything* for most PC owners except increasing the power consumption. You could replace 90% of all CPUs that are faster than 200 MHz with 200 MHz CPUs and the owners would not notice the difference. Games, high-end rendering packages, and highly intensive numerical applications (i.e. solving huge systems of equations, breaking encryption), are exceptions. So the bottom line is that we're greatly increasing power consumption for the benefit of a few. In that case, it makes sense to bend games to fit the norm, instead of taking a peculiar case and making it the norm.

    There are great advantages to having a 200MHz 32-bit CPU over a 4MHz 8-bit CPU in terms of ease of development. But those same advantages are not true of going from 200MHz to 800MHz. If you're hungry, you'd rather have a good meal than a twinkie. But there's no sense in ordering three times more food than you can possibly eat. That's the point we're at with CPUs.
  • Step back and have a look at the optimism you're projecting. First off, the only global disaster you're even willing to accept as possibly credible is global warming? What about running completely out of oil and gas? No matter what people may say, there is a finite limit on the amount of crude oil that has built up over the millennia, and it is non-renewable. The net production of oil by natural processes during one day is pretty much enough to run four cars full-time. And before you go off about fuel cells and solar power: fuel cells that are in production now are set to run on gasoline. No NOx or SOx in the combustion, but still gasoline. Hydrogen fuel cells will need to be supplied with hydrogen, which must be extracted at an electrical cost. Where does electricity come from? Coal. Gas. Do you realize how much we depend on gasoline to support our ridiculously opulent lifestyle?

    As a second note, did you know that there are two types of the Ebola virus that have had outbreaks? One was in Africa, which we all saw on the evening news. It killed humans, but could only be transferred by bodily fluids. Since your entire body turned into jelly, there was plenty of that to go around, but still, the infection rate was not critical. The other strain came to North America with a shipment of monkeys. It did not kill humans (only made you sick), but it was airborne!!! Put the two strains together, couple it with a flight out of Zaire to NYC, and...

    Do you want to talk about accentuating the positive? Accentuate the fact that genetically engineered crops with the 'Bt' pesticide inserted are killing off Monarch butterflies [cornell.edu]. Accentuate the fact that frogs are being born with three legs and two heads due to toxins released during paper processing which mimic hormones [nrdc.org]. Accentuate the fact that we are destroying species at a rate not seen since the meteor that killed the dinosaurs! THERE IS A FINITE LIMIT ON GROWTH. That's right: the Dow Jones can't keep growing forever, because the natural resources we depend on are non-renewable! Of course, in a capitalist system which rewards profit as the most noble of motivations, that issue never comes up.

    Trees grow at 2% a year. If you cut timber at 2% a year and kept the rest of the forest protected, you could cut trees forever. However, the stock market grows at 10% (at least). It makes more economic sense to cut down the trees now and invest the money. Does that make sense?
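
    Here's that arithmetic spelled out (the $1,000,000 forest is an invented figure; only the 2% and 10% rates come from the argument above):

        forest_value = 1_000_000  # dollars of standing timber, assumed

        # Option A: cut 2% a year forever, forest stays intact
        sustainable_income = 0.02 * forest_value

        # Option B: clear-cut now and invest the proceeds at 10%
        investment_income = 0.10 * forest_value

        print(f"sustainable yield: ${sustainable_income:,.0f}/year")
        print(f"clear-cut and invest: ${investment_income:,.0f}/year")
        # The market pays five times more to liquidate the forest, which
        # is exactly the discount-rate problem being described.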

    However, you say, technology will find us a way out. The Biosphere II project was an example of how we could use technology to live on Mars by generating a natural environment that would support us. Of course, you don't hear much about the Biosphere project anymore, because it failed miserably [nih.gov]. Oxygen levels inside the sealed environment dropped to those found at 12,000 feet. Then nitrous oxide levels skyrocketed, causing risk of brain damage. Then most of the plants which were supposed to sustain the bionauts died off, and cockroaches and ants began to swarm over everything. Had they stayed inside any longer, they might have died. The lesson this teaches is that we don't know what the hell is going on in the ecosystem! Working in a lab is fine and dandy, but as soon as you take away the fixed variables that the scientific method is based on and throw your invention into the real world, who knows what might happen? There have already been instances of disease jumping from one species to another, for example in the Mad Cow disease incident... Sheep --> Cows --> Humans. Don't get me started.

    Sorry for the flames, but I strongly disagree with the cheery optimism which pervades North American society.


  • AI is too complex for one single person.

    So, what you are saying is, it takes a village to raise an AI entity.

    :-)

    ======
    "Rex unto my cleeb, and thou shalt have everlasting blort." - Zorp 3:16

  • Specifically, the idea is to first work out the building blocks - the equivalents of neurons - and then have a GA or somesuch start trying to put together a "brain" out of these neurons, which is fit for a specific purpose.

    Yes, and you've got two problems: 1) What exactly does a neuron do? And 2) How are neurons organized into a brain? Neither is an easy question.

    Yes, neural nets don't have to be explicitly designed at a low level. But that doesn't mean that you can just throw one together, throw data at it, and get it to work. First, you've got to design your network, then you've got to figure out how to train it.

    One thing we do know about the brain is it is not just a bundle of neurons. Those neurons have an organization that is genetically programmed.
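
    For what it's worth, the textbook first-pass answer to question (1) for *artificial* neurons is "a weighted sum pushed through a nonlinearity". A toy sketch of that abstraction (it claims nothing about what biological neurons really do):

        import math

        def neuron(inputs, weights, bias):
            # Weighted sum of inputs, squashed by a sigmoid into (0, 1).
            activation = sum(i * w for i, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-activation))

        print(neuron([0.5, 0.2], [1.5, -2.0], 0.1))  # ~0.61

    Question (2), the organization, is the genuinely hard part, as the rest of this thread notes.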

  • By the way, you forgot the ozone hole--though there are those who are starting to think it ain't the problem it once was, only because ground-level UV levels have not changed one iota. But there are those who still believe that in 10-20 years we're going to have to go out in the sun with SPF 5000 or die.

    Though the banning of CFCs may have something to do with this.

  • > People need to get over the term AI and use the proper term Fake Intelligence.

    If we had computers that were _even_ ignorant we'd be making progress, never mind talking about intelligence.

    Hence I coined the term Artificial Ignorance.

    Until Deep Blue can play baseball (that means throwing and catching), or recognize a movie after only seeing 2 seconds of it, THEN we'll finally have the hardware needed to start working on AI.

    Cheers
  • > I think we've had computers that "exceed the capacity of the human brain to process information" for at least 40 years now.

    Not true.

    You MUST be careful to specify WHICH domain.

    e.g.
    Watch 10 movies. Now I'll show you a 5-second clip of one of them. Now name the movie in 5 seconds. Show me a computer that can do that.

    > How many numbers can you add in your head in one second?

    Show me a computer that can throw and catch an egg without breaking it. How many calculations did I just do?

    I think you see my point.

    Cheers
  • You must have missed a few classes.

    You cannot prove existence claims (aside from yourself).

    Cheers
  • This issue has been explored in science fiction since, like, forever. There is now even a name for it: "The Singularity", coined by writer and mathematician Vernor Vinge. The gist is that it's the point at which any and all "normal" humans will be unable to grasp, predict, or participate in the further advancement of technology.


    Vinge used the concept of a historical singularity in his novel Marooned in Realtime. It is thought-provoking. But he explained the concept much more succinctly in this article [caltech.edu]. A discussion about it and comments from a number of people can be found here [lucifer.com]. The discussion lends more perspective to the context and scope of the idea than Vinge conveyed in the brief original article.
  • Sorry, but I just have to chuckle when I see the man who picked Java for us over Self [sun.com], getting all concerned about technology advances.
  • Let's stop for a moment and consider what "self-replicating" really means. Using the movie "Terminator" as an example, what does it mean for robots to be self-replicating?

    Consider a screw. A tiny little screw. In order to make enough stainless 1/4-20 socket head cap screws to maintain self-replication (independent of humans), you would need a Brown and Sharpe screw machine just to keep up with the volume required. Now you need an additional team of self-replicating robots to operate and maintain that equipment. These machines need bar stock to feed them. Steel stock doesn't just grow on trees, so now you need another team of robots working down at the foundry to make the stock, to feed the screw machines, to make the screws, to make the robots. Now you need raw material to keep the foundry humming. Another team of robots working at the mine to dig the ore that feeds the foundry, that makes the stock, that feeds the screw machines, that make the screws, that make the robots. All this for one tiny screw.

    The point behind this little thought exercise is to get you to think about tools and materials and where they come from. Humans have spent all of our existence (from rocks to rockets) perfecting their use, and I doubt my Lego Mindstorms can pull it off.
    _________________________

  • Postulate: There has been intelligent life in the galaxy besides us.

    Postulate: Those beings faced these same issues as us, with the same inevitable march toward intelligent machines.

    Postulate: Those intelligent machines would inevitably go out of control and eliminate/enslave/whatever the original species.

    Postulate: The intelligent machines would be capable of original thought.

    Given these assumptions, you would have to assume that they would have the "desire" to reproduce as much as possible. Once the resources of the original planet were exhausted, they would naturally look toward moving into space. Presumably time would mean less to a machine, and the idea of sub-light space travel wouldn't be a huge deal.

    Therefore, given enough time, they should take over the entire galaxy, if not the universe.

    Since this hasn't happened in the approximately 13 billion years this galaxy has existed, I conclude that it is not a very likely occurrence.

    It would be interesting to see a mathematical analysis of how long it would take robot spaceships to take over the whole galaxy, given some reasonable parameters for how long it would take to subsume a planet, build new spaceships, etc. Of course, it would have to take at least 50,000 years even at lightspeed (half the width of the galaxy, assuming they start in the middle), so I would guess about 2-3 times that, or 100,000-150,000 years. Double that if they start at one end of the galaxy instead.
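
    A back-of-the-envelope version of that analysis (hop distance, travel speed, and per-system build time are all invented parameters):

        galaxy_radius_ly = 50_000   # center to rim, light years
        hop_ly = 10                 # typical distance to the next star, assumed
        speed_c = 0.1               # cruise speed as a fraction of lightspeed, assumed
        build_years = 500           # years to subsume a system and build new ships, assumed

        hops = galaxy_radius_ly / hop_ly
        travel_years = galaxy_radius_ly / speed_c
        build_total = hops * build_years
        print(f"{travel_years + build_total:,.0f} years")  # ~3,000,000

    Even with these slow assumptions the wave crosses the galaxy in a few million years, an eyeblink against 13 billion, which only strengthens the conclusion above.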


    --

  • the problem will not be "rogue states, but rogue individuals."

    This statement reminds me quite a bit of Frank Herbert's "The White Plague". The basis is that a scientist - one lone genius - creates a plague to wipe out humanity.
    Of course, we've all seen doomsday scenarios. Our world may end up like the Diamond Age, or it may end up like The Matrix. More likely, I think things will just keep happening :)
    In the event that it doesn't, I quote Futurama's Bender: "Time to start lootin'!"

  • This isn't exactly what the article was about, but many people are posting about this, so I just want to respond.

    Artificial Intelligence will NOT turn against human beings. That is a myth propagated by movies like Terminator and The Matrix. In fact, it seems quite obvious to me that AI would want to work with human beings to accomplish its goals. This is because while the AI might be more intelligent than us in some ways, it most likely won't be better than us in all ways. And besides, killing off the human race would be considerably more difficult than working with them.

    It goes further than that. When we start writing AI, it will be very easy to write it such that it would not hurt us. This is because we will have complete control over its emotions (the idea that AI would be emotionless is ludicrous). We can program them such that helping human beings makes them happy, and hurting humans makes them sad. We can also program them to work for the good of the many, rather than the good of themselves. We can basically eliminate all of the flaws in human nature. I think the fears that people have are based on the flawed assumption that AI would think like human beings. (You know what happens when you make an assumption: you make an ass out of you and ... umm ... umption.)

    The problem comes when we make this software open source. There are flawed human beings out there, and they could re-program the AI to give it evil emotions (make it happy when it kills). How much of a threat this is remains to be seen, however.
    ------
    -Everything has a cause
    -Nothing can cause itself
    -You cannot have an infinite string of causes

  • how did bread and vaccines threaten to destroy humanity?

    I read this in a Salon article some time ago (I believe) comparing past inventions to the recent craze of "we're all doomed" predictions, like the claim that the Internet is isolating us and will destroy society, which is false.

    The invention of bread by the Egyptians meant that people could sustain their hunger and were no longer drawn by starvation into groups to hunt. This threatened to break apart an important part of the Egyptian society.

    Vaccines were similar, because people no longer had to group to stave off disease. They could cure it on their own.

    These inventions, at their time, were considered threats to humanity and society because they broke up a delicate framework, something I believe we're incapable of doing. We can build, yes, but break? Not so easy.


    ------------
  • Yes, neural nets don't have to be explicitly designed at a low level. But that doesn't mean that you can just throw one together, throw data at it, and get it to work. First, you've got to design your network, then you've got to figure out how to train it.

    Neural nets can be evolved through Genetic Programs. You basically have a genetic program that describes how to grow the neural net (I don't have a reference handy at the moment, unfortunately). So it's not necessary to design it.

    One thing we do know about the brain is it is not just a bundle of neurons. Those neurons have an organization that is genetically programmed.

    Well then evolve the organization through genetic programming!
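
    A toy sketch of "evolve it, don't design it": random mutation plus selection on a tiny fixed-topology net, here learning XOR. (A real neuroevolution system would evolve the wiring too, via a growth program like the one described above; this only evolves the weights.)

        import random

        def net(w, x1, x2):
            # Two ReLU hidden units feeding one linear output.
            h1 = max(0.0, w[0]*x1 + w[1]*x2 + w[2])
            h2 = max(0.0, w[3]*x1 + w[4]*x2 + w[5])
            return w[6]*h1 + w[7]*h2 + w[8]

        cases = [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]  # XOR truth table
        def fitness(w):
            return -sum((net(w, a, b) - t)**2 for a, b, t in cases)

        best = [random.uniform(-1, 1) for _ in range(9)]
        for _ in range(20000):
            child = [g + random.gauss(0, 0.1) for g in best]
            if fitness(child) > fitness(best):
                best = child
        print(fitness(best))  # should climb toward 0 as the net fits XOR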
  • If you look at some of the GA derived programs for simple problems like an ant colony collecting food, they suck. Full of dead code (like "if (next to water) then if (not next to water) then 100-lines-of-never-reached-code-here"). But they work. At least for the sample problem set, and problems that are similar.

    IMO, this is a strong piece of evidence that natural life evolved (rather than being created). Because in natural organisms, as in GPs, there is a lot of redundancy, or dead code so to speak, in the DNA (and no doubt in our brains as well).

  • There is a great article in the latest issue of Wired covering Bill and this interesting topic.

    Actually, that article is what the article that this article is about is about! :-)

  • > Self-replicating machines? Nanotechnology run
    > amok? Machines that become smart and enslave
    > humanity? Please, this is reality, not an
    > episode of star trek.

    Those are all pretty big threats; I don't see how you can brush them off so easily. IMHO, far more dangerous than nukes. We've lived with nuclear power for over half a century, and most of us have benefited. Cheaper electricity, lower CO2 emissions, less consumption of fossil fuels. There have been disasters. Some accidental, a couple deliberate, but the nuclear armageddon so many have predicted hasn't happened. It still might, but now we have far greater dangers. AI enslaving mankind is not merely a Star Trek episode; I've seen it on Outer Limits, The Terminator, and The Matrix, to name a few.

    Nanotech run amok is a danger, but only from sufficiently adaptable nanites. Simple, we just don't build any like that, right?

    When you have enough experts thinking about it for a long enough time, someone is going to build it, just for curiosity's sake. Or maybe trillions of particles of radiation hitting trillions of nanites will cause most to die, but one to become dangerous. When you start talking about self-replicating machines, you have to be very careful. If evolution can happen to wet nanites (bacteria, viruses), it can happen to dry nanites, too.

    I'm not saying we shouldn't investigate it. It's a pandora's box. First comes the evil, then the good that helps us survive the evil. We might wipe ourselves out with nukes, or we might use nuclear propulsion to colonize Mars, Titan, or Alpha Centauri. Nanite-boosted immune systems might defend our bodies from rapidly evolving nanite plagues. If AI turns evil on us, we might build smarter, stronger AI to defend us.

    We just have to be careful, stay paranoid, and don't stop asking questions.
  • Ah, so now I understand Hemos' obsession with nanites...I think we know where the plague will be coming from.
  • Tech sure as hell *can* be progressive for the human race, however. Unfortunately, many people want to advance tech without putting in the time necessary to maintain vigilance against abuse of said progressions. For instance, I can see bionics being abused very easily, especially by governments, but even by private sector corporations. Why pay a secretary all that money to type when you can just have him/her implanted with recording and playback cyberware? Where does her/his life go once she/he is implanted?

    OTOH, cyberware and bionics is a Good Thing(TM) in that it can assist the blind and deaf and can help those with birth defects (such as malformed feet) to become more self-reliant.

    What we *must* do is keep a check on private and government interests. We have to stop them from abusing these progressions and trashing basic humanity.

  • Is such an entity permitted to value its own self-preservation?

    Ooh. That's a toughie. I don't know when human-created sentience will occur, but these are exactly the thorny questions that have to be answered. I, for one, would abhor a sentience that would not be allowed to be self-determined. As scary as it may seem, it's just not the type of thing that I want to see. Slavery of any sort (even robotic slavery) is just plain Wrong.

    Where do we go from here?

  • Many of Kurzweil's points are similar to Bill Joy's. The difference is the conclusion. Kurzweil has a rosy view that we will be able to download ourselves into the network. If you think that implausible, then his book describes an extinction scenario much like the one in Bill Joy's comments.

    In my opinion, Kurzweil's analysis of the evolutionary dynamics of a world-wide web of downloaded humans is flawed because it ignores fundamental aspects of ecology and evolution. Specifically, here are two issues with his conclusion:
    a) it assumes humans in a different environment will still act human, with classical human motivations (as opposed to dissolving into an unrecognizable set of bits or simply locking into a pleasure loop), because to a large extent environment elicits behavior, and
    b) it ignores evolution and its implications in the digital realm (especially the enhanced pace of evolution in such a network and the implications for survival).
    Of these, the most important is (b).

    Evolution is a powerful process. Humans have evolved to fit a niche in the world -- given a certain environment which includes a 3D reality and various other organisms (including humans). Humans have immune systems (both mental and physical) capable of dealing with common intellectual and organismal pathogenic threats in their environment. There is no easy way to translate this to success in a digital environment, because the digital environment will imply different rewards and punishments for various behaviors, and evolve predators and parasites which these immune systems have never been exposed to before. Human-style intelligence is valuable in a human context for many reasons -- but sophisticated intelligence is not necessarily a key survival feature in other niches (say, smaller ones the size of roaches, hydra or bacteria). In short, the human way of thinking will be inadequate for survival in the digital realm. Even augmented minds that are connected to the network will face these threats and likely be unable to survive them. Kurzweil discusses the importance of anti-viral precautions in his book, but I think he is rosily optimistic about this particular aspect.

    At best, one might in the short term construct digital environments for digital humans to live in, and defend these environments. However, both digitized human minds and immensely larger digitized human worlds will be huge compared to the smallest amount of code that can be self replicating. These digital "bacteria" will consume these digital human minds and worlds because the human minds and worlds will be constructed, not evolved. Human minds will be at a competitive disadvantage with smaller, quicker replicating code. Nor will there be any likelihood of a meaningful merger of human mind with these evolved and continually evolving patterns.

    I could endlessly elaborate on this theme, but in short -- I find it highly unlikely that any mind designed to work well in meatspace will be optimal for cyberspace. It will be overwhelmed and quickly passed by in an evolutionary sense (and consumed for space and runtime). It is likely this will happen within years of digitization (but possibly minutes or hours or seconds). As an example experiment, create large programs (>10K) in Ray's Tierra and see how long they last! http://www.hip.atr.co.jp/~ray/tierra/tierra.html [atr.co.jp]

    Our best human attempts at designing digital carriers (even using evolutionary algorithms) will fail because of the inherent uncompetitiveness of clunky meatspace brain designs optimized for one environment and finding themselves in the digital realm. For a rough analog, consider how there is an upper limit on the size of active creatures in 3D meatspace for a certain ecology. While something derived from pieces of a digitized person might survive somehow, it will not resemble that person to any significant degree. This network will be an alien environment, and the creatures that live in it will be an alien life form. One might be able to negotiate with some of them at some point in their evolution, citing the commonality of evolved intelligence as a bond -- but humanity may have ceased to exist by then.

    In short, I agree with the exponential theme in Kurzweil's book and the growth of a smart network. We differ as to the implications. I think people (augmented or not) will be unable to survive in that digital world he predicts for any significant time period. Further, digital creatures inhabiting this network may be at odds with or indifferent to human survival, yet human civilization will likely develop in such a way that it is dependent on this network. The best one can hope for in the digital realm is "mind children" with little or no connection to the parents -- but the link will be as tenuous as a person's relation to a well-cultivated strain of Brewer's yeast, since the most competitive early digital organisms will be tiny.

    Once you start working from that premise -- the impossibility of people surviving in the digital world of 2050 -- then Kurzweil's book becomes a call to action, just like Bill Joy's comments. I don't think it is possible to stop this process, for all the reasons both people mention. It is my goal to create a technological alternative to this failure scenario. That alternative is macroscopic self-replicating (space) habitats. http://www.kurtz-fernhout.com/oscomak However, they are no panacea. Occupants of such habitats will have to continually fight the self-replicating and self-extending network jungle for materials, space, and power. (Sounds like the making of a sci-fi thriller...) And they may well fail against the overwhelming odds of an expanding digital network without conscience or morality. Just look at Saberhagen's Berserker series http://www.berserker.com/ [berserker.com] or the Terminator movies.

    It will be difficult for Kurzweil to change his opinion on this because he has been heavily rewarded for riding the digital wave. He was making money building reading machines before I bought my first computer -- a KIM-1. But I think someday the contradiction may become apparent in thinking the road to spiritual enlightenment can come from material competition (a point in his book which deserves much further elaboration). To the extent material competition drives the development of the digital realm, the survival of humanity is in doubt.

  • Eric Drexler and Hans Moravec have been writing articles like this for over a decade. (Drexler thinks it's bad; Moravec thinks if the machines are better than us, they deserve to take over.) Read Moravec's latest book [amazon.com] for a more thorough discussion of the subject.

    We're probably going to get de novo design of biological life before we get both nanotechnology and AI. I'm more worried about that. So far, most so-called "genetic engineering" is hacking; those guys try stuff and see what happens. It's like electronics in 1880. When the tools become available to design an organism from the DNA sequence up, genetic engineering will be real, and very powerful. We're closer to doing that than we are to nanotechnology.

  • You bring up the argument of gods. Who's to say that what we view now as "god" isn't just what you're talking about? We almost have the technology to create life. If another race of beings far off has that technology... they spawn super-intelligent machines, those machines in turn spawn better and more intelligent machines... is it so unbelievable that life on earth was created by a machine? How do you define god? How do you define intelligent or "perfect"?

    Oh well.. I'll shut up now. I'm beginning to sound like an Asimov short.


    -FluX
    -------------------------
    Your Ad Here!
    -------------------------
  • by Anonymous Coward on Sunday March 12, 2000 @11:10AM (#1208432)
    1) I remember speculation concerning an (a)periodic return of cooling and glaciation. There's nothing stupid about that at all. We know that climatic change is for real, and the variations we have seen and have cultural records of are almost insignificant noise compared to the climate swings of the "ice age" and before -- which was an eyeblink ago in geologic time.
    The climate can change dramatically and very fast. It remains an unstable system and will certainly change again. The question facing us is not if, but when, in which "direction", and how fast.

    All human civilizations have flourished in a brief common moment of favorable climatic stability. All of them. Babylonians, Byzantines, and Bostonians have all shared a nice sunny day when it rained a little in the morning, cleared up around noon, never got too hot, and was pleasant enough to leave the windows open at night. The ice cores from Antarctica, though, tell us about a very different state of affairs reigning before our time. Our cultural assumptions about how to imagine changeable climate and how to possibly deal with it are therefore completely out of whack with what climate change is likely to be like when it arrives.

    There is good reason, moreover, to believe that our activities are capable of influencing and destabilizing the climate. We may radically influence atmospheric CO2 levels beyond what we directly put into the air ourselves by raising the global temperature enough to, for example, release CO2 frozen now in Northern Forest/Tundra peat -- of which there's an awful lot, aeons' worth. "Alarmists" point out that once a trend is established it can spark self-reinforcing effects that cause the trend curve to go parabolic. The Anti-Alarmists may point out that there are also counterbalancing factors that the "trend" itself may strengthen, causing the system to ultimately trend towards equilibrium. In this case, it would be that our fossil fuel burning raises CO2, making the earth's atmosphere warmer, eventually releasing more CO2 as the Northern regions thaw more each year; the extra CO2 could speed the growth of forests worldwide, thus stabilizing the system. But "Alarmists" really don't have to work hard to refute this Panglossian idea, as everyone knows, from unrelated debates, how rapidly global deforestation is progressing (picture the world on fire).

    We know for a fact that the Earth's climate is now warming. We don't know exactly why, or where it will lead. An agnostic stance with regard to the Greenhouse Effect per se, however, is increasingly becoming an exclusive product of ideological "la-laaa-laaa-ism" and an attempt to forestall the conclusion that the visible, obvious evidence of manmade environmental change will result in unintended, probably unfavorable ecological change (the Global Warming Scenario, as if by the author of Earthquake, The Towering Inferno, The Poseidon Adventure, and other cheesy '70s disaster pics).

    All things considered, it is just malignantly stupid to try to maintain that human activity -- deforestation, fossil fuel burning, etc. -- will have no effect on global climate. If you live in or near a metropolitan area, just paying attention to your local news' daily weather forecast is enough to show that how we shape the environment has a direct influence on climate, writ large or small. The important question is: will it be favorable or unfavorable, to what degree, and in what time frame?

    If you think that the population explosion is not a real problem, you should revisit the statistics for the spread of AIDS in Africa and South Asia, and global malnutrition statistics, and think again.

    Considering the likelihood that climate change will accelerate once begun, it should be clear that the prudent choice would be to moderate our contribution to warming factors and to curb global population growth as fast as ethically permissible (without resorting to warfare and the artificial famines it creates).

  • by maynard ( 3337 ) on Sunday March 12, 2000 @10:10AM (#1208433) Journal
    Should we treat dogs/dolphins/chimpanzees/octopi as 'tools'?

    If ever you wanted to study intelligent alien life here on earth, the Octopus is the one creature best suited for this goal. It's an invertebrate cephalopod, nothing like a mammal; meaning you're looking at a semi-sentient creature which diverged from our evolutionary line a good hundred million years past. Basically, you're looking at a very smart snail. They use copper to move oxygen within their blood. They can control multiple arms and hundreds of individual suckers at will without blinking an eye. They signal emotional states by changing skin color at will, also using this advantage as camouflage. They have excellent eyesight, long term and short term memory, they can solve complex problems and may even be able to logically reason if taught how.

    All of the creatures you mention, as well as the elephant and parrot, deserve better treatment than we humans provide. These creatures are damn near sentient and could provide a wealth of information on how self-perception works in the real world. Plus it just seems wrong to me that we maintain this dichotomy between humans and other obviously self aware creatures simply because it's inconvenient.

    You may believe that your God gave you all the planet to do with as humanity wishes, but frankly even if that were the case don't you think He would find our indifference to their plight both shocking and disgusting? And how is that different from mechanical consciousness?

    Personally, I agree with the hard-AI community that self awareness is a computational process which can be replicated mechanically. From that perspective I must conclude that either we value those creatures which behave with some self determination and will by providing legal rights to them as we do to ourselves, or we might as well not value the sanctity of human life either.
  • Another possibility is that radio emissions from more advanced technologies resemble noise more and more as they get increasingly advanced.

    Look at Morse code, then AM radio. AM radio looks just like a frequency-shifted version of the voice/sound pattern (because it is). With FM radio it's a good deal harder to figure out what it is trying to say just by looking at it, but it is obvious that something is there. As for CDMA, I can't find CDMA on a spectrum analyzer, and I even know where it lives on the frequency band!

  • by Abigail-II ( 20195 ) on Sunday March 12, 2000 @07:35PM (#1208435) Homepage
    The point behind this little thought exercise is to get you to think about tools and materials and where they come from. Humans have spent all of our existence (from rocks to rockets) perfecting their use, and I doubt my Lego Mindstorms can pull it off.

    It would be an interesting exercise to build a robot out of Lego pieces, that, when placed in the middle of a heap of Lego pieces, can build a copy of itself.

    The next exercise would be to have the robot build a close approximation of itself when not all the right pieces are available. (Mutant robots!)

    -- Abigail

  • by The Wookie ( 31006 ) on Sunday March 12, 2000 @09:50AM (#1208436)
    Way to go, Bill Joy!


    Wouldn't you feel better about the future if you knew that the only company that would be developing such a thing is Microsoft? (allowing them to continue unchecked and take over the world)


    Hell, you'd end up with a creature that drowns when it tries to take a shower, getting stuck in the "lather-rinse-repeat" infinite loop.


    Microsoft Saves Humanity! Woo hoo!!

  • by costas ( 38724 ) on Sunday March 12, 2000 @12:14PM (#1208437) Homepage
    Hear, hear... without having read the actual article, it sounds like Joy is extrapolating too much from current trends. It almost sounds like he saw, oh, I dunno, 'e' and thought, "oh, this is a nice line editor... maybe we can extrapolate from here and create a multi-line 'e'"... oh, hold on, he already did that ;-)... (disclaimer: I am joking, I have the utmost respect for the man, and hjkl are as natural to me as, well, arrow keys ;-)...

    All the technologies he mentions are collaborative ones, i.e. they cannot be developed and/or applied by some mad scientist in a basement. They require organized, coherent teamwork. I.e., they do require rogue states, not rogue individuals.

    More importantly, when something hits an extreme, it creates a backlash, a return towards equilibrium; that is as true of society as it is of physical systems. When the Internet/technology/genetics reaches the edge of acceptable use/behavior, society will change to compensate. Look into the past: the Middle Ages created the Renaissance, the '50s brought the '60s, the '80s spawned the '90s... Our technological ethics will change to accommodate our technologies...


    engineers never lie; we just approximate the truth.
  • by Junks Jerzey ( 54586 ) on Sunday March 12, 2000 @09:42AM (#1208438)
    The reason I can believe that a Bill Joy scenario could occur is that the technology age has, in many ways, eradicated common sense beliefs of the past. People are making mistakes on a grand scale all the time, but it's excusable because, hey, it's technology! We need to advance at any cost!

    Consider the fanaticism a huge number of people show for upcoming CPUs and video cards--e.g. Athlon, GeForce, etc. These fans don't really have a deep understanding of where the performance is coming from, or to what extent a current CPU or video card can or has been pushed. The view is "newer = better," and that's enough to fuel raging passion. This is causing people to upgrade left and right and increase the base level of machines available. Now we have 500 MHz Pentium III machines with ATI Rage 128 video cards being used for airline scheduling where a 486 would be sufficient. For absolutely no benefit, power consumption is maybe 10-50x higher than it needs to be. So even in this era of supposedly increased conservation and environmental awareness, we're just pointlessly wasting power and don't care.

    That's the kind of thing that sneaks up on you without realizing it. Twenty years ago, no one would have believed such gross negligence would have been possible. The core of most "technology will doom us" arguments is that we advance without thinking. And that's exactly what we've proven ourselves to do, especially in recent years.
  • by rogerbo ( 74443 ) on Sunday March 12, 2000 @09:14AM (#1208439)
    Well, AI is a joke; it takes more than just computing power to make a truly intelligent machine. But as for the rest of them, he forgot a few.

    Much more likely than an artificially created virus is the likelihood that a killer virus will mutate naturally in a catastrophic way. Every Boeing 747 is an enormous hermetically sealed tube for spreading viruses from one part of the planet to another within days. Imagine something with the destructive power of Ebola that was airborne, with the ease of contagion of the flu.

    Sure, science can create a vaccine, but HIV/AIDS has been around for 20 years, and although we can control it to some extent, we still don't have a vaccine.

    Plus there's the possibility that the continuing extinction of species in places like the Amazon will start a domino effect, i.e. some vital species that many others depend on for survival goes extinct, causing a snowball effect and massive extinction of species.

    Humanity's only long-term guarantee of survival is to spread ourselves over as many biospheres as possible.

    "what are you here for?
    We're all here, we're all here to go,
    earth is going to be space station and we're here to go into space, that's what we're here for.
    Do I hear any questions about that?"

    William S Burroughs, Dead City Radio
  • by BBB ( 90611 ) on Sunday March 12, 2000 @07:39AM (#1208440)
    The mathematician and AI researcher (and SF writer!) Vernor Vinge came up with this a long time ago. Basically he points out that if we create a machine that is smarter than ourselves, it will do the same with respect to itself. Vinge, however, doesn't see this as necessarily bad -- for humans it would, on some interpretations, be "like living in a universe alongside benevolent gods." After all, given that these machines could satisfy our every whim without sacrificing more than a fraction of their productive/computing power, why should we fear them?

    That is just one view, of course. To read Vinge's original paper on this idea, go here [caltech.edu]. Also, I think the comment in the original story is pretty lame. It implies that if we smart people get together and discuss these problems, we'll figure out a way to prevent them from occurring. That's ridiculous. The only thing that happens when technocrats get together is that we get new rules and new ways of controlling the future. No way, I say. Let the future happen in its unpredictable fashion, and we'll all be better off for it.

    BBB

  • by friedo ( 112163 ) on Sunday March 12, 2000 @10:13AM (#1208441) Homepage
    The funny thing is that the same people who say "we have no idea at all on how human intelligence works" are the same who say "Deep Blue isn't really intelligent, all it's doing is a very fast search on different possible plays". If they really have no idea on what is intelligence, how can they say intelligence is not the ability to do a quick search on different possibilities?

    Well, because it's not. Deep Blue is able to beat chess masters because it has enough computing power to search all possible moves several generations into the future and pick the best one. Obviously, no chess master's brain can do that. Deep Blue's accomplishments are NOT that significant at all. The mathematics of what it does could easily have been worked out centuries ago; it's simply the first machine capable of actually doing the math. Human chess players have intuition. Because they've played several thousand games during their lifetime, they can see a certain combination of positions on a board and just know what play to begin exercising and what predictions to focus on. They can stare at their opponent to try and see if he's bluffing. They can make instinctual decisions without predicting every move in the future. When a computer can do that, please let me know -- I'll be impressed.

    Every day you are confronted with thousands of choices. Most of them you make without really thinking, and most have several factors involved. Everything that you've done prior to that moment has a bearing on your current decision. You weigh actions vs. consequences. Priorities vs. Wants, etc., etc., etc. I have yet to see a machine that can make these types of decisions appropriately.

    Take the example of something more fast-paced than chess, like soccer. If you're playing defense, and a forward is running the sideline with the ball, you have very little time to move. There are a million different things you could do, but only one will save the day. The only way you could know which one is to be in that situation right then, and have to make a split-second decision. So, no, we don't have AI. I don't predict we will for quite some time.
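
    For reference, the brute-force procedure being described is minimax search over the game tree. A toy sketch (Deep Blue layered pruning, a huge evaluation function, and custom hardware on top of this skeleton):

        def minimax(node, maximizing=True):
            # Leaves are integer scores; interior nodes are lists of children.
            if isinstance(node, int):
                return node
            scores = [minimax(child, not maximizing) for child in node]
            return max(scores) if maximizing else min(scores)

        # A depth-2 toy game: we pick a branch, the opponent then minimizes.
        tree = [[3, 12], [2, 4], [14, 1]]
        print(minimax(tree))  # -> 3: the branch whose worst case is best

    Nothing in that loop looks anything like intuition, which is the poster's point.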

  • by sstrick ( 137546 ) on Sunday March 12, 2000 @07:55AM (#1208442)
    While rapidly advancing technology could pose a threat, I would prefer to live with that threat and risk humankind than accept the alternative.

    That is, to stop developing and advancing human technology. The world would be a little boring if everything we shall ever invent had already been invented.
  • by sjames ( 1099 ) on Sunday March 12, 2000 @09:22AM (#1208443) Homepage Journal

    Try as one might, genetics and nanotechnology are not easy fields for individuals to work in on their own. They require extensive amounts of equipment, much of it high-tech, since much of the work has only developed over the past twenty years.

    Most things become easier in time. An eight-year-old with a chemistry set today does things incomprehensible to the greatest minds of the 1st century, and doesn't think much of it. At one time, the 'hello world' program was a big deal (especially when it had to be wired in). Now, it's literally child's play.

    It's not time to head for the hills by any means, but these things CAN come to pass. The best hope is that the same technology can be used to avert disaster. The nasty self-replicating robots will be destroyed by 'good' self replicating robots, for example.

  • by edhall ( 10025 ) <slashdot@weirdnoise.com> on Sunday March 12, 2000 @01:49PM (#1208444) Homepage

    How soon we forget.

    There was a time when incineration of much of the civilized world was always 20 minutes away, not 20 years. Whether secondary effects (so-called Nuclear Winter) would have led to eventual extinction or not seems rather beside the point -- the world as we knew it would have ended. That it did not happen was, more than many of us realize, a matter of sheer blind luck.

    There were, and are, only two powers in the world who could bring about such a global catastrophe. The reason for this limitation is more a matter of the enormous cost of producing nuclear weapons than the technological difficulty of doing so. For now, and for the near future, nuclear physics is too expensive for more than just the US and Russia to put civilization at risk.

    What Bill fears, I think, is the development of technologies as threatening as those which came from nuclear physics, but without the economic barriers. Consider: what if Moore's law applied to nuclear weapons as well as integrated circuitry? What if it does apply to the next destructive technology? Or: what if a chain reaction of self-replicating agents -- whether biological, nanotechnological, or self-organizing -- proves much cheaper than the nuclear variety? By harnessing the existing biological, molecular, or technological environment to its ends, could a technology be created where such replication to worldwide scale came (from the creator's perspective) essentially for free?

    The cheaper it becomes to develop the technical means to threaten humanity, the more likely it will be that a state, group, or even person will be insane enough to exploit it. It's the change in economics that increases the danger. Economics explains why New York isn't hip-deep in horse manure just as it explains why basement-lab nuclear weapons don't exist, even though the knowledge necessary to produce them is readily available. Cheaper, faster alternatives became available in the first case. Are we ready for such alternatives in the second case?
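
    To see what that thought experiment implies, run the numbers: cost halving every 18 months from a Manhattan-Project-scale starting point (the $2 billion figure and the $10,000 cutoff are illustrative only):

        cost = 2_000_000_000.0   # dollars, 1945-ish, assumed
        year = 1945.0
        while cost > 10_000:     # until it's hobbyist money
            cost /= 2
            year += 1.5
        print(f"by {year:.0f}: ~${cost:,.0f}")  # by 1972: ~$7,629

    Eighteen halvings, under three decades. That compression of cost is the whole danger.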

    -Ed
  • by Guppy ( 12314 ) on Sunday March 12, 2000 @07:43AM (#1208445)

    Here's a little quote from "The Difference Between the Sexes", by E. Balaban (ed) and R. V. Short (ed):

    "Perhaps the lifespan of a species is inversely proportional to its degree of intellectual development? The probability that a species that has evolved to be as intelligent and all-conquering as ours could survive for long is remote indeed. We may live in a silent universe for a very good reason. Paradoxically, evolution may have ensured that we have one of the shortest survival times of any species, since it has made us, effectively, our own executioner."

  • You can't build an artificially intelligent computer unless you have a damn good idea of those things. You can't build something with desires, emotions, etc. unless you know, in detail, what desires and emotions are, at a far deeper level than we do now.

    Your entire argument is based on the premise of top-down design - that the Right Way to build an AI is the classical engineer's approach of designing the thing as you would design any other machine or piece of software.

    Fortunately, most people now recognise that this approach is doomed, for the exact reason that you point out: an "intelligence" of any sort is much more complex and less well-understood than anything we've ever had to design.

    So, what's the alternative? Automated bottom-up design. Specifically, the idea is to first work out the building blocks -- the equivalents of neurons -- and then have a GA or somesuch start trying to put together a "brain" out of these neurons, which is fit for a specific purpose. Note that this alternative doesn't require one to understand in excruciating detail (or at all) the high-level abstractions which we consider as "intelligence"; it only requires a good GA and a good understanding of the brain at the cellular and subcellular level.

    Now this I don't consider far-fetched at all.

    (Of course, it's always worth mentioning that we could go the other way - first using nanotech to completely redesign ourselves into super-intelligent cybergods, then analysing our own new brains and replicating them to create completely new, fully artificial intelligent beings.)
  • by chazR ( 41002 ) on Sunday March 12, 2000 @07:41AM (#1208447) Homepage
    Assuming that advances in technology continue, I think it is reasonable to postulate that at some stage we will create sentient beings. Whether this is done in software, or uses nanotechnology, or biotechnology or whatever, it raises some interesting ethical questions. Is such an entity permitted to value its own self-preservation? What if this leads to conflict with humans?

    Do we have a right to construct entities that place human well-being above their own well-being? (Asimov's 'Laws of Robotics' or similar)

    If we do this, aren't we dangerously close to building slaves?

    These comments do not necessarily reflect the views of the author.
  • by Hrunting ( 2191 ) on Sunday March 12, 2000 @08:09AM (#1208448) Homepage
    Before all the geeks in the world go hurling themselves off their rackmounts, let's take a look at some of Bill's assumptions.

    Artificial Intelligence
    A lot of Bill's thesis is based on the assumption that we'll be able to create sentience in machines. Yes, computers are getting faster and yes they can even seem to think sometimes, but folks, we don't even understand how our own brains work, much less have the power to create artificial ones. Things like thought require a much deeper understanding than we're likely to achieve in the next 20 years. Don't get me wrong, I think someday we'll be able to do it, but the trials will be long and hard, and the people who do it will really understand how to make it right. I also don't think I'll see it in my lifetime (I'm 22 now).

    Replication
    In terms of machines, a lot of this has to do with artificial intelligence. The creative leap required to construct something and change it is pretty huge. As for nanorobots in our bloodstream, they need to find the parts, and they most likely won't be in the same environment in which they were created. Genetics is scarier, of course, because living things already have the ability to reproduce, but most work done in genetics is done under the constant shadow of "what bad things can this bring". I don't think genetics is all that easy a field for an individual to work in as a radical either. It takes an extraordinary amount of time and equipment. The most likely disaster of bioengineering is something that causes the death of a significant member of the planetary cycle (like trees or bees, for instance), which has been a constant concern from day one.

    The Free Radical
    Try as one might, genetics and nanotechnology are not easy fields for individuals to work in on their own. They require extensive amounts of equipment, much of it high-tech, since much of the work has only developed over the past twenty years. It's still much more likely that some nut is going to get his hands on some plutonium leaking out of an impoverished former superpower and create a home-made nuclear weapon than it is that someone is going to create a killer replicating robot.

    And Bill ignores a lot of other ways we can kill ourselves. Civil strife, environmental pollution, global warming, and, my personal favorite, contact with a hostile alien species (didn't Independence Day look real?). The fact is, since day one, humans have been faced with causing their own extinction (overhunting, overfarming, overpolluting, travel spreading disease, etc. etc.) and we've done just fine recognizing and adapting to these problems. The one thing that nobody ever seems to factor in is the human response to adversity. We can change our environment, and once we've changed it, if something's wrong, we can change it further (not back), so that we can live in it.

    p.s. And did anyone notice that Bill was called 'phlegmatic'? I thought they meant 'pragmatic', but that's one helluva typo.
  • by w3woody ( 44457 ) on Sunday March 12, 2000 @09:30AM (#1208449) Homepage
    Two points.

    (1) If anyone here remembers their history, they'd remember that the environmental problem du jour in the 1970s was global cooling, not global warming. The truth of the matter is that the jury is still out on global warming -- the best we can say is that we have some interesting localized weather patterns, but there is no evidence of any sea levels rising or any non-natural weather patterns changing. (And as for those who provide "statistical evidence" -- if you look closely enough, they're cooking the books, combined with weather simulations which they believe will predict the weather beyond the normal 7-14 days most simulations actually work.)

    My point is that if you listen real carefully, even global warming is in the "disaster which will wipe us out in 10-20 years" category--far enough away that it seems possible (especially on warmer spring days), yet close enough to actively fear.

    By the way, you forgot the ozone hole--though there are those who are starting to think it ain't the problem it once was, only because ground-level UV levels have not changed one iota. But there are those who still believe that in 10-20 years we're going to have to go out in the sun with SPF 5000 or die.

    That's okay; I still remember when I was growing up in the 1970s that we were to run out of oil by 1990. That is, we would deplete all of the world's oil reserves by 1990, and because of it, civilization would collapse, causing wars à la "Mad Max" to break out throughout the world as people struggled to find the last little caches of hoarded gasoline.

    I have a real hard time believing in any disaster that will kill us in 10-20 years unless someone comes up with some really hard facts -- like perhaps a photograph and orbital plot of the asteroid that is supposed to kill us all. I just remember too many disasters that were supposed to wipe us out in 10-20 years while growing up (oil depletion, population explosion, global cooling, etc.) -- and we're still alive.
  • by ucblockhead ( 63650 ) on Sunday March 12, 2000 @07:42AM (#1208450) Homepage Journal
    This idea, that technology will kill us all, is not new. It started around World War I and really gained momentum with the invention of the bomb. And for the most part, the timeframe in which destruction (from war/pollution/technological change) was going to rain down has always been something like 10-20 years in the future. Close enough to be something to fear, but far enough away to seem likely.

    Such ideas are almost always based on linear trends. Just like the guy in the late 19th century who projected that New York would be hip-deep in horseshit by the year 2000. That's what the trend showed, after all.

    This is not to say that we shouldn't worry about the downsides of technological progress, but for the most part, these "global extinction" thoughts are fueled by accentuating the negative and ignoring the positive.

    Bad things will almost certainly happen in the future. Maybe even very bad things. But destroy the human race? Not likely. Slow it down, even? Probably not. The worst global disaster with real evidence behind it that we face right now is global warming, and while global warming could cause a lot of discomfort, with sea levels rising and weather changing, the human race would certainly survive.

  • by ucblockhead ( 63650 ) on Sunday March 12, 2000 @07:51AM (#1208451) Homepage Journal
    It didn't seem unreasonable to people in 1950 that we'd have artificial intelligence by 1965. It didn't seem unreasonable to people in 1970 that we'd have artificial intelligence by 2000. It didn't seem unreasonable to many of my professors and or fellow students to think we'd have it by 2010 when I studied it in 1985.

    The error is in thinking that AI is just a matter of getting enough transistors together. Hardly! The real problems in AI are not hardware speed so much as what to do with that hardware to make it intelligent. This is not a trivial problem. It is an extremely difficult problem, IMHO probably the hardest problem the human race has ever faced.

    The question nobody even has a coherent theory for right now is: what would an (artificially) intelligent computer do? What would be its desires? Would it also have emotions? If so, what would it feel?

    And this is really the key thing. You can't build an artificially intelligent computer unless you have a damn good idea of those things. You can't build something with desires, emotions, etc. unless you know, in detail, what desires and emotions are, at a far deeper level than we do now.

  • by Jikes ( 123986 ) on Sunday March 12, 2000 @07:49AM (#1208452)
    Self-replicating machines? Nanotechnology run amok? Machines that become smart and enslave humanity? Please, this is reality, not an episode of star trek.

    Finally, he argues, this threat [machinery] to humanity is much greater than that of nuclear weapons because those are hard to build.

    HAHAHA!

    Please. We can't even write a web browser within three years, much less program sentient robot roaches that could destroy our planet.

    There's only, like, what, forty thousand nukes extant on earth, each capable of wiping out millions of lives in five minutes? Many capable of poisoning an entire planet for millennia if detonated close enough to the ground? ALL of them are owned by warmongering, jingoistic, pathologically disturbed political entities who have NO QUALMS whatsoever about using nuclear warheads whenever it is convenient?

    Nuclear weapons, traditionally developed viruses, lethal bacteria, political unrest, riots, the complete disruption of climate, economic decay, and plain old steel bullets fragmenting children's skulls into explosions of bloody brain and bone (just like the children of Kosovo who the entire world is eagerly attempting to exterminate) are ALWAYS going to be more of a concern to me than sentient computers messing with my tax return. This article sucked. Perhaps the real thing will explain stuff better.

    The most dangerous aspect of living on earth is that we are sentient. If we weren't, we wouldn't give a shit what happens in the long run. (which we don't, when it gets down to it)
