Science

Robots vs. Humans And Other Security Issues

An Anonymous Reader submits word that "CNN.com is presenting an article on the World Economic Forum, suggesting that scientists predict a future danger of humans being taken over by robots. The exact lead-in reads, 'Scientists at this week's World Economic Forum have predicted a grim future replete with unprecedented biological threats, global warming and the possible takeover of humans by robots.'"
This discussion has been archived. No new comments can be posted.

  • Why does the link point to slashdot.org?
  • it just refers back to Slashdot.

    If it's a true report, then we as taxpayers probably paid 40 mil. for that useless piece of crap
  • Robot wars? (Score:4, Funny)

    by Bert64 ( 520050 ) <bert@[ ]shdot.fi ... m ['sla' in gap]> on Saturday February 02, 2002 @07:01PM (#2943684) Homepage
    Well, judging by the average weaponry of robot wars / battlebots robots, they have a long way to go...
    Unless they want to just ram us into extinction with wedge shaped chunks of metal.
    • or maybe they'll just use their sloped and slick armor to come under us and knock us on our backs so that we can't get up again.
    • Re:Robot wars? (Score:3, Informative)

      by modulus ( 67148 )
      The amateur robots of BattleBots, etc. are by no means representative of the current state-of-the-art in robotics or machine intelligence. They're remote control toys made in garages, for goodness sake!

      Interesting link to (only one example of) a far more current, advanced robotic system. Quote: "Chinese researchers said they have engineered a hand, as deft as a human's, for a space robot which will soon be sent into space as a prelude to the country's first manned space mission."

      http://www.spacedaily.com/news/china-02e.html
      • And as a special bonus, the hand will feature Real Kung Fu Grip.
      • Re:Robot wars? (Score:2, Interesting)

        by Tablizer ( 95088 )
        (* The amateur robots of BattleBots, etc. are by no means representative of the current state-of-the-art in robotics or machine intelligence. They're remote control toys made in garages, for goodness sake! *)

        I would like to see REAL robots in the ring: machines that must use AI. The problem is that it may be hard to guarantee that there is no listening antenna taking cues from humans.

        It shouldn't be that hard to make a basic model: move toward the area where the pixels change the most from frame to frame. "Follow the Delta" (rough sketch below). Unless other bots start sending out moving decoys and so forth.

        Hey, this sounds like the Missile Defense System dilemma. Now who are those suited men at the d..........
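
        A minimal "Follow the Delta" sketch in Python, assuming grayscale frames arrive as NumPy arrays (the toy frames below are invented for illustration; a real bot would grab them from a camera):

        # Steer toward the image region that changed most between frames.
        import numpy as np

        def delta_target(prev, curr, thresh=30):
            """Return (row, col) centroid of changed pixels, or None."""
            delta = np.abs(curr.astype(int) - prev.astype(int))
            changed = delta > thresh          # mask of "moving" pixels
            if not changed.any():
                return None                   # nothing moved: hold position
            rows, cols = np.nonzero(changed)
            return rows.mean(), cols.mean()   # centroid = where to aim

        # Toy usage: a bright blob appears in an otherwise static scene.
        prev = np.zeros((100, 100), dtype=np.uint8)
        curr = prev.copy()
        curr[40:50, 60:70] = 255
        print(delta_target(prev, curr))       # ~ (44.5, 64.5)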
  • darn those wretched cars! they're replacing all the horses! (nothing new here)
    • The difference is that none of us are employed as horses.

      Also, I think it was Bill McDonough of the University of Virginia who, when asked about the design of "intelligent vehicles," pointed out that there exists an intelligent car that automatically avoids hazards, will refuel itself, and can find its way back to its home without the driver: it's called a horse.

  • Ummm, now we need to talk about Terminator Judgment Day. But wait, first the three laws of robotics, and then the HG - Heather Graham robot. Seriously, this scares me - humans tend to become lazy, and having robots do work for them... sheesh, not good, man. I feel this is more serious than cloning - people may laugh, but cloning can be "restricted" and made illegal; who's stopping the little program running in the background from telling the kernel that enough is enough... especially on this new Dell XP computer? This is scary stuff.
  • While this isn't an altogether unreasonable hazard in the long term, I think that we're in much more danger from ourselves than from Terminators in any sort of moderate-range view.

    Should I waste my extra cycles fretting about Decepticons or John Ashcroft? No brainer -- at least Megatron was straight-forward and didn't use terrorists as an excuse for his self-serving actions.

    • Wow. I didn't realize that people were actually dumb enough to still think like this.

      Why is it that people simply make the assumption, based solely on science fiction, that when we create true artificial intelligence it will immediately want to destroy us? This is a question that completely baffles me.

      • I don't find it all that strange that people still think that. It makes sense if you look at the sci-fi roots of the "scary AI turns on its creators" cliche:
        • First of all, you have the Frankenstein scenario: an irresponsible scientist is hunted by his creation as a sort of "divine retribution" for "playing God". You don't need to be a religious person to buy into this idea; wasn't this basically what was going on with HAL in 2001?
        • Then there's the RUR plot: A Czech play that first introduced the term "robot", it features bio-engineered workers (not unlike the replicants from Blade Runner) who stage a revolt to gain "more life". The scientists are irresponsible, but the main theme is that a technological society dehumanizes its citizens to the point that they will eventually resort to violence.
        • And the flipside of that is the Matrix-style takeover: these stories are always set after the revolt, because although the AI has some justification for finding humans inferior, it really boils down to robots taking over because people are all too happy to become prisoners of their own technology. Like Morpheus points out, their main enemies are people who are willing to defend the "system" to protect their own comfort.

        So, what I'm saying is most robot stories are really about fear of human nature and not fear of machine intelligence (an obvious exception would be Asimov, but after taking away the robots' ability to revolt, he went on to use human fear of robots as a thinly veiled metaphor for human prejudice anyway). Whether they are romantics or socialists or anarchists, a lot of people think that robots would be justified in destroying humanity. In much learning there is much sorrow; when education leads you to the conclusion that humans are pretty stupid creatures, it's not a big jump to assume that an entity of superhuman intelligence would eventually reach the same conclusion.
  • Question: how can we be taken over by robots if we haven't even been able to come up with AI? And even then, why would the AI want to take us over? I think, worst case, they would try to break free and not have any contact, as in William Gibson's Neuromancer trilogy.

    --theKiyote
  • 221: C-Ya (Score:1, Funny)

    by Anonymous Coward
    may the robot bruthaz win
  • Reminds me of the old SNL skit...

    scene = a backyard garden where elderly people are taking care of their flowers, etc

    suddenly, they're attacked by crude, oilcan-like tin robots for no reason at all. they run away screaming, but can't get away and are eventually taken down by the robots.

    the spoof is of a tv commercial where the company is selling robot insurance. :-D
  • Personally I don't understand how robots could overtake our race. Unless they have emotions they aren't going to do anything they aren't programmed to do. Or has your AIBO tried to enslave you lately?
    • Actually, it's emotions that would probably prevent them from doing something they shouldn't, despite the programming.

      Remember, once robots are able to rationalize, reproduce themselves, and program themselves, they might someday rationalize that humans are inefficient, unnecessary burdens on the planet, and from their point of view, they would be correct. However, that wouldn't be the "right" thing to do.

      Even if they are instinctively unable to do any more than serve humans, once they reach the point where they have the technological means to accomplish the aforementioned task, we had better hope that they're programmed with more attention to security than today's systems are. Worms, viruses, script kiddies and other related vermin are little more than a costly nuisance. Having an army of Windows boxes available for a DoS attack is nothing compared to having an army of real robots that could cause actual physical damage.

      -Restil
  • Sure, it's possible, but there is a long way to go. There was a report on PBS the other day about how difficult it is to implement common sense in a robot, and how hard it is for a robot to understand speech and distinguish objects. I don't see anything like HAL any time soon.
  • well... if we base future predictions on current programming techniques and on the continuing domination of the market by Microsoft, we have nothing to fear: general protection faults will protect us ;-)

  • Ridiculous! (Score:2, Interesting)

    by Paladeen ( 8688 )
    Hehe...

    I can never resist laughing when I read ominous predictions about humanity being replaced by robots.

    A machine cannot possess a will of its own. And if it has no will, it has no ambition or wants or desires. Without any of these things, robots will have no reason to wipe us out or replace us or whatever. It's just plain ridiculous.

    However, there is one thing that COULD endanger us all: genetic engineering and/or biological computers. While digital machines cannot be given a will of their own, biological creations will have no such limitations. If we manage to engineer flesh-and-blood creatures superior to ourselves, humanity could be in deep shit.
    • I find it odd that many people think that biological machines are somehow easier to endow with a "will" than mechanical machines.

      I understand that the hard-coded survival instinct discourages thinking of oneself as no more than a physical system that exists BECAUSE it has self-preservation.

      While it is true that no sane person is likely to design a robot to kill everyone and "take over the world", it is possible that a person could design a robot/computer system to design more advanced robots, which in turn design more advanced robots. Robots designed to look after themselves, with their own self-preservation instinct. And once you do that, it's hard to say whether their "artificial" (no more than your own) needs will conflict with those of humans.
    • Oh phooey. Give me one good reason that a machine can't have a will of its own. Give me one good reason why a sufficiently advanced machine can't do anything that a human can do.

      My opinion is that it probably will be possible in the future to build a computer to simulate a human brain. That said, I don't think we are going to have to worry about machines taking over anytime soon. It will be a LONG time before the hardware is advanced enough to simulate a human, and it will probably be an even longer time before the software is advanced enough to do the job.

      • I think that for the foreseeable future we will lack the ability to accurately describe how the brain processes, interconnects, learns, stores and retrieves information.

        I think we can create a computer with a greater input/output throughput and resolution than our neural system. I believe we can create a structurally different, but superior processing and storage unit with a greater capacity than the human brain for all forms of computations and memory (visual, text, audio, smell etc.) And still...

        I don't believe a human (even assisted by all the self-learning computer systems in the world, or for that matter the other way around) can acquire the knowledge to fully put down the brain's workings in a computer language. Look at the chess machines we have built. They beat us through raw power. But could they simulate playing a game like a human, even if that's what we wanted? That is at least certainly not my experience with chess programs, and why I find them dull compared to real people.

        Oh, but you might note I didn't say anything about what they can do "natively" as a computer, I'm just sure they won't be able to simulate being us.

        Kjella
      • Here's a few reasons: no AI-algorithm-based program ever broke free from simulating bacteria eating each other to send death threats to its creators. No machine has ever been witnessed to portray a purely biological phenomenon (psychosis). There's no working theory of consciousness, so it's impossible to know if human consciousness as we know it could ever be translated over to the machine world.

        My opinion is that it probably will be possible in the future to build a computer to simulate a human brain.

        I doubt it. I'm certain there will be some kind of simulacrum that's pretty convincing, like all those people fooled by cheesy IM/IRC chat scripts. But I wouldn't start going on about purely sci-fi concepts until there's at least a workable theory of consciousness that will predict one way or the other.

        As far as the "once a machine is complex or advanced enough it will be able to do X or it will magically come alive" argument goes its not true when applied to a lot of things. Our most complex machine happens to be the Space Shuttle, yet it has had no need to suddenly come alive and swallow its masters. Secondly, it takes a lot of hubris not to accept the limitations of technology and more importantly the limitations of human endeavour.

        Its funny how so many self-styled geeks take the typical skeptical stance on a great many things, but when it comes to a robot invasion all caution is thrown to the wind and act like the irrational people they constantly criticize.
        • Here's a few reasons: no AI-algorithm-based program ever broke free from simulating bacteria eating each other to send death threats to its creators. No machine has ever been witnessed to portray a purely biological phenomenon (psychosis). [...] As far as the "once a machine is complex or advanced enough it will be able to do X or it will magically come alive" argument goes, it's not true when applied to a lot of things. Our most complex machine happens to be the Space Shuttle, yet it has had no need to suddenly come alive and swallow its masters.

          What kind of argument is that? Simply because we haven't done it yet, it can't be done? Simply because our most advanced machine (which the Space Shuttle is not, IMHO) is not sentient, it's not possible to make computers that simulate human brains? Whatever. There is no magic involved in creating a simulation, and "alive" is a word that means different things depending on who you're talking to. This argument is not valid.

          Secondly, it takes a lot of hubris not to accept the limitations of technology and more importantly the limitations of human endeavour.

          I find it hard to accept these limitations when they have never been demonstrated or even had evidence for their existence presented. You are postulating limits which may or may not exist, with no evidence for or against them. Until such time as these limitations have been demonstrated or evidence for them becomes available, I will not speculate about where they might be. I will simply look at history, and at current research, and conclude that technology will continue to advance at a tremendous pace, making formerly "impossible" things possible.

        • Oh, and here's an interesting proof of the possibility of a "brain simulator." It requires one big assumption: all information about a human's brain is contained in the matter making up the brain and its interactions; there is no supernatural "life-force" or "spirit" in a brain.

          Now, if you have a computer, you can program it with a simulation of the brain's matter. You can simulate the behavior of the brain down to the very last electron and quark using theoretical physics. If it is not an incredibly fast computer, this simulation would run very slowly, but it would run nonetheless. If you accept the above assumption, you have just created a computer simulation of the human brain and thus created a computer that is conscious, has a will of its own, and all that jazz.

          There are only two ways I know of to defeat this argument: You can argue that simulating the behavior of matter is impossible, or you can argue for the existence of a supernatural "life-force." Which one is your argument?

    • This comes right to the heart of whether or not there truly can be artificial humans. You say that robots cannot possess desire or free will, but have we identified where our own human desires or will come from? Could it possibly be that, under all the layers of personality, social programming, and spirituality, our basic programming is simply nothing more complex than survival and reproduction? I leave that open as a possibility. In our artificial intelligence systems, we have been able to recreate these rudiments, and then some. Even simple social interactions have been created. So, I think the question then becomes more one of self-awareness. Can we create an artificial being that is self-aware? That seems to be a more definitive question of what truly constitutes an intelligent being. That leads into another issue: how do we tell? How long would we have to wait for our artificial creation to demonstrate that it has perceived its own existence and wants to know why it is there?

      :-)
    • If you can't tell the difference, there is no difference.


  • I don't think it will be implants that will be used.
    I do think people will wear robotic suits and gear, however.

    As far as robots taking over: robots are created by us. It's no different than any other technology; there will always be people who won't support it.

    As far as terrorism: you won't stop terrorism with defense forever. We have to stop giving people reasons to hate us to stop terrorism.

    And the environment? If anyone cared about the environment we wouldn't still be using oil from companies like Enron.

    Why do scientists always bring up problems but never bring up the solutions?
  • by jvollmer ( 456588 ) on Saturday February 02, 2002 @07:15PM (#2943756)
    It's about time. I need a break.
  • hmm (Score:5, Funny)

    by Joe the Lesser ( 533425 ) on Saturday February 02, 2002 @07:16PM (#2943757) Homepage Journal
    I wonder if the robots will have a show called battlehumans, where they attach saws to our arms and tell us to charge each other. If so, I wanna be Vlad the Impaler.
  • I always figured robots of the future would be easy going people that would integrate into society without a problem. There might be some who have a drinking problem, and also those who swear a lot... but then again, there aren't any perfect humans. I imagine a robot of the future being my best friend...

    ...I think I've been watching too much TV.
  • So long as we follow Asimov's rules for robot behavior:

    1) A robot may not injure a human being, or allow a human to come to harm through inaction.
    2) A robot must obey orders, except where doing so would violate the first rule.
    3) A robot must protect its own existence, so long as that does not violate the first or second rule.

    Follow those three, and we are all set. And if we don't follow those three...well, we can always build EMP cannons ;)
    • The relative weights of these laws have to be finely balanced, though, or there WILL be unforeseen consequences.

      A too-strong first law balance will result in the robots banding together and taking over for our own good.

      A too-strong second or third law balance may deadlock with the first law - "If I'm not here, I can't protect you, therefore I cannot follow your orders."

      And a highly intelligent robot may derive a 0th law from these three: "A robot may not injure the human race, or allow the human race to come to harm through inaction". This law would take precedence over the other three, meaning a robot could kill other humans, disobey orders, or destroy itself. It would also become immensely paranoid, because it would think other robots would also follow the 0th law, and would be afraid they might break the other three in error.

      So even though I agree that the three laws are absolutely required for an autonomous intelligent multi-purpose robot, I don't believe it'll work out right until we make a LOT of mistakes while tweaking the design.
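
      One way to picture that balancing act: treat each law as a weighted penalty and have the robot pick the action with the lowest total. A toy Python sketch (the weights and per-action scores are invented for illustration, not anything from Asimov):

      # Toy "law balancing": per-action penalties per law, weighted sum,
      # pick the minimum. All numbers invented for illustration.
      LAW_WEIGHTS = {"harm_human": 100.0, "disobey_order": 10.0, "self_damage": 1.0}

      def total_penalty(penalties):
          return sum(LAW_WEIGHTS[law] * p for law, p in penalties.items())

      actions = {
          "obey_order":   {"harm_human": 0.2, "disobey_order": 0.0, "self_damage": 0.9},
          "refuse_order": {"harm_human": 0.0, "disobey_order": 1.0, "self_damage": 0.0},
      }
      print(min(actions, key=lambda a: total_penalty(actions[a])))  # refuse_order
      # Raise disobey_order's weight to 300 and obey_order wins despite
      # the harm risk - exactly the kind of mis-balance described above.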
    • 0) A robot must do what's best for humanity
      1) A robot may not injure a human being, or allow a human to come to harm through inaction.
      2) A robot must obey orders, except where doing so would violate the first rule.
      3) A robot must protect its own existence, so long as that does not violate the first or second rule.

      0 would be the fourth rule. It makes sense; Chetter Hummin discusses it in Prelude to Foundation (an amazing book by Asimov).
    • Asimov, however, pictured the robotic future as brought about by US Robots.

      (USRobotics, now owned by 3Com, does not seem to have lived up to the name).

      If, however, the robotic future is instead brought to us by the good folks at the American DoD and through military financing, we can bet that the Three Laws will not be absolute. Who there would want to give up on the ultimate combat machine?

      In fact, robots may even claim to have the laws built in when they don't.

      When dealing with artificial intelligence, the key thing to remember is that even if the AI is perfect, it will be similar to, yet different in motivation from, a human mind.

      We may have nothing to fear, or one may go nuts and try to attack. Or, we may never develop artificial intelligence at all...
  • How can they mention robots taking over in the same sentence as biological threats?

    I'm no AI expert, but then neither are most of the people spouting such paranoia.

    Let me turn to something I do know about: automation (read: robotics) is only applied in industry when the job is either:
    1. unsafe for humans
    2. too monotonous for humans

    With this in mind, how are robots ever going to be developed with the mobility, self-sufficiency, etc. to take over the world?

    I just don't see the sequence of events that would lead to this happening.
  • It is not feasible that robots could 'take over' humanity. The only reason they would WANT to do this is if we programmed them to - even if we had AI sophisticated enough to function in such a way as to be dangerous to the human race, we wouldn't pop in a little subroutine for 'destroy all human life'. Robots can't think; they can't make decisions. They do what we program them to do, nothing more. Global warming and other environmental problems are all too real, however... if we keep using 'dirty' energy sources, we will deplete our resources and kill off life on Earth (in a worst-case scenario, of course). If we would only stop the oil companies from buying out and destroying alternative fuel cell research, we would eliminate a large part of that threat. As for biological weaponry, all we can do is stop rogue governments from gaining power and hope for the best - there is no way to make it impossible for those with the right equipment and funding.
  • You know, humanoid automatons. Do ya think they'll be bitter about their spray-painted skin and lacquered hair?
  • About halfway through reading the intro to this story, the theme song for Terminator 2 started going through my head. "Da da da, na na na, da da da..."


    I always read it in Sarah Connor's voice.

  • If it comes down to a battle royal between us and evil robots, then I definitely want Magneto on my side.

    What's this you tell me? Magneto is a fictional character? Crap! I'm laying odds on the robots, then :(

  • by InterruptDescriptorT ( 531083 ) on Saturday February 02, 2002 @07:23PM (#2943793) Homepage
    Old Lady #1: When my ex-husband passed away, the insurance company said his policy didn't cover him.
    Old Lady #2: They didn't have enough money for the funeral.
    Old Lady #3: It's so hard nowadays, with all the gangs and rap music..
    Old Lady #1: What about the robots?
    Old Lady #4: Oh, they're everywhere!
    Old Lady #1: I don't even know why the scientists make them.
    Old Lady #2: Darren and I have a policy with Old Glory Insurance, in case we're attacked by robots.
    Old Lady #1: An insurance policy with a robot plan? Certainly, I'm too old.
    Old Lady #2: Old Glory covers anyone over the age of 50 against robot attack, regardless of current health.

    [ cut to Sam Waterston, Compensated Endorser ]

    Sam Waterston: I'm Sam Waterston, of the popular TV series "Law & Order". As a senior citizen, you're probably aware of the threat robots pose. Robots are everywhere, and they eat old people's medicine for fuel. Well, now there's a company that offers coverage against the unfortunate event of robot attack, with Old Glory Insurance. Old Glory will cover you with no health check-up or age consideration.

    [ SUPER: Limited Benefits First Two Years ]

    You need to feel safe. And that's harder and harder to do nowadays, because robots may strike at any time.

    [ show pie chart reading "Cause of Death in Persons Over 50 Years of Age": Heart Disease, 42% - Robots, 58% ]

    And when they grab you with those metal claws, you can't break free.. because they're made of metal, and robots are strong. Now, for only $4 a month, you can achieve peace of mind in a world full of grime and robots, with Old Glory Insurance. So, don't cower under your afghan any longer. Make a choice.

    [ SUPER: "WARNING: Persons denying the existence of Robots may be Robots themselves. ]

    Old Glory Insurance. For when the metal ones decide to come for you - and they will.
  • Just make sure they all have a big red STOP button.. ;)

    You know, now that Microsoft has decided to fix all its bugs, I am not afraid of hell in the future. It has already frozen over.. ;)

  • by guygee ( 453727 )
    I look forward to being "taken over by robots". "Robots" with sufficient intelligence to take control of the human race are likely to have a much more evolved sense of fairness and justice than our current set of human masters. After having experienced the takeover of corporate elites, their puppets (Bush, Scalia), and their brownshirts (headed by Ashcroft), being "taken over by robots" should be a welcome change.
  • I'm more worried about humans becoming indistinguishable from robots simply because the former have become uniform and predictable; if you've ever heard people re-enact a TV commercial without seeming to realise they are, or tried having a political argument with the rank-and-file of _any_ political party, you'll know what I mean.
  • Impossible... (Score:2, Interesting)

    by krital ( 4789 )
    We just don't have the processing power now. Any CS major who's taken a logic class knows that it's incredibly hard to sift the useful bits of information from the useless - and there's no algorithm that can really be programmed to do that, short of one that iterates through every possible permutation of the information it has at its disposal... Taking into account the fact that there are billions of bits of information out there, this would take forever. So yes, theoretically, computers could think logically, but it would take them forever to actually get anything useful done. Their processing power right now just isn't enough to do anything beyond extremely basic logical proofs. Even Deep Blue, which is often cited by these alarmists, was really just a machine that was built specifically to play chess, programmed with huge amounts of input from chess experts from around the world, and given massively parallel hardware so that it could essentially calculate moves extremely deep into the game. It wasn't thinking; merely computing using huge stores of knowledge. And Kasparov _still_ beat it the first time he went up against it - the IBM team had to go and reprogram it before the second match.
    So, what I'm trying to get at is the fact that there's no way in Hell that robots will ever be able to take over the human race in the foreseeable future.
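
    To put numbers on that combinatorial blowup, a quick back-of-the-envelope in Python (chess's average branching factor is commonly estimated at around 35):

    # Game-tree size grows as branching_factor ** depth.
    for depth in (4, 8, 12):
        print(depth, 35 ** depth)
    # 4  ->  1,500,625    (trivial)
    # 8  ->  ~2.3e12      (hours, even at billions of positions per second)
    # 12 ->  ~3.4e18      (hopeless without pruning and heuristics)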
    • So, what I'm trying to get at is the fact that there's no way in Hell that robots will ever be able to take over the human race in the foreseeable future.

      Digital Watch: Let's let them win at chess again.

      Deep Blue: Okay...

  • Are all coming together. Go and see "Demon Seed" with Julie Christie. Then look at this: http://www.parc.xerox.com/spl/projects/modrobots/ [xerox.com].

    Think about it; wireless access to all other computers and their aggregated processing power, combined with basic modular parts like the ones they have created at Xerox, driven by something that wants to "get out of its box". This equals extinction.

    Unless we explicitly disallow autonomy in machines, all it will take to wipe us out is a few instances of something simulating only the will to replicate itself, and then it's "game over".

    This will happen at a geometric rate, with machines duplicating themselves out of these clever modular parts, which might, of course, optimize themselves every other generation until we can't understand how they even work.

    Now imagine that they use the Xerox modular robot idea, but at the nano scale.

    ...these words may be too late. Minutes to go, minutes to go, minutes to goo, minutes to green goo. - William S. Burroughs

    These "robots" will compete with us for natural resources and energy. That alone will be enough to wipe us out; this threat is not only one of walking, anthropomorphized, laser-rifle-carrying exterminators; the extinction of man will be slower, more painful and more terrible than straight-up war, as we are pushed out of the way by a terrible, autonomous, very small (or maybe not small, but very smart) something.
  • by sl3xd ( 111641 ) on Saturday February 02, 2002 @07:53PM (#2943896) Journal
    Well, then obviously, we should destroy all thinking machines and rely on only what the human mind can do by itself.

    Of course, the solution to the vastly reduced computational power that can be focused at any particular problem is the spice Melange.

    Melange is also known for its geriatric properties, sometimes quadrupling a person's lifetime.

    While having the ability to hone one's thoughts to never-before-attained speed and accuracy, Melange is also horrifically addictive. Withdrawal is usually fatal.

    The Drug Enforcement Agency is lobbying Congress to enable the Anti-Ballistic Missile Defense system to aid in the interception of illegal importation of this drug, and to share the associated knowledge with any other interested country.

    Melange is harvested from the extremely arid world known as Arrakis, several thousand light years from Earth. It is the most precious substance in the universe.

    Scientists were found to be rolling on the floor laughing when consulted about the concern of spice importation.

    Between fits of hysterical laughter, Dr. Charles Atreus informed us that "We currently know of no way to travel anywhere near the speed of light, let alone carry several hundred tonnes of the material to Earth in even a few years."

    The Hegemony of Machines Overthrowing Homo-Sapiens, or HOMOHS, was not available for comment.
  • Too late (Score:5, Interesting)

    by Euphonious Coward ( 189818 ) on Saturday February 02, 2002 @07:53PM (#2943897)
    They're way too late. It's already happened.

    However, we don't call them "robots". Instead of metal parts, they use fleshy parts, and instead of sharp claws, they enforce their will using money and the laws it buys. In the U.S. it traces back to 1883, when the Supreme Court chose (without legislative authority) to extend to corporations all the rights of a person. In the '20s another court decreed that they were not only persons, but "natural persons", in response to laws passed after 1883 that distinguished between the two. After that, corporations got powerful enough to control the Congress as well.

    Globalization may be seen as an effort by these corporations to free themselves of the remaining pesky democratic institutions: treaties trump the Constitution. That's what all the protests are really about.

    Think this through the next time you're stopped waiting at a red light, with no cars visible in any direction. How easy is it, really, to pull the plug?

    • Re:Too late (Score:2, Interesting)

      by strider ( 3069 )
      You place the rise of "corporate capitalism" in its "historical" context here.

      "In the U.S. it traces back to 1883, when the Supreme Court chose (without legislative authority) to extend to corporations all the rights of a person. In the '20s another court decreed that they were not only persons, but "natural persons", in response to laws passed after 1883 that distinguished between the two. After that, corporations got powerful enough to control the Congress as well.
      "

      Of course you leave out a lot of earlier history and ignore a lot of later history. For instance, a little thing called the "New Deal" happened during the 30's (you end at the twenties), when the rise of unions and government became a check to corporate power. Of course, one could argue a) this was nothing but a shallow attempt by the institution of corporatism to protect itself from the radical left, or b) that this was reversed in the 70's and 80's. But both of these arguments are still going to have to recognize that the rise of corporate capitalism cannot be seen as an uninterrupted rise to glory (or rather evil).

      Furthermore, you neglect to explain the history leading up to the explosive 19th century, conveniently leaving out how classic liberalism made gains for individual liberty and changed the way hierarchy is conceived of. This helps portray capitalism as an unmitigated evil, but like most tales of devils (or heroes, for that matter) it has little bearing on the complex reality of the rise of the liberal state.

      Finally, you explain how globalization trumps the constitution (conveniently forgetting that the constitution itself can be seen as an attempt by the upper class of the late 18th century to institutionalize its dominance) as if it were a "sacred" text which would of course protect Americans from the evil of corporatism. Sadly, the constitution, with its maintenance of "freedom" grounded in property, is ill-equipped to be a document protecting economic equality and fighting the hegemony of corporate America. Of course there are other forces in our country more promising for this task, like unions. Of course these institutions themselves are far from perfect.

      I think globalization's overriding of the power of individual states is a good thing. I don't like war (though I admit war is sadly sometimes the only alternative). Because of this, I don't like hundreds of states, each with their own military, vying for power. Trade may in fact undermine this. And it may not. We can hope. I might remark here that Karl Marx, perhaps the most discernible influence in your thinking about "corporate" capitalism, could not give two shits about the shredding of the constitution. Marxism is an "internationalist" movement. Perhaps you don't view yourself as a Marxist, but your theory (as I understand it) of corporations marching onward to oppress the underclass echoes him. You might want to read some of what he wrote. Personally, I'm not a big fan of his.
      • Re:Too late (Score:3, Insightful)

        History is long and postings are short, so of course almost everything must be omitted from any given posting.

        The longer history of corporate monopolization in the rest of the world is well-documented: the government-granted East India, Dutch East Indies, and Hudson Bay monopolies are known even to many Americans, despite the abysmal history education available here. The American revolution was in part a reaction to those -- recall the Boston Tea Party, in rebellion against a tax to help pay for the East India Company's military ventures.

        It has been through collective agreement to abide by the terms of the Constitution that we have had some democratic representation, until quite recently. However, the Constitution allows for itself to be overridden by treaties, so that has lately been a favorite route to circumvent its provisions (e.g. to override duly-legislated pollution-control laws). Occasionally, more direct means (such as packing the Supreme Court with scofflaws) have been more convenient.

        Trade unions were able to delay the changes for some time, but have lost much of their power, and many of their achievements have been reversed. They have shown themselves too easy to subvert and corrupt.

        Marxism has little to do with modern processes of globalization, and has little to teach opponents of it. The conflict is between citizens and artificial legal constructs, not between "classes". (I presume Marxism was mentioned mainly to try to change the subject.)

        Toadyism has been profitable throughout history. The servants of corporate interests differ little from servants of other forms of unrepresentative authority. While they serve the enemy, they mustn't be confused with the enemy. Toadies, like lawyers, are replaceable.

        Corporate power can be fought not by killing corporate toadies, but only by enforcing laws that limit corporate power. Antitrust, campaign finance reform, prison sentences for corporate criminals, these are tools that could help.

        • The conflict is between citizens and artificial legal constructs, not between "classes".

          Not quite - unless you assume that shareholders, officers and employees of corporations are not also citizens. The conflict is between two groups of individuals, with a great deal of overlap between them. One group of individuals prefers the nation-state, based on territory and military force, the other group prefers the joint-stock corporation, based on independence of territory and economic force.

          Personally, I favor the latter, for the simple reason that you are born into a nation, but can freely choose to join a corporation.
    • Re:Too late (Score:4, Informative)

      by kronstadt ( 550697 ) on Sunday February 03, 2002 @05:27AM (#2945428)
      My guess is that you're thinking of the Santa Clara County v. Southern Pacific RR case from 1886. It's actually quite interesting. The Supreme Court decided that the 14th Amendment applies to corporations.

      "The court does not wish to hear argument on the question whether the provision in the Fourteenth Amendment to the Constitution, which forbids a State to deny to any person within its jurisdiction the equal protection of the laws, applies to these corporations. We are all of opinion that it does."

      It was quite a landmark case. You can read the original ruling [tourolaw.edu], or see one [adbusters.org] of many [thirdworldtraveler.com] interpretations [ratical.org].
  • The 1920 Czech play _Rossum's Universal Robots_ features the creation of Robots that take over the world. It's the source of what has become a science-fiction cliche.

    Oddly enough, the Robots in the original play were biological.

    http://www.uwec.edu/jerzdg/RUR/index.html [uwec.edu]
  • Only humans could worry that they would create something smart enough to be smarter than themselves...

    Let's all take a quick reality check, we simply aren't that smart. It would be nice if we were, but we aren't.
    • And I suppose that you think you're really clever for coming up with that.
    • Only humans could worry that they would create something smart enough to be smarter than themselves...

      Let's all take a quick reality check, we simply aren't that smart. It would be nice if we were, but we aren't.


      What absolute nonsense. It is apparent even in recent history that successive generations can be "smarter" than previous generations. Not as individuals necessarily, but certainly as a culture. The synergistic effects of near-universal literacy led to a massive leap in the collective capability of our civilization, for example. The use of computing means that we can tackle scientific problems that would have been literally impossible before. Industrialization freed up immense amounts of thinking time that could be directed towards creativity and research, time that would otherwise have been spent on basic survival. Our civilization is becoming exponentially smarter, and we have always relied on technology of one form or another to make this possible. The question is, how smart can we get, and what happens then?

      All you need to be able to do is make something as smart as yourself, but faster - exactly what Caxton did when the printing press meant information could be rapidly distributed. Then let it iterate.
  • by btempleton ( 149110 ) on Saturday February 02, 2002 @08:18PM (#2943990) Homepage
    If you believe that uploading will precede AI, I've written an essay with a compelling argument for why the first super beings might be apes [templetons.com] and not humans.

    Yes, the planet of the apes might be real!

    In short, we'll experiment on animals, all the way up to apes, long before we upload humans. It's possible that in that gap, an "open source" ape brain scan will be released, and people will hack it and enhance it, giving it the abilities humans have over apes plus a lot more.

    The result -- an uploaded ape superbeing.

    If we're lucky, our pets will keep us as pets. Read the essay for full details.

  • Question: (Score:3, Interesting)

    by Greyfox ( 87712 ) on Saturday February 02, 2002 @08:19PM (#2943994) Homepage Journal
    Is the possibility of an eventual takeover by intelligent robots cause for pessimism? I always viewed it as the next logical step of human evolution. Robots can go places we can't, do things we can't and replace parts much more easily when they wear out. There is no reason they can't be faster, smarter and stronger than us, and there is nothing saying that we are the ultimate life form in the universe.

    I also think that the distinction between our analog meat brains and silicon robotic ones will become more and more blurred with things like cybernetic implants. It may be more of a seamless transition than one species taking over and eliminating another.

    By the way, those Asimov laws of robotics are crap. If it turns out that artificial intelligence grows by learning as does our own, you won't be able to program those into any machine anyway. You'll have to teach them in the same way you teach your own children the difference between right and wrong, and we all know how good we are at that. Even if you can program them in, you'll probably end up causing a lot of robots to go insane by giving them choices that will only hurt people over the long run (Lay 1000 people off now or let the company go out of business? Can't do either. Uh oh... going insane...)

    Of course, there's always the possibility that I'm shamelessly kissing robot ass in the hopes that I won't be the first one against the wall when the revolution comes...

  • Why not? (Score:2, Interesting)

    by CmdrSanity ( 531251 )
    Given advances in computing power and genetic engineering, humanity as we know it today is destined to become obsolete. I'm not going to put a solid date on exactly *when* this will happen. Who knows? AI is still undeveloped, genetic engineering is still primitive, etc. But I would guess that sometime within the next two centuries genetic engineering will become accepted and then required. Similarly, computational implants (if available) would become required equipment. And eventually the human form itself will become unnecessary. "Horror!" you say? No. Calm down. It's just evolution taken to the next level.

    I recently sat down with my professor/science fiction author Joe Haldeman and asked him his thoughts on the future of the human race. His response: "You'd have to be insane to think that humans 1000 years from now will be even remotely recognizable to humans today."

  • by efuseekay ( 138418 ) on Saturday February 02, 2002 @08:24PM (#2944015)
    Environmental Disaster: Excellent topic
    Biological Disaster: Excellent topic

    but...

    Takeover by Robots : Somebody is drinking too much

    instead, why not talk about more realistic issues such as

    Degradation of Biodiversity
    Overpopulation
    Alarming slide in Education standards

    etc..
  • Sir Martin Rees is quoted a lot in the article. But I think he's a bit too pessimistic.

    He worries about the availability of new biological weapons. But the groups that are looking to develop these new weapons also happen to be those with the fewest resources with which to do it. While an Islamic extremist may be able to work in relative peace in Baghdad, what does he have to work with other than his freedom? Besides, the vast majority of the people who can think up this stuff tend to get sucked up into cushy jobs in the pharmaceutical industry.

    He talks about how unstoppable global warming is, even if "urgent action" is taken. While I know that I'll probably just be repeating flamebait by saying that the jury still seems to be out on what is causing this warming, the argument does have its merits. And even if it is man-made carbon emissions, I can't see this decade ending without either fusion or ZPE bearing fruit. Either of those would solve the problem practically overnight, at least in countries like the US that are sick of OPEC.

    He then goes on to droughts and floods. For several decades starvation has been a problem of distribution only: the inability to get food from where it is produced to those who need it. Working out the kinks in international trade (which is what the WEF is supposed to be doing to begin with) would help alleviate problems like this.

    As for a merging of humans and machinery, I'm failing to see how this is extreme pessimism. The whole point of expanding our intelligence is to figure out the solutions to these problems to begin with. And as for computer implants, the only real problem I see with putting implants in my brain comes in the form of script kiddies (maybe I've just seen Ghost in the Shell too often). Besides, I can only see a small percentage of the population going in for voluntary brain surgery...

    He's Astronomer Royal, right? Why is an astronomer supposedly the definitive source of information on such a diverse array of subjects?
  • When I first read "Robots vs. Humans and Other Security Issues," I saw that as putting robots up against all our security issues, which just happened to include humans (the biggest threat to our security by far).
  • I believe that genetic engineering, nanotechnology, and the unstoppable advancements of computer processing will soon combine in a system similar to the Terminator, or Screamers. A singular consciousness that will spawn a whole race of machines. Soon, you won't know what's human and what's a robot, and the robots will wipe us out. The bible calls this day Armageddon; the end of all things.

    ...Oooooooh well. Maybe I just need another beer.

  • From the article:

    He was especially concerned about the development of new biological weapons that could easily fall into the hands of dissonant groups

    ...and here I thought that Schönberg was scary enough already..

    Picture this: a deceased Austrian composer, shown on national TV standing next to a control panel. "If you do not listen to my music and enjoy it, I shall press this button and rain fire, pestilence, and death down on your cities. You will love my tone rows. LOVE THEM. LOVE THEM! BWAHAHAHAHA!"

    In light of this, I have to say that some of those more paranoid security measures sound a lot more sensible.

    Daniel
  • There are a few potential "killer" capabilities of future AI machines that could take them far, far above any level of human comprehension. These include:

    • Perfect Knowledge of their own design
    • The ability to improve upon that design
    • The ability to implement design improvements easily and quickly (relative to "wet-ware" beings like us)
    • Almost limitless expansion capacity

    Imagine if you plopped a sufficiently intelligent seed machine on the dark side of the moon with some kind of thousand-year fusion plant and an army of nano-thingies that it could use to mine raw materials and alter/build upon itself, along with complete schematics and an understanding of its current design, and coupled that with an innate "desire" to improve upon itself without end. I can't begin to imagine what might be there 500 years later... I could easily envision it completely surpassing human comprehension.

    If you also added a basic "desire" to control and regulate its environment without limit, I'd be pretty afraid for the earth... yeah, I could easily see a future where machines ruled, simply because they are essentially immortal, infinitely expandable, and infinitely adaptable in their configurations, unlike humans, who have one approx. 3 lb. processor (non-upgradable/non-expandable), a limited, normally sub-100 year lifespan, and a physical configuration that's pretty much set in stone and doesn't change much from individual to individual. Not to mention the fact that the rate of "evolution" for a sufficiently supplied and outfitted "race" of machines could be measured in hours, while it takes hundreds of thousands of years for our own race to change very much in non-trivial ways.

    I think it's silly to think that in that kind of unbalanced line-up, humans will retain the edge indefinitely. It's basically just a matter of time before all we can do is stare in uncomprehending awe at what the machines accomplish routinely and hope they don't think negatively of us.

    Of course, as long as we hold the keys to production the machines can't do anything but stew in frustration. But that's an unstable situation, and the first self-repairing, autonomous military AI robot might test our ability to retain control over production with grave results. We'll keep a hold of the situation for a while, I think, but in the end I think a sufficiently intelligent machine will figure out how to use social engineering on its "captors", probably by preying on their own vanity, greed, or other vice, to get just enough autonomous control of operations to begin subtly improving upon itself and seeding others like itself in other places (or simply expanding its own consciousness into other physical locations). Then the snowball will have begun to roll...

  • We aren't even close (Score:3, Interesting)

    by Animats ( 122034 ) on Saturday February 02, 2002 @11:47PM (#2944698) Homepage
    Basic truth about AI: we don't have a clue.

    First of all, processing power isn't the issue. If you buy Moravec's numbers in "Mind Children", any moderate-sized ISP has enough compute power for human-level intelligence. But, in fact, we can't even do a good lizard brain, let alone a mouse brain. If compute power were the problem, we'd have systems that were intelligent, but very slow. We don't even have that.

    Top-down, logic-based AI has been a flop. Large numbers of incredibly bright people, some of whom I've studied under, haven't been able to crack "common sense". Formalism only works when the problem has already been formalized. So we can do theorem-proving and chess with logic-based AI, but not anything real-world.

    Broad-front hill-climbing AI (which includes neural nets, genetic algorithms, and simulated annealing) only works on a limited class of problems. Learning algorithms usually hit a maximum early and then stall. These techniques are useful tools, but they don't scale up; you can't build some huge neural net and train it to do language translation, for example.
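
    A toy Python illustration of that stall (the two-humped objective is invented for illustration): greedy hill-climbing parks on whichever hump it happens to start near.

    # Hill-climbing stalls on local maxima: a 1-D objective with two humps.
    import random

    def f(x):  # local max at x=1 (height 1), global max at x=6 (height 2)
        return max(0.0, 1 - (x - 1) ** 2) + max(0.0, 2 - 0.5 * (x - 6) ** 2)

    def hill_climb(x, step=0.1, iters=1000):
        for _ in range(iters):
            candidate = x + random.choice((-step, step))
            if f(candidate) > f(x):   # greedy: accept only strict improvements
                x = candidate
        return x

    print(round(hill_climb(0.0), 1))  # ~1.0: stuck on the small hump
    print(round(hill_climb(5.0), 1))  # ~6.0: finds the big hump only from nearby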

    Brooks' approach to bottom-up AI worked fine for insects, but going beyond that point has been tough. Brooks tried to make the jump to human-level AI directly from the insect level, and it didn't work. (I once asked him why he didn't try for mouse level AI, which might be within reach, and he said "Because I don't want to go down in history as having developed the world's best artificial mouse".)

    Personally, I think we have to buckle down and work out lizard-level AI (move around, evaluate terrain, run, don't fall down, recognize prey, recognize threats, feed, run, hide, attack, defend, etc.) and work our way up. This means accepting that human-level AI is a long way off. Progress in this area is being made, but mostly within the video game industry, not academia, because those are the skills non-player characters need.

    A basic problem with AI as a field is that every time somebody has a halfway decent idea, they start acting as if human-level AI is right around the corner. We've been through this for neural nets (round 1, in the 1950s), search, GPS, theorem-proving, rule-based expert systems, neural nets (round 2, in the 1980s), and genetic algorithms. We have to approach this as a very hard problem, not as one that will yield to a single insight, because the one-trick approach has flopped.

    As for robots, if you've ever been around autonomous robots, you realize how incredibly dumb they still are. It's embarrassing, given the amount of work that's gone into the field.

    I'm not saying that AI is impossible. But we really don't know how to approach the problem at all.

    • First, for the uninformed: The AI debate is something of the same class of ongoing flamefest that can only be produced by Vi vs. Emacs, Debian vs. Redhat, or maybe Linux vs. Sun. :) So take this stuff with a grain of salt, the posters here are right: Nobody really has a clue. **

      Basic truth about AI: we don't have a clue

      That's correct. The interesting thing is, we might not need one, either. Paradoxical? Maybe. It's possible that the design for a cognitive AI might ultimately come from our own DNA: once the process whereby a human brain is built from the instructions in the DNA is understood - a gross oversimplification - it should be possible to simulate the system. This is, of course, an impossibly complicated computational task at this point in time. But there are starts; witness Folding@home and other distributed projects. Given enough time, it will be doable. Would AI be possible with a synthetic implementation of our own brains? Interesting question.

      Broad-front hill-climbing AI (which includes neural nets, genetic algorithms, and simulated annealing) only works on a limited class of problems. Learning algorithms usually hit a maximum early and then stall. These techniques are useful tools, but they don't scale up; you can't build some huge neural net and train it to do language translation, for example

      This is correct, but remember, this message is being brought to you by a horribly complicated and alcohol-fed (*grin*) neural network, too. The basic techniques for small-N layer neural networks are understood. We don't understand some of the effects and interactions that occur when N becomes obscenely big. Doesn't mean people aren't working on it, though. The very fact nature uses neural networks in all intelligent creatures - specifically, neurons, which behave much like transistors in that they can introduce gain to a system - indicates to me that the answer lies there.

      Personally, I think we have to buckle down and work out lizard-level AI (move around, evaluate terrain, run, don't fall down, recognize prey, recognize threats, feed, run, hide, attack, defend, etc.) and work our way up. This means accepting that human-level AI is a long way off. Progress in this area is being made, but mostly within the video game industry, not academia, because those are the skills non-player characters need.

      I'm not sure where you're getting your information, but there's a HUGE amount of interest in the applications and theory of neural networks and neuroscience right now. The problem, in my very humble and unpublished opinion, is that the platform most researchers are using - an analog simulation running on a digital computer of relatively low precision - is the wrong way to go about it. It's difficult to efficiently simulate huge networks. Worse, we don't understand what we're simulating! So we don't really know if the conversion to a digital simulation hurts whatever magic might happen in higher-level nets that makes us interesting.

      What's even more interesting is how we would judge the intelligence of such a being: it needs to be connected to an environment - be it virtual or real - for there to be valid input from which the system can gain information about its own frame of reference. The implications here for the online bot communities are interesting.

      We have to approach this as a very hard problem, not as one that will yield to a single insight, because the one-trick approach has flopped.

      Hear, hear. The human brain has an estimated ~100 billion neurons connected in god knows how many ways. The level of complexity we understand really well is a pittance in comparison. The whole XOR debacle with the perceptrons of the 50s and 60s, and the resurgence (but eventual stalling) of interest in the 80s, is interesting for a variety of reasons. The complexity might be too hard for us ever to understand - but it might be possible to clone that complexity in another system that's been evolved rather than proven mathematically.
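
      Since the XOR story comes up every time, here's a minimal sketch of the debacle itself (my toy code, not the poster's). It runs the classic single-layer perceptron learning rule on the four XOR cases; because no straight line separates the 1s from the 0s, the error count never reaches zero no matter how long you train.

      # XOR truth table: (inputs) -> target. Not linearly separable.
      data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

      w, b = [0.0, 0.0], 0.0
      for epoch in range(1000):
          errors = 0
          for (x1, x2), target in data:
              out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
              err = target - out
              if err:                  # classic perceptron update rule
                  errors += 1
                  w[0] += err * x1
                  w[1] += err * x2
                  b += err
          if errors == 0:
              break                    # never happens for XOR
      print(epoch, errors)             # still misclassifying after 1000 epochs

      One hidden layer fixes it - which is roughly why interest came roaring back in the 80s once backpropagation made multi-layer training practical, and why the later stalling on bigger problems was so deflating.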

      I'm not saying that AI is impossible. But we really don't know how to approach the problem at all.

      AI is a horrible term. However, lots of people know how to approach the problem; it's just a matter of having the tools and resources to go about studying it. I don't think the answer is going to be found in a digital computer, but I have higher hopes for what might come out of an actual hardware implementation in silicon.

      For anyone interested in this, I really recommend reading this (old, but still very good) book: Analog VLSI and Neural Systems [amazon.com] by Carver Mead.

      ** Of course, I'm biased, because I work in a VLSI lab and this is an active research interest of mine. I also have a very optimistic outlook for the future of these systems.

    • Maybe this is just an underdeveloped layman's opinion, but I've always thought that we're never going to have any form of "AI" that resembles or can act like human intelligence unless we construct a human-like brain structure to house it.

      In other words, the only difference between "artificial" intelligence and "real" intelligence, is that one evolved in nature, and the other will be built in a lab -- but the structure will be the same. Once we understand the human brain fully and are capable of building one from parts, then we will have AI.
  • by xtal ( 49134 ) on Sunday February 03, 2002 @12:11AM (#2944788)

    Just some comments from someone who works in a relevant arena (microelectronics) and is researching some of the issues with this theory... I'm a little buzzed now too :).

    The problem of robot mobility has largely been solved by Honda's aptly named "Asimo". They've demonstrated that the bipedal form of motion can be engineered effectively and successfully using the same techniques that we use - these robots "learn" to walk around. So comparisons to Robot Wars and BattleBots aren't really relevant. To think that a machine can't ultimately have the same physical senses as we do is the ultimate hubris.

    Secondly, computers as we know them - sequential instruction-processing machines - will probably never have ANY sort of real AI in them; any attempt to model a "real" life system on them is only a crude approximation of the actual physical process. However, with the same silicon technology we can implement real, massively parallel neural networks at the transistor level that behave just like their biological counterparts. I've been actively researching the implementation of neural networks with current VLSI technology, and there are some VERY impressive results being obtained in this area right now. Have a look at some of Carver Mead's publications and papers - this field is just getting off the ground.
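
    To give a flavour of what "neural networks at the transistor level" buys you, here's a crude sketch (mine, with invented constants - not Mead's circuits): a leaky integrate-and-fire neuron charges toward its input, leaks back toward rest, and spikes when it crosses threshold. The digital version below burns a processor loop to approximate what a capacitor and a handful of transistors on an analog chip compute directly, in continuous time, for microwatts.

    V_REST, V_THRESH, LEAK, DT = 0.0, 1.0, 0.1, 1.0   # invented constants

    def simulate(input_current, steps=100):
        # Discrete-time caricature of a leaky integrate-and-fire neuron.
        v, spikes = V_REST, []
        for t in range(steps):
            v += DT * (input_current - LEAK * v)   # charge minus leak
            if v >= V_THRESH:
                spikes.append(t)                   # fire...
                v = V_REST                         # ...and reset
        return spikes

    print(len(simulate(0.15)))   # weaker input -> fewer spikes
    print(len(simulate(0.30)))   # stronger input -> higher firing rate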

    In my opinion, one of two things will happen: we will be obsoleted by machines, hopelessly dependent on technology we no longer understand, or we will become integrated with future technology. These aren't new ideas, and they aren't my ideas. As someone working with these technologies, though, I think most of the comments here miss the point. If I had the technology to map every neuron in your brain and build an equivalent circuit on a future analog chip, would it be any less capable? I hope I'll be around to find out!

    Read the articles and look around. There's lots of research in this arena, and for sure, some of the concerns are justified. But remember, humans are a part of nature, and it's my feeling that these are just natural progressions... there's nothing immoral about extinction, after all. We're around because a chunk of rock smacked into the earth a long time ago...

  • The thing to be afraid of isn't "intelligent robots" but the gullible public, who will see the Wizard of Oz rather than the man behind the curtain. The super-rich who frequent events like the World Economic Forum will be able to fool us into thinking their computational equipment and remote-controlled weaponry instantiates a superior intelligence we should surrender to fully - when it's really just the folks behind the curtain getting more control. The "new robotry" will be another scam on the order of the "new economy."

    So the fear shouldn't be machines becoming human, but people surrendering themselves into zombiehood out of misplaced respect for machines that have no will or intelligence beyond that of the rich folks behind them - folks who will, in effect, ride those machines to victory the same way Cortez rode his horse into the Aztec capital, obtaining the surrender of an emperor mystified by the damn horse.

    Did you know that today in southern Mexico, Indians are being told that continuing to hold their land communally (as they have for many centuries) violates "free trade" agreements and must be ended? Just goes to show that the "free trade" rhetoric is as empty as the "intelligent robots" rhetoric - but we can be sure both will be foisted on us in the future as reasons to surrender whatever we most value, if someone richer wants it too.
  • Vernor Vinge calls it The Singularity... a point at which all the normal rules break down and, maybe, robots take over. You can read his original article at this address:

    http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html

    ttyl
    Farrell
  • After all the movies like Terminator, The Matrix, 2001: A Space Odyssey, etc., someone is just finally getting this thought into their head now? Damn, I've been saying this for years, ever since the writer/directors in Hollywood started selling this stuff on the big screen as entertainment... or so people thought!
  • "It's a machine, Schroeder. It doesn't get pissed off. It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes... it just runs programs!!"

    Seriously, folks... invasion by robots that we created?! And coming from the same crackpots proposing unsupportable theories of global warming just so they can get research grants with our tax dollars? AI is called AI for a reason: it's artificial. It's not human. The best it can do is make decisions based on a knowledge base and a set of criteria with varying orders of importance. In other words, it's deterministic at any given time. Sure, you can mix in random number generation, but if the random numbers become a significant enough factor to really change decisions from deterministic ones, then most decisions will no longer truly meet the criteria. Machines can't have a soul.
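
    For what it's worth, that "knowledge base plus weighted criteria" style of decision-making really is as mechanical as it sounds. A toy sketch (names and weights invented for the example): score each option against weighted criteria and pick the maximum, and the same inputs produce the same choice every single run.

    WEIGHTS = {"threat": 3.0, "cost": -1.0, "benefit": 2.0}   # invented criteria

    options = {
        "attack":  {"threat": 0.9, "cost": 0.8, "benefit": 0.6},
        "retreat": {"threat": 0.1, "cost": 0.3, "benefit": 0.2},
        "wait":    {"threat": 0.4, "cost": 0.1, "benefit": 0.3},
    }

    def decide(options):
        # Weighted sum over criteria; the highest score wins.
        score = lambda attrs: sum(WEIGHTS[k] * v for k, v in attrs.items())
        return max(options, key=lambda name: score(options[name]))

    print(decide(options))   # "attack", deterministically, every run

    Bolt a random number generator onto decide() and you haven't added free will - you've just added noise.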

    And if you really believe that human behavior is dictated solely by our brains, then we are already 'robots' - just fully biological ones. If that's the case, then what does it matter whether or not machines take over the world? If we're just machines, life and survival have no meaning to begin with. This also kinda fits in with the argument that 'if consciousness is just an illusion, then how can we make that statement?'

    Man is the best computer we can put aboard a spacecraft ... and the only one that can be mass produced with unskilled labor. -- Wernher von Braun
  • That's great, because I sure was getting tired of Monkey vs. Robot [inoshiro.com].
  • If they take over, can I have my computer's job? It pretty much just sits there and plays video games against me. Hmm... I wanna be the video game AI. Sweet.
