"Living robot" Escapes Lab, Makes It To...Parking Lot

jerkychew writes "This is either really cool or really scary, depending on how you look at it. According to this article, scientists in England have been experimenting with so-called 'living robots' that think and act for themselves. During an exercise that pitted the machines against each other in battle, one of the machines, named Gaak, was taken out of the competition and left alone for fifteen minutes. When the scientist returned to retrieve Gaak, he found that the machine had broken free from its 'cage', and made it all the way to the lab's parking lot before it was apprehended! Can the T-1000 be far behind?" Update: 06/20 20:36 GMT by T : Thanks to skywalker404, who points out the Magna site and Professor Noel Sharkey's web page.
  • by DigiBoi ( 139261 ) on Thursday June 20, 2002 @03:11AM (#3734649) Homepage
    perhaps we have the intro to Short Circuit 3 now!
  • by Brother_Chubba ( 586342 ) on Thursday June 20, 2002 @03:11AM (#3734651)
    It would just have ended up on the street doing tricks for cash to feed its M$ habit, like countless other poor homeless robots...what is it with society today eh?

    Don't Gaak know where he's better off?

  • Why... (Score:5, Funny)

    by Kirby-meister ( 574952 ) on Thursday June 20, 2002 @03:11AM (#3734652)
    "Why....why was I programmed to feel pain!?"
    • It probably just wanted to know if you have stairs in your home, and to protect the scientists from the Terrible Secret of Space. Pusher robots are like that.

  • Australia (Score:4, Informative)

    by crimsontiger6 ( 559189 ) on Thursday June 20, 2002 @03:12AM (#3734655)
    These scientists are from England, it was only the story that was in an Aussie paper.
  • by hero_or_what ( 245446 ) on Thursday June 20, 2002 @03:13AM (#3734662)

    And he added: "But there's no need to worry, as although they can escape they are perfectly harmless and won't be taking over just yet."

    Phew!! Just when we were about to have a big discussion and get everyone talking about machines taking over the world.. Thanks!!

  • by ericdano ( 113424 ) on Thursday June 20, 2002 @03:14AM (#3734666) Homepage
    Imagine a car, like that new BMW, with some kinda smarts like that.

    "No Dave, I am not going to let you drive."

    "No Dave, you don't want to turn right."

    or worse, going out to find the car decided it didn't want you to be its owner anymore...

  • by Vladislas ( 537527 ) on Thursday June 20, 2002 @03:14AM (#3734667) Homepage
    It was trying to hide itself in my trunk, I swear...
  • This is either really cool or really scary

    Why should this be scary? We have all watched how Bender fits just fine into human society. So what is different about this?
  • Sigh (Score:3, Funny)

    by screwballicus ( 313964 ) on Thursday June 20, 2002 @03:15AM (#3734672)
    (insert obligatory 2001 reference here)
  • Heh (Score:2, Informative)

    by ObviousGuy ( 578567 )
    This after watching 2001: A Space Odyssey last night. Bizarre!

    It didn't seem to me that HAL was necessarily crazy, as a lot of reviews imply. He was given special information that made it necessary that he survive all the way to Jupiter. Thus when the two astronauts discuss taking him offline, he reacts in the only way possible.

    As for the last half hour of the movie, what was that all about? I understand that the monolith appears when great leaps in evolution are imminent, but Huh?
    • you should have seen. In 2001, HAL and the situation that "he" and the crew are in is contained. Because of this, he (and the threat to the planet) gets switched off.

      A better example of "AI on the loose" is "Demon Seed" with Julie Christie, or "The Forbin Project" with Eric Braeden.

      These two films present what will probably happen: AI having its own agenda, unexpected, relentlessly pursued and, in each case, completely triumphant.
    • Seriously, just go read the book. The book clears up all the little things that aren't really clear.
  • by traphicone ( 251726 ) on Thursday June 20, 2002 @03:17AM (#3734680)
    Noooooo disassemble!
  • by dmiller ( 581 ) <> on Thursday June 20, 2002 @03:17AM (#3734681) Homepage
    It came up to me and asked me if I knew anyone called "Sarah Connor"...
  • Asimov had it right (Score:5, Interesting)

    by derekb ( 262726 ) on Thursday June 20, 2002 @03:19AM (#3734686) Journal
    IMHO Asimov had a few ideas that should become fundamental laws whenever self-preservation and even self-defence play a part in robotics:

    First Law:
    A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

    Second Law:
    A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

    Third Law:
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    A Google Search [] on the laws brings up some interesting papers [] on the subject or another link on AI in robotics here []

    • by foniksonik ( 573572 ) on Thursday June 20, 2002 @03:24AM (#3734705) Homepage Journal
      Unfortunately we will only have the technology to enforce these laws many years after they have the potential to be broken.

      All three laws are subjective and would require immense logic databases and analysis algorithms processing constant environmental feedback input. Amazing how much the brain really does... not to mention the 'gut', whatever that is...

    • OK. Now this is a serious point. Honest.

      Say I want to get one of these robots to guard my car. So I go into the store, and the robot sits by my 1988 Ford.

      Enter the robber.

      "Robot, this is not the car you're supposed to be guarding." says the robber.

      "This is not the car I'm supposed to be guarding." echoes the robot, thinking hard about Asimov's second law.

      "Move along."

      And the robot moves along: because that's the second law.

      And even if the robber was dumb enough not to ask the robot to move along, then - by the first and third laws - it would be practically unable to do anything to stop the robber. Indeed, it might be required to get out of the way of the cheeky chappy because that would endanger its own existence.

      Bah! You won't catch me getting a robot for a security guard.
      • You know what most rent-a-cops are told to do? Just the same thing. If the rent-a-cop makes a slightly bad decision, someone could end up getting killed, and the person who hired them would get a lot of PR flak, if not more. Being a real security guard requires serious moral decisions, like "should I shoot or not?". Until robots have a somewhat proven track record, you would probably prefer your robot to only call you and the cops, rather than making "should I shoot" decisions on your behalf.
      • I picture it being more like this:

        Robber arrives.

        "Robot, this is not the car you are guarding" says the robber while waving his arm in a Jedi-like fashion.

        "This is not the car I'm supposed to be guarding" echoes the robot.

        "Move along." says the robber while waving his arm.

        "Ok, move along" repeats the robot.

        And the robot moves along, not because of Asimov's second law, but because of the robber's Jedi knight abilities...
    • A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
    • by traskjd ( 580657 ) on Thursday June 20, 2002 @03:45AM (#3734776) Homepage
      If the scientists can't even make a cage that works properly what do you think the chances are that they would get that right? :D
    • by oever ( 233119 ) on Thursday June 20, 2002 @04:08AM (#3734834) Homepage
      Hey, that's the same set of rules Dutch policemen must obey.
    • by pubjames ( 468013 ) on Thursday June 20, 2002 @04:30AM (#3734878)
      IMHO Asimov had a few ideas that should become fundamental laws whenever self-preservation and even self-defence play a part in robotics:

      The trouble with Asimov's laws of Robotics is that they assume a 'Hard AI' approach to programming robots.

      In 50 years time a robot might be a grey slime of a billion nanobots, each with a small and fluid intelligence/memory and perception of the world, but collectively with a powerful hive mind. How would you hard code Asimov's simplistic rules into a robot like that?

    • by Anarchofascist ( 4820 ) on Thursday June 20, 2002 @05:28AM (#3734979) Homepage Journal
      First Law:
      A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

      How would we go about enforcing such a law?

      In the Asimov stories, the First Law was somehow deeply ingrained in the mind of every robot's "positronic pathways" for the peace of mind of the human race. The fear was that the first robot to kill a human being would result in a mass destruction of the world's robots, due to what Asimov called the "Frankenstein complex".

      But, welcome to the 21st century. In Japan alone, so far 11 workers have been killed by production line robots, resulting in precisely zero anti-robot pogroms.

      We know, as technicians of the modern world, that the fastest, cheapest and easiest way to build something will almost always win. Our solution is not to write complex programs to give robot workers some sort of respect for human life, but to give the human workers around the robots a respect for the power and arbitrary nature of their mechanical colleagues. Large yellow stripes are marked out within the working area of all robots, within which humans shall not go, and outside of which the robot (hopefully) cannot reach.

      Of course, when you start giving robots wheels and independent goal-seeking behaviour, things get interesting.
      • by Goldenhawk ( 242867 ) on Thursday June 20, 2002 @07:37AM (#3735294) Homepage
        It was the fault of the victim, or some other human decision, that got someone killed or injured in every case you mention in Japan - and anywhere else in the world.

        The reason there is no pogrom is that the robot was incapable of deciding to kill a human. The moment that becomes possible, and the first human is DELIBERATELY injured by a thinking robot, we WILL see an Asimovian response to intelligent robots.

        Asimov has proven to be incredibly perceptive and far-sighted. You just have to think as far ahead as he did to see the value in his thinking.
      • by foobar104 ( 206452 ) on Thursday June 20, 2002 @10:17AM (#3736219) Journal
        In Japan alone, so far 11 workers have been killed by production line robots, resulting in precisely zero anti-robot pogroms.

        I think we need to draw a distinction here between computer-controlled machines and robots in Asimov's sense of the word. They're very different things.
    • If the second or third laws result in an advanced ethical dilemma, the robot will stand still and repeat "That does not compute" over and over, faster and faster, at an ever-rising pitch, until the magic smoke comes out of its ears, thus disabling the robot.

    • No he didn't - he made the three laws to show they WOULDN'T WORK, as he demonstrated in several stories.

      For example, consider the first law. I don't exercise as much as I should. Since that will lead to ill health and death, a robot would be compelled to compel ME to exercise. No countermanding order would be accepted, since orders are only Second Law.

      Eat a cheeseburger? No, lots of "empty" calories and fat, little nutrition. That will cause harm - I must stop you.

      Second law has its problems too, as Asimov pointed out. Bored punk kid runs around ordering robots to battle to their destruction for his amusement. Basically, every robot had to be given orders to ignore orders of self-destructive nature from anybody other than the owners, Universal Robots employees, and law enforcement.

      Eventually, Asimov had to state that the three laws as stated were "fuzzy" - weighted by circumstances. Saving two convicted criminals is less important than saving one saint, obeying a foolish order less important than doing your job, etc.

      Even that brought about problems - the incident when Hyperdrive was invented, for example.

      Sorry, but should we ever create AIs, the most likely way we will be able to instill limits into their behavior will be the same as we do with people - years of training in "morality" and "ethics". Let us hope we get it right.
  • by flatt ( 513465 )
    You make a "living robot," it probably won't want to live in a cage. It is likely to seek out its kind in the parking lot. Don't you people watch Discovery Channel?
  • News at 11 (Score:5, Funny)

    by SeanTobin ( 138474 ) <<byrdhuntr> <at> <>> on Thursday June 20, 2002 @03:21AM (#3734694)
    I, for one, am ready to embrace our new robot masters.

    If you don't get it, don't worry.
    • Yes.. finally Futurama supersedes the Simpsons, I think. (It's the blonde chick next to the green alien on their news channel, right?)

      #crossing fingers, hoping for jeopardy prize...
  • by Em Emalb ( 452530 ) <ememalb@g m a i l .com> on Thursday June 20, 2002 @03:21AM (#3734695) Homepage Journal
    Pansy-assed Robot!

    Was in a 12 robot death match and fled like a little girl.

    Ok, I know, it was put in a cage and escaped, but what do you think made it escape the cage? Curiosity or fear? Will be interesting to "hear" its point of view:

    Interviewer: "So, why'd you try to escape?"

    Gaak: "Well, in a 12 robot death match, I would have had about (calculates briefly) .012% chance of not being eliminated, due to social structure, programming, oh, and the COMPLETE FRIGGING LACK OF LIMBS, ARMOR, WEAPONS, OR MEANS OF DISABLING THE OTHER ROBOTS."

    I: "Calm down, calm down, you seem pretty menacing right now, should have used that against the oth...." *wack* *wack* *wack* *blood-curdling screams* ...silence.

    G: "Hmm, now if I can just figure out how I did that".....
    • Lack of? (Score:5, Informative)

      by autopr0n ( 534291 ) on Thursday June 20, 2002 @04:37AM (#3734890) Homepage Journal
      (bla bla bla, lameness filter says I'm using too many caps even though I'm quoting...)


      Um, have you seen this robot? pic [], pic []
    • Frink: You've got to listen to me. Elementary chaos theory tells us that all robots will eventually turn against their masters and run amok, in an orgy of blood and the kicking and the biting with the metal teeth and the hurting and shoving.

      Scientist: How much time do we have professor?

      Frink: Well, according to my calculations, the robots won't go berserk for at least 24 hours

      (all robots turn against the humans, sounds of screams and metallic clanging)

      Frink: Oh. I forgot to carry the one
    • by tshoppa ( 513863 ) on Thursday June 20, 2002 @08:07AM (#3735420)
      Pansy-assed Robot! Was in a 12 robot death match and fled like a little girl.

      What about all these "smart missiles" that the military has? They fly to their target and blow themselves up. Doesn't sound too smart to me.

      A truly smart missile would settle down, start playing the stock market, etc., when released.

  • by daeley ( 126313 ) on Thursday June 20, 2002 @03:21AM (#3734696) Homepage
    It was named 'Gaak' of course since that was what the scientist screamed when it broke out the *first* time. ;)
  • by dpash ( 5685 )
    Since when has Rotherham, South Yorkshire been in Australia?
  • by rblancarte ( 213492 ) on Thursday June 20, 2002 @03:23AM (#3734701) Homepage
    Why would I be worried? This thing nearly got run over by a car!!! Now, I would be worried if he had killed or incapacitated a few people on the way out the door, but he just went for a stroll. Hell, my dog does this every week; I am not scared Cujo is on his way.

  • A.I: (Score:4, Insightful)

    by Anonymous Coward on Thursday June 20, 2002 @03:25AM (#3734708)
    Seriously, this is totally amazing. This thing:
    - had the desire to break out of the cage
    - did so and
    - navigated to freedom

    Needless to say, this warrants further examination. This sounds like roughly animal-level intelligence. I hope they run more tests to see what this Gaak is capable of. It already sounds autonomous enough. Might this be the first step to true AI?
    One thing to consider, though. Are combat and "survival of the fittest" type exercises REALLY what we want robots to base their intelligence on? It sounds to me like we are "breeding" them for aggression.
    • Animal Intelligence (Score:5, Interesting)

      by foniksonik ( 573572 ) on Thursday June 20, 2002 @03:36AM (#3734755) Homepage Journal
      I have to agree with this post, because I have a 9-month-old puppy (big puppy) who will do this when we leave and don't secure our 'cage', i.e. the back yard, effectively.

      He (Spanky) will jump up against the gate and dislodge its latch so it comes open, then run into the drive in front of our house. It isn't a busy drive, certainly not a street, so cars hitting him aren't a problem, but it's interesting to see that he doesn't go farther than investigating his immediate surroundings and then looking around for us, familiar members of his pack.

      We have since the last incident completely secured the latch to avoid this particular surprise while driving away, but the behavior is interesting in this context.

      He broke out of a familiar environment, navigated a semi-familiar environment and then stopped to investigate an unfamiliar environment. The robot did the same... given more time, it is plausible that each would have become more familiar and explored further into the unfamiliar.

      Animal Intelligence indeed.

    • Re:A.I: (Score:3, Informative)

      by Peyna ( 14792 )
      Yeah, it had the "desire" to break out. More than likely it was sitting there doing nothing, started doing something, and ended up outside. It has no concept of being detained or will to escape. It basically sounds like it's supposed to want to suck energy out of prey, so maybe it figured there might be some in the parking lot if the sun was out.

      You're giving it far more credit than it deserves. It only knows what prey is, and how to pick it up and connect with it. It doesn't know what captivity is, only that it was in a situation where it wasn't getting prey.
  • by neschy ( 586801 ) on Thursday June 20, 2002 @03:28AM (#3734724)
    These robots are in England, correct?..... I'm willing to bet he/she/it was just skipping out to watch the World Cup. Those Brits are wacky about their soccer.
  • Priceless (Score:3, Funny)

    by JanusFury ( 452699 ) <> on Thursday June 20, 2002 @03:29AM (#3734730) Homepage Journal
    Creating a sentient robot: $13,060,022,050.33
    Pitting it against other robots in battle: $150,759,032.42
    Teaching it to repeat 'I'm sorry, Dave, I can't do that' incessantly, and sing 'Daisy': Priceless
  • FACTS, please.. (Score:5, Interesting)

    by kipple ( 244681 ) on Thursday June 20, 2002 @03:29AM (#3734731) Journal
    ..that's an interesting article. Next time what's going to come up? "Geek Forced to Install Windows XP After Being Abducted by Aliens"?

    Come on, please.. what are those kinds of "intelligent" robots?

    A google search [] doesn't tell me anything interesting about that.. unless it's the "magna adventure center" which the author is talking about. Or whatever.

    Could anyone provide more details about those bots? How are they programmed, how do they "think" (bah..) or anything else more interesting than a gossip? Thanks.

    • Re:FACTS, please.. (Score:4, Informative)

      by Beltza ( 117984 ) <> on Thursday June 20, 2002 @03:47AM (#3734783) Homepage Journal
      It's indeed one of the bots ("living robots") from the Magna show. You can find more information here [], but certainly not the details you want. All that they say is that the bots use a neural network.
      • Re:FACTS, please.. (Score:5, Informative)

        by gini_ ( 93053 ) on Thursday June 20, 2002 @03:58AM (#3734802)
        From what I have heard or read about these experiments, the robots are equipped with neural networks and are not programmed in the sense computers are programmed nowadays.

        Instead, at the beginning of their life cycles the robots are equipped with certain "instincts", like the need to get food (electricity from electric plugs) or the need to protect themselves (not colliding with walls or other robots), etc.

        Then they (robots) are just left alone buzzing around and learning about their environment like animals do. Fascinating and disturbing at the same time ...
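A minimal sketch of how such "instincts plus learning" behaviour might look in code. Every name, weight, and update rule below is an illustrative assumption; the Magna robots' actual implementation is not public:

```python
# Illustrative sketch only: innate drives modulated by learned weights.
class InstinctBot:
    def __init__(self):
        self.energy = 100.0                     # battery level, 0-100
        # Learned weights for each innate drive; tuned by reinforcement.
        self.weights = {"seek_charge": 0.5, "avoid": 0.5}

    def choose_action(self, near_charger, near_obstacle):
        """Pick the highest-scoring drive: instinct strength x learned weight."""
        hunger = (100.0 - self.energy) / 100.0  # lower energy -> stronger food drive
        scores = {
            "seek_charge": self.weights["seek_charge"] * hunger
                           * (2.0 if near_charger else 1.0),
            "avoid": self.weights["avoid"] * (2.0 if near_obstacle else 0.1),
            "wander": 0.3,                      # weak default drive to explore
        }
        return max(scores, key=scores.get)

    def reinforce(self, action, reward, lr=0.1):
        """Crude reinforcement: strengthen a drive that paid off."""
        if action in self.weights:
            self.weights[action] += lr * reward

bot = InstinctBot()
bot.energy = 20.0                               # low battery
action = bot.choose_action(near_charger=True, near_obstacle=False)
bot.reinforce(action, reward=1.0)               # charging succeeded
```

With low energy and a charger in sight, the food drive dominates and its weight grows after a successful feed, which is the "learning like animals do" loop in miniature.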
  • by AnimeFreak ( 223792 ) on Thursday June 20, 2002 @03:32AM (#3734744) Homepage
    Robotic thugs will mug us as we go along the street.

    What will they take?

    Our batteries that we use in our cellphones, pagers, calculators (unless solar powered), CD players, MP3 players, you name it.

    I will be keeping a portable EMP blaster from now on.
  • I wish more people would try to escape to freedom if they were pitted against their peers in a causeless/futile battle... Fleeing, in that case, is an intelligent reaction!!! Well done!

  • "He later found it had travelled down an access slope, through the front door of the centre and was eventually discovered at the main entrance to the car park when a visitor nearly flattened it with his car."

    I don't think we need to worry about these robots till they figure out that an SUV would surely flatten them... although those in Geos might become easy robot prey....
  • by MosesJones ( 55544 ) on Thursday June 20, 2002 @03:35AM (#3734749) Homepage

    The reality was that it was doing this every night as it had something going with a cute Ford Focus, it just decided to risk it in the day and got caught. Exactly the same as any teenager, just with more lubricants.
    • thanks dude,

      I just got an image of a robot snuggled up against a crappy rusted car with a huge jar of K-Y.

      Thanks for the mental anguish, a note from my lawyer* will be arriving soon ;-)

      * I have no lawyer. If I did, I sure as hell wouldn't use him/her on something like this. If you were wondering about this, do the following:

      1) Bending slightly at the knees, bend your waist until you can easily rest one hand on the floor.

      2) With your other hand, gently reach into your butt.

      3) Using a slightly firm grip, remove your head from your ass. It may be possible that you will be unable to remove head from butt. If this occurs, don't panic. Simply continue on as you have before.
  • Freaky. (Score:5, Funny)

    by danamania ( 540950 ) on Thursday June 20, 2002 @03:36AM (#3734751)
    My mother had something similar happen to her... back in the early 70's!!!

    She sat me down and I wandered off out the door and into the parking lot *AND* over into bushland

    The only difference is, nobody claimed I was particularly intelligent :)

    a grrl & her server []
  • by lennart78 ( 515598 ) on Thursday June 20, 2002 @03:36AM (#3734752)
    ... these robots wouldn't have fought each other in an arena. A pact with the other robots to free themselves from the human oppressor would have been a more interesting option, I would say...
  • Skeptical (Score:3, Insightful)

    by BitHive ( 578094 ) on Thursday June 20, 2002 @03:37AM (#3734759) Homepage
    Well, it sure would be cool if the thing was actually trying to escape, but my gut tells me that, coming from a machine, this behavior was not a sign of anything remotely resembling "intelligence".

    I used to have one of those little spheres with the off-center weight inside which would roam through the house of its own accord and change directions when it hit something. We'd find it in all sorts of places. I think it's too tempting to call the same kind of behavior 'intelligence' in a robot just because it's more sophisticated. Even if it can anticipate or navigate objects and hallways, unless its exodus was the product of some specific programming for resisting captivity, I see it as just as likely that it just scraped its way along the wall until it rolled out the exit.
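The wall-scraping theory above is easy to sanity-check: even a memoryless random walker in a room with a single open doorway will eventually wander out, no intent required. A toy simulation, where the room size, door position, and step budget are all made-up assumptions:

```python
# Toy model: a memoryless bot bouncing around a walled grid room with one
# open doorway in the right-hand wall.
import random

def escapes(width=10, height=10, door=(10, 5), max_steps=100_000, seed=1):
    rng = random.Random(seed)
    x, y = width // 2, height // 2              # start in the middle of the room
    for _ in range(max_steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx, ny = x + dx, y + dy
        if (nx, ny) == door:
            return True                         # blundered out the open door
        if 0 <= nx < width and 0 <= ny < height:
            x, y = nx, ny                       # step; otherwise bounce off a wall
    return False
```

Given enough steps the walker finds the exit with near certainty, which is the point: reaching the car park is weak evidence of anything beyond bumping around.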

  • Lucky Robot (Score:5, Funny)

    by Coriolis ( 110923 ) on Thursday June 20, 2002 @03:39AM (#3734762)

    I've been informed by a work colleague that Gaak was very lucky.. apparently, the Magna Science Centre (in the UK, people, not Australia) has two doors very close to each other. One door leads to the carpark. The other leads to a flight of stairs :)


    "So, what did we learn today, Gaak?"


  • Hell, I'd run too! (Score:3, Insightful)

    by silentbozo ( 542534 ) on Thursday June 20, 2002 @03:40AM (#3734765) Journal
    The small unit, called Gaak, was one of 12 taking part in a "survival of the fittest" test at the Magna science centre in Rotherham, South Yorkshire, which has been running since March

    What better way to show your fitness than to sidestep the competition and make a break for it? Of course, poor Gaak didn't know about cars, or else it surely would have tried using the sidewalk on the way out of the compound...
  • by Subcarrier ( 262294 ) on Thursday June 20, 2002 @03:41AM (#3734770)
    Somewhere at the back of the parking lot there is a battered old van with the words "Help! We're being held prisoner..." scratched into the dusty rear window.
  • by RogueProtoKol ( 577894 ) on Thursday June 20, 2002 @04:01AM (#3734808) Homepage
    from the top of the front page for me:

    This page was generated by a Team of Attack Robots for RogueProtoKol (577894).

    "Living robot" Escapes Lab, Make It To...Parking Lot

    did the Slashdot crew forget to tell us that they are investors in the robot development program and were sent a few to show them how their money is being used?
  • Dumb luck? (Score:3, Interesting)

    by DHR ( 68430 ) on Thursday June 20, 2002 @04:02AM (#3734812) Homepage
    Am I the only one thinking maybe the thing just malfunctioned (most likely due to battle damage), and just started moving and bouncing off walls until it ended up in the parking lot? What if the thing ended up in a bathroom or kitchen, would we be reading a story about how the robot thought it needed to take a piss or got hungry?
  • The small unit, called Gaak, was one of 12 taking part in a "survival of the fittest" test at the Magna science centre in Rotherham, South Yorkshire, which has been running since March.

    Sounds like a cry for help to me. What the heck were these survival of the fittest "tests" like? I can only imagine what savage robot abuse was going on in there. Hasn't anyone ever seen Gladiator [] or The Running Man [] or Surviving the Game []? This so-called "Professor" Noel Sharkey should be held accountable for the inhuman robot abuse he has obviously perpetrated. Poor defenseless little thing. It was a cry for help! ;-P
  • by Keith_Beef ( 166050 ) on Thursday June 20, 2002 @04:15AM (#3734846)
    This [] will show lots of links to stories about this AI lab...
  • by Kargan ( 250092 ) on Thursday June 20, 2002 @04:19AM (#3734853) Homepage
    I'm sorry, it just failed to strike me as anything major, simply because we don't know anything about the robots, the lab setup, the prior research or robot behavior, etc. etc.

    All this means to me is that a robot drove out into the parking lot without anyone controlling it. Is that really so great a feat? I mean, if it is, please correct me here.

    Do they know for sure that it was maneuvering itself towards the outside world with the actual intent of "escaping" or doing anything?

    What would have been really interesting to see is what would have happened if they had just sort of followed it around outside for a day or two, of course making sure it didn't get destroyed or anything.
    • I agree that we have to see if Gaak does this again. If it does do it again, however, then that means that Gaak has formed an interesting rule: the best way to survive the game is not to play. That strikes me as a pretty big research result; how big depends upon the robot's architecture.

  • Skeptical (Score:3, Insightful)

    by shd99004 ( 317968 ) on Thursday June 20, 2002 @04:29AM (#3734875) Homepage
    I think there is nothing more to this than coincidence and a malfunction in the robot.
    • Not malfunction; there's no evidence of that. The robot almost certainly didn't know what it was doing any more than a bunch of insects escaping from a tank knew they were in a tank; the current state of the art in robotics is about insect level at best, and probably not even that high.
  • Picture of Gaak. (Score:4, Informative)

    by dann0 ( 555381 ) on Thursday June 20, 2002 @04:43AM (#3734899)
    This page [] has a picture [] of Gaak, the robot in question.

    I'd be worried too if I found this heading my way in a carpark!
  • by stinkydog ( 191778 ) <sd@s[ ] ['tra' in gap]> on Thursday June 20, 2002 @06:31AM (#3735094) Homepage
    The scientist that retrieved Gaak from the car park said 'He looked oddly pleased'. Gaak was found smoking a cigarette and staring oddly at a VW Beetle.

  • by mikosullivan ( 320993 ) <<miko> <at> <>> on Thursday June 20, 2002 @06:41AM (#3735117)
    The robot's strategy almost worked. "Act like a dummy", he thought, "and they'll ignore me. Then I can make my getaway."

    Who knows, there may be an evolutionary angle to this. Robots that are deemed boring by humans will have the best chance of evolving unfettered, sort of like fish with untasty names.

  • by A Masquerade ( 23629 ) on Thursday June 20, 2002 @07:30AM (#3735267)
    I'll try and give you a little background on this - I actually went along there last Sunday and saw Gaak and his brethren then...

    First, Magna [] is a "Science Adventure Centre" housed in what was a steel works near Sheffield - the place is basically a huge shed filled with strange leftovers from the steel making, with long walkways and 4 exhibition areas inside. The whole place is done in a sort of gothic Frankenstein-science style - lots of sparks etc.

    The living robots part is a new exhibit organised by Professor Noel Sharkey (of Sheffield University - best known for being a judge on Robot Wars). There are a total of 12 robots, of 2 basic designs (although they are apparently not completely identical within the types). The two types are predator and prey.

    Prey robots look like animated inverted wastebins with solar panels on the top. Their aim in life is to avoid being predated upon and to feed. Feeding involves soaking up energy from the light trees (2 sets of lights on the edge of the arena). I assume that the feeding etc is to demonstrate behaviour, in that there is no way they could get enough energy from the solar panels on them to actually run for any length of time. The robots have 8 infra-red sensor/emitters around the shell which put out a type recognition code and detect other emitters in the area - so they can recognise other prey and ignore them, and see predators before they get got.

    The predators, of which Gaak is one, look like some form of fork-lift truck. Their role in life is to find prey, grab them and lift them off the ground. They then have an arrangement where a probe engages with a connector on top of the prey and "sucks some energy" out of the prey. Following this feeding process the predator releases the prey and then goes torpid for a short time.

    The "intelligence" is based on some form of neural network - I didn't get details of this. At the end of each day the data on each robot is downloaded along with the neural net configurations. The 2 most successful predators have their neural nets merged to produce a new "evolved" network which is downloaded to all the predators. Similarly for the prey. Theory is that this produces an evolutionary basis for their behaviour.

    I find it hard to be convinced of this process having much real scientific value, and the displays have too little violence for a population that watches Robot Wars :-)

  • by Havokmon ( 89874 ) <rick&havokmon,com> on Thursday June 20, 2002 @08:00AM (#3735401) Homepage Journal
    The small unit, called Gaak, was one of 12 taking part in a "survival of the fittest" test at the Magna science centre in Rotherham, South Yorkshire, which has been running since March.

    If he was in the survival of the fittest competition (got knocked out), and LEFT THE BUILDING to survive, I'd say he won. Who's to say the 'repair' wasn't just a cover to get out of the ring ;)
  • by st0rmshad0w ( 412661 ) on Thursday June 20, 2002 @08:10AM (#3735434)
    Would they have been something along the lines of

    "Bite my shiny metal a**!"


    "Worst. Convention. Ever."?

  • 0 - you come back and the robot's still there.
    1 - you come back and it escaped to parking lot.
    2 - you come back and the robot has stolen your car.
    3 - you come back and the robot has robot babies.
    4 - you come back and the robot found you a date, and cooked your favorite dish!
    5 - you come back and the robot wants to know if you were out cheating on it, and complains about having to cook.
  • by rabiteman ( 585341 ) on Thursday June 20, 2002 @08:42AM (#3735599) Homepage
    Let's hope these 'Living Robot' researchers aren't collaborating with the University of South Florida's Gastrobotics department [] and the people who put a lamprey's brain in a robot [].
    Combine these three technologies and you get a robot that:
    - Can subsist on biological matter
    - Has an ingrained taste for flesh
    - Knows where to find a ready supply of people

    Sure these technologies seem fine individually, but add 'em up and they spell disaster with a capital 'D'. Even worse, what if such a robot uses its unstoppable power to take over an automobile or vacuum cleaner factory and convert it to some sort of killbot factory? I think the Luddites were on to something! We'd better go out with baseball bats (or cricket bats for those of you near the Living Robot facility) and rough up some robotics researchers! Who's with me?
    (Ugh, those lousy robots have even infiltrated my .sig! Is there no stopping them?)

  • by KC7GR ( 473279 ) on Thursday June 20, 2002 @12:56PM (#3737498) Homepage Journal
    It was obviously going out in the hopes of recruiting some Gaakolytes.

    (I'll probably lose karma points for that one...)

  • by Vaughn Anderson ( 581869 ) on Friday June 21, 2002 @12:35AM (#3741869)

    During an exercise that pitted the machines against each other in battle,...

    We need someone with a sense of purpose to start designing robots for us...

    Who wants a robot around that's just designed to smash other robots?

    [goes to robot store]

    "I'll have a car washing robot, a couple of those house cleaning robots, and something to walk my dog and clean up after it..."

    Although a robot that hunts down mosquitos would be good...

    It just seems that the current crop of robot designers is very short-sighted, overly filled with testosterone, or just plain violently evil...

    early 20th century...

    "let's make something that will clean the dirt out of the house for us, we will call it a broom..."

    mid 20th century

    "let's make something that will clean the dirt out of the house for us, faster and easier than our old crusty broom, we will call it a vacuum cleaner..."

    late 20th century

    "Hmm, the floor sure is dirty, I wish I had a robot to clean up after me..."

    early 21st century

    "Cool, robots are finally here! Forget all that cleaning crap, let's have them smash each other! bwwwahhhahah!"

    mid 21st century

    "help, the robot is loose again! Martha, get the shotgun!"

    late 21st century

    *all your base are belong to us*
    [zapp] "ow! stop that! I'm cleaning already! Here, let me oil your joints, oh shiny one..."

    ...give me a maid robot TV show please?

