3 Bots Win Pentagon's Robotic Rally 81

An anonymous reader writes "We've got a winner in the Pentagon's $3.5 million all-robot street rally, the Urban Challenge. Three, actually. Wired reports that 'bots from Stanford, Virginia Tech, and Carnegie Mellon all completed the course within the six-hour time limit. The robo-cars had to complete different missions taking varying times, so the flesh-and-blood judges will take a day to figure out who takes home first prize."
  • by LiquidCoooled (634315) on Sunday November 04, 2007 @06:20AM (#21230355) Homepage Journal
    They merged up into a superbot and won the contest by destroying everything around them.
  • by ZaMoose (24734) on Sunday November 04, 2007 @06:31AM (#21230393)
    Ben Franklin Racing [] (a collaboration between UPenn, Lehigh and Lockheed Martin) also finished within the 6 hour time limit.

    The judging will certainly be interesting.
    • by vertinox (846076) on Sunday November 04, 2007 @08:53AM (#21230775)
      Yeah, but I think Ben was penalized for waiting too long at an intersection during the first mission. After they rebooted it, it seemed to be OK and actually did pretty well getting around the intersections it had problems with. When Honeywell was taken out of the race, Ben was waiting behind it at the stop sign until they pulled Honeywell out, but I don't think they penalize for that.

      It is worth noting that the two teams that finished last were MIT and Cornell, which had a collision with each other somewhere around Mission 2. But they both finished, which is pretty awesome considering what it takes to run this course.
    • by samkass (174571)
      That's unclear from the article... the article says the race started at 8am and Ben came in at 2:50pm. Do you have an alternate link to different results?

  • by Anonymous Coward on Sunday November 04, 2007 @07:13AM (#21230509)
    I will admit that I haven't read up on the exact rules, but I find the name Urban Challenge to be a bit misleading. From what I have seen, the environments are very sterile compared to real-life urban environments, yet the name gives the impression that robots can now drive in a city like New York. This reminds me of the '60s, I think it was, when computer scientists claimed that because a robot could stack boxes, we would have androids in 20 years. Then it became clear that the algorithms didn't scale well with the complexity of the environment (to put it nicely), and Artificial Intelligence became a somewhat disappointing field, for the general public at least.
        All I am saying is that we and the tech journals should be careful with exciting names like "Urban Challenge" or "60 miles through urban landscape".
        Other than that, congratulations to the teams; I didn't expect such good results.
    • by mangu (126918) on Sunday November 04, 2007 @07:47AM (#21230577)
      Then it became clear that the algorithms didn't scale well with the complexity of the environment (to put it nicely) and Artificial Intelligence became a somewhat disappointing field for the general public at least.

      That's the problem with hype. They have cried "wolf" too many times. It was the same thing at the end of the 19th century, when people were researching flight. Steam engines were too heavy for their power; airplanes had to wait until engines became powerful enough. There were many people, among them some respectable scientists, who wrote articles "proving" that heavier-than-air flight was impossible.

      At this point, computers are too expensive and consume too much power to be practical for anything that involves "human-like" intelligence. But we are making progress, at least we do have unbeatable chess-playing computers, a feat that not so long ago many people considered impossible. Of course, computers do not follow the exact path of reasoning that humans do when playing chess, but they are unbeatable anyhow. Airplanes do not flap wings either, but they fly faster and higher than any bird.

      Unless Moore's law ceases to function, we can expect desktop computers with a complexity comparable to that of a human brain in twenty years or so. Given the hardware, it's only reasonable that someone will invent a way to make a computer emulate a human brain in its full power, just like people invented machines capable of flying when they got engines with enough power.

      • Re: (Score:2, Interesting)

        by hyades1 (1149581)

        I've heard estimates of both one and three terabytes as adequate storage to accurately reproduce a neuron-by-neuron reconstruction of a human brain. Assuming they figure out how we assemble and integrate everything to produce sentience, 20 years might be longer than we need.

        Remember how the Luddites used to sneer that a computer the size of Manhattan couldn't model the behaviour of a cockroach? Then somebody figured out that about 6 basic commands would do the trick?

      • by HUKI365 (1113395)
        Yeah, but we have been saying this since the 1980s! We are approaching 2010 and have done little of what science thought would happen. No civilian space flight, no humans on other planets, no artificial intelligence, no cure for AIDS, cancer, asthma or the common cold. No voice recognition or text-to-speech worth two hoots. Only now are we moving close to reasonable touch displays and miniaturised memory!
        • by vertinox (846076) on Sunday November 04, 2007 @08:59AM (#21230805)
          No civilian space flight, no humans on other planets, no artificial intelligence, no cure for AIDS, cancer, asthma or the common cold.

          Actually, I don't think it's the fault of emerging technologies; rather, an emerging technology must answer the following questions:

          1. Is it profitable?
          2. Is it mass-producible?
          3. Does it follow a decentralized free market model rather than a centralized regulated model?

          No one predicted cell phones and the internet in the 80s as they are now, yet if someone told me in 1989 about Youtube on my hand held device I would have scoffed at it being too Star Trek like. Yet today we have such technology.

          The reason we don't have AI, Civilian Space Travel, and Flying Cars is because they meet none of the 3 criteria I mentioned. AI today would require a computer that costs billions of dollars to build, a civilian space program would cost billions to build, and the flying car industry would be too regulated and dangerous to even consider marketing to people.

          Which is why cell phones and the internet caught on: those things are quite decentralized in how they work (yeah, I know the cell phone companies are monopolies, but you can sell someone a cell phone and it doesn't cause any problem with the rest of the system, etc.).
          • by samkass (174571)
            No one predicted ... the internet in the 80s as they are now, yet if someone told me in 1989 about Youtube on my hand held device I would have scoffed at it being too Star Trek like.

            Not "no one"... at the risk of bringing up stupid falsehoods about who did or didn't claim to have invented what, here's part of the text of a speech Al Gore gave to congress in 1986:

            Mr. President, it gives me great pleasure to support the proposed National Science Foundation Authorization Act.


            Both of these amendments seek
            • "As they are now" is an important qualifier in the grandparent post. Gore's speech spoke to many of the technical requirements, but the way the 'net is actually used today, and its extremely broad importance, simply weren't on anyone's radar in those days. I started using the 'net in the 88-89 time frame, and e-mail was about as exciting as it got at the time. There was speculation in places like usenet discussions, but the earlier poster is correct -- nobody predicted the net as it is now.
              • by samkass (174571)
                From the speech: "Today, we can bank by computer, shop by computer, and send letters by computer. Only a few companies and individuals use these services, but the number is growing and existing capabilities are limited."

                I agree that no one could have predicted every detail, but the web was created, in part, with the money Gore drove through Congress to fund exactly this sort of capability. Considering the speech is from 1986, I think it's pretty prescient.
          • Re: (Score:3, Informative)

            by ucblockhead (63650)
            Lots of people predicted cell phones. In Heinlein's futurist essay written in 1950, he predicted in 2000 that everyone would have a wireless phone you could put in a pocket. He revisited this essay a couple times and in the last revisitation in 1980 he referred to the wireless phone prediction as "obviously correct".
        • No voice recognition? Have you ever tried Dragon Systems' NatSpeak 9.0? Seriously. I am just a very happy user, not some shill. It is astounding. However, I am also very disappointed by the future. I mean, here we are, and we are nowhere near where I thought we would be by now. No Mars, no moon presence, no cool robots. Okay, the internet, but the telegraph was far more of a jump in its day than the net was in its. However, IMHO you can check speech-to-text off the list.
      • by LS (57954) on Sunday November 04, 2007 @09:07AM (#21230847) Homepage
        I hear what you are saying - I also believe that anything is possible given enough time and hard work. Yet I think you are VASTLY underestimating the task of creating a human-like intelligence. Faster and more powerful != More Intelligent. Flight and chess are child's play compared to the human mind. It's also a false assumption to believe that a Turing architecture machine will be able to simulate the human brain with whatever specious equivalence used to compare human and computer processing power. The brain is NOT a computer. Computers themselves are simple expressions of a mere slice of how we understand our own mental processes to work. Do you know anyone who understands how any mind works, let alone their own, whether they be computer scientists, psychologists, cognitive scientists, or neuro-biologists? To put it simply, in order to expect a human-like intelligence in 20 years requires two things we do not yet have: An understanding of human intelligence, and a hardware architecture that is able to implement it.

        • Re: (Score:2, Insightful)

          by jpfed (1095443)

          Faster and more powerful != More Intelligent.

          Sorry to nitpick, but yes, it does. Intelligence as a function of speed, power, and strategies remains monotonically increasing with speed and monotonically increasing with power, up to the bounds of the complexity of the problem domain (cf Go and Tic-Tac-Toe). It just so happens that there will be diminishing returns on existing strategies, and finding new strategies will at some point be more cost-effective than making things faster or more powerful.

          To put it simply, in order to expect a human-like intelligence in 20 years requires two things we do not yet have: An understanding of human intelligence, and a hardware architecture that is able to implement it.

          For some definitions of "human-like", sure. But a s

        • by mangu (126918)
          Do you know anyone who understands how any mind works, let alone their own, whether they be computer scientists, psychologists, cognitive scientists, or neuro-biologists?

          No, I don't. Yet it's funny what so many people say about Deep Blue: "that's not the way humans reason about chess". Huh? If we have no idea how the human mind works, how can we be so sure that deep inside a human chessmaster's subconscious mind there isn't a search engine looking over all possible game positions?

          We do have a rather good i

        • by Petersson (636253)
          Computers themselves are simple expressions of a mere slice of how we understand our own mental processes to work.

          Computers are just interrupt-driven number crunchers. That's what they do. They do it very quickly and very effectively.

          The letters I see on my screen when I'm writing this text are no real letters; it's just a lot of color dots and some digital representation in the computer's memory. There's no spoon.

          Computers are similar to the brain as a steam locomotive is similar to a horse (sorry, I just hate car a
        • "The brain is NOT a computer. Computers themselves are simple expressions of a mere slice of how we understand our own mental processes to work."

          Funny, last time I checked, computers were machines capable of emulating any mathematical operation... not based on how we think our brain works, nor limited to it.

        • We don't necessarily have to understand human intelligence in order to duplicate it. Given a fast enough computer one could simulate all the individual cells in the human brain and body with high fidelity, thereby creating a "virtual" human who has human intelligence (and flaws, too).
      • by gaelfx (1111115)
        A.I. maybe in 20 years, but androids? Just because we have intelligence does not mean we understand how to use it. Artificial Intelligence might work on a computer, and no matter how portable that computer is, it doesn't mean we'll be able to transport it intelligently or even very dynamically, regardless of cost. But then again, they will invent a time machine, ensuring its use for the military and thereby making it the single greatest threat to the existence of man. We do need something new to hunt :D
        • Building a human-form robot is easy enough. Refine the basic mechanical parts, get a better battery, and work on a better balance system and you're all set. Controlling it is a bit trickier without an AI, but I think we could feasibly build a robo-maid in the next 20 years. Maybe not to the point of affordable mass production, but certainly clever enough to do the laundry, wash the dishes, and make the occasional grilled cheese sandwich for researchers.
      • by SmallFurryCreature (593017) on Sunday November 04, 2007 @09:28AM (#21230995) Journal

        You link computing power with intelligence; clearly this means you know NOTHING about programming. Doom does NOT become F.E.A.R. by simply running it on a faster CPU.

        SOMEBODY has to write the program that becomes the AI. It REALLY does not matter that much how fast the underlying hardware executing that program is; the simple fact is that AI code right now just ain't that smart. Not even if an AI can take weeks to calculate can it come anywhere close to what a human can do in terms of reasoning with the input available.

        A smart program that is just very slow would be an amazing breakthrough and if that happened then all we need to do is wait for computers to get faster, but right now the AI code just ain't there. If it was, it would long since have been given a supercomputer to run on.

        These robots in the challenge have a simple task that any human can do: "see" the environment and act upon that information. For years this has been attempted, and the systems just ain't getting any better despite the fact that computing power has skyrocketed. Simply put, no code exists that can take a video image and reliably, consistently turn it into information that tells the decision-making software what the environment is like.

        For instance, the detection systems have problems with blue-colored cars against asphalt. Consider a human being: put a car painted blue against a background painted the exact same color, in blue lighting, so it totally blends in. Wouldn't fool a human for a second, since we would still see the windshields, and through them the interior of the car, and reason out that there must be a car there even if we cannot see the bodywork.

        Same with another obstacle, a barrier hanging in the air. The teams actually complained about this because they thought all the barriers would be on the ground. This shows you why AI programming is so bad; the programmers are morons: the barrier involved is very common at road blocks. A car designed and programmed to only scan the ground is unable to determine that a barrier might be higher up.

        Worse, when it hits it, it can't react to it. The cars have to be stopped; not one of the cars, not even the best, was able to simply stop, back up and try a different course.

        You can throw more GHz at it, but all that will give you is faster dumbness.

        What happens when a computer has the same complexity as the brain? You will have a very fast, braindead piece of machinery. It is the programming that matters.

        Your analogy to flying is flawed: we knew that things could fly, gliders had been around for ages; all that was needed was a power source with a good enough power-to-weight ratio. We do NOT have the AI code or any idea how to make it. Compare it to, say, faster-than-light travel. We don't know how, so claiming that if only we develop an infinite source of energy we can do it is flawed.

        • by Orange Crush (934731) on Sunday November 04, 2007 @09:58AM (#21231195)

          I agree with you except on one point:

          Your analogy to flying is flawed: we knew that things could fly, gliders had been around for ages; all that was needed was a power source with a good enough power-to-weight ratio. We do NOT have the AI code or any idea how to make it. Compare it to, say, faster-than-light travel. We don't know how, so claiming that if only we develop an infinite source of energy we can do it is flawed.

          FTL violates physics as we know it, and we've never observed anything indicating it's possible. We *know* sentience is possible, and the hardware and code exist. It's right behind our eyeballs. Human-level intelligence can be had in a device smaller than a bowling ball, giving off less waste heat than a 100-watt light bulb.

          I fully agree that most don't understand the magnitude of the problem and we have a very long way to go, but we know for a fact that it's 100% solvable. Nature already did it.

          • by shystershep (643874) * <bdshepherd@gmai[ ]om ['l.c' in gap]> on Sunday November 04, 2007 @10:24AM (#21231383) Homepage Journal

            we know for a fact that it's 100% solvable

            I'm afraid I have to disagree with your logic. Yes, physics as we know it would be violated by faster-than-light travel, so we certainly don't know if it can be done at all. Your argument for AI is flawed, though: simply because we know sentience is possible, it does not follow that we know sentience can be created artificially. We know sentience is possible in biological organisms, but we do not know if it can be recreated in a machine. Even if your definition of AI includes creating an organism that has sentience, as opposed to the current understanding of AI (machine/software-based), your statement that "it's 100% solvable" does not necessarily follow.

            I think it's somewhat more likely than not that we will eventually develop true AI, but I don't think you can jump from the mere fact that sentience exists to saying that artificially duplicating it is a given.

            • I'm trying to avoid the whole "what's natural vs. what's artificial" debate and look at it from a perspective of simply "what's possible". We know sentience is possible, because we're here. We know animals can do advanced optical pattern recognition, navigation, and lots of other "hard AI" problems because they're here. We only understand a fraction of the complex electro-chemical interactions happening within brains to create intelligences. We know a lot of it is analog, and it may not be feasible to m

            • Wow. If it weren't for the subtlety, I'd think you were an Intelligent Design guy.

              Anyways, it is perfectly valid to say that the problem (creating sentience) is 100% solvable. If it weren't, humans couldn't exist. (Your point about "artificially" is meaningful only for a fixed and narrow definition of artificial.) The thing that might not be right is to say that the problem is 100% solvable by humans. We might not be smart enough to create an AI. But I think we are. It seems pretty certain that we'll soon h
            • Re: (Score:2, Interesting)

              by Ironpoint (463916)

              "We know sentience is possible in biological organisms..."

              And what evidence do we have of this? A bunch of biological machines running around saying "I'm sentient" is not good enough for me. No one can explain where sentience comes from or at what point on the tree of life it begins. Most people would agree that bugs and dogs are not sentient but argue that people are without explaining much about their reasoning. The simplest explanation is that people, dogs, and bugs really aren't sentient even though
              • by vertinox (846076)
                And what evidence do we have of this?

                I think he may have confused intelligence with sentience. Neither requires the other, but it helps.

                As in...

                My cat is sentient but he's not intelligent enough to drive my car.
                These robots are not sentient but they are intelligent enough to drive my car.

                Of course I have no evidence to really prove my cat is sentient other than he appears to be so and I don't feel like cutting him open to double check that he didn't get replaced by a cat android while I was sleeping.

                In fa
            • by fain0v (257098)
              You may know something about programming, but you don't seem to know anything about biology. We are all molecular machines made of carbon, nitrogen, oxygen, hydrogen, sulfur, etc. I can at least understand people who make the argument that it will take us hundreds of years to develop AI. But to argue that only "biological" systems are capable of it is absurd.
              • Try reading my post again. I was commenting on a previous post that claimed that, since we know sentience/intelligence is possible, then we can be 100% sure that we can replicate it mechanically. My point was that the logic does not extend to a different kind of system, i.e., just because we know intelligence is possible with one type of system (biological) it does not necessarily follow that it is possible with a mechanical system.

                I specifically said that I thought machined-based AI would be possible; a

        • by IlliniECE (970260)
          Comparing this to FTL shows that YOU don't have a very good understanding of the issues. And to call the programmers morons? Ok.. would you mind posting the code here on slashdot that would win this competition just like a human? I'll be waiting...
      • another reply (Score:5, Informative)

        by SmallFurryCreature (593017) on Sunday November 04, 2007 @09:35AM (#21231045) Journal

        You mention chess. Alright, Deep Blue. Let's challenge Deep Blue: halfway through the game, we switch the board and introduce a new rule. Jumble the pieces up and tell it to pick them up and put them back in the correct places.

        NOT a challenge for a human being. Deep Blue? Will fail totally, unable to even understand the commands.

        AI is NOT the same thing as doing a simple task over and over again really fast. Laser range finders are nothing new, ACTING upon that information, THAT is the challenge. Especially when that information is not constant and reliable.

        Kasparov showed that when he switched styles constantly, Deep Blue was unable to cope. That Kasparov went on to beat Deep Blue is often forgotten. It showed very clearly, however, that Deep Blue had been set up by HUMANS to beat Kasparov; when he became another player by changing style, the computer could not cope. It had no AI to deal with this.

        It reminds me of Futurama and robot blernsball. Putting a howitzer on the field does NOT prove robots are good pitchers. IF Deep Blue could be put in front of a checkers board and pick that game up in seconds, like a human could, then switch to Tic-Tac-Toe and then play some poker, ALL without human input, then I would be impressed.

        • It's also worth noting that Chess and Checkers and all games of that type which claim to be in the field of 'AI' are really just extensions in the field of deep search. Deep Blue doesn't REASON about chess, it consults a giant play book. If it's outside of the playbook, it actually just runs through millions of possibilities, as deep as it can. Even still, there was always a team of master chess players keeping watch over Deep Blue's actions.

          Deep Blue has never had anything to do with AI, despite hype to th
      • by inca34 (954872) on Sunday November 04, 2007 @09:56AM (#21231185) Journal
        This has little to do with Moore's law and a lot to do with the fact that sensors do not follow Moore's law. We were using the same sensor technology as was available 15 years ago, with marginal to no improvement in quality or capability.

        The software side of this DARPA Urban Challenge should consist of no more than an enormous, but straightforward, state machine that contains all the logic for traffic decisions. Plug that into a simulator and you've got the main software part done.
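        A minimal sketch of the kind of traffic-logic state machine described above might look like this (the states, events, and transitions are purely illustrative, not any team's actual code):

```python
# Illustrative traffic-decision state machine. State and event names are
# invented for this sketch; a real Urban Challenge machine would be enormous.
from enum import Enum, auto

class State(Enum):
    FOLLOW_LANE = auto()
    STOP_AT_SIGN = auto()
    WAIT_FOR_PRECEDENCE = auto()
    PROCEED_THROUGH = auto()

# (current state, perception event) -> next state
TRANSITIONS = {
    (State.FOLLOW_LANE, "stop_sign_ahead"): State.STOP_AT_SIGN,
    (State.STOP_AT_SIGN, "vehicle_stopped"): State.WAIT_FOR_PRECEDENCE,
    (State.WAIT_FOR_PRECEDENCE, "intersection_clear"): State.PROCEED_THROUGH,
    (State.PROCEED_THROUGH, "intersection_exited"): State.FOLLOW_LANE,
}

def step(state, event):
    """Advance the machine; unrecognized events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Drive one full stop-sign cycle.
s = State.FOLLOW_LANE
for ev in ["stop_sign_ahead", "vehicle_stopped",
           "intersection_clear", "intersection_exited"]:
    s = step(s, ev)
```

        Plugging a table like this into a simulator is straightforward; as the rest of the comment argues, the hard part is generating trustworthy events from the sensors in the first place.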

        The problem everyone had out in the field during the qualifiers (I was on one of the teams) was perception. How do you know that what you see is an obstacle? And how do you deal with false positives and, more importantly, false negatives? Some people believe in cross-referencing sensor data, which is called sensor fusion. It is difficult, to say the least, to characterize every possible obstacle that ought to be considered a true obstacle if it lies in your vehicle's path, let alone achieve a 10^-6 failure rate for improper detection.
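        The cross-referencing idea can be illustrated with a toy weighted-vote rule (the sensor names, weights, and threshold are made up for illustration; real sensor fusion is far more involved):

```python
def fuse(detections, threshold=1.0):
    """detections maps a sensor name to a (hit, weight) pair.

    Declare an obstacle only when the summed weight of agreeing sensors
    crosses the threshold -- raising the threshold trades false positives
    for false negatives, which is exactly the tension described above.
    """
    score = sum(weight for hit, weight in detections.values() if hit)
    return score >= threshold

# Two independent sensors agreeing clears the bar...
fuse({"lidar": (True, 0.6), "radar": (True, 0.7)})   # True
# ...a single (possibly spurious) return does not.
fuse({"lidar": (True, 0.6), "radar": (False, 0.7)})  # False
```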

        Highway lane following has been solved since the '70s; check out R.E. Fenton's work on Automated Highways in Transportation Science (1970). We had some "recent" developments in the early 1990s, where we got some autonomous vehicles to do the autobahn at 100 mph with more modern sensors and vehicles, but things really didn't improve that much because the sensors aren't there yet.

        Your sensor choice goes something like this:
        $75k for a Velodyne 3D laser system
        $5k for the SICK 2D (planar) lasers
        ~$25k for stereo vision cameras (per set)
        ~$1k for radar
        $75K for the Applanix integrated GPS and IMU

        The Velodyne is a spinning set of 64 lasers and 64 photodiodes, each manually placed so that the photodiodes are aimed precisely where the lasers are pointed. The entire head of the unit spins at ~2Hz and generates 1 million points per second. Most of the teams that bought one mounted it on top of their vehicle. This sensor is great if you have infinite processing power available to crunch the data and turn it into cost maps. It has some serious problems, however: it's very expensive, it's not mass-manufacturable, the point data for a rock and a shrub are indistinguishable (a weakness of all lasers), some obstacles we're interested in absorb laser light or reflect it away from the photodiodes, it produces too much information, and it has moving parts.
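        As a rough illustration of what crunching laser data into cost maps means (a simplified sketch, not any team's algorithm), one approach is to bin returns into ground-plane grid cells and use the vertical spread in each cell as a crude obstacle cost:

```python
# Toy laser-point-to-cost-map reduction. The 0.5 m grid resolution is an
# assumed value for illustration.
from collections import defaultdict

CELL = 0.5  # grid resolution in metres

def cost_map(points):
    """points: iterable of (x, y, z) laser returns in the vehicle frame."""
    heights = defaultdict(list)
    for x, y, z in points:
        cell = (int(x // CELL), int(y // CELL))
        heights[cell].append(z)
    # A tall vertical spread in a cell suggests an obstacle; flat cells are
    # likely drivable ground. Note this is exactly where a rock and a shrub
    # look alike to a laser, as mentioned above.
    return {cell: max(zs) - min(zs) for cell, zs in heights.items()}

m = cost_map([(1.0, 1.0, 0.0), (1.1, 1.2, 1.4),    # tall spread -> obstacle
              (5.0, 5.0, 0.0), (5.2, 5.1, 0.05)])  # flat -> ground
```

        Even this trivial reduction has to touch every one of the million points per second, which is why the comment stresses processing power.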

        The SICK 2D planar lasers have more or less the same problems, except there's less data to crunch, of course. These lasers also have moving parts internally, which spin a mirror at maybe ~20Hz to get distance data over a 2D plane. Same issues as the Velodyne, except it's manufacturable (and has been for 15 years now).

        Stereo vision is really hard to do right. When you have roughly a year to develop the platform and the algorithms, I don't expect much, and I didn't see much. This may be the answer in the future for passive detection, but I don't see it working at the moment.

        RADAR is the right sensor for this type of work. It gives you distance and speed. If you're clever, it also gives you the "cost" of a particular object. Radar is how you can tell the difference between a shrub and a rock, or a car and a plastic fence. The real cost of the RADAR is not the sensor, but the $100k guy who knows RADAR well enough to set it up right and get good data out of it.

        The Applanix GPS and IMU, with 200k RPM laser gyros, are not manufacturable and not practical for autonomous vehicles because of the cost. Perhaps MEMS solutions will catch up and make IMUs cheaper, but in the meantime we're stuck with these systems if you care about your position.

        That's my take on it. Improve the sensors and we'll get autonomous vehicles. Buying another Cray and strapping on a generator and a multi-ton air conditioner is not the solution. We need to reliably and cheaply generate cost maps that are relevant to the vehicle being automated. Once that's been done reliably, we will have autonomous vehicles. Cheers.
        • by Sinical (14215)
          The sensors are coming. One promising sensor is "flash LIDAR" (or LADAR, "light" vs "laser"). Here's a mention from Google's first result (for a Grand Challenge thing in 2005):

          True 3D solid-state flash LIDAR devices exist. We've visited Advanced Scientific
          Concepts in Santa Barbara, CA, and have seen an eye-safe 128 x 128 pixel solid state
          flash 3D LIDAR suitable for outdoor work in operation on an optical bench. The device
          consists of two custom chips bonded back to back using ball grid array techniques. Th

        • "~$25k for stereo vision cameras (per set)"

          Stereo cameras are a kind of sensor that has improved a lot over the last 15 years. Today we get better prices and better focus, and the entire set is smaller and lighter and uses less energy...

          Yet we can't make good use of them. That is because there is no computer we can put on a mobile robot that is able to process all the data a stereo head gives us in real time. We fall back to getting partial data and optimizing for what we think the robot will face. We lose lots of flexibility

          • by inca34 (954872)
            I agree that the algorithms are by no means a perfected art, yet. However, once they are chosen and run through the appropriate testing gauntlet for acceptance with respect to the requirements of the project, the software becomes firmware and semi-custom hardware. The embedded solutions are way more viable than running the dual quad core intel Xeon boxes with external generators, air conditioning units, Windows XP, etc. It's just a matter of getting the appropriate development time for getting the false pos
            • "...the software becomes firmware and semi-custom hardware."

              We are still not capable of gathering all possible info from a stereo head (at several fps), even with totally custom hardware and the fanciest algorithms available. Well, theoretically we can, but it would suck so much power (besides being expensive) that a mobile robot wouldn't go anywhere.

              I disagree that stereo vision isn't the way to go. Jamming is too common on medium-to-small sensors, even from natural sources. Also, a passive sensor could

    • All I am saying is that we and the tech journals should be careful with exciting names like "Urban Challenge" or "60 miles through urban landscape"
      It's all PR.
      But, PR is vital to getting $$ from a basically anti-scientific federal government.
  • by leko (69933) on Sunday November 04, 2007 @08:56AM (#21230789)
    Some of the robots were paused for a long time, and each was clocked individually. There is really no point in speculating as to who the winners are, because in addition to the time, how well the bots obeyed traffic laws as well as just how safe they drove in general are all taken into account. We should find out the scoring soon enough (sometime this morning.)
  • Better Coverage (Score:4, Informative)

    by Anonymous Coward on Sunday November 04, 2007 @08:57AM (#21230797)
    That article is pretty sparse on detail. The best coverage I found was at []
  • How Long? (Score:2, Funny)

    by xLittleP (987772)
    How long until one of the sore losers goes into "Destroy All Humans" mode?

    Actually on second thought, wouldn't that be the one DARPA would want to have? It's win-win!
  • Wake me when a robot can drive me to work down I294 in rush hour during construction season. I'd buy one, because then I could take a nap while the robot drove me to work.
    • by Sparr0 (451780)
      10 years. Tops. In 20 I expect it to be illegal for a human to operate a vehicle on a major roadway. 50% of all accidents are caused by drunk drivers, but 99% of all accidents are caused by human drivers.
      • Seriously, that would be, well ... cool. Waaay cool. And if all those vehicles operated on a mesh network, communicating with each other and with remote data sources (weather services, etc.), traffic jams could be a thing of the past, and we'd all get where we're going more safely, a lot faster, and on less fuel. If some blockage occurred that would otherwise cause traffic to pile up, cars could automagically route themselves around the problem. "Sir, we are taking an alternate route to the airport because
  • Autobots, transform?
  • by sgt scrub (869860) <saintium@[ ] ['yah' in gap]> on Sunday November 04, 2007 @12:05PM (#21232305)
    I don't know why the contestants are spending all that cash and beefing up the AI on these machines. A tape recorder that mumbles incoherent obscenities could pass as a NY cab driver.
  • Three autonomous vehicles crossed the finish line within the 6-hour time limit here at the DARPA Urban Challenge in Victorville, CA.

    Why is NOW the first time I've heard where it was being held? I looked through the past /. articles, and NONE of the pages linked bothered to put in the two magic words to tell readers where it was being held. You had to navigate DARPA's site to find that info, so basically, only those people already determined to go would find out where it was happening.

    Had I at all known it

  • I thought these things weren't going to happen for at least a decade (the desert challenge, that is).
  • I for one welcome our three autonomous navigation robotic overlords.
  • Urban Challenge Event winners announced!

    1st Place - Tartan Racing, Pittsburgh, PA

    2nd Place - Stanford Racing Team, Stanford, CA

    3rd Place - Victor Tango, Blacksburg, VA

  • by SnowZero (92219) on Sunday November 04, 2007 @02:45PM (#21234063)
    On the official website [].
    1. Tartan Racing (Carnegie Mellon)
    2. Stanford Racing Team (Stanford)
    3. Victor Tango (Virginia Tech)
    • by qeorqe (853039)
      I have submitted a story []. It has a few more details about the race. It does not have the details of the elapsed times or the time corrections.
  • darpa urban challenge videos ordered by date [] We were prepared on
  • Fill in the blanks.

    1) Tartan -
    2) Junior - GNU/Linux; Fedora Distro.
    3) Victor Tango -
