AI Researchers Produce New Kind of PC Game

Ken Stanley writes "In an unusual demonstration of video game innovation with limited funding and resources, a mostly volunteer team of over 30 student programmers, artists, and researchers at the University of Texas at Austin has produced a new game genre in which the player interactively trains robotic soldiers for combat. Unlike most games today, which use scripting for the AI, non-player characters in NERO learn new tactics in real time using advanced machine learning techniques. Perhaps projects such as this one will encourage the video game industry to begin to seek alternatives to simple scripted AI."
  • Coral Cache (Score:5, Informative)

    by Anonymous Coward on Monday June 27, 2005 @07:33PM (#12926659)
    Slashdotted before it even went live. Here is a working link. [nyud.net] Downloads are currently at 511; I hope their counter has more than 9 bits...
  • by XanC ( 644172 ) on Monday June 27, 2005 @07:34PM (#12926670)
    If it's UT anywhere but Austin, you say where.
  • If it's fun... (Score:5, Insightful)

    by InferiorFloater ( 34347 ) on Monday June 27, 2005 @07:34PM (#12926673)
    If this technique provides for fun gameplay, or more importantly, a notable difference in the experience, then sure, it might become more common.

    Keep in mind though - entertainment is meant to be entertaining, not necessarily realistic or academically advanced.
    • Well, different people have different fun.

      Some people have more fun if they are playing the most realistic game out there.
      • Well, training machine-learning agents to fight in a digital battlefield doesn't really result in "realistic" behavior - those agents are just going to behave in the optimal manner they've learned.

        The goal of most game AI is to get lifelike and entertaining behavior, which can be approximated pretty easily with very simple algorithms.

        I'm not knocking the game there either; I haven't played it. There was just a hint of "why don't games use advanced AI techniques" academic frustration in the post - I was po
        • "Well, training machine-learning agents to fight in a digital battlefield isn't really result in "realistic" behavior - those agents are just going to behave in the optimal manner they've learned."

          You make a very awesome point here.
          A training tool will produce results. But only good tools will produce good results.

          Meaning, a tool has to represent a real-world situation as accurately as possible for the trainees to come out properly trained.

          Something I hadn't really thought of, until now.
    • Re:If it's fun... (Score:4, Interesting)

      by bratboy ( 649043 ) on Monday June 27, 2005 @08:56PM (#12927269) Homepage
      as an ex-game programmer, i can tell you that developing AI is hard mostly because you don't want the game to be too hard. developing AI which will always win is easy. in this case it's a somewhat specialized "core wars"-style genre, but in most games (in which AI interacts with players) overly potent AI is more of an issue.

      and then there's the fun factor. i seem to remember an article about one of the Id games in which they developed all sorts of interesting behaviors for the AIs, played with it for a while, and eventually came to the conclusion that "turn and move toward player" gave much better gameplay.

      on a separate note, i remember a game from the late 80's in which you had to program logic circuits to get a robot to perform tasks of increasing difficulty... not a game with a lot of commercial appeal, i'm sure, but i spent many hours trying to solve problems using those little graphical circuit boards...

      daniel
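For readers who have never written game AI: the "turn and move toward player" behavior mentioned in the comment above really is about this simple. A minimal sketch in Python, with made-up function names and constants rather than code from any shipped game:

```python
import math

def chase_step(npc_x, npc_y, npc_heading, player_x, player_y,
               turn_rate=0.1, speed=2.0):
    """One frame of the classic 'turn and move toward player' behavior.

    Turn a little toward the player, then step forward. All names and
    numbers here are illustrative, not taken from any real engine.
    """
    desired = math.atan2(player_y - npc_y, player_x - npc_x)
    # Smallest signed angle between current heading and desired heading.
    diff = (desired - npc_heading + math.pi) % (2 * math.pi) - math.pi
    # Turn by at most `turn_rate` radians this frame.
    npc_heading += max(-turn_rate, min(turn_rate, diff))
    # Move forward along the new heading.
    npc_x += speed * math.cos(npc_heading)
    npc_y += speed * math.sin(npc_heading)
    return npc_x, npc_y, npc_heading
```

Called once per frame, this produces the relentless homing behavior the commenter describes; everything fancier is tuning on top of it.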

      • Re:If it's fun... (Score:5, Insightful)

        by DerWulf ( 782458 ) on Tuesday June 28, 2005 @08:21AM (#12930226)
        I've heard that before. Now, are you really telling me you could do an RTS AI that could kick my ass by 'thinking' instead of:
        - knowing the map beforehand
        - having an increased production rate
        - having fights tweaked to the AI's favor
        - starting with more units
        - always being aware of all movement on the map, regardless of whether it would be visible to that player
        - controlling everything at once
        - receiving all relevant information at once

        I really don't think so. It drives me nuts that in all games the harder settings *always* mean 'the AI can cheat more'. This is the reason I don't like RTSs and can hardly stand to play Civ. Omnipotence and omnipresence are not AI. AI (in games) should emulate how a human would play (advance planning, pattern recognition, etc.) with all the strengths and weaknesses that come with that. A good AI in that sense would hardly overwhelm the player, seeing how successful multiplayer games are. Just face it: technology and AI research are simply not capable of pulling it off. Just say that instead of 'well, you really wouldn't want it'.
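For what it's worth, the "fair" alternative the comment above is asking for is mostly an engineering discipline: hand the AI only the observations a human player would have, for example by running its world view through the same fog-of-war test. A toy sketch of that filtering step, with hypothetical names and no particular engine in mind:

```python
def visible_enemies(enemy_units, my_units, sight_range=10.0):
    """Return only the enemy units within sight range of at least one of
    my units. Unit objects just need .x and .y here; the names and the
    sight radius are illustrative."""
    seen = []
    for enemy in enemy_units:
        for mine in my_units:
            dx, dy = enemy.x - mine.x, enemy.y - mine.y
            if dx * dx + dy * dy <= sight_range * sight_range:
                seen.append(enemy)
                break
    return seen

# The AI's planner is then handed visible_enemies(...) instead of the full
# unit list, so it has to scout and remember, just like a human player.
```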
    • Imagine training gnomes in World of Warcraft to go out and do battle! Or better, massively multiplayer online first-person shooters that give you battalions to train. If you train them right you win; if not, you lose.

      Gives people the ability to have AI on their side for once.
  • by jockm ( 233372 ) on Monday June 27, 2005 @07:36PM (#12926688) Homepage
    One of the earliest forms of AI I ever learned about was MENACE [atarimagazines.com], a pre-computer means of training a system to play and win Tic-Tac-Toe. I will confess to losing more than a little time "training" my system.
  • by Al Mutasim ( 831844 ) on Monday June 27, 2005 @07:37PM (#12926697)
    This is a neat concept, with or without the "neuroevolution" approach (evolving artificial neural networks with genetic algorithms). Including human brains in the training loop for algorithm development is key. The reason so many AI algorithms have found limited application in fielded physical systems (such as weapon systems) is that the competing approach--dozens of smart engineers, working long hours, tweaking human-readable algorithm code and Monte Carlo simulating the tweaked designs over and over for years--is so effective.
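For readers unfamiliar with the term, "neuroevolution" in its most basic form (a fixed network topology, a genetic algorithm jittering the weights) fits in a screenful of Python. This is only a generic illustration of the idea, not whatever NERO actually uses under the hood, and the fitness function is left as a placeholder:

```python
import math
import random

def make_net(n_in, n_hidden, n_out):
    """A random fixed-topology network: one hidden layer, weights in [-1, 1]."""
    return {
        "w1": [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)],
        "w2": [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)],
    }

def forward(net, inputs):
    """Plain feed-forward pass with tanh activations."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in net["w1"]]
    return [math.tanh(sum(w * h for w, h in zip(row, hidden))) for row in net["w2"]]

def mutate(net, sigma=0.1):
    """Copy a network, jittering every weight with Gaussian noise."""
    return {key: [[w + random.gauss(0, sigma) for w in row] for row in mat]
            for key, mat in net.items()}

def evolve(fitness_fn, generations=50, pop_size=20):
    """Minimal generational GA: evaluate, keep the best half, mutate to refill."""
    pop = [make_net(4, 6, 2) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness_fn, reverse=True)
        parents = ranked[: pop_size // 2]
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness_fn)

# fitness_fn would drop the network into the game as a soldier's controller,
# run a battle, and return a score; that simulation is the expensive,
# game-specific part the sketch leaves out.
```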
  • by Prof.Phreak ( 584152 ) on Monday June 27, 2005 @07:38PM (#12926701) Homepage
    Perhaps projects such as this one will encourage the video game industry to begin to seek alternatives to simple scripted AI.

    The DOD will get interested, and use a similar technique to train -real- robots?

    • The DOD will get interested, and use a similar technique to train -real- robots?

      The DOD is perfectly capable of creating robots that kill people. The hard part is making those robots NOT kill the people you don't want them to kill.
      • The hard part is getting the robots into outer space. Or... at least to the tops of very tall mountains.
      • Re:Or perhaps... (Score:3, Insightful)

        by suzerain ( 245705 )
        The hard part is making those robots NOT kill the people you don't want them to kill.

        Yeah, because as humans, we do a really good job of making that distinction. Hopefully that's not the model we're using to train these robots...

      • The DOD is perfectly capable of creating robots that kill people. The hard part is making those robots NOT kill the people you don't want them to kill.

        Apparently the real trick is to build robot soldiers that can withstand a slashdotting.

        Iraq 0600, April 4, 2013, US robot forces are on the border of Iraq for the start of Operation Iraqi Freedom II: The Really Really Patriotic One.

        US 0605, April 4, 2013, Slashdot posts a story about them.

        Iraq 0605:18, April 4, 2013, Entire US robot force is slashdotted. Inv
    • by Sir Pallas ( 696783 ) on Monday June 27, 2005 @08:20PM (#12927031) Homepage
      ..and he says that's what the Marines are. But really, the DoD does fund a lot of machine learning; however, the current state of the art only allows machines to solve specific problems. You need a training metric, etc., and that's not trivial.
  • Not at all new (Score:5, Informative)

    by Digital Avatar ( 752673 ) on Monday June 27, 2005 @07:40PM (#12926722) Journal
    This isn't entirely a new idea. CROBOTS, for example, puts one in the position of designing AIs that control tanks and then pits them against one another in an arena.
  • uh-oh (Score:2, Funny)

    by GroeFaZ ( 850443 )
    Better download it now before their server learns to resist the slashdotting

    403 Forbidden. Nice try, maggot
  • by pin_gween ( 870994 ) on Monday June 27, 2005 @07:43PM (#12926748)
    Oh hell, you know this will be taken over by /.'ers. And do /.'ers know a damn thing about soldiering?

    Probably not, but beware -- you may just create a robotic system administrator/repairman. Don't put yourselves out of a job!!!
  • by infonography ( 566403 ) on Monday June 27, 2005 @07:44PM (#12926758) Homepage
    Joshua: Greetings, Professor Falken.
    Stephen Falken: Hello, Joshua.
    Joshua: A strange game. The only winning move is not to play. How about a nice game of chess?

    For those of you who actually look at a user's history of posts: yes, this is a variant of another post I did, but it's apropos here as well.
  • hopefully it will (Score:4, Insightful)

    by Saven Marek ( 739395 ) on Monday June 27, 2005 @07:45PM (#12926767)
    > Perhaps projects such as this one will encourage the video game
    > industry to begin to seek alternatives to simple scripted AI.

    hopefully it will encourage the video game industry to begin seeking alternatives to Yet Another High Resolution First Person Shooter.
  • T2 (Score:2, Funny)

    by hilaryduff ( 894727 )
    "may seepeeyou iz ay newral ned prooozezzor... a laaarning compooota"
  • begin? (Score:2, Interesting)

    by Surt ( 22457 )
    I implemented learning AI in a couple of popular video games (including at least one multi million unit PC title) more than 5 years ago, and I'm pretty confident I wasn't breaking any new ground.
  • It would be a lot easier to train a robot to train the other robots to fight (in the long run)...Wouldn't it?
  • by metamatic ( 202216 ) on Monday June 27, 2005 @07:48PM (#12926791) Homepage Journal
    "Galapagos" by Anark had a robot creature with some kind of neural net, and you had to teach him to navigate around by providing him with appropriate stimuli and rewards.

    It could get frustrating--sometimes if he hit a particular deadly obstacle too often, he'd become traumatized, and would then refuse to go anywhere near it, which could make the level impossible until you had allowed him to wander around and petted him and calmed him down.

    Great game, though. I wish there were more like it.
  • Linux port is coming soon :D And it's gonna use GTK1!?!?!
  • Please dupe (Score:2, Flamebait)

    by r00zky ( 622648 )
    Please dupe this article when there's a torrent available and the Linux version is finished.
    Until then it's quite useless.
    • Actually, I second that.

      I wouldn't mind additional notification when a Linux version is available. :-)

      (I really do run solely Linux.)

      - shazow
  • Torrent (Score:5, Informative)

    by TaxSlave ( 23295 ) <lockjaw@lockjaws l a ir.com> on Monday June 27, 2005 @08:05PM (#12926919) Homepage Journal
    Only for the purposes of helping distribution, and for a limited time, torrent available at nerogame.exe.torrent [lockjawslair.com]
  • Isn't "scripted AI" a contradiction in terms? Can't we start using more correct jargon when referring to computer controlled enemies/allies until AI is finally perfected?

    How about this:
    Artificial, Non-Intelligent Matrix Associated To Individual Object Nodes

    Or ANIMATION for short...:)
  • Good, but... (Score:3, Insightful)

    by badbit ( 888830 ) on Monday June 27, 2005 @08:14PM (#12926988) Homepage Journal
    Is it fun to play?
  • by CodeBuster ( 516420 ) on Monday June 27, 2005 @08:15PM (#12926990)
    The problem with expensive investments in AI is that the publisher must have a series of successful games built on the fruits of that labor before there is any profit. This could possibly be mitigated somewhat by licensing the engine for use by other companies, but that has to be weighed against the fact that your competitors would then be using the same or similar types of advanced artificial intelligence in their games, which may hurt sales of your own games. Large publishers, such as EA and Microsoft, have the resources and wherewithal to make these long-term bets, but the smaller boutique firms have neither the willingness nor the ability to finance the development of these types of advanced engines in house. It may be useful to look at some numbers from 2004, compiled by the shrapnelgames blog (http://www.shrapnelcommunity.com/blog/2005/02/24/).

    The total revenue for the game industry in 2004 was 1.2 billion dollars, down 100 million from 2003. During this same period only two games had sales of over 500,000 units, but there were 18 games with sales of 250,000 or more. Depending on the definition of what constitutes a "new release", there were roughly 1,100 games released in 2004, of which maybe 6% earned a profit. The average budget for a competitive game is said to be around two million dollars, with an average break-even point of around 110,000 units sold. The average retail game price is $24.45, with an average of only 5,000 units sold per title.

    Clearly, the open source community is willing to undertake these efforts on their own initiative or for other reasons related to research, as was the case with the student-produced game. I am in no way denigrating the efforts of these students; what they produced with the resources available to them was simply amazing and of surprising quality. However, in the world of retail games it takes a certain amount of marketing, advertising, and Wal-Mart end caps to rise above the background noise, unless you are one of the aforementioned established game companies whose reputation speaks for itself, at least until they release a real stinker. At the end of the day, when all things are factored in, there is simply not enough money in the budget of the average game to make this type of advanced artificial intelligence worth the risk and expense, at least right now. However, if there is any constant in the game industry it is change, so this will probably change in the years to come. I would like to see some new and innovative games too, instead of Madden 2017, but it looks like we will have to wait a while yet.
    • > At the end of the day, when all things are factored in, there is simply not enough money in the budget of the average game to make this type of advanced artificial intelligence worth the risk and expense, at least right now.

      More to the point, it's in a game company's best interest to ship a brittle AI that people will learn to beat handily after a few weeks of play, so they'll be back to the trough for the next offering.

      The game industry's worst nightmare is a game that stays fun for two years. An

      • The game industry's worst nightmare is a game that stays fun for two years. An AI that learns the game with you, and adapts its strategies to yours so that you have to keep innovating, might make that possible.

        I'd like to see a football game that does that - where the other teams in the conference evolve to use strategies that counter your style of play.

  • Why is it that folks throw around "AI" when all they've done is teach computers how to switch around and use pre-defined strategies to deal with a tightly defined situation? Real AI is when you can take pre-learned strategies and adapt and apply them to situations that are only minimally like the ones you've faced before.

    Of course, using that definition, most folks aren't intelligent... which makes me think my definition must be close :)
  • Repeat after me.

    7 SPEs for vector processing = Bayesian learning goodness

    Those 7 SPEs with their bandwidth will be able to take inputs like video, sound, even EEG data from the brain. Combine this with Bayesian learning techniques and the machine will infer what factors in the raw data correlate with its advantage in the game world. Imagine a game that can sense your fear with the right "helmet" peripheral containing active electrodes.

    All the people who are saying SONY/IBM wasted die space on the SPEs d
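Setting the Cell hype aside, the core of "infer which factors in the raw data correlate with an advantage" can be as humble as counting with a prior. A toy sketch of that Bayesian bookkeeping; the factor names are made up, and nothing here is SPE-specific:

```python
class BetaEstimate:
    """Online Bayesian estimate of a win probability, Beta(1, 1) prior."""

    def __init__(self):
        self.wins = 1     # prior pseudo-count
        self.losses = 1   # prior pseudo-count

    def update(self, won):
        if won:
            self.wins += 1
        else:
            self.losses += 1

    def p_win(self):
        return self.wins / (self.wins + self.losses)

# One estimator per candidate "factor" extracted from the input stream.
factors = {"player_strafing": BetaEstimate(), "player_low_ammo": BetaEstimate()}

def observe_round(active_factors, ai_won):
    """After each round, update only the factors that were present."""
    for name in active_factors:
        factors[name].update(ai_won)

# After enough rounds, factors[name].p_win() tells the AI which observed
# conditions have historically correlated with it winning.
```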
    • Well, I would agree with you, but there are a few problems with your conclusions.

      The Cell works with vectors and floating-point numbers. While this is normally great for video games and such, it's kind of horrible for control/branch operations. Human minds are MIMD, not SIMD or MISD, so we don't transfer into machines well.

      I like the idea of implementing a learning system based on the electrical activity in our minds, I just don't think it'll work. We issue instructions on the neuron-to-neuron basi
    • For all of you talking about the branch prediction issue AnandTech has brought to light: that concern applies to modelling physics in the game world. That can be pretty chaotic, what with all the collisions and whatnot, but that is not what I am talking about.

      I am talking about extracting data from streams of user input. Not modelling physical consequences in the game world, which does involve lots of branching.

      Imagine an SPE taking raw video and running a tight algorithm to detect particular movements. This could be an
  • by potus98 ( 741836 ) on Monday June 27, 2005 @08:53PM (#12927249) Journal

    I'd like to know if the NEROs can evolve more advanced tactics such as:

    When its health is less than 5% and it is likely to die, make a final kamikaze run at a tough enemy to deliver a mega bomb, draw fire, etc...

    Gang beat-downs - Even though the NERO is closer to enemy tank B, focus your fire on enemy tank A since its damage is critical and it is about to be pushed over the edge.

    Unload power-ups - Before picking up a weapons upgrade that would replace my super grenades, go ahead and lob all of them first.

    Waiting for power-ups to cycle - In some games, a power-up changes every few seconds. Could the NEROs learn to wait for spread-fire on one level versus laser fire on another? Okay, levels are too easy; how about depending on the situation, what my friends have, etc...

    And most importantly, could NEROs be taught to perform "ethical cheats"? By ethical cheat, I mean taking advantage of the game engine or environment in a way not intended by the developers - not by patching code or using network sniff bots.

    Sure, these seem like pretty simple tactics, but YOU try programming this kind of AI. It's next to impossible!

    • As for what you call "ethical cheats", that is what evolutionary algorithms are really, really good at. Trust me. You have to design your fitness function (scoring system) very carefully for this not to happen. It is a major source of frustration, disappointment, and thoughts of getting a normal job among neuroevolution researchers. E.g., you want evolution to come up with a nice neural network that drives smoothly around a track, but evolution (that bastard!) finds out that it can actually score higher faster
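A concrete illustration of the point in the comment above: the fitness function you first write rewards the thing you thought you wanted, and the one you end up with is full of hand-tuned penalties for the exploits evolution found. Both functions below are hypothetical, with made-up fields and weights, not NERO's or anyone's actual scoring:

```python
def naive_fitness(lap):
    """What you think you want: reward ground covered per unit time."""
    return lap.distance_covered / lap.elapsed_time

def patched_fitness(lap):
    """What you end up writing after evolution finds the shortcuts:
    only count distance gained while on the track, and penalize wall
    clipping and reversing back and forth over the start line."""
    score = lap.on_track_distance / lap.elapsed_time
    score -= 10.0 * lap.wall_clips
    score -= 50.0 * lap.backward_line_crossings
    return score
```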
      • Ahhhhh yes, I can see how that would be the case. Because the algorithms could be searching all possible paths/scenarios with lightning speed, the environment they are constrained within has to be rock solid - no wiggling through cracks in polygons, etc...

        What frustrates a non-neuroevolution researcher who watches game AI is when an NPC gets stuck in a loop running around the same post, or starts banging and jittering against a wall. It seems "they" could put an out-of-bounds counter that says: "If you repe

      • In my earlier graduate research, I had several instances where the GA would discover physically unrealistic solutions due to bugs or tuning problems in the model. The problem involved the evolution of a neural network to control a hybrid wheeled/legged robot (the legs were mounted similar to the two rear legs on a cricket). In the robot model, we used a spring/damper model to simulate the ground contact of the feet. However, our integration method was sensitive to high-stiffness equations, and ground con
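The stiffness issue described in the comment above is easy to reproduce in a few lines. The sketch below uses a made-up mass, stiffness, and damping, and explicit Euler on purpose: with a coarse timestep the ground contact adds energy instead of absorbing it, which is exactly the kind of free lunch a GA will learn to exploit.

```python
def bounce(k=50000.0, c=50.0, dt=0.01, t_end=2.0):
    """Drop a 1 kg point mass from 1 m onto a spring/damper ground contact
    and integrate with explicit Euler. With a stiff spring and a coarse
    timestep the contact injects energy: the mass rebounds far higher than
    it was dropped from. All constants are illustrative."""
    y, v, peak = 1.0, 0.0, 0.0   # height (m), velocity (m/s), highest point seen
    for _ in range(int(t_end / dt)):
        force = -9.81                  # gravity on a 1 kg mass
        if y < 0.0:                    # in contact with the ground
            force += -k * y - c * v    # spring pushes up, damper resists
        v += force * dt                # explicit Euler update
        y += v * dt
        peak = max(peak, y)
    return peak

print(bounce(dt=0.01))    # coarse step: rebounds many metres -- a numerical artifact
print(bounce(dt=0.0001))  # fine step: stays below the 1 m drop height, as it should
```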
    • I've played around with it now and it looks like you can only set artificial goals for fitness level, like the robot that scores the most hits has the highest fitness, or the one best able to stick to a group.

      I don't think that kind of boring fitness function is likely to breed any of the cool things you want.

      Also, many of those things just aren't in this game, like powerups, or perhaps even the ability to detect the health of the enemy.
  • by tek_hed ( 123623 )
    "In the far-flung future of the year 2000, functional programming has taken over the world and so humans live in an almost unimaginable luxury. Since it's so easy, humans have used robots to automate everything, even law enforcement and bank robbery -- the only job left to humans is to write their robots' control programs." http://icfpc.plt-scheme.org/ [plt-scheme.org]
  • I've been thinking for a while that we need some innovation in game AI. It's the one area that really hasn't progressed very much lately. Sure we've got a bit more processor time to throw at it, but that doesn't really achieve that much.

    New approaches are needed, and I think machine learning is the way to do it.

    Also note that most machine learning algorithms require lots of floating point arithmetic. I'd gladly sacrifice one of my pixel rendering pipelines to it in order to get better gameplay. /me has
  • by 0111 1110 ( 518466 ) on Monday June 27, 2005 @10:39PM (#12928010)
    Does anyone remember a research 'game' which was sort of like Pacman but with real motivation? IIRC, the Pacman character was programmed to seek pleasure and avoid pain. Certain pellets were considered positive reinforcements and others were considered negative reinforcements. It ended up having some almost spooky emergent behavior, like hiding in a corner if there were too many negative reinforcement pellets. It seemed to develop responses almost like fear. Stuff like that. I can't recall the details unfortunately. I think it was done as a university project or something, maybe in the late 80s. The idea of generating unpredictable emergent behavior from a relatively simple computer program has stayed with me.

    I think that will be the next stage of computer characters: to make them unpredictable even for the programmers. Rule-based learning can get you somewhat complex behavior, but it is all predictable. What we need is genuine example-based learning. So that the resulting behavior would be impossible for anyone to predict and constantly changing and evolving. Of course I am thinking along the lines of various neural network, connectionist architectures. Their unpredictability is generally considered a downside, but for a game the black box aspect seems perfect.
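The "seek pleasure, avoid pain" setup remembered in the comment above maps pretty directly onto tabular reinforcement learning. A bare-bones sketch of the idea; this is generic Q-learning, not the research project being described:

```python
import random
from collections import defaultdict

# Q[state][action] = learned estimate of long-run reward ("pleasure" minus "pain").
Q = defaultdict(lambda: defaultdict(float))
ACTIONS = ["up", "down", "left", "right"]

def choose_action(state, epsilon=0.1):
    """Mostly pick the action with the best learned value, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[state][a])

def learn(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard Q-learning update: nudge the estimate toward the observed
    reward plus the discounted best value of the next state."""
    best_next = max(Q[next_state][a] for a in ACTIONS)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# The game loop would call choose_action(), apply it, observe a reward
# (+1 for a "pleasure" pellet, -1 for a "pain" pellet), then call learn().
# Avoidance behavior, like steering clear of punishing regions, falls out
# of the negative values that accumulate around them.
```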
  • In Forza Motorsport for the Xbox, I heard you can have something called a "drivatar" that learns your driving style and can race for you when you don't feel like playing on a certain track.
  • I always get interested in "new genres" of video games, especially because most video games these days involve carrying big *cough* guns, shooting people, and having the opportunity to hear abusive one-liners said to women.

    Needless to say it's pretty boring for anyone who isn't all that macho. Even Vampire: Bloodlines was spoiled by the offensive scenes and the dull FPS combat gameplay... and that's hard for me to say because there's nothing I like more than sneaking around in shadows and sucking out peop
  • by smchris ( 464899 ) on Tuesday June 28, 2005 @07:23AM (#12930015)

    Will these things be marketable? "Ma, I'm not playing games, I'm training my robo-warrior!"
  • Money (Score:3, Interesting)

    by WebfishUK ( 249858 ) on Tuesday June 28, 2005 @08:24AM (#12930249)
    I remember thinking (not very hard) along these lines some years ago. I was doing a PhD in machine vision and we were using Doom/Quake engines to generate simulated environments for testing robot navigation algorithms.

    My thought was that you would train an entity yourself in a series of one-on-one battles or training bouts. These could be staged or otherwise constructed to make mini-games, e.g. testing your entity in predefined scenarios. Once you were happy with its performance you could dump it onto a USB stick and take it around to your friend's house, or upload it to a server for an online game. The main game would put your entity in an arena against a number of other 'gladiators'. They fight it out, etc. Online this could allow for 'spectators' who watch the game and potentially even bet on the winner. This might allow for prize money or other revenue streams to be introduced.
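Carting a trained entity around on a USB stick is the easy part: once the controller is just a bag of numbers, you serialize it. A minimal sketch, assuming the controller is a plain dictionary of weights like the toy network sketched earlier in this thread:

```python
import json

def save_agent(net, path):
    """Dump the evolved network weights to a JSON file (e.g. on a USB stick)."""
    with open(path, "w") as f:
        json.dump(net, f)

def load_agent(path):
    """Reload the weights on another machine to enter the agent in the arena."""
    with open(path) as f:
        return json.load(f)
```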

  • Forza Motorsport (Score:3, Interesting)

    by Spacelord ( 27899 ) on Tuesday June 28, 2005 @09:08AM (#12930481)
    The Xbox racing game Forza Motorsport already has something like this. You can train a "Drivatar" to race just like you. Once it's properly trained, it will generally take the same line as you and take corners the same way... and it also makes the same errors as you.

    More info about it here: http://www.drivatar.com/ [drivatar.com]
