IEEE Spectrum Surveys Current Games' AI Technology 172

orac2 writes "IEEE Spectrum has an article on the AI technologies used in the current crop of video games. State machines, learning algorithms, cheating, smart terrain, etc. are discussed. Game developers interviewed include Richard Evans, of Black and White fame, who talks about Lionhead's upcoming Dmitri project, and Soren Johnson, who created Civ III's AI."
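The state machines the article mentions are small enough to sketch in a few lines. Here is a minimal Python illustration of a guard NPC's finite state machine; the states and inputs are invented for this example and are not from the article:

```python
# Minimal game-AI finite state machine: a guard NPC that patrols,
# attacks on sight, and flees when hurt. States and inputs are
# hypothetical, chosen only to illustrate the technique.

class GuardAI:
    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, low_health):
        # One transition per tick, driven only by current state + inputs.
        if self.state == "patrol" and sees_player:
            self.state = "attack"
        elif self.state == "attack" and low_health:
            self.state = "flee"
        elif self.state in ("attack", "flee") and not sees_player:
            self.state = "patrol"
        return self.state
```

Much shipped game AI of this era was essentially a table of transitions like these; the learning techniques discussed in the article layer on top of such a skeleton.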
This discussion has been archived. No new comments can be posted.


  • Sheesh (Score:5, Funny)

    by Anonymous Coward on Saturday December 07, 2002 @08:01PM (#4834900)
    And I still have trouble beating some games that are a decade old.
  • by ekrout ( 139379 ) on Saturday December 07, 2002 @08:04PM (#4834912) Journal
    The page located at http://www.cs.berkeley.edu/~russell/ai.html#search [berkeley.edu] contains wonderful links about coding A.I. into your games, programs, etc.
  • by Anonymous Coward on Saturday December 07, 2002 @08:04PM (#4834913)
    I always win.

    **unplugs computer**
  • yeah well (Score:4, Funny)

    by jon787 ( 512497 ) on Saturday December 07, 2002 @08:09PM (#4834932) Homepage Journal
    Real stupidity beats artificial intelligence any day!
  • by wackybrit ( 321117 ) on Saturday December 07, 2002 @08:20PM (#4834969) Homepage Journal
    I have a 56k modem, and no chance of getting broadband, so while massive online fragfests might seem like fun, they're not really accessible to me (RTCW was bearable till PunkBuster slowed it to shit).

    Unfortunately, the games industry seems to have focused on turning out hundreds of online fragfest games that bring in the $$ but leave little to the imagination. Even 'The Sims' are at it.

    AI doesn't necessarily have to be 100% realistic for a rewarding offline game. But even the bots in UT2003 aren't that hot, so it's clear AI and single player games are taking a backseat to the online money spinners.

    Hopefully some big breakthroughs in AI will turn the tide, but with the games industry already ignoring AI, I'm not optimistic for AI's future in games... since everyone would rather play their dumb neighbor anyway.
    • by KiahZero ( 610862 ) on Saturday December 07, 2002 @08:25PM (#4834983)
      You think it's hard to play with a dial up connection? Try satellite sometime (*dodge*, *dodge*, *fire*... 3 seconds pass... "Wow, I missed... imagine that!").

      I'd really like to see a decent AI for games like Baldur's Gate or Neverwinter Nights. The henchmen have roughly the IQ of a very dumb dog. On more than one occasion, I've had a henchman walk directly into a fireball on the basis that an opponent was nearby. Mmm... toasty.
    • by Maul ( 83993 ) on Saturday December 07, 2002 @08:38PM (#4835024) Journal
      I have yet to find an FPS where I felt the bots had really believable AI.

      In most FPS games, the bots simply have really good "aim" and really good "dodging ability" in the higher difficulty levels, coupled with the fact that the computer technically knows where you are all the time. Even so, a player will usually develop reflexes that will allow them to outgun the bots.

      Players without the "reflexes" to beat the bots' super aim can still beat them, as the bots will repeatedly fall for the same tricks over and over.

      To have realistic bots, they need to be able to learn from their mistakes. Bots fail to learn things such as the following:

      1) The player's favorite weapons.
      A common technique in games like Quake is to "control" the weapons. If you are playing against someone who is great with the rocket launcher, but not so hot with the other weapons, you can try to limit their access to that weapon. Bots don't pick up that you use the RL all the time, and thus don't really do a great job of stopping you from getting it.

      2) The player's techniques.
      Obviously, if a player likes to re-use certain techniques (circle strafing, etc.) too much, other players will pick up on it. Bots, however, don't really anticipate what the player might do in this fashion.

      3) Mistakes.
      At the same time, the bots will often reuse the same techniques as well. However, the human player will pick up on it. Bots need to learn which tactics they have used that have failed, and try something else.
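The weapon-preference tracking described in point 1 above is cheap to implement; the hard part is acting on it well. A rough Python sketch, with all names invented for illustration:

```python
# Count the player's kills per weapon so a bot could bias itself toward
# denying access to the favorite one. Class and method names are made up.

from collections import Counter

class WeaponTracker:
    def __init__(self):
        self.kills = Counter()

    def record_kill(self, weapon):
        self.kills[weapon] += 1

    def favorite_weapon(self):
        # Weapon with the most recorded kills, or None if no data yet.
        return self.kills.most_common(1)[0][0] if self.kills else None
```

A bot could then weight its pathfinding toward camping that weapon's spawn point, which is exactly the "control the weapons" tactic human players use.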
      • 1) The player's favorite weapons.
        A common technique in games like Quake is to "control" the weapons. If you are playing against someone who is great with the rocket launcher, but not so hot with the other weapons, you can try to limit their access to that weapon. Bots don't pick up that you use the RL all the time, and thus don't really do a great job of stopping you from getting it.


        In my experience human players are no different in Q3A, UT2k3 or BF1942 on those 32+ player servers. I mean, there are so many players like "3l33t-b0rg" or "lick my pu$$y" (no joke) that the focus is more on "I killed 'BFGFucker9000'!" than on strategy that requires both teamwork and (gasp) thinking.

        Now if you're talking about clan games or one-on-one matches, then yes, humans move to a higher level of strategic thinking to block resources that give opponents advantages.

        For an AI subsystem to perform this type of thinking requires lots of dynamic analysis ("oh shit, he has a rocket launcher... time to snipe") and static analysis ("grab the rocket launcher so he doesn't kick my ass and own me").

        • Err... I should have mentioned that I was talking about GOOD players, not average players who think that the fact their parents bought them a GeForce 4 Ti 4600 for their birthday makes them God's gift to FPS games.
        • The thing is that you CAN find higher levels of human combat. The first online game I ever played was Team Fortress (Quake 1 mod) and yeah, on the public servers it was generally what you described. You'd get a couple of smart people (maybe) and everyone else would just kinda run around with no strategy. However, I managed to get into a good clan and THAT was where the fun began. We, and the people we played, were highly skilled and coordinated. Teams were quick to adapt to new situations, exploit weaknesses and so on.

          If you are really into a game, there are groups and leagues of players like you that can challenge you.
      • "1) The player's favorite weapons....
        2) The player's techniques....
        3) Mistakes...."

        In most games these are problems. However, while it's not an FPS, the AIs in Oni are fairly good.
        The AIs will learn what weapons (attacks) you use and will learn to counter. Furthermore, the AIs modify their attacks based on how well you defend yourself.
        The AIs couldn't be confused for human players (even if the game had multiplayer), but they are skilled and adaptive enough to be challenging and can't be beaten with any one strategy.
      • How are they supposed to learn if you keep killing them?

        Give the bots a chance.

      • It is not that hard to give the impression of an AI that doesn't have the shortcomings you mention, but will you be able to beat it?
    • Supposedly both D3 and Q4 will be shifting the focus back to single-player (great news for narrowband nerds like myself). But then again neither will mark a vast improvement in AI.

      id's focus is on graphics and physics (game engines) not providing strong heuristic bots (resource intensive entities) :(
  • by Anonymous Coward on Saturday December 07, 2002 @08:22PM (#4834975)
    link [gamedev.net]
  • An interesting Link (Score:3, Interesting)

    by Anonymous Coward on Saturday December 07, 2002 @08:27PM (#4834989)
    I just found an interesting web site [gameai.com] which is dedicated to the topic of Artificial Intelligence (AI) in games. Check it out.
  • by Quaoar ( 614366 ) on Saturday December 07, 2002 @08:27PM (#4834990)
    Not until a bot calls me a "c4mp1ng n00b" of its own volition will AI have come far enough.
  • by dagg ( 153577 ) on Saturday December 07, 2002 @08:27PM (#4834991) Journal
    "... people want a good single-player experience to practice before they go on-line because they feel stupid when they get their butt kicked by a 12-year-old in Ohio," laughs Michael Zarozinski.

    That is so true. I tried playing a multiplayer game at work a few years ago... and I was absolutely destroyed in seconds. I wanted to get better at the game, but there was no other beginners to play with, and the single-player mode sucked.

    --

    YerSex FAQ [tilegarden.com]
    • Practice makes perfect. Keep getting your ass kicked -- it's the only way to learn how NOT to get your ass kicked!
      • This is very true. You have to get over the initial disappointment and, (gulp), work at it.

        The first two weeks I spent playing Counter-Strike I had a kill ratio of 4-1 and 2-1. Each time I died I forced myself to figure out what I should have done differently; repeat that over and over and you eventually learn. Satisfaction is working hard for days on end and then finally getting a 2-1 kill ratio one night!
      • You're right, practice makes perfect. However, I have to say that the proliferation of cheaters is truly the most annoying thing about gaming.

        I apologize in advance if I sound like anyone's dad.

        <rant>

        I'm not talking about guys who are smooth, have good moves, and use map features well... guys with obvious skillz... No, I'm talking about kiddiez who don't move well, show no strategy, and have 5-6 to 1 kill ratios. If you watch them play in first person mode (ala Counterstrike), it becomes immediately obvious that something ain't right.

        Talk about taking the pansy way out... Of course, this may have something to do with our societal tendency towards instant results, since some people cannot defer gratification for even one second (granted, it's a pretty hard concept to teach to a child... but then again, some adults go their entire lives and never learn it). It's why all those instant weight loss pills are such a goldmine, AND it's why people cheat.

        Don't work and practice to get better, Oh no, that would require effort... get an aimbot or a wallhack; you too can be instantly L33T!

        Feh... learn to take your lumps like a man. If you suck, admit it, and practice.

        Ain't no aimbot in the game of life.

        </rant>
        • I've run servers for a consortium of sysadmins for 4 years now: UT, Half-Life, CS, etc.

          There is NO DOUBT there are cheats out there, but they are RARE. More than 95% of the complaints lodged with our group turn out to be false. But every now and then you get a true script kiddie who actually gets off on winning by cheating. The only good thing is broadband has given people an IP that sticks for at least a week generally, so a ban actually can annoy them. With UT we ban by UID as well, so at worst you have to re-install the game.
    • wanted to get better at the game, but there was no other beginners to play with

      Learn grammar, then others might enjoy playing (and communicating) with you. Jerk.
  • Ethics, IP, and AI (Score:5, Interesting)

    by USC-MBA ( 629057 ) on Saturday December 07, 2002 @08:29PM (#4834997) Homepage
    This article brings up an interesting issue regarding Artificial Intelligence, Intellectual Property, and human/nonhuman rights.

    Namely, what happens if some researcher finally stumbles across an application that passes the Turing test? One that for all intents and purposes appears to be a conscious life form?

    The resulting ethical problems will be myriad:

    • Will the AI life form be the property of the person or corporation that developed it?
    • Will the AI life form be copyrightable?
    • Will the creator of the AI life form be obligated to keep it "alive" (i.e. keep the power running, etc.) as long as possible?
    • Will the AI life form have the same rights as an ordinary human being?
    • Will distributing the source code for the AI life form be regulated under anti-cloning statutes?
    • Will the AI life form be allowed to earn money as a result of its efforts in controlling entities in videogames?
    • Will the AI life form be entitled to royalties as a participant in the creation of the videogame?
    As a libertarian I am torn between my concerns about keeping markets free and unregulated, and my concerns for the freedoms and rights of potential AI life forms. Interesting times...
    • Re: (Score:2, Interesting)

      Comment removed based on user account deletion
      • In a way, both and neither.

        The Turing test is to see if a person can tell the difference between another person over a teletype terminal and a computer.

        If you can't tell the difference between a computer and a human, is the computer alive? However, if it is inorganic, is it alive?
    • Namely, what happens if some researched finally stumbles across an application that passes the Turing test? One that for all intents and purposes appears to be a conscious life form?

      The resulting ethical problems will be myriad


      You watch too many movies. No matter how smart you can make a computer look, it is still performing the same fetch-execute cycle on primitive instructions like "add," "shift," and "branch." If that is a conscious life form, then so is a pencil and piece of paper on which you perform all these primitive instructions manually.
      • by hawkestein ( 41151 ) on Saturday December 07, 2002 @10:31PM (#4835432)
        No matter how smart you can make a computer look, it is still performing the same fetch-execute cycle on primitive instructions like "add," "shift," and "branch." If that is a conscious life form, then so is a pencil and piece of paper on which you perform all these primitive instructions manually.

        Fan of John Searle [utm.edu], are you?

        How's this for a thought experiment. Take a human being, and swap one of his neurons for an electronic circuit that behaves identically to a neuron. One at a time, swap out each real neuron and swap in an electronic one. Is he still conscious when his brain is entirely made up of electronic neurons instead of organic ones? OK, now swap out each neuron, and swap in a tiny computer that can simulate the I/O behavior of a neuron. Swap these in one at a time. Is he still conscious? OK, now start swapping out groups of neurons for computers that can simulate the I/O behavior of the group. Proceed until his entire brain is just one computer. When did he go from a human being to a soulless automaton?
        • by Laplace ( 143876 )
          Oh yes, I love these little "thought experiments." Propose a means to devise an electronic circuit that behaves exactly like a neuron, then I will take your little argument seriously.

          Remember, Newtonians felt that the whole of the universe could be described if you could only write down the state of everything in it. Then came quantum theory and, oops, all of that went out the window.

          What makes a neuron a neuron is that it is, well, a neuron. Take your philosophy 101 (Dennett's "Brain in a Vat" article, anyone?) and stuff it.
          • by hawkestein ( 41151 ) on Sunday December 08, 2002 @03:46AM (#4836432)
            C'mon, thought experiments are the stuff that philosophy is made of. You give me a scientific, measurable definition of consciousness and I'll lay off the thought experiments.

            When it comes to building circuits that act like neurons, I'm not a neuromorphic engineer. But even today people are building circuits that can interface with neurons (look at the guys at Cal Tech [caltech.edu], for example). There was that guy in Britain (can't remember his name; references, somebody?) who was doing experiments with re-routing electrical signals from his arm to his computer and back to his arm to see if the computer could reproduce the signal adequately to control the muscle (this was the same guy who walked around with implants that tracked where he was around the school).

            If it makes you feel better you can skip the step about "synthetic" neurons and go right to the step where you've got a little computer that simulates the neurons and can interface with them.

            As for simulating the brain exactly: first of all, there isn't much evidence that there are any quantum effects in the behavior of a neuron (people don't seem to take Roger Penrose too seriously in this area). Second of all, even if there are quantum effects and there is some randomness to the simulation, so what? Just because there are quantum effects doesn't mean you can't simulate them. You aren't trying to *predict* what someone else's brain is going to do, you just want a simulation that follows the same laws. You just have to add some randomness to your experiment.

            What makes a neuron a neuron is that it is, well, a neuron.

            Can't argue with you there. :)
        • Fan of John Searle [utm.edu], are you?

          The argument was evident to me before I ever heard of John Searle, but yes, I do agree with him.

          How's this for a thought experiment. Take a human being, and swap one of his neurons for an electronic circuit that behaves identically to a neuron. One at a time, swap out each real neuron and swap in an electronic one. Is he still conscious when his brain is entirely made up of electronic neurons instead of organic ones?

          I don't believe such a thing is possible. But let's assume that it is. Now try this for a thought experiment: instead of swapping out biological neurons for mechanical ones, take an instantaneous state snapshot of an entire brain. Now find a person for every neuron and give them a set of instructions for how to behave exactly like a neuron, making telephone calls to communicate with the other neurons (if it's possible to create an electronic circuit modeling a neuron, it must be possible to codify the behavior into a set of human-understandable instructions). Give them the initial state of the brain you are copying. Is this vast network of people following instructions a conscious (albeit vastly slower) clone of the brain whose state was surveyed?

          Repeat the above, but only use two people to simulate two random neurons from the surveyed brain. Is it conscious? Now try three. Is that conscious? What is the magic threshold that can achieve consciousness?

          I think your position of "consciousness is no more than the sum of the brain's neurons" is a much more perilous position to defend than the claim that there's something going on there that we don't understand.
          • Your argument is, once again, pretty similar to Searle's (he gives the example of having the people of India carry out the telephone calls, and asks "Is India conscious?").

            Here's where I think our intuitions just differ. I would think that no matter how many people are carrying out the execution of the program, even if it was one person carrying it out, or the entire planet, consciousness would arise through the act of carrying out the computations (albeit at a much slower rate).

            Unfortunately, there's no real sense in arguing after this point, because what we think would happen simply differs, and there's no way to check consciousness (though you could interact with the simulated brain and it would respond intelligently).
            • Your argument is, once again, pretty similar to Searle's (he gives the example of having the people of India carry out the telephone calls, and asks "Is India conscious?").

              I swear I'm not ripping him off, I thought of that argument just now!
      • As another poster said, you take John Searle's view. However, this isn't the be-all, end-all and many disagree with you. Many people subscribe to a theory more around what Alan Turing talked about: if a computer appears to be sentient, it IS sentient. Just because you have the ability to look into how its brain works doesn't change that. Heck, don't knock it; we may some day be able to look in a human brain and see how it works (I mean on the same levels as a computer, pinpointing individual operations in the network).

        Just realise that there are other compelling viewpoints on the issue and keep an open mind. Don't become like Searle and just reject anything against your view as impossible or silly :)
        • Many people subscribe to a theory more around what Alan Turing talked about: if a computer appears to be sentient, it IS sentient.

          Turing said nothing of the sort. He didn't speak to sentience at all, and he even considered the question "Can machines think?" to be "too meaningless to deserve discussion." He only spoke to the question "Could a machine some day win at the imitation game?"

          Just realise that there are other compelling viewpoints on the issue and keep an open mind. Don't become like Searle and just reject anything against your view as impossible or silly :)

          I have to admit that I do. More than 50 years of AI have failed to produce anything that can function at the level of a four-year-old. I believe we are a long way off from understanding sentience and biological intelligence, if they can be understood and analyzed at all. I'm not saying that it's impossible, but it's certainly more than the sum of a massive neural network. Current proponents of strong AI won't admit that there's something going on there that they don't understand.
          • I thought of several rejoinders to the comment about a four-year-old. Something along the lines of: X years of Slashdot, and it has yet to create a troll with the intelligence of a four-year-old.

            Then I thought a better approach is to ask how many thousands of years it has taken to develop computational devices to the point they are at now. Yes, the last 50 years have seen the process accelerating, but that doesn't mean it hasn't taken longer.

            In many ways we are still at the abacus level of AI.
            • A fellow slashdotter pointed me to this page [singinst.org] last week, which makes the point that there is a "singularity" of artificial intelligence, at which point the technology we have created will perpetuate itself (create its own code, develop its own hardware), much faster than we ever could.

              The advancements from there would snowball, as the hardware and software used to make things would then become meta-creators themselves (and therefore be the meta-meta-creations of the humans who designed this AI). This would go on and on...

              While the site calls it a "singularity" I tend to think of it more as an "event horizon" (in more than one sense).

    • by Anonymous Coward
      Why should a piece of source code have rights?
      • by Anonymous Coward
        Why would electrical signals running through a hunk of meat have rights?
    • by poiu ( 106484 )
      No, an AI will not be considered alive until it can successfully judge a Turing test (i.e., tell if someone is a human or a machine) as well as pass it.
      • by Scarblac ( 122480 ) <slashdot@gerlich.nl> on Sunday December 08, 2002 @07:46AM (#4837033) Homepage

        No, an AI will not be considered alive until it can successfully judge a Turing test

        I never understand requirements like this; you're putting the mark way higher than you do for humans, or other life forms.

        People (who knew nothing about AI) have been fooled in Turing tests by the likes of Eliza. And you're saying those people aren't even alive?

        If you assume all adult humans are intelligent and alive, you can't make a test for intelligence that excludes some adult humans.

        Note that the Turing test is a sufficient, but not necessary test for intelligence, as proposed by Turing. That means that he would consider a computer that passed it certainly intelligent, but it does not mean that "an AI will not be considered alive until it can pass a Turing test" - it may be considered intelligent for other reasons.

    • by Orne ( 144925 ) on Sunday December 08, 2002 @12:02AM (#4835686) Homepage
      Ok, take a step back for a second. An artificial intelligence is essentially code running in a state machine... heck, "threads" of code running in an asynchronous processor pretty much defines biological life too, doesn't it? The difference is that we have a top-down design control with AI that we don't have with biology (yet). The meta-question that we haven't answered yet is: Is the intelligence in the code being run, or is it in the processor running the code?

      Let me throw some more questions into the mix:
      • Suppose you can save the "state" of every variable in the process, and write it to disk. Does your AI "die" if you halt the processor, or does it "pause"? If you reload from disk, the AI has no sense of time loss... from its internal viewpoint, no time has elapsed between processor clock ticks.
      • Is "death" now defined as never re-executing the code? Or, if the AI has the ability to move its code stream from one processor to another, do we say that the entity "moved", or that the original stream "died" and a second one was "born" on the new processor... because in theory, the original code can keep processing after forking the copy to the 2nd processor...
      • Now save the state, launch a second processor from the same data while executing the 1st... you've just cloned the AI. What would happen if you bring AI(original) to meet AI(clone)? Unlike biological replicants, there is no "age" difference between the copies, and a 100% history/memory duplication.. what kind of psychological damage will occur?
      • Assume a human programmer writes the initial code, and pipes it into the processor, and through self-modification, the AI drifts from the original specs. Should the human be paid for running the processor, or should the human be paid for the original code?
      • If an AI earns money for playing games, should it pay for the processor time that keeps it alive? After all, humans work, then go to the grocery store to pay for food that keeps our bodies going...
      And lastly, I heard a different version of your closing quote... Ancient Chinese Curse: May you live in interesting times. :)
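The first bullet's pause-vs-die question, and the third bullet's cloning scenario, are easy to make concrete: if the agent's whole state is serializable, a reloaded copy is indistinguishable from the original. A minimal Python sketch, with the Agent class invented purely for illustration:

```python
# Snapshot a (hypothetical) AI agent's entire state to bytes, halt it,
# and restore an identical copy; from the inside, no time has passed.

import pickle

class Agent:
    def __init__(self):
        self.memory = []   # everything the agent has "experienced"
        self.ticks = 0     # its internal sense of elapsed time

    def step(self, observation):
        self.memory.append(observation)
        self.ticks += 1

agent = Agent()
agent.step("saw player")

snapshot = pickle.dumps(agent)   # freeze the full state to bytes
clone = pickle.loads(snapshot)   # restore later: the "clone" scenario
```

The clone's `ticks` and `memory` equal the original's exactly, so both copies would claim, with equal justification, to be the one that saw the player.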
      • by Scarblac ( 122480 )

        If you like thinking about these sorts of questions (what's the consequence of running a mind on a computer - what happens if we compute all its subjective instants on another computer in a worldwide cluster, etc.) then you should read Permutation City by Greg Egan, which takes this discussion to extremes, with rather deep consequences.

        Obviously the book is fiction, speculation, but still rather good - every time you finally wrap your brain around a new idea, you go "wow" as you get it, the next paragraph takes this new idea to its extremities :-)

        • Thanks, I think I'll check that out... on another note, the Gateway series by Frederik Pohl (writeup [fantasticfiction.co.uk]) deals with a lot of the same topics... not as much in the first book, but the "computerized human" idea really takes off in the second book... the series is a quick read, but I enjoyed it.
    • The Turing test is not supposed to measure consciousness, it is supposed to measure 'intelligence'. And because of the way it is designed, it really only tests for knowledge of a specific subject, with minimal conversational abilities (think, souped-up ALICE-bot).

      Now, why do you assume it is even possible to have an "AI life form"? One problem of many is that things that are alive (alive like animals are alive, not like plants are alive) are indeterministic (they do what they want to (free will)), and computers are deterministic. And why do you assume it would be equivalent to a person? Fleas are definitely alive, but they have exactly NO rights.

      Tim

      • One problem of many is that things that are alive (alive like animals are alive, not like plants are alive) are indeterministic (they do what they want to (free will)).

        This is the ghost in the machine [everything2.com] myth. Many believe that human intelligence is somehow "special" in a way that mechanical devices can't be, as if the brain were made of more than mere matter that follows predictable laws of physics. It's a common view. I would wager that >90% of the population believes it. Virtually every religion embraces and teaches it, either explicitly or implicitly. People want to believe that their identity is somehow transcendent to the universe.

        We've seen this before with vitalism [everything2.com]: people used to be convinced that living matter was somehow "special" and different than non-living matter in a fundamental way that dips below physics. Now we know that organic life is just a special arrangement of atoms that allows those atoms to be self-replicating.

        Obviously, a living cell is a particularly complex arrangement of atoms. The difference between the animate and the inanimate is huge: we relate to grizzly bears much differently than we do to a pile of rocks. Perhaps this is why our intuition is so misinformed... it's not representationally meaningful to think of a grizzly bear as being composed of dirt, air, and water, even though it is.

        The same thing applies to intelligence. We have every indication that brains cause minds. We've mapped which areas of the brain correspond to which areas of functionality. Emotions can be altered predictably with drugs. Every aspect of a near-death experience (NDE) can be triggered with chemicals, sensory deprivation (IIRC), a sharp blow to the head, or something mundane and physical. Ultimately, the experience known as self and the sensation of free will boil down to being just a special set of computations that can run on any Turing Machine or x86 with enough memory.

        Of course, the complexity difference between you and an Unreal bot is several orders of magnitude. It's not representationally meaningful for me to think of you as the same thing... there's just not as much satisfaction in fragging a bot. :-)

  • MMRPG "societies." (Score:5, Interesting)

    by Boogaroo ( 604901 ) on Saturday December 07, 2002 @08:30PM (#4834998) Homepage
    QUOTE: For a project code-named Dmitri, Evans is now focused on improving the ability of AI to interact socially. Agents' behavior will be controlled by their membership in overlapping social groups.
    ----

    So how long until the AI gets good enough that we don't need it to be truly multiplayer and can all play on our local machines with AI characters that can chat with us about our real lives instead of just the game?

    -
    • Morrowind's AI (Score:2, Interesting)

      I have been playing Morrowind for quite a while and the AI of the characters is simply breathtaking. I must admit it has been the only game that had me talking to people: "And suddenly this big whale-like creature jumps out at me so I run as fast as I can to a house and hide inside" ... "So I go outside and the coast looks clear... the monster is only hiding *behind* the house, peeking out, and then charges for me"

      Morrowind, if you have not played it, you must. But make sure you have a fast processor, 1GHz+ ;)
      • I don't think you've been playing the same Morrowind I have. I mean, it's a good game and the graphics are amazing, but the AI isn't anything they didn't have in Final Fantasy. All the NPCs are either generic or read from obvious scripts (think of a help desk monkey), and the monster combat AI is pretty much limited to "I'm a warrior, so cast spell and charge" or "I'm a caster, so run away and cast spells".
    • For a project code-named Dmitri

      In soviet russia, Dmitri codes AI!

      (or at least codes things that piss Adobe off)
    • I'm waiting for the day when my bloody computer can go online and play by itself, so I can refamiliarize myself with real life. What would be really nice is if I could tell the computer, "I'm feeling obnoxious today," and have it go out and mock those it humiliates.

      At that point, I can have a fulfilling online experience in ten minutes a day. Maybe I'll read a book or something.

      :: spins the Wheel of Karma ::

      fturnonlylfutrnnolyfunntrollyfuntrollfunnytrollfunny.troll.funny..troll...funny....troll.....funny.....troll.......f.....TROLL

      Damn. Oh well, some days, I can't tell myself.

  • by Freston Youseff ( 628628 ) on Saturday December 07, 2002 @08:35PM (#4835013) Homepage Journal
    but I find that the AI in Hitman II [hitman2.com] is quite exceptional. I haven't seen AI this good since Turok 2: Dinosaur Hunter!
  • Moore's Law (Score:4, Insightful)

    by Rip!ey ( 599235 ) on Saturday December 07, 2002 @08:41PM (#4835039)
    Given that this is the IEEE, it was somewhat disappointing to read the following.

    Fortunately, most graphics processing had by then moved onto dedicated graphics cards, and CPU resources and memory--already increasing dramatically, thanks to Moore's law--were being freed up for computationally intensive and hitherto impractical tasks, such as better AI.

    They make Moore's Law sound as if it is something more than just an observation.
    • Re:Moore's Law (Score:1, Insightful)

      by Anonymous Coward
      This is the IEEE Spectrum; it covers all of the sciences. If this were a computer-specific magazine, then such disappointment would be well-placed.
    • I think you're reading too much into how it "sounds" (that's an awkward statement).

      It's just easier to say "Things have gotten faster due to Moore's Law" than it is to say "Things have gotten faster due to the fact that processor speed doubles every 18 months, according to a statement made by Intel's Gordon Moore in 1965." It's just kind of one of those understood things in the tech world.

      Hell, professors here at my school will use it in class. Not as though it were an actual law (as in law of thermodynamics) but as in an understood concept of how fast technology is advancing.

      OK, that was way too much time devoted to a petty posting...

  • To me it seemed as though the article didn't really have much to say. Or rather, I didn't get much out of the article besides reading about the stuff I already know: that AI exists and that it is advancing. I wish it had been something more in-depth about the future of AI and computers taking over the world!
    • I agree. I learned nothing from this article. I was expecting a survey of current games' AI. I would be interested to read such a state-of-the-art survey. The closest thing I've seen to such a writeup was in Wired a while back.

      Scott
  • What, there's something beside A* [generation5.org]?
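For readers who haven't met it, A* is the pathfinding workhorse the link refers to: best-first search guided by path cost so far plus an admissible heuristic. A minimal sketch in Python, assuming a 2D grid where 0 is walkable and 1 is a wall, with a Manhattan-distance heuristic (the grid and coordinates are made up for illustration):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2D grid of 0 (free) / 1 (wall) cells.

    Nodes are (row, col) tuples; moves are 4-directional with unit cost.
    """
    def h(p):
        # Manhattan distance: admissible for unit-cost 4-way movement.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, None)]  # (f = g + h, g, node, parent)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        f, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue  # already expanded via a cheaper-or-equal path
        came_from[node] = parent
        if node == goal:
            # Walk parents back to start to reconstruct the path.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny), node))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the wall row
```

In shipping games the idea is the same, but the graph is usually a navmesh or waypoint graph rather than a raw tile grid.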
  • Read the headline too quickly again. I thought someone had found a way to get AI out of a ZX Spectrum. It's coffee time.

  • The Major Problem (Score:3, Interesting)

    by Sigma 7 ( 266129 ) on Saturday December 07, 2002 @09:46PM (#4835276)
    After observing a large number of games, the major problem with game AIs is that the developer puts very little effort into the AI itself. While there are a few good/excellent AI systems out there, these are the exception and not the rule.

    Naturally, the AI has the shortest time frame in the software engineering schedule, but there is no reason it should remain stagnant across future patches. From these patches, the developers can identify the shortfalls of the old AI and correct them. This is very rarely done, and is only performed across versions.

    It's also very difficult to find a game with a decent or challenging AI, since most formal reviews ignore that portion of the review entirely. Most people will look for the 9/10 IGN Review award as opposed to the real deal in the message boards (the AI in the game is a cheating piece of c***).
    • by Black Parrot ( 19622 ) on Saturday December 07, 2002 @10:11PM (#4835353)


      > Naturally, the AI has the shortest time frame in the software engineering, but there is no reason it should remain stagnent across the future patches.

      Another problem is that lots of games are just engines that support an 'official' dataset plus whatever modpacks the players care to come up with, but even the cheating AI that ships with the game won't work worth a damn on the modpacks.

      I hope in the future machine learning methods can help with both of these problems. I.e., a couple of months before release when the code is fairly stable and the graphics are in production, turn on the old Beowulf cluster and let reinforcement learning or an evolutionary algorithm train a good AI for the game. As for modpacks, the vendors could support something like sourceforge, where gamers could upload their modpacks and have the Beowulf cluster automagically re-tune the AI to work right with them.

      And of course, the machine learning could continue in the background for as long as people were interested in the game, allowing them to download "new improved" AIs every few months.
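The offline tuning loop described above could, in much-simplified form, look like the following hypothetical sketch: a population of AI parameter vectors is scored (in a real pipeline the fitness would be win rate from automated self-play; here it is an arbitrary stand-in), the best half survives each generation, and mutated copies refill the population. All function names and constants are invented:

```python
import random

def evolve(fitness, n_params, pop_size=30, generations=50, sigma=0.1):
    """Tiny truncation-selection evolutionary loop over weight vectors in [0, 1]."""
    random.seed(0)  # deterministic for the sake of the example
    pop = [[random.uniform(0, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # keep the best half
        # Refill by mutating random survivors with Gaussian noise, clamped to [0, 1].
        children = [
            [max(0.0, min(1.0, w + random.gauss(0, sigma)))
             for w in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in fitness: pretend the ideal AI has aggression 0.7 and caution 0.3.
target = [0.7, 0.3]
best = evolve(lambda w: -sum((a - b) ** 2 for a, b in zip(w, target)), n_params=2)
print(best)
```

On a Beowulf cluster, each fitness evaluation would instead be thousands of simulated games per candidate, which is why the parent suggests doing it in the months before release and again for each modpack.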

      • Well, Unreal Tournament allows the mod devs to include AI code to handle their addons. IMHO, it works well - it doesn't have the "online only" mentality of Half-Life mods, even in mods.
    • Sigma 7 wrote: AI has the shortest time frame in the software engineering, but there is no reason it should remain stagnent across the future patches. From these patches, the developers can identify the shortfalls of the old AI, and correct them.

      The major problem in AI is the Attention Problem: what features should be paid attention to in order to make a decision, and how to ignore the huge amount of irrelevant information (without explicitly examining that information to determine that it is irrelevant).

      Game engines often present a very small "world view" (in terms of feature space) to AI agents because for every simulation cycle each agent has to check facts in its world view to guide its action, and the more complicated the world view the more CPU cycles are used by each agent.

      For example, an agent might be exposed to the same fact in three ways:

          near(a, b)
          distance(a, b, 20)
          pos(a, 241, 43) & pos(b, 261, 43)

      The first uses a hard-coded definition of what near means (that can be precomputed by the engine), the second allows the agent to use its own definition of nearness, while the third allows the agent to decide what distance means (if it has access to map information).

      While the third definition is the most flexible, it is also the most computationally expensive, particularly when this computation may be run every simulation cycle.

      So, what I was trying to say is that much of the flexibility possible to an AI agent is limited by the feature space it has access to, and this feature space is usually very limited for efficiency purposes.

      To improve the behavior of an AI agent, script tweaking may help some, but what is often needed is for the underlying physical engine to expose slightly more information. For example, the above "near" might be split into "sortof-near" and "really-near".
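The three levels of exposure the comment describes can be made concrete with a hypothetical sketch (all names and thresholds invented for illustration); note how the work shifts from the engine to the agent as flexibility increases:

```python
import math

NEAR_THRESHOLD = 25.0  # the engine's hard-coded definition of "near"

def near_fact(precomputed_near):
    # 1) near(a, b): the engine precomputes the boolean; the agent just reads it.
    return precomputed_near

def near_from_distance(distance, my_threshold):
    # 2) distance(a, b, 20): the agent applies its own definition of nearness.
    return distance <= my_threshold

def near_from_positions(pos_a, pos_b, my_threshold):
    # 3) pos(a, x, y) & pos(b, x, y): the agent computes the distance itself,
    #    potentially every simulation cycle -- most flexible, most expensive.
    dist = math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])
    return dist <= my_threshold

print(near_from_positions((241, 43), (261, 43), NEAR_THRESHOLD))  # True: distance is 20
```

Multiply option 3 by hundreds of agents and dozens of facts per cycle and the efficiency argument for a narrow, precomputed feature space becomes clear.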
  • by Tablizer ( 95088 ) on Saturday December 07, 2002 @10:22PM (#4835393) Journal
    Not long before Lara Croft rejects me like a real woman would.
  • "2% stupid" (Score:2, Funny)

    by Tablizer ( 95088 )
    "The hardest thing in game AI is just making sure that the game never looks dumb. You'd be better off having an AI that was just above average all the time, rather than one that was brilliant 98 percent of the time and stupid 2 percent of the time..."

    Why not? Don't they want to model a typical geek, or did they find that hurts sales?
  • by fosh ( 106184 ) on Saturday December 07, 2002 @10:48PM (#4835475) Journal
    Here is one of the sites we used for new ideas in my CS class at cmu [cmu.edu]

    http://www.seanet.com/~brucemo/topics/topics.htm [seanet.com]

    Here is another one [cs.vu.nl]

    Enjoy
  • AI is not AI (Score:5, Insightful)

    by Junks Jerzey ( 54586 ) on Saturday December 07, 2002 @11:53PM (#4835668)
    AI is a euphemism for "behavior." When I hear people complaining about how games aren't using the latest in AI research, I want to respond "that's because games don't really use AI", at least not what people think of as AI. AI in a typical game is just a list of weighted rules, such as "if the player has a more powerful weapon than character X, make character X run away." When you have lots of such rules and you twiddle with them a lot, you get so-called AI.

    Putting in random factors makes things much harder to pin down. Maybe when a character spots you, there will be a 50% "run or attack" decision. If the decision is to run, then you think "Ha, ha, ha, he's running scared!" If the decision is to attack, and he gets you, then you think "Wow, that guy was good." If he attacks and you get him, then you feel like you're doing well.

    To a great extent AI is psychological. You read into things what you want.
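The "weighted rules plus a random factor" pattern described above is easy to sketch; the rule, weights, and thresholds here are all invented for illustration:

```python
import random

def choose_action(player_weapon_power, my_weapon_power):
    """One deterministic rule plus one coin-flip, as the comment describes."""
    # Deterministic rule: badly outgunned characters run away.
    if player_weapon_power > my_weapon_power * 2:
        return "flee"
    # Random factor: a 50/50 "run or attack" decision on roughly even terms.
    return "attack" if random.random() < 0.5 else "run"

print(choose_action(10, 2))  # flee (player is vastly stronger)
print(choose_action(5, 5))   # attack or run, decided by the coin flip
```

The psychology the comment points at lives entirely in that one `random.random()` call: the player narrates intent onto what is literally a coin flip.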
    • To a great extent AI is psychological. You read into things what you want.

      In your rant, the use of A is extraneous, though I wager no moderator will see my post as insightful as some think yours was. Just as you no doubt see moderations to your comment as a sign of I instead of "Ah ha! A Markov Model was used to associate the text of my post with other, similar posts that were highly rated!" Shucks, you probably think the Turing Test is just about the computer's intelligence, too.

  • A game AI test (Score:5, Insightful)

    by Jimmy_B ( 129296 ) <jim.jimrandomh@org> on Sunday December 08, 2002 @01:07AM (#4835899) Homepage
    Some time after getting Unreal Tournament 2003, I set out to appraise its AI. I decided to set up a game in which it couldn't cheat; I made a one-on-one game on the map DM-Gael (a small, open map, so while the bot may always know the player's location, vice versa is also true), and with rocket launchers only (so that the bot couldn't do some simple trig to always hit). I set the bot to its highest difficulty, and played.

    The bot had some notable weaknesses (it kept getting killed going for the powerup in the center, or while coming up a lift, and never seemed to learn from these mistakes), but did fairly well overall. In the end I won with a substantial, but not overwhelming, margin.

    So, I said, the AI had failed the test: given a fair match, on its most difficult settings, it lost. But then I realized, I had a lot of fun administering it. Then I realized that the point of an AI isn't to beat the player, but to be fun to play against; whether it wins or loses really doesn't matter.
  • AI is still (Score:1, Informative)

    by katalyst ( 618126 )
    in its infancy. Its scope is restricted. It can never be a "jack of all trades". AI, to a large degree, still seems to be hardcoded. But soon, I guess, you'll have an AI module, to which a physical model is assigned, and which is then "trained". That module could be taught how to drive a car, or how to duck and shoot, or BOTH.
    Anyways, coming to the topic of AI and entertainment, if you have visited the LOTR - TTT site, you'll see an interactive MASSIVE system. Imagine making a few entities interact, waiting for the sequence to render, and then viewing the final movie that has been created....
  • Here's the best quote from the article:
    By 1998 developers "were trying just everything they could think of...there was a lot of touting that this AI is going to be the last AI you'll ever play," says Woodcock.
    Sure, if the AI is so terrible, chances are I won't try another one...
  • by Xtifr ( 1323 ) on Sunday December 08, 2002 @03:24AM (#4836362) Homepage
    I've just been reading Steve Rabin's book, AI Game Programming Wisdom, mentioned briefly in the article. I'm not a game programmer, but I am a programmer, and I've always been curious about game AIs. And I have to say that the book is very good, well worth it if you have any interest in the topic. It's actually a collection of articles written by a bunch of game AI programmers, collected and edited by Rabin. It covers a lot of ground, explains approaches that have worked and approaches that have failed, and why (in both cases). It contains both useful general principles and interesting examples of specific cases.

    I'm not sure I'd recommend this book to a novice programmer, but for a moderately experienced programmer who's interested in practical game AI design, this book is well worth a look. The name says it all, this is a book written by the folks in the trenches, passing along their hard-earned wisdom. Very enjoyable.

    Now I want to try my own hand at writing some game AI. Maybe I should poke around on sourceforge for games that need AI help. (Assuming I can weed my way past all the projects that have NO CODE AT ALL, which seems to be especially common with the games on sourceforge.)
  • Invisible AI (Score:2, Insightful)

    by ktorn ( 586456 )
    By the looks of it, we still have some way to go until AI in games reaches the 'good enough' stage.

    Good enough for what? For us to stop speaking and caring about it I guess.

    When that happens, we'll have 'Invisible AI'* that just works, and game producers can no longer use it as a selling point.

    Of course, I guess that won't happen anytime soon, and I can already see hardware manufacturers making AI-accelerator cards, with built-in multi-processors and neural net chips, to fit next to your graphics card. The 'Intel Inside' logo will gain a whole new meaning...

    * - Yes, I'm adapting Don Norman's Invisible Computer [jnd.org] term.
