
Chip Promises AI Performance in Games 252

Heartless Gamer writes to mention an Ars Technica article about a dedicated processor for AI performance in games. The product, from a company called AIseek, seeks to do for NPC performance what the PhysX processor does for in-game physics. From the article: "AIseek will offer an SDK for developers that will enable their titles to take advantage of the Intia AI accelerator. According to the company, Intia works by accelerating low-level AI tasks up to 200 times compared to a CPU doing the work on its own. With the acceleration, NPCs will be better at tasks like terrain analysis, line-of-sight sensory simulation, path finding, and even simple movement. In fact, AIseek guarantees that with its coprocessor NPCs will always be able to find the optimal path in any title using the processor." Is this the 'way of the future' for PC titles? Will games powered by specific pieces of hardware become the norm?
This discussion has been archived. No new comments can be posted.

  • by Zanth_ ( 157695 ) on Wednesday September 06, 2006 @02:14PM (#16053996)
    What may occur is a separate box consisting of the GFX card, physics card, AI card, and a PSU for the above, along with supporting memory modules, just to power existing games. Multiple cards consisting of multiple chips with multiple cores will likely outgrow the common case. Thus, for hardcore gamers, a separate box wired to the main rig could become the norm. The average home user will keep getting smaller and smaller machines (Mac mini et al.), but for the gamer we'll see modular systems, with multiple boxes and multiple PSUs to help with cooling and overall performance.
  • Re:Not Gonna Work (Score:3, Interesting)

    by arthurh3535 ( 447288 ) on Wednesday September 06, 2006 @02:19PM (#16054033)
    Actually, they could do something similar to graphics cards, just allowing for "weaker" AI routines that can run on a standard system.

    It will be interesting to see if games are "more fun" with smarter AI, or if AI really isn't the big, important factor in making games interesting.
  • Re:hm (Score:5, Interesting)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Wednesday September 06, 2006 @02:25PM (#16054091) Homepage

    I read the blurb this morning. The idea is that it accelerates the basic operations that everything uses (line of sight, pathfinding, etc.). The more complex AI (actual behavior, planning, etc.) is built the normal way. It simply offloads work from the CPU and thus allows faster calculations.

    The other real difference is that it is supposed to be better than current algorithms. So instead of using A* for pathfinding, it works correctly even on dynamically changing terrain. That would mean no more NPCs getting stuck on rocks or logs or some such (*cough* Half-Life 1 *cough*).
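
    For concreteness, here is a minimal A* sketch on a 4-connected grid -- the kind of low-level routine being offloaded. This is purely illustrative; the article doesn't say what algorithm Intia actually implements, and a real engine would re-plan or patch paths as the terrain changes.

```python
import heapq, itertools

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; grid[y][x] == 1 means blocked."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = itertools.count()                                    # heap tie-breaker
    frontier = [(h(start), next(tie), start)]
    came_from, g = {start: None}, {start: 0}
    while frontier:
        _, _, cur = heapq.heappop(frontier)
        if cur == goal:                                        # walk parents back to start
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                cost = g[cur] + 1
                if cost < g.get(nxt, float("inf")):
                    g[nxt], came_from[nxt] = cost, cur
                    heapq.heappush(frontier, (cost + h(nxt), next(tie), nxt))
    return None  # goal unreachable
```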

  • way of the present (Score:2, Interesting)

    by krotkruton ( 967718 ) on Wednesday September 06, 2006 @02:36PM (#16054176)
    Is this the 'way of the future' for PC titles? Will games powered by specific pieces of hardware become the norm?

    Short-run, maybe; long-run, no. IMHO, things will consolidate like they always seem to do. Video cards are necessary for more than just games, so they won't be going anywhere. Physics and AI cards seem to be useful for nothing but games. It would be foolish to combine all video cards with physics and AI chips because not everyone plays games, but why not combine the physics and AI chips? Farther down the road someone will come out with a new card to enhance some other aspect of gameplay, and eventually that will merge with the physics and AI chips on their own card.

    Things are always being consolidated on PCs. Look at all the things on mobos that used to require separate cards 10 years or even 5 years ago. Designers get better and better at cramming more things into a smaller space (even if that is getting harder and harder to do), so it seems to me that these things will keep merging together when it is useful to do so. In this case, I don't think most PC users want to have 3-5 cards just for games, so it is useful. I could be completely wrong on that point though.
  • by legoburner ( 702695 ) on Wednesday September 06, 2006 @02:38PM (#16054194) Homepage Journal
    I am not so sure, as I think some sort of programmable PCI-X card will show up soonish, allowing hardware processing of simple routines like line of sight or pathfinding (or other mathematical problems) to be offloaded from the CPU. That would be more logical, since it could then be used in many different applications, from custom rendering for render farms to hardware-assisted protein folding, complicated firewalling/packet sniffing, and back to gaming and other desktop usage. They have been in development for a while now; I just wonder when they will start becoming available to the consumer.
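
    Line of sight on a grid is exactly the kind of small, regular routine that maps well to hardware. A minimal software sketch, as a point of reference (a Bresenham walk over an occupancy grid; the grid representation is an assumption for illustration, not anything a vendor has described):

```python
def line_of_sight(grid, a, b):
    """Bresenham walk from cell a to cell b; False if any blocked cell is crossed.

    Assumes both endpoints lie inside the grid; grid[y][x] == 1 means opaque."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        if grid[y0][x0]:                  # hit an opaque cell
            return False
        if (x0, y0) == (x1, y1):          # reached the target with a clear line
            return True
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
```
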
  • by bockelboy ( 824282 ) on Wednesday September 06, 2006 @02:39PM (#16054198)
    One begins to wonder what the "endgame" scenario is for the respective manufacturers of the physics and AI cards we're seeing. I can foresee three distinct situations:

    1) The CEOs, investors, and engineers are complete idiots, and expect all the gamers of the world to buy separate physics, AI, and graphics cards
    2) They're hoping to provide chips to ATI or nVidia for a "game card" instead of a "graphics card", the next generation of expensive purchases for gamers
    3) They're hoping to provide chips for the nextgen xbox / playstation / wii, hoping that their chips will be the ones to make gaming interesting again.
  • Re:Not Gonna Work (Score:5, Interesting)

    by Dr. Spork ( 142693 ) on Wednesday September 06, 2006 @02:48PM (#16054280)
    I think this is the right question, but it may have an interesting answer. Maybe the way they picture future game AI is similar to present-day chess AI: Chess games evaluate many potential what-if scenarios several moves into the future, and select the one with the best outcome. Clearly, the more processing power is made available to them, the more intelligently they can play.

    Maybe future RPG AI could have some similar routines for fight/flight decisions, fighting methods, and maybe even dialogue. But that would require a pretty general-purpose processor, which really just argues for getting a second CPU. I don't have much hope of this catching on, but I'd welcome it. For one thing, writing AI that can run in a separate process from the rest of the game is something I'd love to see. I want something to keep that second core busy while I'm gaming!

    Plus, it would be pretty cool for hardware manufacturers if AIs really got smarter with better hardware (be it CPU or add-in card). That would require big coding changes from the way AI is written now, but I do think those would be changes for the better.
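
    That chess-style lookahead is just depth-limited minimax, where a bigger search depth is exactly what extra processing power buys. A minimal sketch; the evaluate, moves, and apply_move hooks are hypothetical game-specific functions, not anything from the article:

```python
def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Depth-limited what-if search: more depth (CPU budget) -> better decisions.

    `evaluate(state)` scores a position, `moves(state)` lists legal moves,
    `apply_move(state, m)` returns the resulting state (all assumed hooks)."""
    if depth == 0 or not moves(state):
        return evaluate(state), None
    best_move = None
    best = float("-inf") if maximizing else float("inf")
    for m in moves(state):
        score, _ = minimax(apply_move(state, m), depth - 1, not maximizing,
                           evaluate, moves, apply_move)
        if (maximizing and score > best) or (not maximizing and score < best):
            best, best_move = score, m
    return best, best_move
```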

  • by j1m+5n0w ( 749199 ) on Wednesday September 06, 2006 @02:56PM (#16054339) Homepage Journal
    why program your game for a dedicated AI card if you're just going to have to make it work on computers without one?

    Perhaps the card could be most useful not on the client but in dedicated MMORPG servers. I know WoW could definitely use some smarter mobiles. Sometimes I think whoever designed the AI was inspired by the green turtles from Super Mario 1. I'd like to see games with smarter mobs and NPCs, and any game with a realistic ecology (for instance, suppose mobs don't magically spawn, but procreate the old-fashioned way and must eat food, a limited resource, to survive) would require many more mobs than a WoW-like game in order to prevent players from destroying the environment. Simulating millions of intelligent mobs would likely be very expensive computationally.
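
    To put a number on "very expensive": even a toy version of that ecology does work proportional to the mob count on every tick. A sketch under made-up rules (the energy thresholds and food amounts are invented for illustration):

```python
def tick(mobs, food_pool):
    """One server-side ecology step: each mob eats, may reproduce, or starves.

    mob = {"energy": int}. Cost per tick is O(len(mobs)); at millions of mobs
    this loop alone would dominate a server core."""
    survivors, births = [], []
    for mob in mobs:
        if food_pool > 0:                 # food is a shared, limited resource
            food_pool -= 1
            mob["energy"] += 2
        mob["energy"] -= 1                # metabolic cost every tick
        if mob["energy"] >= 10:           # well-fed mobs reproduce
            mob["energy"] -= 5
            births.append({"energy": 3})
        if mob["energy"] > 0:
            survivors.append(mob)
    return survivors + births, food_pool
```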

  • by MBGMorden ( 803437 ) on Wednesday September 06, 2006 @03:08PM (#16054437)
    Ok, link speak is annoying. Don't do that ;).

    The systems you mention though are all scaled down computers. They don't really have any extra hardware that a standard computer doesn't have. The GP's comment seemed to be talking about putting all the "extra" hardware out of the case, which doesn't fit your model of just making a smaller and more focused computer.
  • Stupid question... (Score:4, Interesting)

    by UbuntuDupe ( 970646 ) on Wednesday September 06, 2006 @03:11PM (#16054463) Journal
    ...that I've always wanted the answer to from someone who knows what they're talking about:

    For the application you've described, and similar ones, people always claim it would be cool to be able to handle massive data processing so you could have lots of AIs, and that would get realistic results. However, it seems that with *that many* in-game entities, you could have gotten essentially the same results with a cheap random generator with statistical modifiers. How is a user going to be able to discern "there are lots of Species X here because they 'observed' the plentiful food and came and reproduced" from "there are lots of Species X here because the random generator applied a greater multiple due to more favorable conditions"?

    I saw this in the game Republic: The Revolution (or was it Revolution: The Republic?). It bragged about having lots and lots of AIs in it, but in the game, voter support in each district appeared *as if* it were determined by the inputs that are supposed to affect it, with a little randomness thrown in. The AIs just seemed to eat up cycles.

    Long story short, aren't the emergent results of a large number of individual AIs essentially the same as what you would get from statistical random generation?
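
    The question can be made concrete with a toy comparison: simulate every animal, or draw the district total straight from the matching distribution. The probabilities below are made up; the point is only that to a player the two outputs are statistically indistinguishable.

```python
import math, random

def agent_based_count(n_agents, food):
    """Simulate every animal: each one 'observes' the food level (in [0, 1]) and decides to stay."""
    stay_prob = 0.2 + 0.6 * food
    return sum(random.random() < stay_prob for _ in range(n_agents))

def statistical_count(n_agents, food):
    """Skip the agents: draw the district total from the matching distribution."""
    p = 0.2 + 0.6 * food
    mean, sd = n_agents * p, math.sqrt(n_agents * p * (1 - p))
    return max(0, round(random.gauss(mean, sd)))   # normal approximation of Binomial(n, p)
```
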
  • by Targon ( 17348 ) on Wednesday September 06, 2006 @03:16PM (#16054502)
    PCI is on its way out; PCI Express is the next stage, or HTX (HyperTransport slot).

    Dedicated co-processors are a good idea; the problem is the cost involved. AMD is pushing for these companies to make their new co-processors work with an existing socket type, so that instead of selling cards (which cost more money to make because of the PCB), they would sell us just the chip itself.

    To be honest, this is a better way to go since if a GPU were implemented in this way, you could easily just buy a GPU, toss it on the motherboard, and bingo, easy upgrade without the expense of buying a new graphics card and memory. Sure, you might see generational jumps in the memory used for the graphics and how it connects to the GPU, but that problem could be solved in multiple ways.
  • by Anonymous Coward on Wednesday September 06, 2006 @05:07PM (#16055302)
    A lot of you seem to think that neither this AI card nor the physics card is going to be of any use, but I don't see it that way.

    The physics card, for example, is really just a processor dedicated to vector processing. Couldn't you use this to simulate light bounces? Then you might be able to push the graphics further with a sort of partial ray tracing. Additionally, this will help with intersection detection, so if you swung your sword, it wouldn't hurt the other guy until it actually intersected with him. It may also have implications for software-based synthesizers... rendering audio is much more intensive than video, so perhaps a physics card will be able to help. How about water that ebbs and flows realistically? Cloth capes and realistic hair? Besides, having largely mutable worlds in video games is just sweet. A second processor will definitely help in this regard, but a chip dedicated only to vectors will very quickly outpace a single additional general-purpose core within a few generations.
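
    That intersection test is plain vector math, which is the sort of work such a card would batch up. A minimal software sketch of a swept sword check (a segment against a sphere-shaped hit volume; the representation is an assumption for illustration):

```python
def segment_hits_sphere(p0, p1, center, radius):
    """True if the sword swing (segment p0 -> p1) passes within `radius` of `center`.

    Points are 3-tuples of floats; this is the closest-point-on-segment test."""
    d = [b - a for a, b in zip(p0, p1)]               # segment direction
    f = [a - c for a, c in zip(p0, center)]           # from sphere center to segment start
    a = sum(x * x for x in d)
    if a == 0:                                        # degenerate swing: p0 == p1
        return sum(x * x for x in f) <= radius * radius
    t = max(0.0, min(1.0, -sum(x * y for x, y in zip(f, d)) / a))  # clamp to the segment
    closest = [p + t * x for p, x in zip(p0, d)]
    dist2 = sum((c1 - c2) ** 2 for c1, c2 in zip(closest, center))
    return dist2 <= radius * radius
```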

    The AI card, on the other hand, is similar, except it's optimized for tree searches. This could be used in a number of applications -- chess and Go, for example -- but also perhaps for fuzzier strategy, like RTS games. It won't only help with moving an entity in a 3D world; perhaps it can also help those entities think better, by allowing them to consider more alternatives before making a decision.

    Certainly you can do all this on a dual-core system 'because it's fast enough,' but then why not turn down the graphics quality on your games, get a quad-core machine, and run it through an OpenGL emulator? If these catch on, they'll eventually outpace generic CPUs for speed in their purpose, and games will use them. Not having hardware AI/physics support then will be like not having hardware graphics support now.
  • by Dekker3D ( 989692 ) on Wednesday September 06, 2006 @05:59PM (#16055650)
    Ah, finally someone who thinks my way. Yes, if we had a choice between various AI/physics cards somewhere in the near future, like we do with graphics cards nowadays, competition to be the cheapest and/or best would automatically make them affordable. That way, we could have those great features. Game developers and modellers would rejoice at the removal of barriers in animation (vertex displacement functions for muscle simulation, anyone?) and physics (massive physics-based puzzles and arenas?). To those who said a separate AI card would make AI too difficult: it would also allow programmers to limit the AI so it only knows what it should know, like a normal human does. Stealth would become better, more enjoyable, and more rewarding, since you'd finally get the feeling of playing against smart enemies.
  • Re:hm (Score:4, Interesting)

    by SnowZero ( 92219 ) on Wednesday September 06, 2006 @07:47PM (#16056281)
    The problem with the state of AI today is not that the algorithms are too processor-intensive, it's that they flat-out suck.

    Please don't take what you see in games to be state of the art. Watch this video [youtube.com] of my RoboCup team and tell me that AI still completely sucks. You'll see two teams playing 5 on 5 soccer with zero human input, i.e. fully autonomous. Game AIs may suck, but that's because their AI programmers are graphics people trying to do AI. The result looks about as good as me trying to make a state of the art graphics engine.

    The only reasonable application of hardware AI acceleration that I can think of would be a massively parallel chip that runs thousands or millions of neural net nodes at once... but this would mainly be a benefit for academic AI research, not for games.

    Neural nets died down as a fad in academic circles almost 10 years ago. There's a common saying that "Neural nets are the second best way to do everything." ... meaning that if you analyze a problem, some other approach almost always turns out to work better. They are a reasonable approach for unstructured classification problems that aren't fully understood, but after some analysis other approaches almost always take over. This has been the case with things like OCR and face recognition.

    I'm pretty sure that most games use simple "heuristic" algorithms for AI, rather than anything complicated like neural nets or Bayesian learning or SVM.

    NNs, Naive Bayes, and SVMs are all classifiers (and often slow ones at that). They aren't really directly applicable for defining policies for an agent, so they don't get used much (as well they shouldn't). Most agent decision systems use a combination of hierarchical finite state machines (FSMs), planning on a small set of high-level actions, and motion planning.

    Games tend toward the absolute simplest of FSMs with only binary inputs, and yield the expected highly rigid behavior in a given situation. For the most part, they don't even use randomness, which is absolutely necessary in any situation where one player is trying to outguess another. I've heard that non-RTS games budget only about 1% of the CPU to AI, and it shows. Rich FSMs, action-based planning, and proper motion planning get swept aside, and that is unfortunate. However, the coming multi-core revolution may offer some hope. Game programmers are having trouble splitting up graphics routines, so it might be that AI can get the core or two that it deserves when we hit quad-core CPUs. Because it draws on many different algorithms, AI benefits from general-purpose CPUs, and it parallelizes quite well.
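
    For anyone who hasn't seen one, the kind of FSM being criticized here is tiny, and adding the randomness mentioned above costs essentially nothing. A sketch with invented states and thresholds (nothing from any real game):

```python
import random

def next_state(state, sees_player, health):
    """One FSM step for a hypothetical guard NPC; mixed (random) choices avoid rigid behavior."""
    if state == "patrol":
        return "attack" if sees_player else "patrol"
    if state == "attack":
        if health < 0.3:
            # Don't always flee: an opponent who sometimes stands and fights is harder to outguess.
            return "flee" if random.random() < 0.7 else "attack"
        return "attack" if sees_player else "search"
    if state == "search":
        return "attack" if sees_player else random.choice(["search", "patrol"])
    if state == "flee":
        return "patrol" if health > 0.6 else "flee"
    return "patrol"
```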

    Whether enough real AI people will ever get hired to do games right remains to be seen. At the moment, it seems that even primarily systems companies like Google are more interested in AI people than game companies are.
  • by miach ( 32249 ) on Wednesday September 06, 2006 @10:30PM (#16056935)
    Having actually written some game AI (about 7 years back now), I can say we used both, depending on whether or not the player could see what was going on.

    Thus, if you were following an animal, it would wander through the forest in the expected manner (unless you got in its way, etc.), but if you just wandered into the forest, there would be an expected number of animals "from stock" (and if you killed them all, there would be none left).

    I'd write something considerably more complex for the visible parts these days (having more CPU to hand), but for the invisible parts, as long as it looks correct to the player, it doesn't matter if you simplify things.

    In relation to the original post, we already had multi-level AIs and would ramp them up in a similar way to the graphics level of detail (i.e. the more CPU cycles you had to throw at the renderer, the more detail was displayed; all the models had several level-of-detail variations depending on range, and we'd change the distance at which we switched levels of detail according to the frame rate).

    Similarly with the AIs: the further away an animal was, the more we did statistically and the less we did with the more complex routines. Having a dedicated processor for some of the AI would just mean we could ramp up the complexity of the routines a little sooner.
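
    That AI level-of-detail scheme mirrors the rendering one almost exactly. A sketch of the selection logic only; the distance ranges are made up and the three update routines are passed in as hypothetical hooks rather than taken from any real engine:

```python
def update_npc(npc, dist_to_player, cpu_headroom, full_update, cheap_update, stat_update):
    """Pick an AI 'level of detail' by distance; more CPU headroom pushes the thresholds out."""
    near, far = 50 * cpu_headroom, 200 * cpu_headroom   # hypothetical ranges
    if dist_to_player < near:
        full_update(npc)      # full behavior: vision, pathfinding, combat decisions
    elif dist_to_player < far:
        cheap_update(npc)     # coarse waypoint wandering only
    else:
        stat_update(npc)      # statistical bookkeeping, no per-frame thinking
```
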
  • Re:hm (Score:3, Interesting)

    by MoxFulder ( 159829 ) on Thursday September 07, 2006 @02:00AM (#16057616) Homepage
    The problem with the state of AI today is not that the algorithms are too processor-intensive, it's that they flat-out suck.
    Please don't take what you see in games to be state of the art.

    Sorry, didn't mean to cast aspersions on AI in general... merely game AI. My point was that game AI algorithms are hopelessly lame, so speeding them up isn't going to help. I'm very much aware of more impressive AI efforts, such as in RoboCup. In games, AI often seems to be an afterthought, whereas in RoboCup, AI is pretty much the whole interesting part of the problem.

    Neural nets died down as a fad in academic circles almost 10 years ago. There's a common saying that "Neural nets are the second best way to do everything." ... meaning that if you analyze a problem, some other approach almost always turns out to work better.

    Well, I wouldn't say that neural nets have completely died out, but your point that they are a fuzzy second-best solution is a good one. Neural nets are way too expressive, so they end up being a brute-force route to an "intelligent" solution. Since they're a brute-force, massively parallel approach, I gave them as an example of a type of AI that might benefit from hardware acceleration.

    the coming multi-core revolution may offer some hope. Game programmers are having trouble splitting up graphics routines, so it might be that AI can get the core or two that it deserves when we hit quad-core CPUs.

    I'm pretty skeptical that this would make any difference! As you rightly pointed out, most game AI is based on extremely simple FSMs and such. There's no way such algorithms can be easily expanded to use a whole core's worth of CPU power.

    Game AI uses little processing power NOT because it can't be spared, but because game programmers typically lack the imagination and expertise to use more complex, CPU-intensive AI algorithms in their games.
  • Re:Not Gonna Work (Score:3, Interesting)

    by Dr. Spork ( 142693 ) on Thursday September 07, 2006 @02:32AM (#16057696)
    If you're advocating that AI only work with the information and "reflexes" available to human players, I wholeheartedly agree. I'm not as scared as you are of AI that's too good, because it's always easier to make something worse than better. AI that's too good can be downgraded in many interesting, humanlike ways - simulated dispositions to panic, freeze, waste ammo, needlessly conserve ammo, get too close before firing, etc. Basically, you just need to observe what imperfect players do and tell an outstanding AI program to copy our shortcomings. On higher levels the shortcomings would be diminished. Again, the Chessmaster series does that. You can set it to play very open and bold games - which makes it somewhat easier to beat, but much more fun to play (I still lose most of the time).

    I like the idea of training AI, but throughout all of this, the topic of discussion is how the AI could act significantly smarter with more real-time processing power. As you mentioned, the game company can pre-train AIs on their own big computers while the game is in production. So in this case too, heavy real-time AI processing does not buy extra smartness.

    Maybe AI code could include some general outlines of "strategies" on various levels of granularity, and the real-time decisions would be about choosing the strategy that best fits the situation. Essential to this is an estimation of the chances that the strategy succeeds, and of the cost of failure. Of course, there would be many ways to implement a strategy like "retreat" or "seek cover and throw grenades," and which implementation best fits the situation would again have to be computed in real time. So if the AI were programmed in this manner, the decisions it would make would improve with increased processing power. For example, evaluating how good a piece of cover a certain overturned table provides depends on your evaluation of the location, equipment and probable future movement of the enemies. What would the PC do if he were controlled by a smart AI? And how well would my cover protect me from that? These are all things the AI might have time to calculate. This is what I meant by "what if" scenarios - there would be many strategies and implementations the AI could consider at any given time, more than what any hardware can calculate. But the more you calculate, the better your decisions are likely to be. If I were writing a game that would have its own processor for AI, I would first develop maybe four intricate AI routines for the PC. Of course the PC won't be controlled by an AI, but the NPCs can consider what the PC would do if he were controlled by any of the 4 AIs and decide on a course of action which best counters what the PC might do.
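
    The strategy-scoring step described here is essentially an expected-utility calculation: more processing power means more candidate strategies (and better estimates) considered per decision. A minimal sketch in which the strategy names and the estimate function are invented for illustration:

```python
def pick_strategy(strategies, estimate):
    """Choose the strategy with the best expected payoff.

    `strategies` is a list of names; `estimate(name)` returns a tuple
    (p_success, payoff_on_success, cost_on_failure) computed from whatever
    world model the AI has time to evaluate (a hypothetical hook)."""
    def expected_value(name):
        p, payoff, cost = estimate(name)
        return p * payoff - (1 - p) * cost
    return max(strategies, key=expected_value)

# e.g. pick_strategy(["retreat", "seek_cover_and_grenade", "charge"], estimate_fn)
```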

    Well, if game AI worked anything like this, I might really want an AI coprocessor. But so far this is all a pipe-dream. Game AI isn't always hopelessly naive (I programmed some myself!) but it really is just a script of hand-written nested conditionals. A good scripter can get interesting behavior out of NPCs, but not in any way that would be improved by extra AI processing hardware.

  • Re:hm (Score:3, Interesting)

    by beaverfever ( 584714 ) on Thursday September 07, 2006 @09:36AM (#16058739) Homepage
    "I dont think we are going to get any good AI until it has some method of "learning""

    That would be great, but in the meantime other approaches are available. Going back several years, Unreal Tournament had the capability of customising individual bots' gameplay style with several different parameters beyond mere difficulty levels, and I found that this was very much worth the effort. A large library of custom characters could be saved, and they could be selected randomly to add unpredictability to gameplay with bots. If you notice one bot is an idiot (or unbeatable), you can adjust that specific character.

    Admittedly I am not a super-hardcore gamer, but I have played several different titles. UT is the only game I have seen this bot customisation feature in.
