Valve's New Direction On Multicore Processors

illeism writes "Ars Technica has a good piece on Valve's about-face on multithreaded, multicore programming. From the article: '...we were treated to an unveiling of the company's new programming strategy, which has been completely realigned around supporting multiple CPU cores. Valve is planning on more than just supporting them. It wants to make the absolute maximum use of the extra power to deliver more than just extra frames per second, but also a more immersive gaming experience.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • GPU Death (Score:2, Insightful)

    by Thakandar2 ( 260848 ) on Monday November 06, 2006 @03:25PM (#16739211)
    I realize this was brought up in the story, but I really do think that when AMD bought ATI, they were looking ahead to stuff like this. If AMD started adding ATI GPU instructions to their cores, and you get 4 of them on one socket along with the memory controller, what kind of frame rates and graphical shenanigans will happen then?

    Of course, the problem is that my AMD 64 3200 will do DirectX 8 with my old GeForce, and will do DirectX 10 if I buy a new ATI card after Vista ships, since it's the video card handling that. But if I buy a new AMD multicore CPU with GPU instructions, will I have to upgrade my processor rather than my video card to get new features? And if I do, what will the processor price points be, given that new Intel/AMD extreme chips cost $1000 at launch and bleeding-edge graphics cards cost $500?

    As of right now, I get a big boost in game performance just by replacing my old video card with a new one.
  • Big deal... (Score:2, Insightful)

    by HawkingMattress ( 588824 ) on Monday November 06, 2006 @03:40PM (#16739461)
    Sounds like pure hype. Basically they're saying they have a two-year technical advantage because they've been working on better multithreading schemes and avoiding deadlocks to put multiple cores to better use. How groundbreaking... Never mind that most game companies have obviously been working on that too but simply don't talk about it. They also make it sound like they invented distcc... which reinforces the impression that they're just trying to impress people and show how high-tech they are (not). Almost sounds like a stock-boosting scheme or something along those lines if you ask me.
  • Too Bad... (Score:3, Insightful)

    by ArcadeNut ( 85398 ) on Monday November 06, 2006 @06:14PM (#16742863) Homepage
    Too bad as I'll never buy another Valve game because of Steam.
  • by SnowZero ( 92219 ) on Monday November 06, 2006 @11:22PM (#16747049)
    Yes, this method is an obvious way to get some benefit from a small number of extra hardware threads. But it is *not* future-proof. This approach may give good CPU utilization up to perhaps 10-20 threads, but after that it will start to take a big dive.

    While they are obviously not doing anything that supercomputer programmers didn't invent 30 years ago, they are leading their industry into the future, err, present. Though the article's details are pretty thin, it's clear that they've already gone beyond module-level threading (sound, AI, graphics) to something that sounds more like work queues (a rough sketch of the idea follows after this comment). If done right, those can get you to hundreds of threads, as seen on early supercomputers, although it doesn't sound like Valve is dealing with cache-sharing problems yet, which could cause trouble far sooner. I'm hoping hardware and language extensions will help mitigate that somewhat, at least on the read-sharing side.

    Personally, my bet is on languages like Haskell. Purely functional programming makes multithreading easy, and for the imperative bits it has transactional memory: no locks, no deadlocks, and finally composability even with multithreading (see the STM sketch after this comment). Haskell itself may be too slow, so we may need a non-lazy variant (laziness is basically the biggest performance problem Haskell has) for it to be practical.

    I think OCaml [inria.fr] has a lot better chance of becoming mainstream than Haskell. For one thing, I know of programs written in OCaml that aren't written by members of the PL community. For Haskell, outside of compilers and libraries, the only thing I can think of is Darcs [darcs.net], and that project is having all sorts of issues with determining and fixing performance problems (I love darcs though!). I'm not sure a pure-lazy language will ever map well to a soft-realtime media app such as a game. Also, when you really need delayed evaluation (i.e. laziness), ML derivatives have closures and higher-order functions which allow you to implement it easily enough.
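To make the work-queue idea mentioned above concrete, here is a minimal sketch in Haskell (chosen only because it's the language under discussion in this comment; Valve's engine is C++, and the task contents here are invented for illustration). A fixed pool of worker threads, one per hardware thread, pulls jobs from a shared queue until all queued work is done.

    -- Minimal work-queue sketch (not Valve's actual code; compile with -threaded).
    import Control.Concurrent (forkIO, getNumCapabilities)
    import Control.Concurrent.STM
    import Control.Monad (forever, replicateM_)

    -- A "task" is just an action to run; a real engine would queue typed work
    -- items (visibility checks, AI updates, particle batches, ...).
    type Task = IO ()

    main :: IO ()
    main = do
      queue <- newTQueueIO :: IO (TQueue Task)
      done  <- newTVarIO (0 :: Int)
      caps  <- getNumCapabilities

      -- One worker per hardware thread; each worker repeatedly pulls a task
      -- from the shared queue, runs it, and bumps the completion counter.
      replicateM_ caps . forkIO . forever $ do
        task <- atomically (readTQueue queue)
        task
        atomically (modifyTVar' done (+ 1))

      -- Producer side: enqueue some dummy tasks (stand-ins for per-frame jobs).
      let nTasks = 100 :: Int
      mapM_ (\i -> atomically (writeTQueue queue (putStrLn ("task " ++ show i))))
            [1 .. nTasks]

      -- Block until every queued task has been processed.
      atomically (readTVar done >>= \n -> check (n == nTasks))

The point of the structure is that producers never care how many workers exist, so the same code scales from 2 to N cores simply by changing the pool size.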
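And for the transactional-memory point above: a tiny sketch of Haskell's STM, assuming nothing beyond the standard stm library; the `transfer` function and the two balances are made up for the example. Updates to shared state are grouped into transactions that commit atomically or retry, so there are no locks to forget and nothing to deadlock on, and transactions compose.

    -- Illustrative STM sketch: atomic updates to shared state with no explicit locks.
    import Control.Concurrent.STM

    -- Move an amount between two shared counters in one atomic transaction.
    -- Composable: transfer can itself be combined with other STM actions.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      modifyTVar' from (subtract amount)
      modifyTVar' to   (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      -- Concurrent calls to `atomically (transfer ...)` retry on conflict
      -- instead of deadlocking.
      atomically (transfer a b 40)
      balances <- atomically ((,) <$> readTVar a <*> readTVar b)
      print balances  -- (60,40)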

