Valve's New Direction On Multicore Processors
illeism writes "Ars Technica has a good piece on Valve's about-face on multithreaded, multicore programming. From the article: '...we were treated to an unveiling of the company's new programming strategy, which has been completely realigned around supporting multiple CPU cores. Valve is planning on more than just supporting them. It wants to make the absolute maximum use of the extra power to deliver more than just extra frames per second, but also a more immersive gaming experience.'"
GPU Death (Score:2, Insightful)
Of course, the problem is that my AMD 64 3200 will do DirectX 8 with my old GeForce, and will do DirectX 10 if I buy the new ATI card after Vista ships, since it's the video card that handles that. But if I buy a new AMD multicore CPU with GPU instructions, will I have to upgrade my processor rather than my video card to get new features? And if I do, what will the processor price points be, given that new Intel/AMD extreme chips cost $1000 at launch while bleeding-edge graphics cards cost $500?
As of right now, I get a big boost in game performance just by replacing my old video card with a new one.
Re:Why is this news? (Score:4, Insightful)
While they are obviously not doing anything that supercomputer programmers didn't invent 30 years ago, they are leading their industry into the future, err, present. Though the article's details are pretty thin, it's clear that they've already gone beyond module-level threading (sound, AI, graphics) to something that sounds more like work queues. Done right, those can scale to hundreds of threads, as on early supercomputers, although it doesn't sound like Valve is dealing with cache-sharing problems yet, and those could bite far sooner. I'm hoping hardware and language extensions will help mitigate that somewhat, at least on the read-sharing side.
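To make the work-queue idea concrete, here's a minimal sketch in Haskell (the language discussed below): a fixed pool of worker threads pulls tasks from a shared channel instead of each subsystem owning one thread. The job counts and the squaring "work" are made up for illustration; only GHC's standard Control.Concurrent machinery is used.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Monad (forM_, replicateM, replicateM_)

-- A minimal work queue: N workers pull tasks from one shared channel,
-- so load balances across cores instead of one thread per subsystem.
main :: IO ()
main = do
  jobs    <- newChan                  -- shared queue of work items
  results <- newChan                  -- queue of finished results
  let nWorkers = 4
      nJobs    = 16
  replicateM_ nWorkers $ forkIO $     -- spawn the worker pool
    let loop = do
          n <- readChan jobs          -- block until a task arrives
          writeChan results (n * n)   -- "process" the task
          loop
     in loop
  forM_ [1 .. nJobs] (writeChan jobs) -- enqueue all the tasks
  total <- sum <$> replicateM nJobs (readChan results)
  print total                         -- sum of squares 1..16 = 1496
```

The total is the same no matter which worker handles which task, which is the point: the queue, not the module structure, decides where work runs.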
Personally, my bet is on languages like Haskell. Purely functional programming makes multithreading easy, and for the imperative bits it has transactional memory (no locks, no deadlocks, and finally composability even with multithreading). Haskell itself may be too slow, so we may need to find a strict variant (laziness is basically the biggest performance problem Haskell has) for it to be practical.
I think OCaml [inria.fr] has a much better chance of becoming mainstream than Haskell. For one thing, I know of programs written in OCaml by people outside the PL community. For Haskell, outside of compilers and libraries, the only thing I can think of is Darcs [darcs.net], and that project is having all sorts of trouble tracking down and fixing performance problems (I love darcs, though!). I'm not sure a purely lazy language will ever map well to a soft-realtime media app such as a game. Also, when you really do need delayed evaluation (i.e., laziness), ML derivatives have closures and higher-order functions, which let you implement it easily enough.