Intel

Intel's Dual-core strategy, 75% by end 2006 306

DigitumDei writes "Intel is moving ahead rapidly with their dual core chips, anticipating 75% of their chip sales to be dual core chips by the end of 2006. With AMD also starting to push their dual core solutions, how long until applications make full use of this? Some applications already make good use of multiple CPUs, and of course multiple applications running at the same time instantly benifit. Yet the most CPU-intensive applications for the average home machine -- games -- still mostly do not take advantage of this. When game manufacturers start to release games designed to take advantage of this, are we going to see a huge increase in game complexity/detail, or is this benifit going to be less than Intel and AMD would have you believe?"
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by zzmejce ( 756372 ) on Wednesday March 02, 2005 @08:43AM (#11822497) Homepage
    I hope sol.exe will become dual-core aware soon.
    • Me too (Score:5, Funny)

      by imrec ( 461877 ) on Wednesday March 02, 2005 @08:57AM (#11822606) Homepage
      Those bouncing cards STILL leave trails at the end of a game! REFRESH! GAWDAMNIT! REFRESH!!
    • Yeah, I have to admit that there is a super geeky little kid somewhere inside me that thinks a multithreaded version of solitaire, or freecell, or hearts, would be REALLY cool.

        Yeah, I have to admit that there is a super geeky little kid somewhere inside me that thinks a multithreaded version of solitaire, or freecell, or hearts, would be REALLY cool.

        FWIW I wrote a version of Tetris that was multi-threaded, but it was in Occam which makes that sort of thing trivial. (Of course, the fact that Occam doesn't have any data structures other than the array makes it a PITA to do anything more complex :( )
  • Dual Core Gaming (Score:2, Interesting)

    by carninja ( 792514 )
    One has to wonder if this is going to provide Intel with a competitive edge against Sony's Cell processor in the gaming front...
    • Since no plans have yet been announced to use the Cell in PCs-- so far it seems only PS3 game systems and very high-end IBM POWER business workstations will be taking advantage of it-- that wouldn't seem to make a whole lot of sense.
  • dual cores (Score:3, Insightful)

    by lkcl ( 517947 ) <lkcl@lkcl.net> on Wednesday March 02, 2005 @08:44AM (#11822506) Homepage
    less heat generated. more bang per watt.
    • Re:dual cores (Score:3, Informative)

      by gl4ss ( 559668 )
      not automatically.

      all else equal.. two cores, two times the power, two times the heat..

      • Re:dual cores (Score:5, Informative)

        by sbryant ( 93075 ) on Wednesday March 02, 2005 @09:09AM (#11822691)

        all else equal.. two cores, two times the power, two times the heat..

        You haven't been paying attention! Go back and read this article [informationweek.com] again (about AMD's demo of their dual core processor). While you're at it, read the related /. article [slashdot.org].

        The dual core processors use nowhere near double the power and produce nowhere near double the heat.

        -- Steve

      • Re:dual cores (Score:3, Insightful)

        if the 2 cores can share L1 and L2, then it's less than "twice the power"...and given the close distances between the 2, it's not hard to create a high-speed connect that will equate Shared L1 to Local L1 speeds.
  • by nounderscores ( 246517 ) on Wednesday March 02, 2005 @08:45AM (#11822507)
    It's just that it's called a GPU, sits on a special card, on a special slot and is sold to you regularly about once every six months for an ungodly amount of money.

    It would be interesting if games were rewritten to run with the game logic on one core, the graphics on another core and the networking code on a third core of a multicore chip...

    Hey. You could even have a mega-multicore chip and do first person shooters with realtime raytracing... each core is responsible for raytracing a small area of the screen. I'm sure that there's a company working on this. I saw a demo video in a computer graphics lecture. I'll have to check my notes.
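    (A rough sketch of that screen-splitting idea, just to make it concrete -- traceRay() below is a hypothetical stand-in for a real per-pixel tracer, and C++11 std::thread is used for brevity:)

        #include <algorithm>
        #include <cstdint>
        #include <thread>
        #include <vector>

        constexpr int kWidth = 640, kHeight = 480;

        // Hypothetical per-pixel tracer; a real one would intersect scene geometry.
        uint32_t traceRay(int x, int y) {
            return (uint32_t(x * 255 / kWidth) << 16) | (uint32_t(y * 255 / kHeight) << 8);
        }

        // Each core renders one horizontal band of the frame.
        void renderFrame(std::vector<uint32_t>& fb, unsigned cores) {
            std::vector<std::thread> workers;
            const int band = (kHeight + int(cores) - 1) / int(cores);
            for (unsigned c = 0; c < cores; ++c) {
                const int y0 = int(c) * band;
                const int y1 = std::min(kHeight, y0 + band);
                workers.emplace_back([&fb, y0, y1] {
                    for (int y = y0; y < y1; ++y)
                        for (int x = 0; x < kWidth; ++x)
                            fb[y * kWidth + x] = traceRay(x, y);
                });
            }
            for (auto& w : workers) w.join();   // frame is done once every band finishes
        }

        int main() {
            std::vector<uint32_t> fb(kWidth * kHeight);
            renderFrame(fb, std::max(1u, std::thread::hardware_concurrency()));
        }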
    • Maybe split AI and Physics into separate threads... networking really doesn't need it yet though :)
    • Hey. You could even have a mega-multicore chip and do first person shooters with realtime raytracing... each core is responsible for raytracing a small area of the screen. I'm sure that there's a company working on this. I saw a demo video in a computer graphics lecture. I'll have to check my notes.

      You will see this when the processing power of a current A64 or P4 goes for around $2! There is a reason that current GPUs look the way that they do -- it is a LOT more efficient than ray-tracing.

      What you spe

    • It isn't really the game itself that needs to be written to take advantage of a second CPU (or whatever), it's the code that's always being reused (either something inhouse, or the engine that they're using).

      People are lazy, and when things work as they are today, most companies would rather focus on releasing the game ASAP than spend a lot of time recoding what they've already got...

      It comes down to how much money they can make in as little time as possible.

      But, of course, once a company starts pushi
    • Actually, for what it's worth I'm writing a game in my free time which already splits rendering and (physics/game logic) into two threads. The idea being that the physics runs while the rendering thread is blocking on an opengl vsync. While the behavior is synchronous, it runs beautifully on both single and dual processor machines.

      In principle this should have detrimental effects on single processor machines but my relatively meager 1.3 ghz powerbook plays beautifully at 30 fps and 60 physics frames per se
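      (For what it's worth, the shape of that split might look roughly like this -- stepPhysics/drawScene/swapBuffersWithVsync are hypothetical stand-ins for engine code, and C++11 threads keep the sketch short:)

        #include <atomic>
        #include <chrono>
        #include <mutex>
        #include <thread>

        struct GameState { /* positions, velocities, ... */ };

        std::mutex stateMutex;
        GameState sharedState;
        std::atomic<bool> running{true};

        // Hypothetical stand-ins for real engine code.
        void stepPhysics(GameState&) {}
        void drawScene(const GameState&) {}
        void swapBuffersWithVsync() { std::this_thread::sleep_for(std::chrono::milliseconds(33)); } // stand-in for a real vsync wait

        // Fixed-rate physics, independent of the frame rate.
        void physicsThread() {
            auto next = std::chrono::steady_clock::now();
            while (running) {
                {
                    std::lock_guard<std::mutex> lock(stateMutex);
                    stepPhysics(sharedState);
                }
                next += std::chrono::microseconds(16667);   // ~60 physics steps per second
                std::this_thread::sleep_until(next);
            }
        }

        // Render loop: copy the state out, then block on "vsync" while physics keeps running.
        void renderLoop() {
            while (running) {
                GameState snapshot;
                {
                    std::lock_guard<std::mutex> lock(stateMutex);
                    snapshot = sharedState;
                }
                drawScene(snapshot);
                swapBuffersWithVsync();
            }
        }

        int main() {
            std::thread physics(physicsThread);
            std::thread render(renderLoop);
            std::this_thread::sleep_for(std::chrono::seconds(2));   // pretend the game ran for a bit
            running = false;
            physics.join();
            render.join();
        }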
      • Multithreading IS hard if you are sharing any state between the threads. And the difficulty of debugging multithreaded issues in a large/complex application (i.e., any commercial game these days) goes up by at least an order of magnitude with the introduction of true multi-threading via 2 cores. On a single processor machine, you can get away with more because you don't have true concurrency. And in your particular case, you actually have no concurrency, so it is even easier. But if you want to be truely
    • by JSBiff ( 87824 ) on Wednesday March 02, 2005 @09:11AM (#11822710) Journal
      I would like to see a more multi-threaded approach to game programming in general, and not all the benefits would necessarily be about performance.

      One thing that has bugged me a long time about a lot of games (this has particular relevance to multi-player games, but also single player games to some extent) is the 'game loading' screen. Or rather, the fact that during the 'loading' screen I lose all control of, and ability to interact with, the program.

      It has always seemed to me, that it should be possible, with a sufficiently clever multi-threaded approach, to create a game engine where I could, for example, keep chatting with other players while the level/zone/map that I'm transitioning to is being loaded.

      Or maybe I really want to just abort the level load and quit the game, because something important in Real Life has just started occurring and I want to just kill the game and move on. With most games, you have to wait until it is done loading before you can then quit out of the game.

      In other words, even ignoring performance benefits for a moment, if a game engine is correctly multi-threaded, I could continue to have 'command and control', and chat, functionality while the game engine, in another thread, is loading models and textures.
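      (A bare-bones sketch of that idea -- loadOneAsset() and pumpChatAndInput() are hypothetical placeholders for engine code, and C++11 threads are used for brevity:)

        #include <atomic>
        #include <thread>

        std::atomic<bool> loadFinished{false};
        std::atomic<bool> abortRequested{false};

        // Hypothetical placeholders for real engine code.
        void loadOneAsset(int) { /* read a model or texture from disk */ }
        bool pumpChatAndInput() { /* chat + UI; return false if the user quits */ return true; }

        void loaderThread(int assetCount) {
            for (int i = 0; i < assetCount && !abortRequested; ++i)
                loadOneAsset(i);            // the level streams in on this core
            loadFinished = true;
        }

        int main() {
            std::thread loader(loaderThread, 500);
            // The player keeps command-and-control (and chat) while the level loads.
            while (!loadFinished) {
                if (!pumpChatAndInput()) {  // "something important in Real Life" -- just quit
                    abortRequested = true;
                    break;
                }
            }
            loader.join();                  // loader finished or aborted cleanly
        }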
      • by nounderscores ( 246517 ) on Wednesday March 02, 2005 @09:29AM (#11822865)
        In other words, even ignoring performance benefits for a moment, if a game engine is correctly multi-threaded, I could continue to have 'command and control', and chat, functionality while the game engine, in another thread, is loading models and textures.

        That would put the pressure back where it should be - on the level designers - to make sure that each segment was challenging enough so that a player couldn't pass through two loadzones simply by running so fast that the first zone hasn't fully loaded yet and wind up in a scary blank world full of placeholder objects.
        • Yeah, they need to take some ideas from http://maps.google.com and load as needed. If they loaded far enough out, you wouldn't have to worry about getting from one point to the next too fast and arriving before it loaded. I remember a 3D chat world called AlphaWorld that would load as you go, and that was 10 years ago. Why hasn't this stuff improved since?
      • One thing that has bugged me a long time about a lot of games (this has particular relevance to multi-player games, but also single player games to some extent) is the 'game loading' screen. Or rather, the fact that during the 'loading' screen I lose all control of, and ability to interact with, the program.

        It has always seemed to me, that it should be possible, with a sufficiently clever multi-threaded approach, to create a game engine where I could, for example, keep chatting with other playe

    • I don't know how valuable that would be. For networking, I think it's much better to do everything on one CPU, much better than using two CPUs. With 2 CPUs, if you're doing different tasks on the same data, the cache coherency mechanism will update the CPU caches as needed, and you might be _losing_ performance

      (I don't know if it's exactly like that; it's one of the reasons why SMP is bad if you want to route traffic, unless you "attach" the IRQ the network card is using to a single CPU)
    • by Rhys ( 96510 ) on Wednesday March 02, 2005 @09:20AM (#11822773)
      Beyond the GPU, any intensive computation application gets benefits from the second CPU.

      Our local (to UIUC) parallel software master, working on the Turing Xserve cluster, is pulling about 95% (I think, don't quote me) of theoretical peak performance in Linpack running on 1 CPU on 1 Xserve. Bring that up to both CPUs in one and he said it dropped to around 50%.

      Why? The OS has to run somewhere. When it's running, that processor is stuck with it. The other processor is stuck waiting for the OS, and then things can pick up again.

      Now, we haven't yet finished tuning the systems to make the OS do as little as possible. (They're still running GUIs, so we can remote desktop into them, among other things.) But still, that's quite a performance hit!

      He said two machines running 1 CPU each over myrinet were still in the 90%ish of theoretical peak.

      So can we quit rehashing this stupid topic every time dual core CPUs come up? Yes, it'll help. No, it won't double your game performance (unless it's written for a dual-core CPU), and it probably won't even double it then, because there's still TeamSpeak/Windows/AIM/virus scan/etc. running that need CPU time.
    • It would be interesting if games were rewritten to run with the game logic on one core, the graphics on another core and the networking code on a third core of a multicore chip...

      Isn't that what IBM/Sony are proposing with the Cell architecture? Lots of separate cores running dedicated chunks of code?

    • by magarity ( 164372 ) on Wednesday March 02, 2005 @10:13AM (#11823278)
      networking code on a third core

      The CPU waiting on networking, even 1Gbit/sec, is like waiting for a raise without asking. It's so little overhead to a modern CPU that using an entire core to do it is an exercise in silliness. If you are worried about any overhead associated with network encryption, etc., you can just spend $45 on an upgraded NIC with that capability built into its own logic. The CPU never need be bothered.
  • by Anonymous Coward on Wednesday March 02, 2005 @08:45AM (#11822509)

    or is this benifit going to be less

    how long will it be before dual core CPUs boost slashdot editor's ability to spell-check?

  • by Siva ( 6132 )
    Wasn't Quake 3 supposed to be able to take advantage of SMP?
    • Re:Quake 3? (Score:2, Informative)

      by Anonymous Coward

      It did. It was dropped in Doom 3, as it really wasn't that much of a win for the effort.

      Modern games are really limited by bandwidth or by GPU power. CPU power is only really used for game logic, which isn't terribly complex compared to the other parts of a game.

      • One has to wonder then, why hasn't anyone introduced a video card that uses SMP?

        It makes sense to me.

        If they can't release a GPU that's fast enough, use more GPUs.

      • CPU power is only really used for game logic, which isn't terribly complex compared to the other parts of a game.
        Game logic (I'm talking about actual AI and physics and economic systems and RPG code, not "aim -> shoot") is complex, compared to all graphics operations. It's just that high end graphics needs so much more brute force to get through all this data. The hard part is getting the most out of your hardware. The actual complexity is laughable in most cases.
  • Well... (Score:3, Insightful)

    by Kn0xy ( 792482 ) * <knoxville@xMOSCOWpd8.net minus city> on Wednesday March 02, 2005 @08:46AM (#11822521) Homepage
    If they're going to be that ambitious with their sales, I hope they are considering pricing the chips in a range that anyone can afford and is willing to pay.
    • As always, I'm looking forward to dropping prices on last week's tech. Getting a nice single core CPU on the cheap, as they fall out of fashion.
    • Re:Well... (Score:3, Informative)

      by Jeff DeMaagd ( 2015 )
      Just like many other advancements in CPU, yes, people will be able to afford them, if not right away, pretty quickly.

      I think the initial pricing for a dual core 2.8 GHz chip is about $250. 3.0 & 3.2GHz will be available at higher prices, I think an extra $100 per step.
  • by rastan ( 43536 ) * on Wednesday March 02, 2005 @08:46AM (#11822524) Homepage
    AFAIK memory latency/bandwidth is currently the limiting factor in computation speed. Dual core processors will not change this, but will make the gap even bigger.
    • by MindStalker ( 22827 ) <mindstalker AT gmail DOT com> on Wednesday March 02, 2005 @09:07AM (#11822671) Journal
      Not necessarily: as both cores share the same memory controller and registered memory, latency from core to core is essentially zero. I wonder if someone could write some really smart code that has one core doing all memory prefetching and the second core doing the actual computations. Could be interesting.
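      (A toy version of that prefetch-on-one-core idea, just to make it concrete; it relies on GCC/Clang's __builtin_prefetch, and as the replies note, whether it actually wins anything is doubtful:)

        #include <atomic>
        #include <cstddef>
        #include <numeric>
        #include <thread>
        #include <vector>

        int main() {
            std::vector<double> data(1 << 22, 1.0);
            std::atomic<bool> done{false};

            // "Prefetch core": run ahead of the consumer, touching cache lines.
            std::thread prefetcher([&] {
                for (std::size_t i = 0; i < data.size() && !done; i += 8)
                    __builtin_prefetch(&data[i]);
            });

            // "Compute core": here just a serial sum over the same array.
            double sum = std::accumulate(data.begin(), data.end(), 0.0);
            done = true;
            prefetcher.join();
            return sum > 0.0 ? 0 : 1;
        }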
      • Sorry, I was speaking of the AMD design of course; Intel's memory controller is off chip (and I believe not shared as well).
      • The idea of using a second core to prefetch is not really a new idea, and actually (at least for Intel chips) is not really a smart idea.

        A more useful practice is the use of speculative prefetching on SMT (i.e. Hyper-Threading) cpus, where one thread runs the code, and the other thread speculates ahead issuing prefetch instructions. Of course to really support this well you need to have a compiler optimized for generating a speculative thread to run ahead of the primary thread.

        All this makes programming
    • Buddy, some of us have AMD.

      Come on in, the HyperTransport is fine. Care for a piña-on-chip-memory-controller?
    • by Lonewolf666 ( 259450 ) on Wednesday March 02, 2005 @11:27AM (#11824129)
      Not as much as you imagine.
      Compare an Athlon64/Socket939 to an Athlon64/Socket754 with the same clock speed. The Socket939 version has twice the memory bandwidth, but on average only 10% better performance according to AMD's P-Rating.
      Now consider a dual core Athlon64/Socket939 with the same clock speed, where the two cores share the higher memory bandwidth. I would expect this chip to be as fast as two Athlon64/Socket754, or 80% faster than a single core Socket939 model.
      Actually, clock speed will be a greater limitation:
      AMD has announced that the dual core versions will run at 400-600MHz less to reduce the heat output.
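      (Spelling that estimate out under the parent's own assumptions:)

          single-core S939 ≈ 1.1 x single-core S754; a dual core sharing the S939 bandwidth ≈ 2 x S754
          => 2.0 / 1.1 ≈ 1.8, i.e. roughly the 80% figure above, before the 400-600MHz clock reduction is applied.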

  • by Reverant ( 581129 ) on Wednesday March 02, 2005 @08:47AM (#11822531) Homepage
    When game manufacturers start to release games designed to take advantage of this, are we going to see a huge increase in game complexity/detail
    No, because most games depend more on the GPU than the CPU. The CPU is left to do tasks such as opponent AI, physics, etc. -- stuff that the dedicated hardware on the graphics card can't do.
    • by EngineeringMarvel ( 783720 ) on Wednesday March 02, 2005 @08:57AM (#11822602)
      Your statement is true, but I think you missed the point the article poster was trying to get across. Currently, games are written to use computer resources that way. If the code were written differently, games could allocate some of the graphics responsibilities to the 2nd CPU instead of all of it going to the GPU. The 2nd CPU could be used to help the GPU. Allocating more of the now-available (2nd CPU) resources to graphics allows more potential in graphics. That's what the article poster wants to see: that game resource allocation written into the game's code be changed to use the 2nd CPU to help enhance graphics in the video game.
      • GPUs are at least one order of magnitude faster at doing the kind of operations required for surface-based rendering than current CPUs. Adding a CPU to the mix will make very little difference.
      • Technically, depending on whose definition you follow the GPU is a second CPU in the computer. One that is dedicated to graphics, and like the above poster said, will usually be way better than a general CPU.

        On the other hand, why does everyone see an "increase in game complexity/detail" as purely a Graphics issue?

        A second CPU could be devoted to handling Physics, or AI as you point out, which could also improve the games complexity and detail. While its ramifications might not be directly visual "eye c
    • In my experience doing performance tuning, most games tend to be CPU (and/or memory-bandwidth) bound on their common configurations. Sure, you can always concoct cases where this isn't true (e.g. slow video card in super-fast PC, insane resolutions or pathological scenes), but it does tend to be broadly the case.

      This is partly because it's much easier to tune to a GPU budget. On the PC you can recommend different resolutions and/or anti-aliasing modes and instantly have a dramatic impact on fill-rate requi
    • The cpu is left to do tasks such as opponent AI

      Funny that you call that a "basic task."

      Game AI can easily use all the computing power you can throw at it. Look at how much CPU it takes to beat the best players at chess... And that has significantly less potential computational strategy involved than, say, a realistic tactical war sim...

      The problem is that most current games these days are tests of reflexes and memory. Few games employ adaptive strategy. Of the games that do, I can't think of any that us
    • But there are some tasks that can be done by both CPU and GPU but are generally assigned to the GPU. For instance, you can generate a stencil shadow volume in a vertex shader... it's just very wasteful of the GPU. You can also animate characters on the GPU, but they have to be retransformed to do multi-pass effects. So if the game is GPU-bound, a good idea is moving these tasks to the CPU.

      Honestly, working on a dual-core CPU, you could create 2 threads-- 1 that just does character animation and silhouet

  • relevant article (Score:5, Informative)

    by antonakis ( 818877 ) on Wednesday March 02, 2005 @08:48AM (#11822537)
    I don't know if it has been referenced here before, a very interesting and enlightening article: http://www.gotw.ca/publications/concurrency-ddj.htm [www.gotw.ca]
    • Re:relevant article (Score:3, Informative)

      by prezninja ( 552043 )

      I don't think the parent did enough to sell this article to the masses reading through, although it is an excellent reference.

      The article linked to by the parent ("The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software") should be read, and is of particular interest to developers.

      The article draws a very good picture of how the trend towards multi-core systems will require developers to rethink the way they design their applications if they want to continue taking advantage of future

  • by Gopal.V ( 532678 ) on Wednesday March 02, 2005 @08:48AM (#11822541) Homepage Journal
    AMD demo'd [amd.com] their dual core x86 a year ago. Also from what I read, the Pentium extreme is NOT going to share the memory controller - which means unlike the AMD, we might need a new motherboard for the dual core ones (well, AMD promised that we wouldn't). So this is costlier, uglier and more power hungry.

    All in all I see that Intel is going down unless they do something quick. And remember Competition is good for the Customer.

  • Pretty soon (Score:2, Insightful)

    by PeteDotNu ( 689884 )
    Once multi-core chips start getting into home computers, the game developers will have a good justification for writing thread-awesome programs.

    So I guess the answer to the question is, "pretty soon."
  • by Anonymous Coward on Wednesday March 02, 2005 @08:52AM (#11822563)
    I find this interesting, every machine Apple sells except at the definite low end is dual CPU SMP now, and it's been this way for awhile. Now Intel/AMD seem to be realizing "oh yeah, dual cpus, maybe that's something we should start targeting for the mass market instead of just the high end" (though AMD seems to be pretty comfy with the idea already). I wonder why Apple doesn't seem interested in dual cores though. Intel/AMD seem to be treating multicore tech as their way of getting SMP out of the power-user range, Apple doesn't seem to want to have anything to do with it even though POWER has had multicore ability for a really long time. What's up with this, is there something I'm missing?
    • What makes you think Apple isn't interested in dual core? They haven't released any machines with dual core CPUs, because none are available. I don't know what IBM's plans are with dual core PPC970 derivatives, but FreeScale are expected to launch a dual-core G4-class CPU Real Soon Now(TM), and I wouldn't be at all surprised to find it appearing in the next PowerBook revision.
    • Apple has offered dual-CPU systems for a long time, but they are more than just a company for teachers to buy computers from. They also sell systems to graphic artists, publishing houses and many other places that benefit from dual-CPU systems. It's just the Apple shotgun approach: they are aiming at their market, which includes many levels of users. It's not their intention that Grandma should have a dual CPU 64bit system (unless she is a Lightwave user looking to decrease render times). Multiple core CP
    • I don't mean to nitpick, but Apple only sells 3 dual processor machines to the consumer (not including Xserves), as opposed to 14 machines with a single CPU; if you count only the high end, you still have something like 6 or 7 single to 3 dual.
      However, you are right: OS X really does love multiple CPUs, and Apple developers have for a long time been able to justify coding for this.
      Plus, IIRC, IBM is working on a dual core PowerPC chip right now, which I expect Apple will be more than happy to take advantage of.
  • ... and more everywhere else. Games continue to get most of their good stuff from the GPU, not the CPU. It ain't that the CPU isn't important, but it's not going to make a huge difference all by itself.

    What I hope to see, but don't expect, is better prioritization of CPU requests. If you have something high-priority going on, like a full screen video game, recording a movie or ripping a CD, I'd like to see the antivirus and other maintenance tasks handled by the other core, or even put on hold. My person
  • Hmm? (Score:5, Insightful)

    by Erwos ( 553607 ) on Wednesday March 02, 2005 @08:53AM (#11822573)
    "how long until applications make full use of this"

    Full use? Probably never. There are always improvements to be made, and multi-threaded programs are a bitch and a half to debug, at least in Linux. Making "full use" of SMP would _generally_ decrease program reliability due to complexity, I would imagine.

    But, with an SMP-aware OS (Win2k, WinXP Pro, Linux, etc.), you'll definitely see some multi-tasking benefits immediately. I think the real question is, how will Microsoft adjust their licensing with this new paradigm? Will it be per-core, or per socket/slot?

    I'm going to go out on a limb and predict that Longhorn will support 2-way SMP even for the "Home" version.

    -Erwos
    • Not much of a limb, considering they've already announced that they are considering counting a CPU package as 1 processor for licensing even if it has dual cores. So yes, "Home" versions will assuredly support 2 processors.
  • For example, on the Intel HT processors, all I have to do is write my applications to use multiple threads for operations that are CPU intensive and voila! I have almost doubled the speed of my app. Otherwise, a single thread app will only use one of the cores.

    Often, it's almost trivial to write an app as a multi-threaded app. The only difficult part is when the problem your application is solving does not lend itself well to parallelization. So sequential problems don't really benefit from it.

    However,
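    (A minimal illustration of that point -- processRange() below is a made-up stand-in for the CPU-intensive work, and C++11 threads keep it short:)

      #include <cstddef>
      #include <functional>
      #include <thread>
      #include <vector>

      // Stand-in for whatever CPU-intensive per-element work the app really does.
      void processRange(std::vector<float>& v, std::size_t begin, std::size_t end) {
          for (std::size_t i = begin; i < end; ++i)
              v[i] = v[i] * v[i] + 1.0f;
      }

      int main() {
          std::vector<float> data(10000000, 2.0f);
          const std::size_t mid = data.size() / 2;

          // One half on a second thread (second core / HT sibling), the other half here.
          std::thread other(processRange, std::ref(data), std::size_t(0), mid);
          processRange(data, mid, data.size());
          other.join();
      }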
  • It's just _Dual_ (Score:2, Insightful)

    by infofarmer ( 835780 )
    Oh, come on, it's just dual; it's just a marketing trick. Speed has been increasing exponentially for years on end, and now we're gonna stand still at the word "Dual"? If Intel/AMD devise a way within reason to exponentially increase the number of cores in a CPU (which I strongly doubt), that'll be a breakthrough. But for now it's just a way to keep prices high without inventing anything at all. WOW!
  • Complexity/detail (Score:4, Insightful)

    by Glock27 ( 446276 ) on Wednesday March 02, 2005 @08:59AM (#11822620)
    "are we going to see a huge increase in game complexity/detail?"

    If you consider a factor of about 1.8 (tops) "huge".

  • by bigtallmofo ( 695287 ) on Wednesday March 02, 2005 @08:59AM (#11822621)
    Check your licensing agreements before you buy one of these dual-core processors. Make sure that your software vendor isn't going to double the price on you.

    Oracle and others [com.com] have announced plans to increase their revenue by charging people for multiple cores in their single processor.
    • Ouch .. do you mean I'll have to provide *twice* the sourcecode of my progs ? Will a symlink do fine ?
    • by Anonymous Coward
      When dual-core procs become the norm, Oracle will wonder why everybody stopped buying their software, and will adjust their pricing accordingly. Oracle has made a science out of accurately determining what price the market is actually willing to bear, just a smidgeon short of the market telling them to "F--- Off" and that's what their pricing structure will be. Oracle keeps the "riff raff" out of their customer base that way, and only wishes to deal with the serious players who must have their database when
  • OpenGL Performer (Score:2, Interesting)

    by Anonymous Coward
    This problem has already been solved by OpenGL Performer [sgi.com]

    Applications, even 'games', written using Performer, will immediately benefit from multiple CPUs.
  • Fairly simple... (Score:4, Insightful)

    by Gadgetfreak ( 97865 ) on Wednesday March 02, 2005 @09:03AM (#11822643)
    I think as long as the hardware becomes established, people will write software for it. From time to time, hardware manufacturers have to push the market in order to get the established standard to jump to the next step.

    It's like what Subaru did when they decided to make all their vehicles All Wheel Drive. It was a great technology, but most people at the time just didn't care to pay extra for it. By making it a standard feature, the cost increase is significantly reduced, and provided that the technology is actually something functional, the market should grow to accept it.

  • Games and multi core (Score:5, Interesting)

    by Anonymous Coward on Wednesday March 02, 2005 @09:03AM (#11822645)
    As already mentioned, games already make use of the GPU and the CPU, so we're fairly used to some multiprocessor concerns.

    To say that most PC games are GPU bound however is a mistake - most games I've come across (and worked on as a games core technology/graphics programmer) are CPU bound - often in the rendering pipeline trying to feed that GPU.

    Anyhow, games are already becoming dual-core aware. Most if not all multiplayer games make use of threads for their network code -- go dual core (or hyperthreading) and you get a performance win. Again, most sound systems are multi threaded, often with a streaming/decompression thread -- again a win on multi core. These days streaming of all manner of data is becoming more important (our game worlds are getting huge) and so again we will be (are) making use of dual core there too.

    I personally have spent a fair amount of time performance-enhancing our last couple of games (mostly for HT, but the same applies to true dual core) to make sure we get the best win we can. For example, on dual core machines our games do procedural texture effects on the second core that you just don't get on a single core machine, and still get a 20%-odd win over single core. I'm sure most software houses take this as seriously as us and do the same. It's very prudent for us to do so -- the writing's been on the wall about multi processors being the future of top end performance for a while now.

    At the end of the day, though, we games developers have little choice but to embrace multi core architectures and get the best performance we can. We always build software that pushes the hardware to the full extent of its known limits, because that's the nature of the competition.

    Just think what the next generation of consoles is going to do for the games programmers general knowledge of concurrent programming techniques. If we're not using all of the cores on our next gen XBox or PS3 then our competition will be and our games will suck in comparison.
  • by jbb999 ( 758019 ) on Wednesday March 02, 2005 @09:07AM (#11822666)
    Do these new chips share the highest speed cache? I can think of several ways to make use of them without using traditional threads. For example: set up a pool of threads, each of which just reads a function address from a queue of work and then calls that function, waiting when there is no work. The main program can then just push function pointers onto the queue knowing that a thread will pick up the work.
    I'm thinking that instead of writing something like
        for (int i = 0; i < NumberOfModels; i++) {
            UpdateModelAnimation(i);
        }
    you could write
        ThreadPool* pool = new ThreadPool();
        for (int i = 0; i < NumberOfModels; i++) {
            pool->QueueAsyncCall(UpdateModelAnimation, i);
        }
        pool->WaitForAllToFinish();
    The queueing of work could be made pretty low overhead, and so if there were only a few thousand CPU instructions in the call you'd get a big speed up -- but only if each processor already had the data it was working on in cache. If each core has a separate cache this would be a lot less efficient. Does anyone know?
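    (To make that concrete, here is one possible shape for such a pool, built only from standard C++11 pieces. QueueAsyncCall/WaitForAllToFinish mirror the pseudocode above; everything else is an illustrative assumption, not a real library:)

      #include <algorithm>
      #include <condition_variable>
      #include <functional>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      class ThreadPool {
      public:
          explicit ThreadPool(unsigned workers = std::thread::hardware_concurrency()) {
              for (unsigned i = 0; i < std::max(1u, workers); ++i)
                  threads_.emplace_back([this] { workerLoop(); });
          }

          ~ThreadPool() {
              {
                  std::lock_guard<std::mutex> lock(m_);
                  stopping_ = true;
              }
              cv_.notify_all();
              for (auto& t : threads_) t.join();
          }

          // Mirrors pool->QueueAsyncCall(UpdateModelAnimation, i) from the comment above.
          template <typename Fn, typename Arg>
          void QueueAsyncCall(Fn fn, Arg arg) {
              {
                  std::lock_guard<std::mutex> lock(m_);
                  tasks_.push([fn, arg] { fn(arg); });
                  ++pending_;
              }
              cv_.notify_one();
          }

          void WaitForAllToFinish() {
              std::unique_lock<std::mutex> lock(m_);
              idle_.wait(lock, [this] { return pending_ == 0; });
          }

      private:
          void workerLoop() {
              for (;;) {
                  std::function<void()> task;
                  {
                      std::unique_lock<std::mutex> lock(m_);
                      cv_.wait(lock, [this] { return stopping_ || !tasks_.empty(); });
                      if (stopping_ && tasks_.empty()) return;
                      task = std::move(tasks_.front());
                      tasks_.pop();
                  }
                  task();                                   // run the queued call
                  std::lock_guard<std::mutex> lock(m_);
                  if (--pending_ == 0) idle_.notify_all();  // wake WaitForAllToFinish()
              }
          }

          std::vector<std::thread> threads_;
          std::queue<std::function<void()>> tasks_;
          std::mutex m_;
          std::condition_variable cv_, idle_;
          bool stopping_ = false;
          int pending_ = 0;
      };

    With a pool like that, the loop above works as written; whether the hand-off stays cheap does come down to the cache question at the end.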
    • Yeah right.

      How are you going to process these function results? Do you think they can all write to the video ram at the same time? And how will you address thread exceptions?

      Sure, firing a thousand threads is easy...
      Handling the thread results and errors, and the communication between threads, is the hard part.

      For instance, launch a thousand threads simultaneously to add a new item to a listbox, and make it prone to errors, for instance by making each thread iterate over all items in the listbox while the oth
    • Last I heard, the first dual cores out of the door won't share cache and will be more like two individual dies sharing the same packaging. But they'll be moving onto shared cache a la POWER at a later date.

      Tried to find the Reg link I read a while back, and then found this one: http://news.zdnet.co.uk/hardware/chips/0,39020354,39164618,00.htm [zdnet.co.uk]
  • I'm not convinced that dual-cores are the answer to the problem both Intel and AMD are now having scaling up CPU performance.

    Using dual-core for games, for example, will certainly allow developers to make some enhancements to their games by parallelising independent parts of their engine, e.g. splitting A.I. and physics up, but at the end of the day once you've broken the game down to these parts you're going to be limited by processor speed again. Things can only be sub-divided into smaller tasks so much, once
  • I am going to wait for at least quad core 64bit processors ;)
  • Boon for Game AI (Score:3, Insightful)

    by fygment ( 444210 ) on Wednesday March 02, 2005 @09:10AM (#11822698)
    A lot of posts have quite rightly pointed out that the GPU is currently how games use a "pseudo" dual core. But it seems that what games could be doing now is harnessing the potential of dual core not for graphics, but for game enhancement i.e. better physics and true AI implementations. Realism in games has to go beyond tarting up the graphics.
    • I think that would definitely make a great improvement to most games. One thing that I wonder about, though, is would it be possible (and worthwhile) to have the game (or other application) running on one core whilst the rest of the OS (or environment) is running on the other?

      Having a game (or music/media player, or security application, etc) running on a separate core (if available) sounds like it would bring some sort of improvement. Would it?
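      (On Linux you can experiment with exactly that by pinning a process to one core; Windows has SetProcessAffinityMask for the same purpose. A tiny sketch, not a recommendation -- the scheduler usually balances this fine on its own:)

        #define _GNU_SOURCE 1
        #include <sched.h>
        #include <cstdio>

        int main() {
            cpu_set_t mask;
            CPU_ZERO(&mask);
            CPU_SET(1, &mask);                                       // restrict this process to core 1
            if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {    // pid 0 = the calling process
                std::perror("sched_setaffinity");
                return 1;
            }
            // ...game or media player runs here, leaving core 0 for the OS and background tasks...
            return 0;
        }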

    • Re:Boon for Game AI (Score:3, Interesting)

      by tartley ( 232836 )
      While I agree very strongly with the sentiment that improvements in games have to go beyond tarting up graphics, if considered carefully this exposes a fundamental problem.

      Any aspect of a game may be programmed to scale with the hardware upon which the game is run (e.g. graphics get more detailed, framerates improve, physics is more realistic, AI gets smarter).

      However, the problem here is that if these improvements to the game are in any way substantial rather than superficial - if they actually affect the

  • by mcbevin ( 450303 ) on Wednesday March 02, 2005 @09:16AM (#11822740) Homepage
    The average system is already running a number of different processes at once. Even if most individual applications aren't multithreaded, a dual-core will help not only make the system technically faster but also help hugely with the responsiveness of the system (which is often a far more important factor in the 'feel' of how fast a system is as the user experiences it) whenever processes are running in the background.

    While one might ask whether it makes much useful difference to the 'average' home user, one might ask the same about, say, 4GHz vs 2GHz -- for most Microsoft Word users this makes little difference in any case. However, for most users who really make use of CPU power in whatever form, the dual-core will indeed make a difference even without multi-threaded applications, and it won't take long for most applications where it matters to become multi-threaded, as it's really not that hard to make most CPU-intensive tasks multi-threaded and thus further improve things.

    I for one am looking forward to buying my first dual-CPU, dual-core system (i.e. 4x the power) once the chips have arrived and reached reasonable price levels, and I'm sure that power won't be going to waste.
  • by barrkel ( 806779 ) on Wednesday March 02, 2005 @09:17AM (#11822749) Homepage
    I believe that we're going to see a performance plateau with processors and raw CPU power for the next 5 years or so.

    The only way CPU manufacturers are going to get more *OPS in the future is with many cores, and that's going to require either slower or the same kind of speeds (GHz-wise) as things are today. To get programs to run faster under these circumstances you need some kind of explicitly parallel programming.

    We haven't seen the right level of parallelism yet, IMHO. Unix started out with process-level parallelism, but it looks like thread-level parallelism has beaten it, even though it is much more prone to programmer errors.

    On the other end of the scale, EPIC architectures like Itanium haven't been able to outcompete older architectures like x86, because what EPIC makes explicit can be achieved implicitly with clever run-time analysis of code. Intel (and, of course, AMD) are their own worst enemy on the Itanium front. All the CPU h/w prediction etc. removes the benefit of the clever compiler needed for EPIC.

    Maybe some kind of middle ground can be reached between the two. Itanium instructions work in triples, and you can effectively view the instruction set as programming three processors working in parallel but with the same register set. This is close (but not quite the same) to what's going to be required to efficiently program multi-core CPUs, beyond simple SMP-style thread-level parallelism. Maybe we need some kind of language which has its concurrency built in (something sort of akin to Concurrent Pascal, but much more up to date), or has no data to share and can be decomposed and analyzed with complete information via lambda calculus. I'm thinking of the functional languages, like ML (consider F#, which MS Research is working on), or Haskell.

    With a functional language, different cores can work on different branches of the overall graph, and resolve them independently, before they're tied together later on.

    It's hard to see the kind of mindset changes required for this kind of thinking in software development happening very quickly, though.

    We'll see. Interesting times.
    • I do not for one moment believe that Amdahl's law won't affect these systems. Dual-core won't be a problem, quad-core probably won't, but I can't see that stuffing in more cores will solve the scalability problem in the long run.
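      (For reference, Amdahl's law is the bound being worried about here: if a fraction p of the work parallelizes across n cores,)

          S(n) = \frac{1}{(1-p) + p/n}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{1-p}

      so with p = 0.9, even 64 cores give S(64) ≈ 8.8 against a hard ceiling of 10 -- which is exactly the scaling worry above.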
    • by shapr ( 723522 ) on Wednesday March 02, 2005 @10:27AM (#11823460) Homepage Journal
      This is discussed in great detail in this thread on lambda-the-ultimate.org: The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software [lambda-the-ultimate.org]. The summary as I see it is:
      • declarative parallelism will always scale better than threads or whatever else
      • micro-optimizations will always be faster than declarative parallelism
      Manual parallelism won't scale well from one core to sixty-four cores, but will be faster in static situations like running on one Cell CPU in the PS3, where the configuration is known at design time of the app.
      This is the same trade-off as manual memory allocation versus garbage collection. Garbage collection is easier and more automatic than manual memory control in C, but if you put enough effort in, a C program will be more efficient than a GC-using program.
      So the essence of the answer is that declarative parallelism gives you development speed, but manual parallelism gives you execution speed. You choose what you want.
      I have a two CPU machine right now, and I'm very much looking forward to the rumored SMP version of Haskell that uses Software Transactional Memory [microsoft.com]. That's gonna rock!
  • anticipating 75% of their chip sales to be dual core chips by the end of 2006

    global warming expected to increase by 75% by the end of 2006
  • Dual core is a godsend!
    As anyone who works with number crunching apps will tell you, having two cores seriously improves your work quality.
    Not because number crunching apps are taking advantage of dual cores.

    It's because now I can set one core to work on those wicked hard numerical calculations while I kick back and watch movies and play music for a few hours. Bliss!

    Nevertheless it would be nice if there were an easier way to make apps use multiple cores. I'd love to be able to speed up my crunching by getti
  • Even if the app you're running isn't multithreaded you'll see some benefit from dual processor. After all, you're always running the OS as well as your application, including your wireless drivers and what have you. If nothing else you're saving a context switch by giving the application its own processor.

  • by gelfling ( 6534 ) on Wednesday March 02, 2005 @09:53AM (#11823070) Homepage Journal
    Seriously. 75%? What do they think that much power will be used for? Do they dream that everyone will suddenly run out and plunk down $2500 for a machine that can run Doom 3 faster than plutonium-doped lightning?

    I think all that power will be used in the same way it always is. Malcontents will write more sophisticated malware. MS will release more shiny glittery gewgaws that do nothing except open up more security holes, and antimalware vendors will write more complex and unwieldy antimalware applications. In the meantime all the corporate suits will demand more cumbersome, elaborate corporate apps that are specifically written for dual core systems, thereby requiring parallel-track applications to be maintained while the old machines the suits abandoned still get cycled through the organization for 3 years. And for the first 12-16 months hardware vendors will experience hardware QA and BIOS screw-up hell as they try to appease the 15 year olds in the focus groups who demand 1337 dual core hawtness!!! It will suck and Intel will make billions.
