
Intel's Dual-core strategy, 75% by end 2006

Posted by CmdrTaco
from the thats-a-lotta-core-ap dept.
DigitumDei writes "Intel is moving ahead rapidly with their dual-core chips, anticipating that 75% of their chip sales will be dual core by the end of 2006. With AMD also starting to push their dual-core solutions, how long until applications make full use of this? Some applications already make good use of multiple CPUs, and of course multiple applications running at the same time benefit instantly. Yet the most CPU-intensive applications for the average home machine, games, still mostly do not take advantage of this. When game manufacturers start to release games designed to take advantage of dual cores, are we going to see a huge increase in game complexity/detail, or is this benefit going to be less than Intel and AMD would have you believe?"
  • Re:dual cores (Score:3, Informative)

    by gl4ss (559668) on Wednesday March 02, 2005 @09:46AM (#11822514) Homepage Journal
    not automatically.

    all else equal.. two cores, two times the power, two times the heat..

  • by rastan (43536) * on Wednesday March 02, 2005 @09:46AM (#11822524) Homepage
    AFAIK memory latency/bandwidth is currently the limiting factor in computation speed. Dual-core processors will not change this; they will make the gap even bigger.
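    A quick way to see the bandwidth wall (my own sketch, not from the parent post): a streaming reduction over an array much larger than cache barely speeds up with a second thread, because both cores contend for the same memory bus.

```cpp
// Sketch: a bandwidth-bound sum scales poorly from one thread to two, since
// both cores share the memory bus. Sizes and timings are illustrative only.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

static double sum_range(const std::vector<double>& v, std::size_t lo, std::size_t hi) {
    return std::accumulate(v.begin() + lo, v.begin() + hi, 0.0);
}

int main() {
    const std::size_t N = std::size_t(1) << 25;     // 256 MB of doubles: far bigger than cache
    std::vector<double> data(N, 1.0);

    auto time_it = [&](int threads) {
        std::vector<std::thread> pool;
        std::vector<double> partial(threads, 0.0);
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < threads; ++i)
            pool.emplace_back([&, i] {
                partial[i] = sum_range(data, i * N / threads, (i + 1) * N / threads);
            });
        for (auto& t : pool) t.join();
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(t1 - t0).count();
    };

    std::printf("1 thread: %.3fs  2 threads: %.3fs\n", time_it(1), time_it(2));
}
```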
  • relevant article (Score:5, Informative)

    by antonakis (818877) on Wednesday March 02, 2005 @09:48AM (#11822537)
    I don't know if it has been referenced here before, but this is a very interesting and enlightening article: http://www.gotw.ca/publications/concurrency-ddj.htm [www.gotw.ca]
  • Re:Dual Core Gaming (Score:4, Informative)

    by mcc (14761) <amcclure@purdue.edu> on Wednesday March 02, 2005 @09:56AM (#11822592) Homepage
    The XBox2 and GameCube are both already known to be using POWER/PowerPC derivatives. Besides which, chip contracts for new consoles are the sort of thing that get worked out years in advance, and they're usually not bought from quite the same stock that PC OEMs are buying from. Intel's plans for their mass-market "by late 2006" lineup really couldn't have any impact on the console world at all at this moment.
  • Re:Quake 3? (Score:2, Informative)

    by Anonymous Coward on Wednesday March 02, 2005 @09:58AM (#11822615)

    It did. It was dropped in Doom 3, as it really wasn't that much of a win for the effort.

    Modern games are really limited by bandwidth or by GPU power. CPU power is only really used for game logic, which isn't terribly complex compared to the other parts of a game.

  • Re:dual cores (Score:5, Informative)

    by sbryant (93075) on Wednesday March 02, 2005 @10:09AM (#11822691)

    all else equal.. two cores, two times the power, two times the heat..

    You haven't been paying attention! Go back and read this article [informationweek.com] again (about AMD's demo of their dual core processor). While you're at it, read the related /. article [slashdot.org].

    The dual core processors use nowhere near double the power and produce nowhere near double the heat.

    -- Steve

  • by raygundan (16760) on Wednesday March 02, 2005 @10:26AM (#11822826) Homepage
    I would think so! All the "big" chess computers (Deep Blue, etc.) have just been massively parallel systems, and chess is one of those things that people have been coding and refining for years. I'm not much of a chess player myself (computers have been kicking my ass since the 1MHz era), but it appears that multiprocessor chess software is already available for end-users:

    Deep Junior 9 and Deep Shredder 9 [chessbase.com] support multiple processors, and should have no trouble on a multicore system.

    Doubling the cores doubles how many moves the engine can evaluate in a given time, and searching possible moves is primarily how chess algorithms work (a rough sketch follows at the end of this comment).

    Plus... Shredder renders a fancy 3D glass chess set for you, making sure your GPU doesn't get lonely with nothing to do.
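    The parallelism here is unusually clean. A minimal sketch of the idea, with a toy stand-in engine (none of this is Deep Junior's or Shredder's actual code): each root move is searched in its own task, so N cores can evaluate roughly N subtrees at once.

```cpp
// Sketch of parallel root-move search: each top-level move is searched in its
// own task, so cores work on independent subtrees. The "engine" (Position,
// legal_moves, search) is a toy stand-in, not any real chess program's code.
#include <algorithm>
#include <future>
#include <vector>

struct Position { int material = 0; };                  // toy board state
struct Move     { int delta = 0; };                     // toy move

static std::vector<Move> legal_moves(const Position&) {
    return {{-1}, {0}, {2}, {1}};                       // pretend move generator
}
static Position apply(Position p, Move m) { p.material += m.delta; return p; }

// Toy negamax: sign is +1/-1 for the side to move.
static int search(Position p, int depth, int sign) {
    if (depth == 0) return sign * p.material;
    int best = -1000000;
    for (Move m : legal_moves(p))
        best = std::max(best, -search(apply(p, m), depth - 1, -sign));
    return best;
}

static Move best_move(const Position& root, int depth) {
    auto moves = legal_moves(root);
    std::vector<std::future<int>> tasks;
    for (Move m : moves)                                // one task per root move; a real
        tasks.emplace_back(std::async(                  // engine would use a thread pool
            std::launch::async,
            [=] { return -search(apply(root, m), depth - 1, -1); }));
    std::vector<int> scores;
    for (auto& t : tasks) scores.push_back(t.get());
    return moves[std::max_element(scores.begin(), scores.end()) - scores.begin()];
}

int main() { best_move(Position{}, 4); }
```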
  • by tc (93768) on Wednesday March 02, 2005 @10:30AM (#11822879)
    In my experience doing performance tuning, most games tend to be CPU (and/or memory-bandwidth) bound in their common configurations. Sure, you can always concoct cases where this isn't true (e.g. a slow video card in a super-fast PC, insane resolutions, or pathological scenes), but it does tend to be broadly the case.

    This is partly because it's much easier to tune to a GPU budget. On the PC you can recommend different resolutions and/or anti-aliasing modes and instantly have a dramatic impact on fill-rate requirements without substantially altering how your game plays. You can also add or remove polygons from models, and swap out shader effects until you get something that fits your budget on your target platform.

    Tuning for CPU is more difficult, because making a sweeping change is likely to have gameplay impact and is harder to do. Changing how often or how deeply the AI thinks, or the level of sophistication of your physics system, is going to have an impact on gameplay, and is certainly a lot more programmer work than just telling your artists to remove a couple of lights. Coming up with more efficient algorithms that deliver identical results requires a lot more hard thinking - and time for that is limited.
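    One common way to duck that problem (my illustration, not something from the parent post) is to time-slice the expensive CPU work: let only a fraction of the AI agents think each frame, trading per-agent reaction time for a predictable per-frame budget.

```cpp
// Sketch: time-slicing AI across frames to fit a CPU budget. Updating only
// every kth agent per frame cuts per-frame AI cost roughly k-fold, at the
// price of each agent thinking less often. All names here are illustrative.
#include <cstddef>
#include <vector>

struct Agent {
    void think() { /* expensive pathfinding, planning, ... */ }
};

class AiScheduler {
    std::vector<Agent>& agents_;
    std::size_t slices_;
    std::size_t frame_ = 0;
public:
    AiScheduler(std::vector<Agent>& a, std::size_t slices)
        : agents_(a), slices_(slices) {}

    void tick() {                                   // call once per frame
        for (std::size_t i = frame_ % slices_; i < agents_.size(); i += slices_)
            agents_[i].think();                     // only 1/slices_ of agents think
        ++frame_;
    }
};

int main() {
    std::vector<Agent> agents(100);
    AiScheduler ai(agents, 4);                      // each agent thinks every 4th frame
    for (int frame = 0; frame < 8; ++frame) ai.tick();
}
```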
  • Re:relevant article (Score:3, Informative)

    by prezninja (552043) on Wednesday March 02, 2005 @10:44AM (#11822976) Homepage

    I don't think the parent did enough to sell this article to the masses reading through, although it is an excellent reference.

    The article linked to by the parent ("The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software") should be read, and is of particular interest to developers.

    The article draws a very good picture of how the trend towards multi-core systems will require developers to rethink the way they design their applications if they want to continue taking advantage of future increases in processing power.

    I was referred to this article yesterday, and it is so good and motivating that I imagine it will be featured in future Slashdot articles.

    It will be appearing in Dr. Dobb's Journal [ddj.com] later this month.
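    The article's point in miniature (my sketch, not Sutter's code): a serial loop gets no faster on a dual core by itself; the concurrency has to be designed in, for example by explicitly splitting the work across however many cores are present.

```cpp
// Miniature version of the article's thesis: CPU-bound work only scales on
// multi-core chips once you restructure it to run in parallel. My sketch,
// not code from the article.
#include <cmath>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

static void work(std::vector<double>& v, std::size_t lo, std::size_t hi) {
    for (std::size_t i = lo; i < hi; ++i)
        v[i] = std::sin(v[i]) * std::cos(v[i]);    // stand-in for real per-item work
}

int main() {
    std::vector<double> data(1 << 22, 0.5);
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2;                     // the call may report 0 (unknown)

    std::vector<std::thread> pool;                 // one explicit slice per core
    for (unsigned c = 0; c < cores; ++c)
        pool.emplace_back(work, std::ref(data),
                          std::size_t(c) * data.size() / cores,
                          std::size_t(c + 1) * data.size() / cores);
    for (auto& t : pool) t.join();
    std::printf("processed %zu items on %u cores\n", data.size(), cores);
}
```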

  • Re:Well... (Score:3, Informative)

    by Jeff DeMaagd (2015) on Wednesday March 02, 2005 @10:45AM (#11822987) Homepage Journal
    Just like many other advancements in CPUs: yes, people will be able to afford them, if not right away then pretty quickly.

    I think the initial pricing for a dual core 2.8 GHz chip is about $250. 3.0 & 3.2GHz versions will be available at higher prices, I think an extra $100 per step.
  • by Anonymous Coward on Wednesday March 02, 2005 @10:48AM (#11823019)
    When dual-core procs become the norm, Oracle will wonder why everybody stopped buying their software, and will adjust their pricing accordingly. Oracle has made a science out of accurately determining what price the market is actually willing to bear, just a smidgeon short of the market telling them to "F--- Off" and that's what their pricing structure will be. Oracle keeps the "riff raff" out of their customer base that way, and only wishes to deal with the serious players who must have their database when no others will do. It's kinda like the world of business jet aircraft... hideously expensive, but there is still enough market demand out there such that the vendors are barely able to keep up with it.
  • by philipgar (595691) <[ude.hgihel] [ta] [2gcp]> on Wednesday March 02, 2005 @10:54AM (#11823078) Homepage
    The idea of using a second core to prefetch is not really a new idea, and actually (at least for Intel chips) is not really a smart idea.

    A more useful practice is the use of speculative prefetching on SMT (i.e. Hyper-Threading) cpus, where one thread runs the code, and the other thread speculates ahead issuing prefetch instructions. Of course to really support this well you need to have a compiler optimized for generating a speculative thread to run ahead of the primary thread.

    All this makes programming much more difficult. My approach to software prefetching (I'm currently involved in research in using SMT and software prefetching in databases) follows a different model, and shows that a software-pipelining approach works remarkably well at hiding the main memory latency of cache misses.

    Of course, this is in a database world where things are ordered. However, this approach could also be applied to a game: instead of iterating over objects and running every detail for each object, you do work on object 1, prefetch object 1's next memory reference, do the second stage of object 2, etc. It needs special consideration to be done properly, but if it's the core of your algorithm, it can have dramatic effects on performance (particularly if your objects do not fit entirely within L2 cache, and require different parts of them to be loaded each iteration).

    While a speculative thread can prefetch these objects too, it yields roughly the same performance boost as simply using software-pipelining techniques, and software pipelining leaves the resources free for a second thread to run simultaneously with the first. Although, if a sufficiently good compiler for speculative prefetch threads is created, that approach is much easier to use for everyday applications.

    Phil
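    The software-pipelining pattern described above, in sketch form (the object layout is hypothetical, and __builtin_prefetch is the GCC/Clang intrinsic; other compilers spell this differently):

```cpp
// Sketch of software-pipelined prefetching: while working on object i, issue
// a prefetch for object i+1 so its cache-miss latency overlaps with useful
// work. GameObject and update() are hypothetical stand-ins.
#include <cstddef>
#include <vector>

struct GameObject {
    double state[64];                        // big enough that each object misses in cache
    void update() { for (double& s : state) s *= 1.0001; }
};

static void update_all(std::vector<GameObject*>& objs) {
    for (std::size_t i = 0; i < objs.size(); ++i) {
        if (i + 1 < objs.size())
            __builtin_prefetch(objs[i + 1]); // hint the next object into cache now,
                                             // so its miss overlaps with this update
        objs[i]->update();
    }
}

int main() {
    std::vector<GameObject> pool(1024);
    std::vector<GameObject*> objs;
    for (auto& o : pool) objs.push_back(&o);
    update_all(objs);
}
```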
  • by Anonymous Coward on Wednesday March 02, 2005 @11:44AM (#11823674)
    The problem is not the architecture but the OS. The performance impact of context switches is mostly in changing the memory space (TLB and cache flushes). Just two people developed an OS [jxos.org] that runs all programs in the same address space, so the processor keeps running at full speed. It can do this because it's written in a safe language (Java, but could be C# or other) so nothing can write to arbitrary addresses.

    Despite being written in Java by just two people instead of the thousands that wrote the Linux kernel and optimizing C compiler, it runs at about 50% of the speed doing actual work. For comparison, commercial JVMs generate code that ranges from 2x-5x faster than gcj [shudo.net] (gcc's Java compiler), so this OS could easily be much faster than Linux. The only hold-up is drivers and support for archaic C/UNIX-style programs (they should put it into the Linux kernel as a module and gradually replace Linux code with sane OO code).

  • by Lonewolf666 (259450) on Wednesday March 02, 2005 @12:27PM (#11824129)
    Not as much as you imagine.
    Compare an Athlon64/Socket939 to an Athlon64/Socket754 with the same clock speed. The Socket939 version has twice the memory bandwidth, but on average only 10% better performance according to AMD's P-Rating.
    Now consider a dual-core Athlon64/Socket939 at the same clock speed, where the two cores share that higher memory bandwidth. I would expect this chip to perform like two Athlon64/Socket754 chips: 2.0x a single Socket754, divided by the 1.1x of a single-core Socket939, is roughly 1.8, i.e. about 80% faster than the single-core Socket939 model.
    Actually, clock speed will be a greater limitation:
    AMD has announced that the dual core versions will run at 400-600MHz less to reduce the heat output.

  • by betaguy9000 (863878) on Wednesday March 02, 2005 @04:06PM (#11826526)
    The Game Boy uses a Z80 derivative. The Xbox uses a Celeron-class x86.