Intel And AMD's Dual-Core CPUs Investigated 243

Hack Jandy writes "Anandtech has a bunch of insider information concerning Intel and AMD's move to dual-core CPUs. The article has lots of great information on how the move to dual-core processors affects modern computing - in particular, Anand sees more promise in multiple CPU cores that perform different operations, rather than just stamping two identical cores on the same processor like AMD and Intel are doing now."
This discussion has been archived. No new comments can be posted.

  • by DarkHelmet ( 120004 ) * <mark&seventhcycle,net> on Sunday October 24, 2004 @05:18AM (#10613163) Homepage
    The idea of putting two cores, one fast and one slow, in a CPU has already been proposed numerous times

    Look Ma! I got a Ferrari that when you press a button becomes a Yugo!

    • by Anonymous Coward on Sunday October 24, 2004 @05:20AM (#10613166)
      Yay reintroduce the TURBO button then.
    • I think they should dynamically change the clock speed based on heat content. Have a max Hz, then have it slow down the hotter it gets. Then you could remove the CPU fan and not worry about it, save the fact that it would be slow as dirt.

      I think the CPUs would be the same speed, sorta. Just have one tweaked for, say, floats and the other for something else. If you have a float-heavy process you use core 0, and otherwise core 1. You can end up with the same CPI for standard loads, but some programs would do better
      • by Stalks ( 802193 ) on Sunday October 24, 2004 @05:59AM (#10613257)
        I think they should dynamically change the clock speed based on heat content.

        The P4 already does this. It will turn down the speed and even disable individual CPU components to save its life if it begins to overheat.

        TomsHardware produced this video [tomshardware.com] a while ago, detailing what happens when the heatsink and fan are removed under load. They test both AMD and Intel processors from back then.
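The temperature-proportional throttling the grandparent proposes can be sketched as a tiny governor loop; all the constants and names below are invented for illustration, not taken from any real CPU:

```python
# Toy sketch of temperature-proportional clock throttling: full speed below a
# safe temperature, linear slowdown above it, and a hard floor at the top.
MAX_HZ = 3_000_000_000   # hypothetical rated clock
MIN_HZ = 300_000_000     # floor so the machine stays usable
T_SAFE = 50.0            # below this (deg C), run at full speed
T_MAX = 100.0            # at this point, drop all the way to the floor

def throttled_clock(temp_c: float) -> int:
    """Scale the clock linearly between T_SAFE and T_MAX."""
    if temp_c <= T_SAFE:
        return MAX_HZ
    if temp_c >= T_MAX:
        return MIN_HZ
    frac = (T_MAX - temp_c) / (T_MAX - T_SAFE)
    return int(MIN_HZ + frac * (MAX_HZ - MIN_HZ))

print(throttled_clock(40.0))   # full speed: 3000000000
print(throttled_clock(100.0))  # floor: 300000000
```

Real chips do roughly this in hardware (the P4's thermal throttling mentioned above), just with duty-cycle modulation rather than a clean linear curve.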

        • Re: (Score:2, Informative)

          Comment removed based on user account deletion
          • by Anonymous Coward on Sunday October 24, 2004 @09:21AM (#10613690)
            I remember that the older AMD proccessors would start smoking and then effectively stop working.

            An engineer I used to work with figured it all out (through much first hand experience). He deduced that chips were really just plastic capsules of compressed smoke, since when the smoke came out, they didn't work any more. He was planning a start-up company to re-inject the smoke and make them work again.
        • In other news, Napoleon Bonaparte was defeated in the battle of Waterloo.
  • Faster processors... (Score:5, Interesting)

    by Anubis333 ( 103791 ) on Sunday October 24, 2004 @05:20AM (#10613168) Homepage
    I would rather have faster processors than multiple cores, as not enough software is multi-threaded. Even in the highest-end 3D apps, the render engines are SMP-capable, but geometry translation/deformation is not. That would use just one core, right? Unless multiple cores could show up as a single core/proc to the OS..
    • by Xoro ( 201854 ) on Sunday October 24, 2004 @05:43AM (#10613230)

      I would rather have multiple cores than a faster processor. The combined clocks of my old dual processor system ran just over half that of my current (similar core) processor, yet the feel of it on the desktop was far better. None of the little hitches, glitches and rogue processes that plague me on the uniprocessor system. I'm very curious to see how these dual cores stack up against dual processor systems in terms of cost and power consumption, as those are the factors keeping me from going back to a dual proc system.

      You are right that many individual applications would not benefit from the additional core but for overall system performance, the dual setup can't be beat.

      • by Monster Zero ( 58806 ) on Sunday October 24, 2004 @06:00AM (#10613259) Homepage
        The main benefit is not so much raw power, although cache coherency certainly helps (so multiple threads & non-NUMA scheduling will benefit), as the fact that now I can have a 4-"CPU" system (2 dual-core chips) in a blade, or 4 CPUs in a 1U system. My work has already planned ahead for this by choosing a motherboard (in their newest 1U server-based cluster) that will support the new AMD dual-core chips due next year. We are going to upgrade as soon as they are available. The space/power/cooling benefits and the ratio of MPI tasks to CPUs to onboard interconnect are just too good to pass up.
      • by jsebrech ( 525647 ) on Sunday October 24, 2004 @08:32AM (#10613573)
        I would rather have multiple cores than a faster processor. The combined clocks of my old dual processor system ran just over half that of my current (similar core) processor, yet the feel of it on the desktop was far better. None of the little hitches, glitches and rogue processes that plague me on the uniprocessor system.

        Usually dual-CPU systems have better bandwidth on the motherboard, which impacts performance in all but the most CPU-bound tasks a lot more than a faster CPU does. For years the bottlenecks on most systems have been the hard disk, the motherboard/memory bandwidth, and the video card. A fast CPU just does not matter that much if you don't spend all your time compiling or rendering 3D art.

        They mention in the article specifically how Intel's design foolishly decreases bandwidth per CPU to make the dual-core magic happen. Since the Xeons will arrive so much later, that leads me to conclude they know performance is going to be abysmal, but they're going for the "dual" buzzword because AMD is, and at the same time they're re-engineering their bus tech for the Xeon line to improve bandwidth so the dual-core nature actually becomes useful.
      • I agree. There is nothing better than just putting that b0rked process to the 2nd CPU while you find out how to shut it down (which can be next to impossible on a uniprocessor system)...
      • I agree: although most individual applications won't benefit much, at any given time I might have four applications going, plus all the miscellaneous OS stuff in the background. Plus, during processor-intensive tasks, like audio or video encoding, one core could perform that task while the other core keeps the rest of the system from slowing down.

        I've already experienced the difference to some degree. I use a 1.5 GHz PowerBook on a day-to-day basis, but a friend has a DP 800 MHz Quicksilver, which was rele

    • by Alwin Henseler ( 640539 ) on Sunday October 24, 2004 @06:40AM (#10613335)
      I would rather have faster processors than multiple cores

      The way I see it, every CPU package essentially has a 'thermal envelope' that you can't go beyond without drastically changing case designs or cooling methods. For passively cooled CPUs this would be on the order of 10W; for actively cooled CPUs, the ~100W figures for some desktop Pentium 4s are pushing the limit.

      Instead of pushing things like BTX cases or watercooling, I'd rather see chipmakers use new technology to improve thermal/power ratio of their chips. I don't need a CPU that's 3 times as fast, upping power consumption once again. Give me a CPU that does twice the work using a smaller amount of energy.

      There's lots of room for improvement here. Examples: when a CPU sits idle, does that mean a drastic drop in power consumption? In many cases: no. Win9x systems drop into a full-power no-op loop, and 'halt'-state power savings only work with newer CPUs when chipsets are configured to enable a low-power state. Often, this isn't the case, for whatever reason.
      Then take mobile CPUs (in the same physical package), and features like varying core voltage with CPU load (SpeedStep, PowerNow!, or whatever). Nice, but many desktop motherboards or BIOSes don't support it, or have it disabled. IMHO, chipmakers like AMD or Intel would do better to focus on improved motherboard/chipset/BIOS support for these things (through co-operation with mobo makers) than just making their CPUs faster.

      And yes, I do know AMD is on the right track here with their x86-64 chips ('Cool 'n Quiet'). Maybe one reason their desktop market share is doing so well lately? I'd go for it, anyway.

      Treat mod points like diseases - get rid of them as quick as you can.

      • by evilviper ( 135110 ) on Sunday October 24, 2004 @09:09AM (#10613665) Journal
        I'd rather see chipmakers use new technology to improve thermal/power ratio of their chips.

        They are. They're throwing every last bit of power-saving they can at the chips. Intel's P4 can't go any faster because of heat, and they can't do anything about it. Doesn't that maybe tell you that they're on the very edge of the technology?

        If you look at processor power specs, you'll see that they are continually improving on a MHz/watt basis, and each new chip, if underclocked to its predecessor's speed, would use less power.

        I don't need a CPU that's 3 times as fast, upping power consumption once again. Give me a CPU that does twice the work using a smaller amount of energy.

        Those two theoretical chips are one and the same... essentially just marketed differently.

        when a CPU sits idle, does that mean a drastic drop in power consumption?

        With an Intel chip, hell yes. It drops down to nothing.
        With an AMD chip, no. They screwed the pooch with their S2K issues. If you're lucky, and your motherboard is supported, fvcool will get your AMD processor to drop to very little power when idle.

        Interesting note, though. I bought a KT800 mobo to get the built-in S2K feature, but got screwed, because the mobo chipset uses so much power that the system still draws more than my old mobo, even when the chip is idle. The KT133 is the only AMD mobo chipset I've found that works well.

        chipmakers like AMD or Intel would better focus on improved motherboard/chipset/BIOS support for these things (through co-operation with mobo makers), than just making their CPU's faster.

        Intel doesn't have any problems in this department. Their CPUs idle to low power just fine. It's AMD that really needs to kick some ass. Even with Cool'n'Quiet, many motherboard makers just aren't implementing the feature.

        Also, the same features found in Cool'n'Quiet can be used on your x86 processors right now, through either a Windows program or the 2.6 kernel's cpufreq drivers (hope you have an nForce mobo).
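For what it's worth, the cpufreq drivers mentioned above are normally driven from userspace through sysfs; here is a minimal sketch in Python (the paths are the conventional Linux ones, and whether they exist at all depends on your driver/mobo support, as the parent notes):

```python
# Read the current cpufreq governor via sysfs, falling back gracefully when
# no cpufreq driver is loaded (common on unsupported boards).
import os

def governor_path(cpu: int) -> str:
    return f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor"

def current_governor(cpu: int = 0) -> str:
    path = governor_path(cpu)
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:            # no driver loaded, or no such CPU
        return "unsupported"

print(current_governor())
```

Writing a governor name (e.g. "powersave") to the same file, as root, is how userspace tools switch policies.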

        • Intel's P4 can't go any faster because of heat, and they can't do anything about it.

          The hell they can't. Three words would fix Intel's heat situation easily: Desktop Pentium M. Where can I buy such a motherboard?

        • by smartdreamer ( 666870 ) on Sunday October 24, 2004 @01:22PM (#10614608)
          I think you should revise your thoughts about mighty Intel. They just suck when it comes to power consumption, and they always did. The P4 has always been a power-hungry CPU, drawing approximately 10 to 20% more than AMD for similar performance.

          You can refer to a recent story [slashdot.org] on Slashdot, particularly the Anandtech comparison [anandtech.com]. If you want to compare performance: AnandTech [anandtech.com] (same article) or ExtremeTech [extremetech.com].

          So don't think Intel had any interest in low power consumption; they were in it for the gigahertz race. Now things are changing: they canceled everything (think of the 4GHz part) to work "around" the CPU. They surrendered to AMD. The race for gigahertz is over. Dual core is the way to go, particularly specialized ones.

          If you want to reduce your CPU temperature by about 20°C, try Athcool [home.ne.jp] on GNU/Linux. It shuts down the northbridge when idle. Obviously, you lose 5% performance, but it's your choice. It can be activated at will!

          By the way, I'm talking about desktop.

    • by Colin Smith ( 2679 ) on Sunday October 24, 2004 @07:22AM (#10613429)
      On a single CPU system, the X client and server compete for time. It can sometimes be faster to run certain apps over a fast network than locally on the same machine.

      On a dual machine or multi-core machine the client and server can both be given time on separate CPUs or presumably different cores on the one CPU.

    • But until multi-CPU/multi-core machines are more common-place, there's no incentive for developers to make their apps multi-threaded.
    • Maybe I'm out of touch with reality, but isn't memory bandwidth/speed the real limiting factor as to how well a modern CPU performs these days? Won't dual core just make this problem much much worse?

      --
      Simon

      • Faster processors are still increasing performance in a lot of cases, but memory bandwidth and bus speed would certainly help in some of the other cases.
      • The buses have lots of bandwidth, but poor response times. Running lots of threads on lots of decoders (in this case, cores) means that while the latency is still horrid, there are programs running on the data that arrived earlier...
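That latency-hiding effect is easy to demonstrate in miniature: with several threads, each "memory access" still takes the full latency, but the waits overlap instead of adding up. A toy sketch, with a sleep standing in for a slow access:

```python
# Latency hiding in miniature: 8 overlapped waits take roughly one LATENCY,
# not 8 * LATENCY, even though each individual access is just as slow.
import threading
import time

LATENCY = 0.05  # stand-in for a slow memory access, in seconds

def worker(results, i):
    time.sleep(LATENCY)   # "wait for the data to arrive"
    results[i] = i * i    # then compute on it

def run(n_threads):
    results = [None] * n_threads
    start = time.monotonic()
    threads = [threading.Thread(target=worker, args=(results, i))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results, time.monotonic() - start

results, elapsed = run(8)
print(results)                # [0, 1, 4, 9, 16, 25, 36, 49]
print(elapsed < 8 * LATENCY)  # True: the waits overlapped
```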
      • Yes and no.

        Yes, what you're saying is true. Memory accesses are still one of the slowest things your CPU does, and it does it quite a lot.

        No, the advent of massive L2 caches, onboard memory controllers, and bigger/faster buses has reduced this problem somewhat in recent times. Thermal issues are starting to become the real limit to CPU performance.

        If you read the article, you'd notice that, in general, AMD is going to have fewer memory problems when it comes to dual core. Intel is still on a shared bus,
    • I would rather have faster processors than multiple cores, as not enough software is multi-threaded. Even in the highest-end 3D apps, the render engines are SMP-capable, but geometry translation/deformation is not. That would use just one core, right? Unless multiple cores could show up as a single core/proc to the OS..

      If dual core processors become the new high-end norm, those cpu intensive apps are going to migrate to use them effectively. Currently, there is not a huge market for software that works across
  • by mind21_98 ( 18647 ) on Sunday October 24, 2004 @05:29AM (#10613187) Homepage Journal
    The applications simply aren't there, as AnandTech mentions. Hyperthreading, for instance, did not cause sudden and dramatic speed improvements. The only benefits we're going to see are with applications specifically written for multiprocessor systems. These can take full advantage of the strengths of dual core CPUs.
    • If you
      - are running two or more CPU-intensive tasks (multiple httpd processes, database servers)
      - have an SMP-capable OS (e.g. Linux)
      then of course multiple cores are an improvement.
    • by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Sunday October 24, 2004 @06:56AM (#10613378) Homepage Journal
      Multiprocessing doesn't give you speed improvements for a single-threaded application, but it sure as hell makes a system a lot smoother and more responsive when it's running multiple applications concurrently.

      And don't forget, hyperthreading is like adding a second CPU that's always partly loaded. It's not the same as adding another core.
      • I must admit that the computer I have at work with hyperthreading already feels more responsive than my machine at home, even though the other specs are mostly the same (up-to-date 7200 RPM hdd, 1 GB memory, etc.): the P4 2600 HT clearly beats my Athlon 2400+. And from experience, that's not just because of the (comparative) 200 MHz. I haven't done any testing on the systems, but the feel is clearly there. Let's hope that multi-core processors will provide the same feel, but even better.
    • This is why open source is so good. Have a time-hog application that is single-threaded? Rewrite it to take advantage of multiple threads. Not an easy job, but you'll gain an education in writing tricky code.
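The usual shape of that rewrite is to split the hog's main loop into independent chunks and farm them out to a pool. A minimal sketch (all names are ours; ThreadPoolExecutor keeps the example portable, and for CPU-bound pure-Python work you would swap in ProcessPoolExecutor, which has the same API, to get real parallelism past the interpreter lock):

```python
# Take a single-threaded time hog (here, summing squares) and split the loop
# into contiguous chunks that a worker pool can run on separate cores.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum(n, workers=2):
    # Split [0, n) into `workers` contiguous chunks; the last chunk
    # absorbs any remainder.
    step = n // workers
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(100_000) == sum(i * i for i in range(100_000)))  # True
```

The hard part, as the parent says, is not this scaffolding but finding loops whose iterations really are independent.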
  • by MeridianOnTheLake ( 691931 ) on Sunday October 24, 2004 @05:41AM (#10613225)
    It seems that Intel has lost its technology edge. Early in Intel's life, the company direction was driven by the engineers, but over the last few years, highlighted by the MHz race, all tech R&D has been driven by marketing managers. This was probably to be expected. Marketers and non-tech managers are usually very good with people, very good at playing politics, and hence very good at influencing company direction; far better than most engineers. Intel is now paying the price for this incompetence by losing out to smaller, hungrier competitors.

    I don't know where the Itanic fits into this theory. I guess if it wasn't so late, and was made available during the tech bubble, Intel would now be on a fundamentally different track, rather than playing catch-up (poorly) with more innovative companies.

    Now, onto multi-core chips. This is actually a very exciting direction. Sun has already demonstrated an 8-core, quad-hyperthreading 32-way chip http://blogs.sun.com/roller/page/jonathan/20040910 [sun.com] (Project Niagara). Intel certainly has much catching up to do, but it's time for a new race, and hopefully they'll get their arse into gear and show us some exciting things in the years to come; that is, if the marketoids can somehow be dethroned from their positions of power.
    • by FireBook ( 593941 ) on Sunday October 24, 2004 @05:53AM (#10613249)
      As per a previous post, bear in mind that this is not the route Intel wanted to take; their hoped-for 10GHz P4 AMD-killer topped out unexpectedly, so they're having to try and find another way.
    • by orlinius ( 181137 ) on Sunday October 24, 2004 @07:20AM (#10613423) Homepage
      There is a very interesting article in the last edition of Fortune. I think AMD got it right this time around.

      My favorite quote :
      AMD CFO Rivet explains
      "As hard as we tried to win the hearts and minds of CIOs, with the desktop as our focus we were going to fail. They made their decisions with the server on down. When Intel had 100% of the x86 server market, it could charge whatever it wanted and use that money to beat us on desktops. We had to be in the profit haven".

      Ruiz (CEO of AMD) calls the server-led approach "do or die" for AMD: "If we hadn't pulled this off I would have shut the door"

      From the Fortune article:
      AMD: Chipping Away at Intel
      CEO Hector Ruiz came from humble roots to propel AMD into the big leagues.
      http://www.fortune.com/fortune/technology/articles/0,15114,724543,00.html [fortune.com]

      You need to be a subscriber to read the whole article :(
    • Marketers and non-tech managers are usually very good with people, very good at playing politics, and hence very good at influencing company direction; far better than most engineers.

      But by the very nature of a company... the ones on top aren't supposed to be able to get played by marketers, and are supposed to make a good decision on the direction to take.

      I guess if it wasn't so late, and was made available during the tech bubble, Intel would now be on a fundamentally different track, rather than playi

    • Some people use instructions per cycle to claim that AMD64 is better than NetBurst. If you want to take that route, Itanium 2 has about a 33% higher IPC than Opteron; a 1.5GHz Itanium often holds its own performance-wise against a 2GHz Opteron, from dual-CPU workstations to Top500 clusters.

      Where Itanium fails is that the chips still cost a bit too much. They used to be waaaay too expensive, now I think they are only marginally more expensive than an equivalent Opteron system. Itanium doesn't have
  • I hope. (Score:2, Interesting)

    by ceeam ( 39911 )
    I hope that with multicore CPUs, speech recognition (you shout "archers to the big tower!" and they do it) and maybe camera tracking of the player's movement will be more commonplace in games. I guess that would be pretty cool stuff until 3D-without-glasses-or-helmets displays come to life.
  • Hrmm... (Score:5, Funny)

    by Arcanix ( 140337 ) on Sunday October 24, 2004 @06:08AM (#10613287)
    Is it just me or does it seem odd they are using x20, x30, and x40 for names? I guess x20 + x30 + x40 does make an x90, slightly better than my x86.
  • by taxevader ( 612422 ) on Sunday October 24, 2004 @06:51AM (#10613364)


    My old PC had this, it was called a turbo button.
  • by imsabbel ( 611519 ) on Sunday October 24, 2004 @07:02AM (#10613390)
    At least yesterday they were still in.
    AMD's dual-core chips don't use a local HT link for core-to-core communication. They have both cores linked to a crossbar, which also has ports for the HT links and the memory controller.
    So a dual-core chip still has 3 outgoing HT links, allowing 8 dual-core chips to be used in one system without "glue".
  • by tekrat ( 242117 ) on Sunday October 24, 2004 @07:13AM (#10613412) Homepage Journal
    In terms of "marketing speak", this is a good opportunity for Sparc and PowerPC chips to catch up to the X86 architecture.

    Thanks to Intel's own marketing, most users are used to seeing that MHz = power, and Apple suffers from the fact that the G5 tops out at 2.5GHz, while Intel chips cruise along at 3+GHz. Sun's SPARC architecture suffers from the same illusion, although comparably, both the SPARC and PPC architectures are quite close to x86 in terms of actual horsepower (not so much with SPARC, but Sun's true power is total throughput, reliability and scalability, not flops).

    With Intel "stuck" at around 4GHz, IBM/Apple could figure out how to ramp the G5 (or its successor) up to 4+GHz, and beat Intel at its own marketing game.

    Similarly, this bump in the roadmap for Intel could be the opportunity for other/alternative CPU architectures to gain some marketshare.

    (Posted as someone very, very tired of the Wintel Monopoly)
    • >With Intel "stuck" at around 4Ghz, IBM/Apple could figure out how to ramp up the G5 (or it's successor) to 4+Ghz, and beat Intel at it's own marketing game.
      Yeah. Or they could start selling quantum computers....
      Face it: the P4 is at 3.6GHz on a 90nm process; the G5 is at 2.5GHz, watercooled.
      Do you really think that if they could get another GHz or 2 out of the design, they wouldn't do it?
      most users are used to seeing that MHz = power, and Apple suffers from the fact that the G5 tops out at 2.5GHz

      AMD chips don't even match that MHz rating, yet they are doing quite well.

      No, this is a case of Apple fans trying to find an excuse why Apple isn't more popular.

      (Posted as someone very, very tired of the Wintel Monopoly)

      I know the feeling. My Alpha system is getting old now, and the new ones are rather expensive, while being a dead-end anyhow...

      I'd still rather see a completely different and

  • by Bruha ( 412869 ) on Sunday October 24, 2004 @07:13AM (#10613414) Homepage Journal
    It looks like IBM chose the right direction to go with their line of processors. With things like the POWER5 chip and AltiVec processing units combined, you get more bang for the buck vs. a dual-core x86 chip running at a higher clock speed.

    However I don't see a mass migration to the POWER platform due to the entrenchment of the desktop market. BUT if they can prove they have the more powerful upgrade path, we may see more PowerPC-type servers in the farms as businesses upgrade and look for that power for the price. With PPC Linux this will be possible, and Microsoft will be sitting around wondering what the hell happened.
    • I wanted a PPC system. But where do you get the motherboard? One that's not Apple, and in the under-$300 range.

      /went with socket 939 AMD64 3500+
    • However I don't see a mass migration to the power platform due to the entrenchment of the desktop market.

      I don't see a mass-migration to the power platform because Windows doesn't run on it. End of story. Then again, I don't think IBM's Power goal is to take over the desktop world.

      IBM's real strength comes from their SOI and other chip-making technology, which they've cross-licensed with AMD -- but not Intel. The parent poster may want to read Hannibal's CPU articles at Ars Technica [arstechnica.com]. They go into some of th

  • by hattig ( 47930 ) on Sunday October 24, 2004 @07:39AM (#10613455) Journal
    Certainly about how AMD does dual-core: it has been detailed since 2001 (and talked about since 1999), so I think it's extremely poor for a large website like Anandtech to get it wrong.

    See comments 50, 51 and 54 that go with the story to see how AMD actually does dual-core (they don't 'fuse' HyperTransport links together, like the article says they do)

    What is sadder is that they haven't corrected the story even though the error has been pointed out to them in the feedback, and presumably via e-mail as well. Nothing in the article can be trusted, because if basic facts are ignored, then what about the rest?

    I certainly do not think that such poor articles should be linked from Slashdot. Why should AnandTech get rewarded for such shoddy work?
  • CPU+GPU (Score:4, Insightful)

    by EvilIdler ( 21087 ) on Sunday October 24, 2004 @07:46AM (#10613468)
    Maybe we'll see dual-core CPUs where the second core does some of the 3D calculation today's graphics chipsets do? That would certainly be useful for some fields of math.
  • by Anonymous Coward
    I think Anand was suggesting that in his article. While the schedulers of Linux and some of the other OSes may be able to handle that, I don't think you want to go that way given the hacks that are used in schedulers, e.g. the hack that Linux uses when running a high priority and a low priority thread on the same hyperthreaded processor. All system accounting is done in terms of processor run time and on an ASMP system, run times aren't going to be equal.
  • Gatekeeper crisis? (Score:4, Insightful)

    by ewe2 ( 47163 ) <ewetoo@gmail . c om> on Sunday October 24, 2004 @07:56AM (#10613493) Homepage Journal
    This is a tricky time for hardware manufacturers - how to promote upgrades which are essentially placeholders for a new hardware generation - and hope like hell that Microsoft will actually promote applications that will use that new functionality. Because Microsoft can afford to lose its R&D money; Intel and AMD cannot.

    Don't get me wrong, I'm looking forward to true 64-bit dual-core architectures on the PC platform, but unless something amazing happens in the next 12 months, Microsoft will again be the gatekeeper to the mass uptake of that hardware, geek rage and Linux notwithstanding. The shark will get its DRM when the makers are appropriately terrified, and even then they may not make their money back.

    From a manufacturer/reseller point of view, it's not looking all that certain. Uncertainty is deadly to the CPU/mainboard market, and I'm seeing it in the hedged bets of computer swapmeets and resellers. The explosion of MP3 players, digital cameras, DVD burners and the astonishing fall in solid-state memory prices might take up the slack for now, but that still means those crucial early adopters aren't looking at the new goods.

    We live in interesting times.
  • Been there done that (Score:3, Interesting)

    by tanveer1979 ( 530624 ) on Sunday October 24, 2004 @08:45AM (#10613604) Homepage Journal
    This dual-core thing may be new to general-purpose processors, but many DSL/signal-processing chips come with 2 cores. One is the signal-processing core and the other, such as a MIPS core, does the housekeeping.

    In general-purpose computing it would be nice to have one core dedicated to mathematically intensive tasks and one for the housekeeping, so that X does not hang while you compile.
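On Linux you can approximate that split today by pinning processes to cores. A small sketch using os.sched_setaffinity (Linux-specific; the helper name is just illustrative):

```python
# Pin the current process to a single core, the kind of compute/housekeeping
# split the parent describes (Linux-only: os.sched_setaffinity).
import os

def pin_to_core(core: int) -> set:
    """Restrict the current process to one core; return the new CPU mask."""
    os.sched_setaffinity(0, {core})   # pid 0 means "this process"
    return os.sched_getaffinity(0)

available = sorted(os.sched_getaffinity(0))
print("usable cores:", available)
mask = pin_to_core(available[0])
print("now pinned to:", sorted(mask))
```

So you could pin the compiler to one core and leave X and everything else on the other, even without scheduler support for asymmetric cores.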

  • by youknowmewell ( 754551 ) on Sunday October 24, 2004 @08:59AM (#10613638)
    Would it be possible to have a dual core processor with both a PPC and a x86 core?
  • Tarantula (Score:2, Interesting)

    by YH ( 126159 )
    There was an interesting paper at ISCA a few years back that proposed vector extensions to the Alpha ISA (called Tarantula), making a dual-core processor where the second core is a vector core. The vector core would still depend on the scalar core for certain functionality (e.g., supplying scalar arguments, renaming), and they proposed a 16MB!! L2 cache to feed the beast, but the performance numbers (especially the performance/power numbers) were pretty impressive.
    • Tarantula isn't a separate core since it doesn't have a separate program counter. It's more like a coprocessor or a big frickin' extra functional unit.
  • This month's Sci. American outlines the way dualies ease the chipmakers' problems in keeping pace with Moore's law... and it's not just physics. Economics kills proposed new microprocessors just as dead as insurmountable heat-dissipation problems.
    Sci Am does not put up current content on its site for free. Go to the library.
  • Dual vs Uni (Score:3, Informative)

    by Thaelon ( 250687 ) on Sunday October 24, 2004 @12:13PM (#10614294)
    I have a dual Athlon MP 1200 board, and before that, an Abit BP6 (dual Celeron). There are advantages to having dual CPUs. One of them is that if a rogue process suddenly starts using up an entire processor (a situation that would bring a single-CPU system to a hard lock), you might not even notice a performance problem until you try to use that process. You can run twice as many processes and won't see a performance hit (provided you have the RAM). For example: I can run about 4 instances of Diablo II Expansion, Firefox with about 10 pages open, and tons of other little things in the background. I'm currently running 46 processes, including 3 Diablos, Firefox with 7 pages open, AIM, Rapidbackup, Google Desktop Search, Gmail Notifier, GetRight, UltraEdit, TrayIt, Windows Sniper, Clipomatic, Transtext, TClock, Stickies, PowerMenu, WinBar and all the usual system processes. This is the normal state of Windows for me, and it runs just fine.

    However, there are disadvantages too. Good luck finding a soundcard with lots of features that gets along with dual CPUs. Creative has awful drivers and I'd almost swear they don't bother testing them; most other soundcards do just as badly or worse and offer fewer features. I built this machine back in fall of '01, and it wasn't until about a year ago that they released a set of drivers for the Audigy that I couldn't cause a BSOD with at will. If I ran Winamp using the DirectSound output and seeked around within a song repeatedly, really fast, it would BSOD 100% of the time. Not to mention you have to buy TWO processors rather than one, the board was ~$500, is E-ATX, and barely fits in an Antec SX1200 (HUGE case). In fact the HDs stick out over the DIMM slots and almost over the 2nd CPU. My case is gigantic and it's too small for this motherboard.
  • by Animats ( 122034 ) on Sunday October 24, 2004 @01:40PM (#10614722) Homepage
    Having two non-identical CPUs in the same package, or in the same machine, isn't that useful. Typically, the "weird" one sits idle unless whatever application specifically uses it is running. The operating system usually has no idea what to do with the "weird" processor, so it gets managed as a peripheral, which doesn't work very well.

    There were some weird Mac variants in the 1980s with a second CPU on a plug-in board. They could run Photoshop faster, but were otherwise useless.

    There are really only two multi-CPU architectures that are generally useful: shared-memory symmetric multiprocessors, and networked clusters with no shared memory. Many other architectures have been tried: partially-shared-memory machines, shared-memory machines where some CPUs lacked features like floating point, hypercubes, single-instruction-multiple-datastream machines, and dataflow processors. None has achieved lasting success.
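The two surviving models can be contrasted in a few lines: shared memory is one address space with synchronized access, while message passing is no shared state and explicit sends. A toy sketch of both, with threads and a queue standing in for CPUs and an interconnect:

```python
# Shared-memory style vs. message-passing style, side by side.
import threading
import queue

# Shared memory: both workers mutate one counter, serialized by a lock.
counter = 0
lock = threading.Lock()

def shared_worker():
    global counter
    for _ in range(1000):
        with lock:
            counter += 1

# Message passing: the sender only communicates through a queue; no shared
# mutable state, just explicit messages (None marks the end of the stream).
q = queue.Queue()

def sender():
    for _ in range(1000):
        q.put(1)
    q.put(None)

threads = [threading.Thread(target=shared_worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

s = threading.Thread(target=sender)
s.start()
total = 0
while (item := q.get()) is not None:
    total += item
s.join()

print(counter, total)   # 2000 1000
```

Everything in between (partial sharing, missing features on some CPUs) forces the programmer to reason about both models at once, which is arguably why those designs failed.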

    About the only unusual architecture ever sold in volume is the PlayStation 2, with its two vector processors. Even there, the vector processors are mostly used as a GPU. (Although one major game physics engine actually runs on the PS2 vector processors, an impressive achievement.)

    Programming for weird architectures is hard, requires much tool development, and results in programs tied to specific hardware. So it doesn't happen much. That's why the weird architectures fail. They're never that much faster, and by the time the software works, the hardware market is somewhere else.
