AMD Intel Technology

Dual Caches for Dual-core Chips 342

DominoTree writes "The dual-core chips that AMD and Intel plan to bring to market next year won't be sharing their memories. A version of Opteron coming in 2005 and Montecito, a future member of Intel's Itanium family also slated for next year, will both have two processor cores (the actual units inside a processor that perform the calculations), and each core will have separate caches."
This discussion has been archived. No new comments can be posted.

  • mmmm cores (Score:3, Insightful)

    by zaqattack911 ( 532040 ) on Thursday August 26, 2004 @06:10PM (#10082913) Journal
    Can I have a 64bit OS too please? (no not linux)
    • Re:mmmm cores (Score:2, Informative)

      by Anonymous Coward
      Here you go [hp.com]. Works on dual-core, separate-cache chips already. (HP PA-8800)
    • by bburton ( 778244 ) on Thursday August 26, 2004 @06:15PM (#10082962)
      Can I have a 64bit OS too please? (no not linux)

      Didn't you hear? According to SCO, Linux doesn't even exist!

      • Re:mmmm cores (Score:3, Interesting)

        by cfuse ( 657523 )
        Didn't you hear? According to SCO, Linux doesn't even exist!

        No doubt a dual core processor will incur a dual cpu license fee as well.

    • Re:mmmm cores (Score:4, Informative)

      by EvilTwinSkippy ( 112490 ) <yoda AT etoyoc DOT com> on Thursday August 26, 2004 @06:22PM (#10083024) Homepage Journal
      OS X, or if you hate Apple, NetBSD.

      Solaris.

      The Playstation 2 is actually 128 bit. But that doesn't really count as an OS...

      • Re:mmmm cores (Score:4, Informative)

        by kennedy ( 18142 ) on Thursday August 26, 2004 @06:30PM (#10083086) Homepage
        wrong. the ps2 has a 64bit MIPS cpu with *128bit extensions*. Think MMX or SSE.
      • Re:mmmm cores (Score:4, Interesting)

        by yamla ( 136560 ) <chris@@@hypocrite...org> on Thursday August 26, 2004 @06:31PM (#10083090)
        Apple isn't scheduled to release the first 64-bit version of OS X until the first half of next year [com.com] and even then, it is not guaranteed to be fully 64-bit (though this is what most people, including me, believe).
        • Re:mmmm cores (Score:5, Informative)

          by shawnce ( 146129 ) on Thursday August 26, 2004 @08:13PM (#10083820) Homepage
          Pulling in a post of mine from a completely different forum...

          The G5 is a 64 bit processor and OSX Panther is a 64 bit OS. :)

          Panther is not a true 64 bit OS in the traditional sense of the word. It does not support 64 bit addressing[1]. It does however support the use of 64 bit math operations and the saving of related registers on the CPU.

          Tiger (Mac OS 10.4) will have the first steps towards a true 64 bit OS by allowing 64 bit addressing [apple.com] (virtual addressing) to be used for libSystem only based tools (command line applications, no GUIs, etc.). At least that is all that Apple has so far committed to doing in Tiger at this time (cannot say more because of NDA).

          [1] Note the Panther kernel has support for 64 bit physical addressing so the system can utilize greater than 4 GB of RAM (hardware-wise supporting up to 16 GB of RAM), but it does not support 64 bit virtual addressing (what applications use) at this time.
      • Comment removed based on user account deletion
    • Re:mmmm cores (Score:5, Informative)

      by iNiTiUM ( 315622 ) on Thursday August 26, 2004 @06:26PM (#10083054) Homepage
      Sure [hp.com] you [sun.com] can [freebsd.org]
      Oh you want one for the AMD64?
      How [netbsd.org] about [freebsd.org] these [openbsd.org]?
    • Re:mmmm cores (Score:2, Interesting)

      by puddpunk ( 629383 )
      Can I have a 64bit OS too please? (no not linux)

      Why not Linux? Most 64-bit-ready OSes these days are Linux (SUSE 9.1, FC2, Gentoo) or Unix-ey (MacOS X).

      So it's pretty much tough shit for you then. Microsoft has abandoned you; their 64-bit OS will not be out until late 2005 (but you can have their crummy beta for free). Bahahahaha.
    • Sure, OS/400 (Score:5, Insightful)

      by Shivetya ( 243324 ) on Thursday August 26, 2004 @06:35PM (#10083123) Homepage Journal
      Been that way for many years. It's rock stable and secure.

      Granted it is on a mini, but we have enjoyed 64bit computing for nearly 10 years. Even have some POWER5s in production.

      There are great OSes other than the ones used on PC hardware... too many "geeks" forget that.

    • VMS [hp.com] went 64-bit at least a decade ago.

      Great OS for English-speaking folk, despite Linus's hatred for it.

    • 64-bit (Score:3, Interesting)

      by mr_burns ( 13129 )
      It's not a question of if there will be 64-bit OS's to go with these things. Eventually, it's sure to happen in multiple flavors.

      The real question is what ELSE will be on the motherboards and in the chip by the time these things hit the market? Specifically, what DRM hardware will come with these things? What will the BIOS look like?

      That's why I think that the current generation of 64-bit desktops are probably one of the best values for a machine you might be using 4 years from now. It's risky to wait
  • by Anonymous Coward on Thursday August 26, 2004 @06:10PM (#10082916)
    In case it's not obvious to those who didn't read the article all the way through, it's a better thing when the memory is shared (single cache) rather than separate (dual cache). But that is harder to design, so for these first-generation dual-core chips from Intel and AMD, they are using separate caches for each core. (IBM's dual core Power4 [ibm.com] processor has a unified cache.) At some point down the road, they will likely unify them to increase performance.
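
    To see why duplication hurts, here's a toy direct-mapped cache simulation (all parameters made up: 256 lines per split cache versus one 512-line shared cache, with both cores walking the same working set). The split caches end up holding the same data twice, while the shared cache behaves like one double-size cache:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define LINES 256  /* lines per split cache; the shared cache gets 2x */

        /* tag stores for two private caches and one shared cache */
        static long split_tag[2][LINES], shared_tag[2 * LINES];

        /* direct-mapped lookup: return 1 on a hit, fill the set on a miss */
        static int lookup(long *tags, int nsets, long addr) {
            int set = addr % nsets;
            if (tags[set] == addr) return 1;
            tags[set] = addr;
            return 0;
        }

        int main(void) {
            long split_hits = 0, shared_hits = 0, n = 0;
            memset(split_tag, 0xff, sizeof split_tag);   /* empty caches */
            memset(shared_tag, 0xff, sizeof shared_tag);
            srand(1);
            for (int i = 0; i < 1000000; i++) {
                for (int core = 0; core < 2; core++) {
                    long addr = rand() % 512;   /* working set shared by both cores */
                    split_hits  += lookup(split_tag[core], LINES, addr);
                    shared_hits += lookup(shared_tag, 2 * LINES, addr);
                    n++;
                }
            }
            printf("split: %.1f%% hits, shared: %.1f%% hits\n",
                   100.0 * split_hits / n, 100.0 * shared_hits / n);
            return 0;
        }

    With a working set bigger than one private cache, the shared design hits nearly every time, while each split cache keeps evicting lines its twin also wanted.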
    • At some point down the road, they will likely unify them to increase performance.

      In the meantime, they should just put a bright red sticker on the box that says "DUAL CACHE!" It is documented, so it's a feature, not a bug.
    • by skribble ( 98873 ) on Thursday August 26, 2004 @06:16PM (#10082971) Homepage

      Thanks for pointing that out; I'm sure a number of people were thinking "Ooooo Cool, two caches" when they should have been thinking "Awwww Damn, two caches!"

    • by mrchaotica ( 681592 ) on Thursday August 26, 2004 @06:16PM (#10082976)
      Hmm... the Power4 is dual-core and unified cache? I wonder if this has implications for future Macs to compete with these new x86 processors...
      • by EvilTwinSkippy ( 112490 ) <yoda AT etoyoc DOT com> on Thursday August 26, 2004 @06:29PM (#10083068) Homepage Journal
        Compete? What part of 'spanked them and stole their lunch money' does x86 fail to understand?

        We have a dual p4 server, the damn thing sounds like a gas turbine when it's on. Really, I've used quieter air compressors.

        Our dual-G5s from Apple are quiet, sleek, and each processor gets its own block of RAM. Granted, the ASIC for the memory controller gets its own heat sink. But man, you crack it open and you wonder where the rest of the server is. It's literally 2 giant blocks for the processors, the ASIC that handles memory management, and a wee little chip on the end of the mobo that looks like a bus controller.

        • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday August 26, 2004 @06:49PM (#10083210) Homepage Journal

          The Hammer-core processors with dual-channel memory controllers have more memory bandwidth than the best G5, and the memory is accessed directly by the processor. Hypertransport is really quite an excellent interconnect. Hammer is NUMA-architecture and each processor gets its own block of ram. Finally, the Opteron dissipates much less energy as heat than the intel offerings - only about 46W max. I believe this is still a bit more than the G5, of course, but it's really not that bad.

          So yes, the proper term is compete.

          • by Ayanami Rei ( 621112 ) * <rayanami&gmail,com> on Thursday August 26, 2004 @07:07PM (#10083356) Journal
            hence the block of RAM per CPU.
      • What does IBM's Power4 chip have to do with Macs?

        Even the G5 PowerPC chips only implement a fraction of the full POWER architecture. I wouldn't expect to see dual-core/single-cache CPUs in Apple Desktops any time soon. Maybe in 8 or 10 years...

    • Wish I had some mod points to bump this up higher :-)
    • Are there situations where two caches might be better? For example, a multi-threaded application with two memory-intensive threads, each locked down onto a specific CPU?
      • by spuzzzzzzz ( 807185 ) on Thursday August 26, 2004 @06:49PM (#10083213) Homepage
        Are there situations where two caches might be better? For example, a multi-threaded application with two memory-intensive threads, each locked down onto a specific CPU?

        Not really. The problem with 2 caches is duplication. It is quite probable that both cores will want to work on the same thing, in which case cache space will be wasted. It also creates timing complications when one core wants to write to its cache because the other core will have to be told to invalidate its relevant cache entry. On the other hand, you could create a single cache with double the size. This would make sharing memory between CPUs simpler and it wouldn't significantly increase access times (so the situation you mentioned wouldn't be affected). The argument for double caches is about cost, scalability and design simplicity, not performance.
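
        The invalidation traffic is easy to provoke from software, too. A minimal sketch (assuming 64-byte cache lines, typical for x86; the loop counts are arbitrary): two threads hammering counters that share a line ping-pong ownership between the cores, and padding the counters apart avoids it.

            #include <pthread.h>
            #include <stdio.h>

            /* two counters on the SAME cache line: each write by one core
               invalidates the line in the other core's cache */
            static struct { long a, b; } tight;

            /* padded onto separate 64-byte lines: no ping-pong */
            static struct { long a; char pad[64]; long b; } padded;

            static void *bump(void *p) {
                volatile long *ctr = p;
                for (long i = 0; i < 50000000; i++)
                    (*ctr)++;
                return NULL;
            }

            int main(void) {
                pthread_t t1, t2;
                /* time this, then swap in &padded.a / &padded.b and compare */
                pthread_create(&t1, NULL, bump, &tight.a);
                pthread_create(&t2, NULL, bump, &tight.b);
                pthread_join(t1, NULL);
                pthread_join(t2, NULL);
                printf("%ld %ld\n", tight.a, tight.b);
                return 0;
            }

        (Compile with -lpthread; the padded version should run dramatically faster on a dual-cache part.)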

    • by spuzzzzzzz ( 807185 ) on Thursday August 26, 2004 @06:34PM (#10083122) Homepage

      The dual cache simplifies things enormously, especially taking the design of the Opteron into account. Opterons are incredibly scalable--each one has three HyperTransport links that can be connected to memory, I/O or another processor. In order to make dual-core chips, all AMD has to do is take two Opterons, put them in the same package and hard-wire a HT link from one processor to the other.

      Of course, they also need to worry about things like size and power consumption but the simplified architecture really makes things a lot easier and will probably contribute to lower prices. It will also accelerate the introduction of multi-core (ie more than two) processors...

      If they were to implement a unified cache design, they would have to make significant changes. They would need to implement cache snooping and complicated memory management. Given that the new dual-core processors (AMD ones, at least) are meant to be pin-compatible with current processors, this would be a bit much to ask. Maybe they'll have unified caches sometime, but I don't see it happening anytime soon.

      • by hattig ( 47930 ) on Thursday August 26, 2004 @07:07PM (#10083355) Journal
        No no no no.

        That's all wrong.

        The Opteron has always supported dual cores, and it isn't via "internal hypertransport"; the internal crossbar connects to the SysReq, which supports two cores attached directly. You cannot attach a shared-cache dual core to this design. Each core must have its own individual L2 cache. This is why you could have an 8-processor Opteron system with dual cores for 16 cores in total, even though the current Opteron can only do 8 processors at most glueless. Oh, and Hypertransport doesn't connect to memory either; the memory controller is something else connected to the internal crossbar.

        And for the Opteron this is a good design. As the cores are on the same chip, cache coherency will be done at the speed of the processor and not be limited by inter-processor bandwidth. It really isn't a problem at all that the cores each have their own individual cache. At least they aren't competing with each other for cache bandwidth. The only bad point is that a core cannot have the option of using up to 2MB of shared cache - not as big a problem as it might sound, 1MB is doing very well for Opteron, and the on-die memory controllers negate a lot of the latency penalty for main memory access.
    • FTA: Keeping the cache as one single unit theoretically allows each processor core to access more data in a rapid fashion. Dividing the cache, however, also cuts down on some design work.

      In case it's not obvious to those who didn't read the article all the way through, it's a better thing when the memory is shared (single cache) rather than separate (dual cache).

      Yes, it's better to have a single cache for performance reasons (cache "hit" rates would theoretically be higher with a single larger cache
    • It's not entirely true that single is better. It depends on what the system is used for. If both cores are accessing the same memory (likely the case in a multi-threaded webserver for instance), then they can benefit from sharing a cache and effectively doubling the cache size. However, if both cores are accessing different memory (almost any situation where different applications are running on the different cores), then sharing a cache could have devastating effects on performance. As each process running
    • by jackb_guppy ( 204733 ) on Thursday August 26, 2004 @07:05PM (#10083331)
      The POWER4 does not have a single cache...

      There are L1 caches for both cores.

      There are 3 L2 caches hooked to a crossbar switch for speed, flowing data into and out of the L1s.

      There is a single L3 controller overseeing 2 external L3 memory banks.

      Then there are two buses to main memory.

      And 3 interconnects to 3 other dual-core chips that make a single 8-way processor block.

      And 4 buses interconnecting 4 of these 8-ways to make a 32-way machine, with dual IO channels to hardware!
    • it's a better thing when the memory is shared (single cache) rather than separate (dual cache).

      Yeah, if the dual cache could be shared and still run without added latency or decreased bandwidth. That doesn't mean a different chip with a unified cache would be faster though.

      Also, the same is true of dual cores in the first place. It would be better to have a single processor (without dual cores) if it could be twice as fast. Unfortunately, chip designers seem to be running out of ways to usefully empl

  • Confused (Score:3, Interesting)

    by Shard013 ( 530636 ) <shard013&hotmail,com> on Thursday August 26, 2004 @06:11PM (#10082923)
    I'm not a hardware pro, but is this basically the same as having two separate chips, or am I missing the point here?
    • Re:Confused (Score:5, Informative)

      by dougmc ( 70836 ) <dougmc+slashdot@frenzied.us> on Thursday August 26, 2004 @06:16PM (#10082965) Homepage
      No, you're not missing the point.

      The benefit is that you get two CPUs in less space. You might even be able to get two CPUs in a system designed to support only one (because it has only one slot.) And if your system already has two CPU slots, this might give you four CPUs.

      It might also use less power than two CPUs, but I wouldn't hold my breath on that one.

      • Re:Confused (Score:2, Informative)

        by Anonymous Coward
        I doubt the dual core processors will be socket compatible with existing single core processors, so you will be unlikely to be able to upgrade an existing motherboard to dual processor just by dropping in a different CPU. It is possible they will come out with new socket designs which can accommodate either dual or single core CPUs, but I wouldn't bet heavily on it.

        The benefit, as you say, is in space, with possibly a small amount in power consumption, but I'd agree not to hold your breath, and even if it
    • Re:Confused (Score:4, Interesting)

      by eddy ( 18759 ) on Thursday August 26, 2004 @06:16PM (#10082972) Homepage Journal

      Yes. Actually, I would have thought that the reverse (shared cache) would have been news instead.

      The point is that you can have very fast inter-CPU communication, the motherboard gets cheaper to produce, you don't have to double the cooling machinery... and they're probably cheaper to produce also (one package instead of two).

      I assume the cores are actually produced one-by-one or it'd get big and very expensive.

    • Re:Confused (Score:5, Informative)

      by ERJ ( 600451 ) on Thursday August 26, 2004 @06:17PM (#10082987)
      Kinda. I could see a couple advantages though:

      1) Fast interconnect between chips. Instead of having to transfer data over the bus, if one CPU needed info from the other it could transfer it over a high-speed connection without having to involve other parts of the machine (the bus). AMD already has a sort of high-speed interconnect on their multi-CPU motherboards, instead of splitting the bus like Intel does, but I would imagine that this would still be faster.

      2) Less motherboard room needed. You don't need dual cooling fans or dual power/interface lines, and you have more room overall on the motherboard.
    • Re:Confused (Score:3, Informative)

      by Lord Kano ( 13027 )
      I'm not a hardware pro, but is this basically the same as having two separate chips, or am I missing the point here?

      Pretty much the same thing as having two processors, but once things are running at proper capacity, it will be cheaper to put two cores on one chip, in part because you won't have to reproduce the underlying electronics. The motherboards will also be cheaper. One socket means less money spent on R&D. If and when someone releases a dual socket/quad core motherboard it will be cheaper to
  • Licensing Issues? (Score:5, Interesting)

    by xeon4life ( 668430 ) <devin.devintorres@com> on Thursday August 26, 2004 @06:12PM (#10082931) Homepage Journal
    What will happen to those who must pay a royalty fee per CPU? Will companies that charge for each CPU begin to charge for two, or will it still be viewed as one...?
    • I sure hope they charge for each core. It'll help extract more money from stupid customers who refuse to leave vendors that treat them poorly.
      • You're absolutely right. My next PC will be a dual-dual core, with a pair of video cards. Whichever OS supports it best (drivers vs licensing) will be the one I use.
    • Re:Licensing Issues? (Score:5, Informative)

      by Ianoo ( 711633 ) on Thursday August 26, 2004 @06:17PM (#10082979) Journal
      When hyperthreading was released, the industry had to cope with similar issues. Those of us using operating systems with artificial limits imposed on the number of possible processors in a system had to wait for software updates to fix detection. I'm sure the same thing will happen again; undoubtedly there will be some flag in a register somewhere that identifies whether a processor is part of a dual-core chip or just a single CPU on its own. The OS or software can just read this in and work out whether there is sufficient licensing to use them.
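
      The hooks are already half there from hyperthreading, in fact. A rough sketch using GCC's <cpuid.h> (the HTT flag and the per-package logical-processor count are real CPUID leaf-1 fields; whether dual-core parts will report through these same fields is my guess):

          #include <cpuid.h>
          #include <stdio.h>

          int main(void) {
              unsigned eax, ebx, ecx, edx;
              if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                  return 1;                      /* CPUID leaf 1 unsupported */
              int multi   = (edx >> 28) & 1;     /* EDX bit 28: >1 logical CPU per package */
              int logical = (ebx >> 16) & 0xff;  /* EBX[23:16]: logical CPUs per package */
              printf("multiple logical CPUs: %s, count per package: %d\n",
                     multi ? "yes" : "no", logical);
              return 0;
          }

      Licensing code would then only need the socket count from the OS plus this per-package count to tell "two sockets" from "one dual-core socket".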
      • When hyperthreading was released, the industry had to cope with similar issues.

        Not really. Hyperthreading just `sort of' works like another CPU -- it's not really another CPU, and certainly it doesn't perform like a complete other CPU. So they really shouldn't charge extra for it.

        But having two CPUs on one die, that is a second *real* CPU, and therefore something that they could legitimately charge `two CPU' prices for. But even these aren't brand new, so it's not a new question, and it's probabl

      • Re:Licensing Issues? (Score:2, Interesting)

        by Anonymous Coward
        The theory behind charging per cpu is that you pay for the value, or at least the work (valuable or not) that the software does. With hyperthreading, it really isn't doing any more work, in theory you could get similar speed-ups (if you are getting any) by improving the memory subsystem, and similar architectural changes for a single-threaded system. So it doesn't make sense to pay a per cpu licensing fee for those "virtual" cpus because they are not actual cpus.

        With a multi-core system, you really do hav
        • "The theory behind charging per cpu is that you pay for the value, or at least the work (valuable or not) that the software does"

          I disagree. The theory behind charging per CPU is much closer to the "how much milk you can squeeze from the cow before you get kicked" theory.

    • What will happen to those who must pay a royalty fee per CPU?
      You'll have to ask those who charge such a royalty fee, or read through your contract carefully. Having two CPUs in one chip is nothing new (I think there are some IBM and maybe HP boxes using chips like that already), so you should be able to get an answer now -- ask what they're charging the users of those chips.
      • Re:Licensing Issues? (Score:3, Informative)

        by elmegil ( 12001 )
        A typical vendor, Oracle, when talking about a different chip (the newest SPARC chips) says "yes you must pay for each core". I would be surprised if many vendors with such licensing schemes have any other answer.
    • Two cores are two CPUs and have the same performance as two separate CPUs. Thus you will be charged for two CPUs.
    • by name773 ( 696972 ) on Thursday August 26, 2004 @06:28PM (#10083063)
      when the wind is blowing westward on odd days of the week you pay for one. when there are clouds on an even day, you pay for two. during leap year, when a west wind blows clouds away at midnight on an even day, you pay for four processors, two computers, a camel, three pci slots, and a partridge in a pear tree.
  • by SIGALRM ( 784769 ) * on Thursday August 26, 2004 @06:12PM (#10082932) Journal
    The dual-core chips that Advanced Micro Devices and Intel plan to bring to market next year won't be sharing their memories
    As I understand it, the rationale behind Opteron's "Direct Connect" dual-core architecture is to make it easier to place two processor cores on the same silicon die. It's also a power-consumption issue, as the two cores can run at lower clock speeds. However, unlike Intel's design, Direct Connect features an integrated memory controller and HyperTransport interconnects that connect the processor to I/O or directly to another processor.
  • "Montecito" (Score:5, Funny)

    by Mateito ( 746185 ) on Thursday August 26, 2004 @06:13PM (#10082934) Homepage
    "Montecito", a spanish word, literally translates as "a small monte".

    Thus I predict that this will be followed by a quad-core chip called the "monte", an 8-core chip called the "montote" (the big monte), and finally a 16-core chip known as "The Full Monte".

  • yeah, (Score:5, Interesting)

    by pb ( 1020 ) on Thursday August 26, 2004 @06:14PM (#10082947)
    You probably don't want to have both chips fighting over the cache, and slowing things down; I'm sure doing The Right Thing[tm] will take a while for them to work out. Until then, just pretend that they're mostly separate chips on the same silicon.

    Maybe in the future they'll come up with some more advanced cache designs that can share some cache and improve performance. But until then, expect to see it in the next generation of value chips. (Overclocked dual-core Celerons? Nifty!)
    • Re:yeah, (Score:2, Insightful)

      by laudney ( 749337 )
      The cause of cache conflict is not hardware but software. Suppose there is one process/thread running on each core. When the two processes have incompatible instruction/data streams that evict each other from the cache, performance is seriously reduced. Avoiding this requires an intelligent enough OS scheduler.
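
      On Linux you can hand the scheduler that hint yourself. A sketch with sched_setaffinity (CPU numbering assumed to start at 0): pin each cache-hungry process to its own core and they stop evicting each other.

          #define _GNU_SOURCE
          #include <sched.h>
          #include <stdio.h>

          int main(void) {
              cpu_set_t set;
              CPU_ZERO(&set);
              CPU_SET(0, &set);                /* this process runs on core 0 only */
              if (sched_setaffinity(0, sizeof(set), &set) != 0) {
                  perror("sched_setaffinity");  /* pid 0 = the calling process */
                  return 1;
              }
              /* ...cache-intensive work now stays in core 0's own L2... */
              return 0;
          }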
  • Non-news event (Score:4, Informative)

    by doormat ( 63648 ) on Thursday August 26, 2004 @06:15PM (#10082958) Homepage Journal
    I saw this article at another website earlier today, and I thought it wasn't really important. Each core should have its own cache; that's exactly what a dual-core chip is. It's not twice as many execution units crammed into the same space, or some other funny configuration; it's two separate chips on the same die, perhaps with some modifications for inter-processor communication, but that's about it. With AMD's core design, you have the physical layer only of the HyperTransport bus to connect the chips, and the integrated memory controller has one or two ports to talk to memory (single/dual channel) and two ports to talk to the two separate chips. It will be interesting to see if AMD couples dual-core chips with DDR2-667 or DDR2-800; that would make the most sense, to keep the memory controller from being the bottleneck, as opposed to the system bus on the Intel side.
  • by spirit_fingers ( 777604 ) on Thursday August 26, 2004 @06:19PM (#10082999)
    Actually, the left core will be verbal, creative and really good at processing visual information, while the right core will be logical, good at number crunching and have no style sense whatsoever.
  • I don't understand all the hype around dual core. Maybe I'm being stupid. Two cores on one chip seems like a great idea, and I'm sure it will improve performance.

    But Intel has already demonstrated there is surely a better solution - something like SMT, hyperthreading.

    Wouldn't it be saner to build a chip with double the number of execution units and double the number of instruction fetch/decode units and a larger reorder buffer that would appear, say, as four logical processors to a system? Surely you
    • Wouldn't it be saner to build a chip with double the number of execution units and double the number of instruction fetch/decode units and a larger reorder buffer that would appear, say, as four logical processors to a system?

      That's like the Alpha EV8. It costs way too much to design and it's questionable whether you could build it at all.
    • If you add an extra execution unit to a CPU, you have to add all sorts of logic to decide what is pairable with what else (i.e. allocation of execution units such that they don't collide in terms of input or output registers).

      At some point you reach a limit where you can't use extra execution units, because you don't know the input values to an instruction when the previous instructions it depends on are still in the pipeline of other execution units.

      Dual core avoids that... plus if you validate o
    • by NerveGas ( 168686 ) on Thursday August 26, 2004 @06:30PM (#10083085)
      The benefits of HT, as currently implemented, are pretty insignificant compared to the benefits of multiprocessing, as the possible performance boost is very small, it certainly doesn't give you the ability to handle more interrupts, and it doesn't let you decrease the number of context-switches.

      As for building a more intelligent core to take advantage of the extra transistors, that just might make sense - but it would also take hundreds of millions (or billions) of dollars in development, and the chip wouldn't appear for a good number of years (look at the Itanium). It's a lot easier and cheaper to slap two cores on the same die and call it done. Because Intel is scurrying to try and play catch-up to AMD in the high-end market, time-to-market is critical for them.

      steve
    • Well, they are skipping one of the main ones.

      One of the costliest things is a cache miss, and if one were able to share the caches between two cores it would greatly decrease the number of misses (no need to have everything in there twice).
    • Hyperthreading is not a better solution, particularly when dealing with the Intel implementation. Unless it's very carefully done, all it does is keep the cache from working effectively. Linux and FreeBSD actually got performance improvements from leaving one of the virtual processors idle when there were more processes scheduled to run. When there's two threads of the same process, they let them both run because those tend to have better locality of reference and therefore don't thrash the cache so much.

      P
    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday August 26, 2004 @06:55PM (#10083260) Homepage Journal

      Hyperthreading is simply a second context. It lets you run a second thread at the same time by using the unutilized capacity of existing functional units and is largely useful only when intel's branch prediction fails and the chip would otherwise be paying the ultimate penalty for its long, long, LONG pipeline.

      In other words, HT is an ingenious method for making up for the fact that the pentium 4 is horribly inefficient.

      It would be better to stick a whole bunch of simple cores on a single chip at a lower clock rate and have them work cooperatively, if only we used more multithreading. This is pretty much where intel is planning to go, with their multiple-core chips based on the Pentium-M. Or, so the rumors say.

  • by leathered ( 780018 ) on Thursday August 26, 2004 @06:24PM (#10083033)
    Luckily for AMD, the Opteron/A64 was designed with dual-core in mind. As I understand it, both cores will talk to each other via an internal HyperTransport link and, together with the internal memory controller (as with current Opterons), will eliminate the need for an external northbridge. It is also expected that upon release they will drop directly into existing motherboards with nothing more than a BIOS upgrade.

    Intel will find things more challenging. Both cores will have to contend for the GTL bus, currently the Achilles' heel of their MP solutions, by communicating via an external northbridge.
  • by Skulker303 ( 11304 ) on Thursday August 26, 2004 @06:29PM (#10083072)
    Daul core microprocessors are not a new development. IBM with their POWER4 and POWER5, HP and the PA-RISC 8800, and TI with their OMAP processors are definitive proof that multi-core solutions are not just a stop gap in increasing the performance delta of modern silicon.

    Daul core processors are a natural evolution in the development of general purpose and even specialized computing devices. SMT was to be a boon for the EV8, but later found its way into the Pentium4. Multiple logical processors were just a first step.

    It should be interesting to see just what AMD can do with both SMT and a daul core design.

    It just had better run BSD. = )
    • Daul core microprocessors are not...
      Daul core processors are...
      It should be interesting to see just what AMD can do with both SMT and a daul core design.

      You keep using that word. I do not think it means what you think it means.

  • by NerveGas ( 168686 ) on Thursday August 26, 2004 @06:34PM (#10083119)

    The downside is that as the AMD chips are going to be backward-compatible with older boards, I imagine that the dual-core chip will still only have the single 128-bit memory controller.

    While that will still give you twice as many available CPU cycles, it means that the two cores will be fighting for memory bandwidth. In the case of Intel's chips, that's business as usual: but for the Opterons, where each processor brings its own memory controller, it just doesn't feel right. : (

    steve
    • From what I understand they will have a dual-core chip in which one processor is connected to memory via a dual channel memory controller, and the other processor is connected only to the first processor, via hypertransport.
  • by mcraig ( 757818 ) on Thursday August 26, 2004 @06:35PM (#10083126)
    Kernel Panic Core Dumped... Still Panicking Dumping Second Core...
  • What about cache sync? Educate me here, but I would have thought that a double-sized shared cache would be faster than two separate caches that have to be synced all the time. Am I an idiot?
  • by Locutus ( 9039 ) on Thursday August 26, 2004 @06:51PM (#10083221)
    A friend purchased a 3GHz (yes, 3) Intel Pentium 4 with HyperThreading a few months back. I asked why he didn't purchase an AMD CPU and he said he needed x86 compatibility... So much for informed hardware engineers. Anyway, I recently asked him about the system, since I just built an AMD 2600+ based system and wanted to know if he had some code he wanted to compare/test. Well, he told me that his 3GHz CPU really only runs most applications at 1.5GHz, except if they are multi-threaded or hyperthread-aware.

    Is this true? Does Intel put a 3GHz label on 1.5GHz dual/core CPU's or whatever this hyperthreading is? Sounds dual/core-ish to me...

    It's funny how that 1.5GHz number shows up again in an Intel product. I remember when they could not build anything faster than 7xxMHz and then all of a sudden they had a "new technology" that got them 1.5GHz (2x 750MHz), and it was found out later that only PART of the CPU was running at 2x. This all happened when AMD beat Intel past the 1GHz barrier. Are they again playing "tricks" to get a big GHz label on their parts?

    So any of you people up on this dual-core and hyperthreading thing and feel like explaining to the rest of us what's going on? TIA.

    LoB
  • by gillbates ( 106458 ) on Thursday August 26, 2004 @07:15PM (#10083411) Homepage Journal

    While dual cores on a chip might be nice, they won't produce any serious performance increases.

    The underlying problem with Intel and AMD's processors is that they are at the mercy of the architecture:

    1. These chips must share a relatively slow memory bus with other devices.
    2. Currently, the fastest FSB to date is 1033MHz - almost 1/3 of the max clock speed of the processor. Given that Intel's integer units operate at twice the clock speed, the fastest parts of the chip run about 6 times faster than memory.
    3. The monolithic, synchronous, central-processing-unit design of the architecture prohibits optimizations such as using memory controllers for block moves and having dedicated IO processors. Contrast this with mainframes, in which the CPU passes off IO instructions to ancillary processors and continues to work. In PC-land, when the IDE controller seizes the bus for a transfer from disk into memory, the CPU has to execute out of its cache for ~256 instruction cycles, or risk stalling.

    The ironic thing is that even though AMD and Intel are out-clocking mainframe processors by factors of 2 and 3, mainframes still get more work done simply because they aren't choked by a slow and overcrowded system bus.

    • by owlstead ( 636356 ) on Thursday August 26, 2004 @07:28PM (#10083501)
      True, and someday every IO process will probably be handled by a dedicated processor. A distributed operating system will run processes on each, making it easy to reprogram the tasks. Fast interconnects will make a NUMA architecture possible.

      Currently however that future is far off. It's simply much cheaper to centralize processing, so the bus will remain an issue for some time to come. For most situations this will be fine, for specialized situations where a single (fast) real time process is needed, or when IO is more important than CPU power...it sucks.

      (listening to my integrated audio which takes about 7% of my processor, and I don't care a bit)
    • by kscguru ( 551278 ) on Thursday August 26, 2004 @11:36PM (#10085091)
      These chips must share a relatively slow memory bus with other devices.

      No... on AMD chips the memory bus is dedicated. Intel chips have a very different system architecture (which does saturate at ~2 CPUs), but AMD gives each chip its own memory controller and memory - scales perfectly. (By the way, this isn't new ... big iron (e.g. Sparc) has been doing this for years).

      Currently, the fastest FSB to date is 1033MHz - almost 1/3 of the max clock speed of the processor. Given that Intel's integer units operate at twice the clock speed, the fastest parts of the chip operate at 6 times faster than memory.

      That's why modern processors use pipelining (in x86, since the 486) and caches (since, uh, the 8086?). FSB only comes into play in 1-2% of memory accesses. But those memory accesses are pipelined, interleaved, with multiple outstanding requests issued by the out-of-order pipeline... processor designers have been working around a slow bus for years, and the FSB is only the bottleneck in extreme, pathological cases.

      The monolithic, synchrous, central-processing-unit design of the architecture prohibits optimizations such as using memory controllers for block moves and having dedicated IO processors

      Ever heard of DMA? A DMA controller does that memory transfer ... there are 2 DMA controllers with 8 channels on your current x86 PC. Heck, high-end PCI cards even have their own onboard DMA engines (it's called bus-mastering). I/O offload? You've obviously never written a device driver... modern drivers issue a few "start" instructions, then sleep; eventually the device completes the I/O and issues an interrupt to inform the CPU it's done. The last computer I had that stalled on disk I/O was running MS-DOS - nine years ago.

      In all fairness, I thought exactly the same things four years ago. Then I learned about modern computer architecture. And in today's world (and, in fact, all PCs for the past ten years), your points are completely - and utterly - irrelevant.
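
      You can even watch that overlap from userland with POSIX AIO (a sketch; link with -lrt, and the file path is made up). The read is handed off to the kernel and the DMA engine while the CPU keeps computing:

          #include <aio.h>
          #include <errno.h>
          #include <fcntl.h>
          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>

          int main(void) {
              static char buf[65536];
              int fd = open("/tmp/bigfile", O_RDONLY);   /* hypothetical file */
              if (fd < 0) { perror("open"); return 1; }

              struct aiocb cb;
              memset(&cb, 0, sizeof cb);
              cb.aio_fildes = fd;
              cb.aio_buf    = buf;
              cb.aio_nbytes = sizeof buf;

              if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

              long loops = 0;
              while (aio_error(&cb) == EINPROGRESS)
                  loops++;                     /* CPU does other work meanwhile */

              printf("read %zd bytes; spun %ld loops while waiting\n",
                     aio_return(&cb), loops);
              close(fd);
              return 0;
          }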

  • Yield question (Score:3, Interesting)

    by Michael Woodhams ( 112247 ) on Thursday August 26, 2004 @07:24PM (#10083471) Journal
    Are the dual cores on the same piece of silicon? This would require both cores to be defect free. If only one core is defect free, is it possible to disable the dud and sell it as a single core CPU? This would make it a much more attractive proposition for the manufacturers.

    E.g. if a single core has a yield (probability of being defect free) of 80%, then the dual core chips will have a yield of 0.8^2 = 64%. (Actually slightly lower, because whatever interconnect they have also has to be free of defects.) 64% will have two good cores, 4% will have two bad cores, and the remaining 32% will have one good core. The manufacturer would obviously like to make use of that 32% if they can.
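
    For anyone who wants to fiddle with the numbers, it's just a binomial split. A trivial sketch using the 80% per-core figure from above (a made-up example number):

        #include <stdio.h>

        int main(void) {
            double y = 0.80;                 /* per-core yield (example) */
            double both = y * y;             /* two good cores: sell as dual */
            double none = (1-y) * (1-y);     /* two bad cores: scrap */
            double one  = 2 * y * (1-y);     /* exactly one good: 2*y*(1-y) */
            printf("dual: %.0f%%  single (salvage): %.0f%%  scrap: %.0f%%\n",
                   both * 100, one * 100, none * 100);
            return 0;
        }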
    • Re:Yield question (Score:4, Informative)

      by mercuryresearch ( 680293 ) on Thursday August 26, 2004 @08:30PM (#10083953) Journal
      The manufacturers have the choice of using multichip module packaging (common in notebook graphics controllers, for example) or a single die; however, it is my current understanding that we're talking about a single die.

      They very likely WILL disable the dud and sell them as single core CPUs. This is how the "value" brands (Celeron, ex-Duron, and now Sempron) are typically created -- when there's a defect in the processor cache (which is a very large area of the die, and thus more likely to have a defect), the faulty bank(s) are turned off via fusing, creating a CPU with a smaller cache.

      This is all pretty standard yield management.

      Also, your calculations are very close to correct. The manufacturers closely guard their yield information, but you're in the ballpark -- and it's interesting to note that according to my estimates Intel's Celeron volumes approximately mirror your computed single-core yield percentage... meaning it will likely be business as usual in our dual-core future.

      BTW, if you're interested in computing yield values there's an excellent model in one of the chapters of Hennessy and Patterson's _Computer Architecture: A Quantitative Approach_.
    • If only one core is defect free, is it possible to disable the dud and sell it as a single core CPU?

      Yes, it is possible, in most cases. (Although there are a few types of defects that would prohibit this, such as power shorts).

      For example, hypothetically, Intel could sell a single core version of Montecito called the Half Monte and a dual core version called the Full Monte.

"God is a comedian playing to an audience too afraid to laugh." - Voltaire

Working...