


Dual Caches for Dual-core Chips
DominoTree writes "The dual-core chips that AMD and Intel plan to bring to market next year won't be sharing their memories. A version of Opteron coming in 2005 and Montecito, a future member of Intel's Itanium family also slated for next year, will both have two processor cores (the actual units inside a processor that perform the calculations), and each core will have separate caches."
mmmm cores (Score:3, Insightful)
Re:mmmm cores (Score:2, Informative)
Re:mmmm cores (Score:5, Funny)
Didn't you hear? According to SCO, Linux doesn't even exist!
Re:mmmm cores (Score:3, Interesting)
No doubt a dual core processor will incur a dual cpu license fee as well.
Re:mmmm cores (Score:4, Informative)
Solaris.
The Playstation 2 is actually 128 bit. But that doesn't really count as an OS...
Re:mmmm cores (Score:4, Informative)
Re:mmmm cores (Score:4, Interesting)
Re:mmmm cores (Score:5, Informative)
The G5 is a 64 bit processor and OSX Panther is a 64 bit OS.
Panther is not a true 64 bit OS in the traditional sense of the word. It does not support 64 bit addressing[1]. It does however support the use of 64 bit math operations and the saving of related registers on the CPU.
Tiger (Mac OS 10.4) will take the first steps towards a true 64 bit OS by allowing 64 bit addressing [apple.com] (virtual addressing) to be used by tools built only against libSystem (command line applications, no GUIs, etc.). At least that is all that Apple has committed to doing in Tiger at this time (cannot say more because of NDA).
[1] Note the Panther kernel has support for 64 bit physical addressing so the system can utilize greater than 4 GB of RAM (the hardware supports up to 16 GB of RAM) but it does not support 64 bit virtual addressing (what applications use) at this time.
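A quick way to see the distinction between 64-bit math and 64-bit addressing from a running process (a minimal sketch; ctypes simply reports the pointer width of whichever interpreter build you run it under, so the exact output depends on your system):

```python
# Distinguish "can do 64-bit integer math" from "has a 64-bit address space".
import ctypes

pointer_bits = ctypes.sizeof(ctypes.c_void_p) * 8
print(f"virtual address width of this process: {pointer_bits} bits")

# 64-bit arithmetic works regardless; the address space is the part that
# needs a genuinely 64-bit process.
print(f"2**40 = {2**40} (computes fine even in a 32-bit interpreter)")
```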
Re: (Score:2)
Re:mmmm cores (Score:5, Informative)
Oh you want one for the AMD64?
How [netbsd.org] about [freebsd.org] these [openbsd.org]?
Re:mmmm cores (Score:2, Interesting)
Why not Linux? Most 64-bit ready OSes these days are Linux (SUSE 9.1, FC2, Gentoo) or Unix-ey (MacOS X).
So it's pretty much tough shit for you then. Microsoft has abandoned you, their 64-bit OS will not be out until late 2005 (but you can have their crummy beta for free). Bahahahaha.
Re:mmmm cores (Score:5, Interesting)
If you do indeed have files as big as DVDs, it would certainly help with editing those files. You CAN break those up into chunks, only having 2GB or less in memory at any given time, and for the most part this works OK; however, it does tend to be a bit of a kludge at the best of times, and sometimes it just flat out doesn't work.
As you correctly guess, servers are the first situation where this really makes sense. If you've got a database that is more than 2GB in size, you REALLY want a 64-bit system, otherwise you'll tend to take a big performance hit. Many high-end workstations require 64-bit systems as well to process all the data.
So, where is the benefit for the end-user? Well that depends on the user. First off, having more than 2GB of physical memory on a 32-bit processor requires some really ugly hacks to make things work. They do work, but it is a really dumb idea. It was annoying and crappy when we were forced to do it back in the 16-bit days, and it hasn't gotten any better. Secondly, people are using bigger and bigger data files on their home PC, editing larger pictures and videos, playing games with more graphics and sound, and some even run into issues with certain types of databases (I know my Usenet newsreader sometimes craps out when I'm downloading too much pr0n because of database limits). Basically you might not need it, but someone else might. The best part about it though is that 64-bits is "free".
Basically you've got a 64-bit CPU that is no more expensive than competing 32-bit chips, and Microsoft has said that 64-bit WinXP Pro will sell for the same price as 32-bit WinXP Pro, so really the question is not so much "why" do we need 64-bit, but "why not?"
Sure, OS/400 (Score:5, Insightful)
Granted it is on a mini, but we have enjoyed 64-bit computing for nearly 10 years. We even have some POWER5s in production.
There are great OSes other than the ones used on PC hardware... too many "geeks" forget that.
There already is one. (Score:3, Informative)
VMS [hp.com] went 64-bit at least a decade ago.
Great OS for English-speaking folk, despite Linus's hatred for it.
64-bit (Score:3, Interesting)
The real question is what ELSE will be on the motherboards and in the chip by the time these things hit the market? Specifically, what DRM hardware will come with these things? What will the BIOS look like?
That's why I think that the current generation of 64-bit desktops are probably one of the best values for a machine you might be using 4 years from now. It's risky to wait
Note: Here, Single is Better (Score:5, Informative)
Re:Note: Here, Single is Better (Score:2, Funny)
In the meantime, they should just put a bright red sticker on the box that says "DUAL CACHE!" It is documented, so it's a feature, not a bug.
Re:Note: Here, Single is Better (Score:4, Funny)
Thanks for pointing that out, I'm sure a number of people were thinking "Ooooo Cool, two caches" when they should have been thinking "Awwww Damn, two caches!"
Re:Note: Here, Single is Better (Score:5, Interesting)
Re:Note: Here, Single is Better (Score:4, Interesting)
We have a dual p4 server, the damn thing sounds like a gas turbine when it's on. Really, I've used quieter air compressors.
Our dual-G5s from Apple are quiet, sleek, and each processor gets its own block of RAM. Granted, the ASIC for the memory controller gets its own heat sink. But man, you crack it open and you wonder where the rest of the server is. It's literally 2 giant blocks for the processors, the ASIC that handles memory management, and a wee little chip on the end of the mobo that looks like a bus controller.
Re:Note: Here, Single is Better (Score:5, Interesting)
The Hammer-core processors with dual-channel memory controllers have more memory bandwidth than the best G5, and the memory is accessed directly by the processor. Hypertransport is really quite an excellent interconnect. Hammer is NUMA-architecture and each processor gets its own block of ram. Finally, the Opteron dissipates much less energy as heat than the intel offerings - only about 46W max. I believe this is still a bit more than the G5, of course, but it's really not that bad.
So yes, the proper term is compete.
The G5 uses hypertransport... (Score:5, Interesting)
Re:The G5 uses hypertransport... (Score:5, Informative)
The G5 (PPC970/970FX) has two 32-bit-wide buses, one going in each direction from the CPU, and they run at half the CPU's clock rate. At a clock rate of 2.5GHz the bus is capable of a max theoretical throughput of 5GB/s in each direction, or 10GB/s in total (that is per CPU). Real world throughput is around 8GB/s per CPU at 2.5GHz because of address/command overhead. Apple/IBM term this the elastic bus and it is not HT based.
For more information see this block diagram [apple.com] referenced from this hardware tech note [apple.com].
Anyway, the post you are replying to is incorrect about each CPU having its own RAM. That is not true. Each CPU has its own independent bus to the memory controller (U3/U3H), and that controller has a dual-channel connection to memory capable of 6.4GB/s (DIMMs are required to be added in pairs to allow for a 128 bit wide path to memory). The U3 chip is basically a crossbar internally, allowing a few point-to-point connections to take place between its various interfaces (CPU to CPU, AGP to memory, etc.).
HT is used as a secondary interconnect to relatively lower-bandwidth devices in the I/O chain.
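A quick back-of-the-envelope check of the figures above (a minimal sketch; the bus widths and clock rates are the ones quoted in this thread, not independently verified):

```python
# Rough arithmetic behind the bandwidth figures quoted above.
# Numbers come from the parent posts, not from official specs.

def bus_bandwidth_gb_s(width_bits, clock_ghz):
    """Theoretical throughput of one bus, in GB/s."""
    return (width_bits / 8) * clock_ghz

cpu_clock = 2.5                      # GHz, top-end PPC970FX quoted above
elastic_bus_clock = cpu_clock / 2    # each direction runs at half the CPU clock

one_way = bus_bandwidth_gb_s(32, elastic_bus_clock)   # 5.0 GB/s
both_ways = 2 * one_way                                # 10.0 GB/s per CPU

# Dual-channel memory path on the U3: 128 bits wide at an effective 400 MHz
memory_path = bus_bandwidth_gb_s(128, 0.4)             # 6.4 GB/s

print(f"elastic bus, one direction  : {one_way:.1f} GB/s")
print(f"elastic bus, both directions: {both_ways:.1f} GB/s")
print(f"U3 dual-channel memory path : {memory_path:.1f} GB/s")
```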
Re:Note: Here, Single is Better (Score:2)
Even the G5 PowerPC chips only implement a fraction of the full POWER architecture. I wouldn't expect to see dual-core/single-cache CPUs in Apple Desktops any time soon. Maybe in 8 or 10 years...
Re:Note: Here, Single is Better (Score:4, Interesting)
They are aiming this at Mac Notebooks.
I believe IBM has already planned a roadmap for the G5 that includes dual core.
Re:Note: Here, Single is Better (Score:3, Interesting)
Re:Note: Here, Single is Better (Score:2)
Re:Note: Here, Single is Better (Score:2)
Re:Note: Here, Single is Better (Score:5, Informative)
Not really. The problem with 2 caches is duplication. It is quite probable that both cores will want to work on the same thing, in which case cache space will be wasted. It also creates timing complications when one core wants to write to its cache because the other core will have to be told to invalidate its relevant cache entry. On the other hand, you could create a single cache with double the size. This would make sharing memory between CPUs simpler and it wouldn't significantly increase access times (so the situation you mentioned wouldn't be affected). The argument for double caches is about cost, scalability and design simplicity, not performance.
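A toy illustration of the duplication argument above (a sketch only, not a model of any real cache: it compares one shared LRU cache against two half-sized private ones when both cores hammer the same working set):

```python
# One shared cache of size 2N versus two private caches of size N, when both
# cores walk over a largely shared working set. Purely illustrative numbers.
from collections import OrderedDict
import random

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.data = capacity, OrderedDict()
    def access(self, addr):
        hit = addr in self.data
        if hit:
            self.data.move_to_end(addr)          # refresh recency
        else:
            if len(self.data) >= self.capacity:
                self.data.popitem(last=False)    # evict least recently used
            self.data[addr] = True
        return hit

def run(shared):
    random.seed(0)
    size = 256
    caches = [LRUCache(2 * size)] if shared else [LRUCache(size), LRUCache(size)]
    hits = accesses = 0
    for _ in range(100_000):
        core = random.randrange(2)
        addr = random.randrange(300)   # working set a bit bigger than one private cache
        cache = caches[0] if shared else caches[core]
        hits += cache.access(addr)
        accesses += 1
    return hits / accesses

print(f"shared cache hit rate   : {run(True):.3f}")
print(f"private caches hit rate : {run(False):.3f}")
```

With both cores touching the same addresses, the private caches end up holding mostly duplicate entries, so their effective capacity is roughly half that of the shared cache and the hit rate drops accordingly.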
Re:Note: Here, Single is Better (Score:5, Interesting)
The dual cache simplifies things enormously, especially taking the design of the Opteron into account. Opterons are incredibly scalable--each one has three HyperTransport links that can be connected to memory, I/O or another processor. In order to make dual-core chips, all AMD has to do is take two Opterons, put them in the same package and hard-wire a HT link from one processor to the other.
Of course, they also need to worry about things like size and power consumption but the simplified architecture really makes things a lot easier and will probably contribute to lower prices. It will also accelerate the introduction of multi-core (ie more than two) processors...
If they were to implement a unified cache design, they would have to make significant changes. They would need to implement cache snooping and complicated memory management. Given that the new dual-core processors (AMD ones, at least) are meant to be pin-compatible with current processors, this would be a bit much to ask. Maybe they'll have unified caches sometime, but I don't see it happening anytime soon.
Re:Note: Here, Single is Better (Score:5, Informative)
That's all wrong.
The Opteron has always supported dual cores, and it isn't via "internal hypertransport"; the internal crossbar connects to the SysReq, which supports two cores attached directly. You cannot attach a shared-cache dual core to this design. Each core must have its own individual L2 cache. This is why you could have an 8-processor Opteron system with dual cores for 16 cores in total, despite the fact that the current Opteron tops out at 8 processors glueless. Oh, and HyperTransport doesn't connect to memory either; the memory controller is something else connected to the internal crossbar.
And for the Opteron this is a good design. As the cores are on the same chip, cache coherency will be done at the speed of the processor and not be limited by inter-processor bandwidth. It really isn't a problem at all that the cores each have their own individual cache. At least they aren't competing with each other for cache bandwidth. The only bad point is that a core cannot have the option of using up to 2MB of shared cache - not as big a problem as it might sound, 1MB is doing very well for Opteron, and the on-die memory controllers negate a lot of the latency penalty for main memory access.
Re:Note: Here, Single is Better (Score:5, Informative)
That means you can get more bandwidth with silicon than with a circuit board (each of reasonable size, using modern components/processes).
Also, it takes a lot less power to run lower-voltage drivers on low loads (little resistance and capacitance on die compared to a PCB).
So, why not stack everything on one chip? Cost of a chip rises exponentially with die size. Up to about 20mm^2 it's feasible (but pricey); bigger dice are very hard to make, result in lower yields, and hence cost a lot more.
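For a feel of why cost climbs so steeply with area, here is a sketch of the textbook die-yield model (the Hennessy & Patterson one referenced further down this thread); the defect density and alpha are illustrative guesses, not real process data:

```python
# Classic die-yield model: yield falls off sharply as die area grows, so the
# cost per *good* die climbs much faster than linearly with area.
# defects_per_mm2 and alpha below are made-up but plausible values.

def die_yield(area_mm2, defects_per_mm2=0.005, alpha=4.0):
    return (1 + defects_per_mm2 * area_mm2 / alpha) ** (-alpha)

def relative_cost_per_good_die(area_mm2):
    # dies per wafer scale roughly as 1/area, so cost per good die ~ area/yield
    return area_mm2 / die_yield(area_mm2)

for area in (50, 100, 200, 400):
    print(f"{area:4d} mm^2  yield={die_yield(area):.2f}  "
          f"relative cost={relative_cost_per_good_die(area):7.1f}")
```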
Re:Note: Here, Single is Better (Score:3, Insightful)
In case it's not obvious to those who didn't read the article all the way through, it's a better thing when the memory is shared (single cache) rather than separate (dual cache).
Yes, it's better to have a single cache for performance reasons (cache "hit" rates would theoretically be higher with a single larger cache
Re: unified (single) is not always better (Score:3, Insightful)
Re:Note: Here, Single is Better (Score:5, Informative)
There are L1 caches for both cores.
There are 3 L2 caches hooked to a crossbar switch to keep data flowing quickly into and out of the L1s.
There is a single L3 controller overseeing 2 L3 external memory banks.
Then there are two buses out to main memory.
And 3 interconnects to 3 other dual-core chips that make a single 8-way processor block.
And 4 buses interconnecting 4 of these 8-way blocks to make a 32-way machine, with dual I/O channels to hardware!
Re:Note: Here, Single is Better (Score:3, Informative)
Yeah, if the dual cache could be shared and still run without added latency or decreased bandwidth. That doesn't mean a different chip with a unified cache would be faster though.
Also, the same is true of dual cores in the first place. It would be better to have a single processor (without dual cores) if it could be twice as fast. Unfortunately, chip designers seem to be running out of ways to usefully empl
Confused (Score:3, Interesting)
Re:Confused (Score:5, Informative)
The benefit is that you get two CPUs in less space. You might even be able to get two CPUs in a system designed to support only one (because it has only one slot.) And if your system already has two CPU slots, this might give you four CPUs.
It might also use less power than two CPUs, but I wouldn't hold my breath on that one.
Re:Confused (Score:2, Informative)
The benefit, as you say, is in space, with possibly a small amount in power consumption, but I'd agree not to hold your breath, and even if it
Re:Confused (Score:4, Interesting)
Yes. Actually, I would have thought that the reverse (shared cache) would have been news instead.
The point is that you can have very fast inter-CPU communication, the motherboard gets cheaper to produce, you don't have to double the cooling machinery... and they're probably cheaper to produce also (one package instead of two).
I assume the cores are actually produced one-by-one or it'd get big and very expensive.
Re:Confused (Score:5, Informative)
1) Fast interconnect between chips. Instead of having to transfer data over the bus, if one CPU needed info from the other CPU it could transfer it over a high-speed connection without having to involve other parts of the machine (the bus). AMD already has a sort of high-speed interconnect on their multi-CPU motherboards, instead of splitting the bus like Intel does, but I would imagine this would still be faster.
2) Less motherboard room needed. You don't need dual cooling fans or dual power/interface lines, and you have more room overall on the motherboard.
Re:Confused (Score:2)
Re:Confused (Score:3, Informative)
Pretty much the same thing as having two processors, but once things are running at proper capacity, it will be cheaper to put two cores on one chip. In part because you won't have to reproduce the underlying electronics. The motherboards will also be cheaper. One socket means less money spent on R&D. If and when someone releases a dual socket/quad core motherboard it will be cheaper to
Licensing Issues? (Score:5, Interesting)
Re:Licensing Issues? (Score:2)
Re:Licensing Issues? (Score:2)
Re:Licensing Issues? (Score:5, Informative)
Re:Licensing Issues? (Score:2)
Not really. Hyperthreading just `sort of' works like another CPU -- it's not really another CPU, and certainly it doesn't perform like a complete other CPU. So they really shouldn't charge extra for it.
But having two CPUs on one die, that is a second *real* CPU, and therefore something that they could legitimately charge `two CPU' prices for. But even these aren't brand new, so it's not a new question, and it's probabl
Re:Licensing Issues? (Score:2)
Re:Licensing Issues? (Score:2, Interesting)
With a multi-core system, you really do hav
Re:Licensing Issues? (Score:3)
I disagree. The theory behind charging per CPU is much closer to the "how much milk you can squeeze from the cow before you get kicked" theory.
Re:Licensing Issues? (Score:2)
Re:Licensing Issues? (Score:3, Informative)
Re:Licensing Issues? (Score:2)
Re:Licensing Issues? (Score:5, Funny)
Different core models (Score:5, Informative)
"Montecito" (Score:5, Funny)
Thus I predict that this will be followed by a quad-core chip called the "monte", an 8-core chip called the "montote" (the big monte), and finally a 16-core chip known as "The Full Monte".
¿Piensas que soy tonto o que? (You think I'm dumb or what?) (Score:2)
Re:"Montecito" (Score:2)
The naked truth, as we know it.
Though I think Monty Python would be cooler. Though, maybe a bit too constricting.
Re:"Montecito" (Score:3, Funny)
You forgot to mention the low power edition for portables: The "three core monte".
yeah, (Score:5, Interesting)
Maybe in the future they'll come up with some more advanced cache designs that can share some cache and improve performance. But until then, expect to see it in the next generation of value chips. (Overclocked dual-core Celerons? Nifty!)
Re:yeah, (Score:2, Insightful)
Non-news event (Score:4, Informative)
Re:Non-news event (Score:3, Informative)
Re:Non-news event (Score:3, Insightful)
Inside the dual core (Score:4, Funny)
coming this fall on Fox... (Score:5, Funny)
I can't wait to see what they do to his nonorthogonal register file.
Dual core - what's the point? (Score:2)
But Intel has already demonstrated what is surely a better solution - something like SMT, hyperthreading.
Wouldn't it be saner to build a chip with double the number of execution units and double the number of instruction fetch/decode units and a larger reorder buffer that would appear, say, as four logical processors to a system? Surely you
Re:Dual core - what's the point? (Score:2)
That's like the Alpha EV8. It costs way too much to design and it's questionable whether you could build it at all.
Pipeline Depth and Complexity in general (Score:2)
At some point you reach a limit where you can't use extra execution units, because you don't know the input values to an instruction: the previous instructions it depends upon are still in the pipelines of other execution units.
Dual core avoids that... plus if you validate o
Re:Dual core - what's the point? (Score:4, Interesting)
As for building a more intelligent core to take advantage of the extra transistors, that just might make sense - but it would also take hundreds of millions (or billions) of dollars in development, and the chip wouldn't appear for a good number of years (look at the Itanium). It's a lot easier and cheaper to slap two cores on the same die and call it done. Because Intel is scurrying to try and play catch-up to AMD in the high-end market, time-to-market is critical for them.
steve
Re:Dual core - what's the point? (Score:2)
One of the costliest things is a cache miss, and if one were able to share the caches between two cores it would greatly decrease the number of misses (no need to have everything in there twice).
Re:Dual core - what's the point? (Score:3, Informative)
P
Re:Dual core - what's the point? (Score:5, Informative)
Hyperthreading is simply a second context. It lets you run a second thread at the same time by using the unutilized capacity of existing functional units, and is largely useful only when Intel's branch prediction fails and the chip would otherwise be paying the ultimate penalty for its long, long, LONG pipeline.
In other words, HT is an ingenious method for making up for the fact that the pentium 4 is horribly inefficient.
It would be better to stick a whole bunch of simple cores on a single chip at a lower clock rate and have them work cooperatively, if only we used more multithreading. This is pretty much where intel is planning to go, with their multiple-core chips based on the Pentium-M. Or, so the rumors say.
AMD seems more promising (Score:3, Informative)
Intel will find things more challenging. Both cores will have to contend for the GTL bus, currently the Achilles' heel of their MP solutions, by communicating via an external northbridge.
Commodity hardware grows mature. (Score:5, Insightful)
Daul core processors are a natural evolution in the development of general purpose and even specialized computing devices. SMT was to be a boon for the EV8, but later found its way into the Pentium4. Multiple logical processors were just a first step.
It should be interesting to see just what AMD can do with both SMT and a daul core design.
It just had better run BSD. = )
Re:Commodity hardware grows mature. (Score:3, Insightful)
Daul core microprocessors are not...
Daul core processors are...
It should be interesting to see just what AMD can do with both SMT and a daul core design.
You keep using that word. I do not think it means what you think it means.
The down-side to this.... (Score:5, Informative)
The downside is that as the AMD chips are going to be backward-compatible with older boards, I imagine that the dual-core chip will still only have the single 128-bit memory controller.
While that will still give you twice as many available CPU iterations, that means that the two cores will be fighting for memory bandwidth. In the case of Intel's chips, that's business-as-usual: But for the Opterons, where each processor brings its own memory controller, it just doesn't feel right. : (
steve
Re:The down-side to this.... (Score:2)
So can it crash twice now ? (Score:5, Funny)
Ewww... (Score:2)
Sync? (Score:2)
what's the diff: dual core and hyperthreading? (Score:3, Interesting)
Is this true? Does Intel put a 3GHz label on 1.5GHz dual-core CPUs, or whatever this hyperthreading is? Sounds dual-core-ish to me...
It's funny how that 1.5GHz number shows up again in an Intel product. I remember when they could not build anything faster than 7xxMHz and then all of a sudden they had a "new technology" that got them 1.5GHz (2x 750MHz), and it was found out later that only PART of the CPU was running at 2x. This all happened when AMD beat Intel past the 1GHz barrier. Are they again playing "tricks" to get a big GHz label on their parts?
So any of you people up on this dual-core and hyperthreading thing and feel like explaining to the rest of us what's going on? TIA.
LoB
Re:what's the diff: dual core and hyperthreading? (Score:3, Insightful)
The gains definitely outweigh the losses, or they wouldn't do it. But the gains don't only come from CPU cost-per-core. There are lots of other factors, such as density, power efficiency, potential for core-to-core lockstep, etc.
I have no first-hand knowledge of AMD, but for Itanium, smaller process geometries do not increase yield
Re:what's the diff: dual core and hyperthreading? (Score:3, Informative)
Sorry, I live in a 64-bit world, to the point that I'm quite ignorant of X86 state of the art. I've been blindly (and wrongly) assuming a 64-bit context for this whole conversation.
Your posting reminded me that caches of only 512K still exist! Montecito has 24M between 2 cores. Also, re-reading your posts in the context
Day late, dollar short... (Score:3, Insightful)
While dual cores on a chip might be nice, it won't produce any serious performance increases.
The underlying problem with Intel and AMD's processors is that they are at the mercy of the architecture:
The ironic thing is that even though AMD and Intel are out-clocking mainframe processors by factors of 2 and 3, mainframes still get more work done simply because they aren't choked by a slow and overcrowded system bus.
Re:Day late, dollar short... (Score:4, Interesting)
Currently however that future is far off. It's simply much cheaper to centralize processing, so the bus will remain an issue for some time to come. For most situations this will be fine, for specialized situations where a single (fast) real time process is needed, or when IO is more important than CPU power...it sucks.
(listening to my integrated audio which takes about 7% of my processor, and I don't care a bit)
Re:Day late, dollar short... (Score:5, Informative)
No... on AMD chips the memory bus is dedicated. Intel chips have a very different system architecture (which does saturate at ~2 CPUs), but AMD gives each chip its own memory controller and memory - scales perfectly. (By the way, this isn't new ... big iron (e.g. Sparc) has been doing this for years).
Currently, the fastest FSB to date is 1033MHz - almost 1/3 of the max clock speed of the processor. Given that Intel's integer units operate at twice the clock speed, the fastest parts of the chip run about six times faster than memory.
That's why modern processors use pipelining (in x86, since 486's) and caches (since, uh, 8086s ?). FSB only comes into play in 1-2% of the memory accesses. But those memory accesses are pipelined, interleaved, with multiple outstanding requests issued by the out-of-order pipeline ... processor designers have been working around a slow bus for years, and the FSB is only the bottleneck in extreme, pathological cases.
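A rough sense of why a slow FSB hurts so little once caches are in the picture (a sketch with made-up but plausible cycle counts, not measurements of any real chip):

```python
# Average memory access time (AMAT) with a cache in front of a slow bus.
# Cycle counts below are illustrative, not measured figures.

hit_time = 3          # cycles to hit in an on-die cache
miss_penalty = 200    # cycles to go out over the FSB to DRAM

for miss_rate in (0.02, 0.05, 0.20):
    amat = hit_time + miss_rate * miss_penalty
    print(f"miss rate {miss_rate:4.0%}: average access = {amat:5.1f} cycles")

# At a 2% miss rate the average access is only ~7 cycles even though a miss
# costs 200: the bus dominates only when the cache stops doing its job.
```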
The monolithic, synchronous, central-processing-unit design of the architecture prohibits optimizations such as using memory controllers for block moves and having dedicated IO processors
Ever heard of DMA? A DMA controller does that memory transfer ... there are 2 DMA controllers with 8 channels on your current x86 PC. Heck, high-end PCI cards even have their own onboard DMA engines (it's called bus-mastering). I/O offload? You've obviously never written a device driver... modern drivers issue a few "start" instructions, then sleep; eventually the device completes the I/O and issues an interrupt to inform the CPU it's done. The last computer I had that stalled on disk I/O was running MS-DOS - nine years ago.
In all fairness, I thought exactly the same things four years ago. Then I learned about modern computer architecture. And in today's world (and, in fact, all PCs for the past ten years), your points are completely - and utterly - irrelevant.
Yield question (Score:3, Interesting)
E.g. if a single core has a yield (probability of being defect free) of 80%, then the dual core chips will have a yield of 0.8^2 = 64%. (Actually slightly lower, because whatever interconnect they have also has to be free of defects.) 64% will have two good cores, 4% will have two bad cores, and the remaining 32% will have one good core. The manufacturer would obviously like to make use of that 32% if they can.
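The arithmetic above, spelled out (a minimal sketch; the 80% per-core yield is the parent's hypothetical number, not anyone's actual process data):

```python
# Binomial arithmetic for a hypothetical 80% per-core yield.
per_core_yield = 0.80

both_good = per_core_yield ** 2                                # 0.64
both_bad = (1 - per_core_yield) ** 2                           # 0.04
exactly_one_good = 2 * per_core_yield * (1 - per_core_yield)   # 0.32

print(f"two good cores (sell as dual-core)      : {both_good:.0%}")
print(f"one good core  (sell as single-core?)   : {exactly_one_good:.0%}")
print(f"no good cores  (scrap)                  : {both_bad:.0%}")
```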
Re:Yield question (Score:4, Informative)
They very likely WILL disable the dud and sell them as single core CPUs. This is how the "value" brands (Celeron, ex-Duron, and now Sempron) are typically created -- when there's a defect in the processor cache (which is a very large area of the die, and thus more likely to have a defect), the faulty bank(s) are turned off via fusing, creating a CPU with a smaller cache.
This is all pretty standard yield management.
Also, your calculations are very close to correct; while the manufacturers closely guard their yield information, you're in the ballpark -- and it's interesting to note that according to my estimates Intel's Celeron volumes approximately mirror your computed single-core yield percentage... meaning it will likely be business as usual in our dual-core future.
BTW, if you're interested in computing yield values there's an excellent model to be had in one of the chapters in Hennessy and Patterson's _Computer Architecture: A Quantitative Approach_
Re:Yield question (Score:3, Funny)
Yes, it is possible, in most cases. (Although there are a few types of defects that would prohibit this, such as power shorts).
For example, hypothetically, Intel could sell a single core version of Montecito called the Half Monte and a dual core version called the Full Monte.
Re:New Computer (Score:3, Insightful)
Re:New Computer (Score:4, Funny)
Is that a pun?
Re:How is this different from a two processor syst (Score:5, Informative)
Re:Itanium? (somewhat off-topic) (Score:5, Informative)
Today, in contrast, there _doesn't_ appear to be a lull in demand for Itanium 2 machines, even though Montecito (Itanium 3) has been announced in a fair bit of detail. That's because for some applications (in HPC, high-end database work, certain EDA/CAD/CAE work, and ultra-high-reliability computing) Itanium 2 systems are basically unbeatable. They also run some OSes which are very important to some organizations, such as HP-UX and OpenVMS.
Long story short, the Itanium 1 was something of a flop, the Itanium 2 is really pretty decent, and everyone is expecting the Itanium 3 to offer pretty decent _price/performance_, in addition to best-bar-none performance when it is released next year.
It's nice but.. (Score:4, Insightful)
Now here are the problems:
32 bit (x86) performance sucks. All those apps you've spent years developing will need re-writing (a simple recompile is often out of the question).
HP (in collusion with Intel) killed perfectly good archs. in Alpha and PA-RISC in an effort to get people to migrate to IA-64. A few may have made the move but this has mostly served to push people towards the vastly cheaper x86. HP, and to a lesser extent Intel, should provide what their customers want, not what they think is best for them.
It still uses a shared bus architecture. There are diminishing returns as you add more processors.
Itanium requires massive caches to get the best from it. Cache = Silicon = Cost. It is clear that a large scale seeding exercise is still underway with Itanium systems being provided at or below cost. Looks like it will be a long time before there will be any return on the billions invested in Itanium.
Re:Itanium? (somewhat off-topic) (Score:2, Informative)
Re:You Insensitive Clod (Score:3, Funny)
Translated: Please mod me +5 funny.
-- n
Re:Yeah... (Score:2)
You know, if X-10 did get their asses sued off and all those popups stopped, you wouldn't be asking this question.
Re:Yeah... (Score:3, Informative)
Given that the power output of a single-core Prescott is 100 watts or more, a dual-core with separate caches will put out 200+ watts. Clock up the speed a bit more, and you'll be at about 300 watts.
I figure that's probably enough to boil a cup of coffee.
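For what it's worth, the arithmetic behind the joke (a sketch assuming a 250 ml cup and, generously, every one of those 300 watts going into the water):

```python
# How long a hypothetical 300 W "dual-core heater" would take to boil a cup
# of coffee, assuming all of the power goes into the water (it won't).
mass_g = 250            # roughly one cup of water
specific_heat = 4.186   # J / (g * degree C)
delta_t = 100 - 20      # degrees C, room temperature to boiling
power_w = 300

energy_j = mass_g * specific_heat * delta_t
seconds = energy_j / power_w
print(f"{energy_j/1000:.0f} kJ needed, about {seconds/60:.1f} minutes at {power_w} W")
```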