Intel's Big Chip 282
DeadBugs writes "News.com has an article about the size of the upcoming revision of the Itanium. The "McKinley" chip will be 464 square millimeters, which would make it one of the largest ever produced. Most of this is due to the 64-bit registers and 3MB of Level 3 cache. There is also a link to an article about "Chivano," an Itanium which will include concepts from the Alpha architecture."
Wow! (Score:2, Insightful)
Joe
It's how you use it (Score:4, Interesting)
How? The 16K board cache was four-way set associative. This allowed the CPU to determine in one clock cycle whether the next instruction was in cache. The 64K cache design could not always do this, and thus was often slower. Why not make the 64K cache 4-way set associative? Cost. The overhead in silicon and motherboard space made this impossible at the time.
Re:It's how you use it (Score:2, Informative)
If you actually want to know what you're talking about, I'd suggest reading "Computer Organization and Design : The Hardware/Software Interface" by Patterson and Hennessy.
Re:It's how you use it (Score:2)
For those who don't know, n-way set associativity means that a cache can retain n lines that map to the same cache index, so a 4-way cache can hold 4 lines that compete for the same cache position.
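A toy sketch of that mapping (the line size, set count, and LRU replacement below are made up for illustration, not any real cache's parameters):

```python
# Toy model of an n-way set-associative cache lookup.
LINE_SIZE = 32          # bytes per cache line (illustrative)
NUM_SETS = 128          # number of sets (illustrative)
WAYS = 4                # associativity: lines kept per set

def set_index(addr):
    """Addresses that share a set index compete for the same WAYS slots."""
    return (addr // LINE_SIZE) % NUM_SETS

# A cache is a list of sets; each set holds up to WAYS line tags.
cache = [[] for _ in range(NUM_SETS)]

def access(addr):
    """Return True on hit; on miss, insert with LRU eviction."""
    tag = addr // (LINE_SIZE * NUM_SETS)
    s = cache[set_index(addr)]
    if tag in s:
        s.remove(tag)      # move tag to most-recent position
        s.append(tag)
        return True
    if len(s) >= WAYS:
        s.pop(0)           # evict the least recently used line
    s.append(tag)
    return False

# Four addresses that map to the same set can all coexist in a 4-way cache:
conflicting = [i * LINE_SIZE * NUM_SETS for i in range(4)]
for a in conflicting:
    access(a)                                   # cold misses
print(all(access(a) for a in conflicting))      # → True: all four now hit
```

In a direct-mapped (1-way) cache those same four addresses would evict each other on every access.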
Re:I have the book (Score:2)
Re:It's how you use it (Score:2)
But as far as overall performance, yes I'd be willing to bet that the 4-way cache beat the larger 1-way.
Re:Wow! (Score:2, Funny)
Re:Wow! (Score:2, Interesting)
Re:Wow! (Score:2)
What Intel (and any sane manufacturer who drops a significant amount of memory on die) will do is add redundancy to regular structures. The L3 cache _will_ have redundancy, L2 most likely. L1 is more of a toss-up.
Redundancy is currently a standard engineering practice (actually, most designs use IP modules bought from specialized memory vendors who can integrate this kind of functionality for you).
Re:Wow! (Score:2)
They eat up more room on the hard drive, definitely. But in terms of memory in usage they are still extremely lightweight (as they should be).
Re:Wow! (Score:2)
Re:Wow! (Score:2, Informative)
NOT not wow! (Score:3, Informative)
But there are segments of today's market that are willing to pay almost any price for a high-performance chip. These people will fork over $1000 without blinking an eye if they think it will speed up their business.
Look at any commercial server available today. They're priced around $15000 - $20000. If chip prices go to $1000 instead of the $400 they're probably paying, that makes a difference of $2400, or about 12%, in a 4-way box. Even if chip prices went to $2000, it's a $6400 difference, or about 32%. If your processors are your bottleneck, then you've gained a lot of improvement for not-very-much delta in money.
Sure, a $2000 chip is out of reach for most home users today, but there is always a market for just about anything faster they can produce.
And there are enough crazed overclockers [tomshardware.com] out there that'll spend whatever it takes to raise their frame rates on Quake III. It'll sell. It'll also drive the market to a new standard, which also sells chips.
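The parent's arithmetic as a quick sanity check (the $20k server, $400 baseline chip, and 4 sockets are the poster's assumptions):

```python
# Back-of-envelope: how much do CPU prices move the cost of a 4-way server?
SERVER_PRICE = 20_000    # poster's assumed server price
BASELINE_CHIP = 400      # poster's assumed current per-chip price
SOCKETS = 4

def delta(new_chip_price):
    """Extra cost, and its share of the server price, for pricier chips."""
    extra = (new_chip_price - BASELINE_CHIP) * SOCKETS
    return extra, extra / SERVER_PRICE

print(delta(1000))   # → (2400, 0.12): $2400 extra, 12% of a $20k box
print(delta(2000))   # → (6400, 0.32): $6400 extra, 32%
```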
Re:NOT not wow! (Score:2)
Heh. Maybe if by "server" you mean high-end Windows workstation. A midrange Sun server like a sunfire 4800 is around $350k list, say $300k. And that's just a test machine where I work - the production stuff runs on a combination of 4500s and 6500s, soon 4800s/6800s. 6500s are over a million IIRC.
But anyway, your point is still right on, and even more valid with datacenter-class servers, disk arrays, and so on. Paying $2k for the processor is nothing.
-Kevin
Re:NOT not wow! (Score:2)
Yes, definitely. You can even buy a low end v880 for 30k. I was teasing a bit. We still have 220s, 250s and 420s and 450s that are more reasonably priced boxes, though I personally don't like the 2xx machines. The bigger machines are used for applications where memory and/or processing requirements are really high.
We'll probably be getting more 880s though because they are pretty expandable and hold a lot of memory. They go up to 8 processors / 32 GB.
These Suns, while not the fastest in town (see IBM Power4), are a proven 64 bit architecture that has been around and doing real work longer than the Itanium hype. It bugs me to see so much attention given to the Intel machines. I understand the desire for good commodity machines, but these things are going to be pricey anyway. And I could give a rip about running Windows on servers.
-Kevin
big chip... big fan (Score:3, Insightful)
Re:big chip... big fan (Score:3, Funny)
We're going to start up a business modifying Sears deep freezers, providing a means of placing a PC directly into it.
Although your entire computer system will be the size of a bathtub and double your electric bill, you WILL be able to use PC's based on these new CPU's.
We're also going to figure out how to work rain forest defoliation into the process.
Re:big chip... big fan (Score:4, Troll)
Re:big chip... big fan (Score:3, Informative)
Intel can now demo a 5GHz chip using the .13 micron process that can run at room temperature.
Big deal. It has 12 instructions, is ~2mm^2, and consumes 267mW. This looks more like research than something that you would use for real work.
Re:big chip... big fan (Score:5, Informative)
If the die heats uniformly, then yes, this is true. But that's not always the case. The latest P3s are so low power that you just need a heatsink or fan-sink, depending on frequency. The first P4s had a heat spreader that sat on the back of the die and connected to the fansink.
Plus, heat in a die travels up/down more easily than left/right, because the thermal conductivity of the heatsink is much better than that of silicon, and the heatsink is closer than the edge of the die. If you've got local hot spots on the die, a bigger die doesn't buy you anything. The thermal properties and requirements of the heatsink are driven more by local heat density than by overall heat.
Tom Pabst had a good discussion about this a while ago, but I can't remember the article's URL.
Re:big chip... big fan (Score:2)
That doesn't mean they necessarily run cooler. My machine of choice (a laptop) has a SpeedStep PIII which runs blazing hot at peak CPU levels. True, this is mostly while playing Unreal Tournament ;) but the chip gets hot enough that I literally can't keep the machine on my lap without some serious pain.
Re:big chip... big fan (Score:2, Insightful)
Also, there is going to be horizontal thermal dissipation regardless of the thermal resistance of the materials involved. If I have a die that generates 1W of heat in a 200 mm^2 area, even if it is not generated uniformly, and I then increase the area to 300 mm^2 with the same total heat output, that extra area IS going to be heated, probably substantially, regardless of whether it is generating its own heat. That means that the overall average temperature has to go down.
The points you make are however perfectly valid and relevant, but the point of the original poster was supposing that it would be larger surface area of the die that would make it harder to cool, not any new thermal hotspots. I was simply saying that there aren't many ways that increasing the surface area can make something harder to cool.
Re:big chip... big fan (Score:2)
The logic that causes the hotspot does not necessarily scale with the size of the die. If you slap a 3MB cache onto the same die, you have the same heat problem. Now add to that the die shrink from the new process and it is even worse.
I was simply saying that there aren't many ways that increasing the surface area can make something harder to cool.
Well, we're picking nits here, but you originally asked why the die wasn't easier to cool because it is bigger. But judging by your second reply, your first question wasn't worded as clearly.
Re:big chip... big fan (Score:2)
Sure, but that still says nothing about the hotspots.
When you increase the area from 200mm^2 to 300mm^2, you aren't increasing the size of the old core, you're just tacking silicon onto it. The same core is still sitting there, unchanged. It will still produce 1W over a 200mm^2 area; it's just surrounded by more silicon. The average temp is colder because the extra silicon isn't generating heat, and thus the average is lowered. But the hotspots haven't changed their local temperature at all (it may in fact be worse due to worse heat propagation in silicon vs air, but that effect is probably negligible).
And it gets worse. The article didn't quote relative power figures for the two chips, but depending on how they did it, that new block of silicon may be -hurting-, as it's producing its own heat. Large memory arrays are among the worst heat producers. Being smart about how you access the array may save you from catastrophe, but it is definitely going to increase the total power output.
So, basically, just having more silicon doesn't help, and having more silicon that contains high-power memory arrays hurts.
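The argument above can be put in numbers using the thread's own hypothetical 1 W / 200 mm^2 die (nothing here is measured data):

```python
# Average power density falls when you pad a die with idle silicon,
# but the hot core's local density is untouched.
CORE_POWER_W = 1.0       # the thread's hypothetical core power
CORE_AREA_MM2 = 200.0    # the thread's hypothetical core area

def avg_density(total_area_mm2, extra_power_w=0.0):
    """Die-wide average power density in W/mm^2."""
    return (CORE_POWER_W + extra_power_w) / total_area_mm2

print(avg_density(200))   # → 0.005 W/mm^2 (original die)
print(avg_density(300))   # lower average: the padding generates nothing
# But the core still dissipates its 1 W over its own 200 mm^2:
print(CORE_POWER_W / CORE_AREA_MM2)   # → 0.005: local hotspot density unchanged
```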
Re:big chip... big fan (Score:2, Insightful)
Die size war? (Score:5, Funny)
Nice review. (Score:2, Informative)
Straight and to the point. Nice.
Intel's new marketing strategy (Score:2, Funny)
Re:Intel's new marketing strategy (Score:2)
Our new CPU is so big it will CRUSH the competition
WARNING: CPU will not dispense product.
$300 to produce? (Score:2, Funny)
hmmm... sorry Intel, I'll stick to AMD till I hit the lotto, or have some other good reason to spend money like it was going out of style.
Re:$300 to produce? (Score:2)
Re:$300 to produce? (Score:2, Funny)
like if you wanted to play Q3 in a VMWare session?
(or play D00M 3 at all)
Re:$300 to produce? (Score:2)
Going from $300 to $3000 per processor at retail seems a bit extreme if you ask me. And don't mod this as a flame, I actually LIKE Intel's work, but it's a joke how much they charge for their processors compared to how much it costs to make them.
Re:$300 to produce? (Score:2)
Re:$300 to produce? (Score:2)
Re:$300 to produce? (Score:2)
L3 on die? (Score:2, Interesting)
Either Intel has actually put research into this and discovered that it's a good tradeoff performance-wise, or they've still got marketing-driven engineering and someone said "wow! over 3 MB of on-chip cache!"
Any guess on the wattage? Has Intel broken 100 Watts on their upward march of hot chips?
Of course. (Score:2)
Having the L3 on chip makes the same amount of sense as having the L2 on chip -- which is to say, lots. First, you can run the L3 at core clock speeds. No external bus is ever going to run as fast as pure silicon. This means that the latency is going to be much lower than for an off-chip L3. This means the average memory access time will be lower, which means better performance. Second, the bandwidth can easily be higher, since you don't have to pay with pins for extra data lines and, again, you're running at core speeds.
For those programs whose working sets fit into this amount of memory, the on-chip L3 is going to blow the doors off an otherwise equal off-chip L3.
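A back-of-envelope average-memory-access-time model makes the point; all latencies and hit rates below are invented round numbers, not McKinley figures:

```python
# AMAT sketch: why a low-latency on-chip L3 helps.
def amat(levels, memory_ns):
    """levels = [(hit_rate, latency_ns), ...] from L1 outward."""
    t, miss = 0.0, 1.0
    for hit_rate, latency in levels:
        t += miss * latency        # every access reaching this level pays it
        miss *= (1.0 - hit_rate)   # fraction falling through to the next level
    return t + miss * memory_ns

l1_l2 = [(0.95, 1.0), (0.80, 5.0)]      # illustrative hit rates / latencies
on_chip_l3  = l1_l2 + [(0.70, 15.0)]    # fast, core-clocked L3
off_chip_l3 = l1_l2 + [(0.70, 40.0)]    # slower external SRAM

print(amat(off_chip_l3, 150.0))   # → 2.1 ns average
print(amat(on_chip_l3, 150.0))    # → 1.85 ns: cheaper L3 hits pull it down
```

The absolute numbers are fiction, but the shape of the result is the parent's argument: lower L3 latency lowers the average even though only ~1% of accesses ever reach the L3.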
Note, this is the *DIE* size (Score:4, Informative)
Anyone have any exact numbers for the chips? I didn't get a ruler out to measure it.
Amd competition. more numbers. (Score:5, Informative)
os.opinion article [osopinion.com]
news.com [com.com]
by the way, the amd hammer is expected to be 105 mm^2 [theinquirer.org] on 130 nanometer (.13).
the current amd MP (palomino) has a die size of 129 mm^2 [electronicstimes.com] on
the original P4 has a die size of 217 mm^2 and is now at 150 mm^2 (with a bigger cache).
Note that the original article does mention the 424 size is on
AMD Athlon (Score:2, Informative)
184 square mm die size (prior to Athlon 800)
102 square mm die size (Athlon 800) ... source [earthweb.com]
Note that this article also states that: Intel has also incorporated a substantial amount of redundant circuitry in the processor, Krewell said. Chipmakers often use redundant circuitry to boost yields. Sometimes, circuits come out scrambled on a finished chip. If the manufacturer has put in two sets of the same circuits, the chip will function properly because it can use the second set.
You could have a dual Pentium machine and not even know it :)
I guess this redundancy is why the chip has gone up 10% in size in the last couple of months ... (see this article) [com.com] which quotes:
One of the reasons for McKinley's bigger price tag, Krewell said, is that it will cover nearly 440 square millimeters in area--or more than twice that of the Pentium 4.
Re:What effects die size (Score:2)
Anyway, with 3MB of onboard cache, I doubt the I/O pads are to blame for the large die size of this CPU.
Die Photo and Size (Score:5, Informative)
"Slide 22 of the presentation features a die photo of McKinley. The large 3 MB L3 cache is notable, and according to the presentation, it consumes 20% less area than traditional designs and is overall 85% efficient (~70% for traditional designs)."
And here's a story with the photo [theinquirer.net] from that same article (no need to download 2.5 meg pdf...)
-Russ
A few minor points (Score:4, Insightful)
Thanks! Where would we be without clarifications? (Score:3, Informative)
What you meant to say (and what the article said) is that 464mm^2 is the size of the actual processor die. This includes the CPU core and the caches. The core is a relatively small portion of the processor die, and noting that there is 3MB of L3, the total cache may amount to 2/3 of the die size. The square on top of the Athlon is also the entire processor die: CPU, caches and all.
Also, L3 cache can never perform "equivalently" to L2 or L1 cache unless it runs at core speed. And I can tell you now, it doesn't -- or they wouldn't need L1 and L2. The L3 cache probably runs at something like 10 access cycles or more. It's not merely difficult to engineer 10 access cycles into a pipeline -- it's impossible. Which is precisely why it's not L1.
I'm quite sure the engineers at Intel have done their modeling homework and determined that however fast the L4 memory may be, the L3 will improve performance by that much more.
Remember, this processor is not meant to go on your desktop or any other Joe Sixpack's. It is meant to sit inside the workstations on the desks of engineers and in the racks of high-bandwidth servers. These platforms are specifically designed to run hundreds of tasks simultaneously and handle staggeringly high memory bandwidths. It has nothing to do with "complicated instructions." The L3 exists for swapping large pages of memory in large bursts out of a significantly larger L4 memory (think on the order of 100's of GB), backed by L5 memory (local drives and SANs) with an incomprehensibly large virtual memory space.
This has absolutely nothing to do with mainstream. I'm quite certain an OS already exists that will run on the platform. An IA-64 Linux is well under way (try http://www.linuxia64.org [linuxia64.org]) and you can bet that Compaq, HP, Dell, and Intel have put a total of more than 100x your lifetime earnings into developing software for that platform.
Intel could not care less whether you or 99.9% of the /. readers out there ever buy an IA-64. They don't give a crap about your market segment, but I'm sure if you want to drop $10K+ on a IA64 workstation, be my guest. Your choices are limited. Either choose IA64 or UltraSparc. Or maybe if AMD ever gets a design win, you might get a chance to buy a Hammer box.
Cache Design 101. (Score:3, Informative)
A fundamental rule of building caches is that a larger cache is slower and dissipates more energy per lookup than a smaller cache. This is why multilevel caching exists in the first place (otherwise we'd just have a huge L1 cache - and before you mention it, due to architectural sneakiness, HP's giant L1 cache isn't really an L1 cache).
So, you can't just spend the L3 area on making a bigger L2. You'd end up with a slower, hotter L2, which could easily _degrade_ performance.
As long as the L3 cache is faster to access than main memory, it'll be useful for some things. Whether it'll be useful for *most* things is another issue. This depends on the "working set" of the applications you're running (how much memory they repeatedly access). I guess Intel's banking on working sets being larger than most caches.
Another possibility is that they're testing the cache architecture for use in future SMT or CMP designs (both of which would have multiple independent execution contexts running). If you're running multiple *independent* contexts, the working set grows with the number of contexts.
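A toy LRU simulation of that working-set effect (the cache capacity and access patterns below are arbitrary, chosen only so one workload fits and the doubled one doesn't):

```python
# With a fixed-size cache, two independent contexts roughly double the
# combined working set; once it no longer fits, the hit rate collapses.
from collections import OrderedDict

def hit_rate(stream, capacity=64):
    """Fully-associative LRU cache of `capacity` lines; return hit fraction."""
    cache, hits = OrderedDict(), 0
    for line in stream:
        if line in cache:
            hits += 1
            cache.move_to_end(line)        # refresh LRU position
        elif len(cache) >= capacity:
            cache.popitem(last=False)      # evict least recently used
        cache[line] = True
    return hits / len(stream)

# One context looping over 48 lines fits in a 64-line cache...
one = list(range(48)) * 10
# ...two independent contexts interleaved touch 96 distinct lines: thrash.
two = [x for pair in zip(one, [line + 1000 for line in one]) for x in pair]

print(hit_rate(one))   # → 0.9: only the first pass misses
print(hit_rate(two))   # → 0.0: cyclic 96-line pattern thrashes a 64-line LRU
```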
Re:A few minor points (Score:2)
Itanium at 1.6 GHz in 2003 ? (Score:3, Insightful)
Madison is expected to come out in 2003 and run between 1.2GHz and 1.6GHz, according to sources.
I wonder how Intel expects people to adopt Itanium-based processors considering
that x86 processors will be running at 4GHz in 2003.
Re:Itanium at 1.6 GHz in 2003 ? (Score:2)
Re:Itanium at 1.6 GHz in 2003 ? (Score:2, Informative)
Do you have any other figures to substantiate your claim ?
Re:Itanium at 1.6 GHz in 2003 ? (Score:4, Interesting)
As people have pointed out, the 800MHz Itanium chips - the fastest you can buy - have integer performance slightly less than an 800MHz PIII.
From the article: "Applications will be about one and a half to two times faster than what you get on a (current) Itanium"
I'm assuming this is WITH the huge L3 cache in pilot systems if they are claimed actual application performance.
Let's compare this to the REAL competition: IBMs Power4.
IBM Power4 1.3GHz - shipping for a while now:
SPECint2000 = 814 SPECint_base2000 = 790
SPECfp2000 = 1169 SPECfp_base2000 = 1098
Even the best Itanium reported int numbers are:
SPECint2000 = 365 SPECint_base2000 = 358
(Same box) SPECfp2000 = 610 SPECfp_base2000 = 526
Even if the McKinley (which doesn't ship for 6 months or so) produces double the Itanium numbers (which it won't) it'll still lag the currently shipping Power4 chips.
And with a clock speed increase of only 60% over the next three years, IBM can stay ahead simply by getting the 1.8GHz models out the door in the next 24 months. (That's assuming that the 1.6GHz McKinleys will even outperform the current Power4s.)
It looks like Intel has increased clock speed by 25%, added a bunch of L3 cache, and is claiming a 150%-200% gain. I think Intel has a (big) dog on their hands and they're trying to dress it up. The P4's performance will probably continue to outrun their flagship "server" chip, and because of AMD, Intel can't afford to strangle the P4's performance as they might have been able to in the past.
Intel said, "Wait for Merced." - which we did for years. Then they said, "Well, the Itanium sucks, but wait for McKinley!"
=tkk
Re:Itanium at 1.6 GHz in 2003 ? (Score:3, Interesting)
That's true.
150-200% is a modest prediction for performance.
This was the prediction of an Intel representative. I can't imagine he was TOO conservative... Then again it's academic since no one is actually running software on an Itanium - who can compare their current results with future ones?
But seriously - the faster clock speed and cache (since int operations are much more sensitive to cache changes) would account for a nice bump in performance. I'd expect nearly a 50% increase in speed simply from the changes I noted. Even if it is twice as fast, the new chip architecture is only responsible for a small part of that increase.
My point is that HP decided as early as 1996 that the Merced project would never surpass PA-RISC and essentially took their marbles and went home. McKinley was an attempt to get something out of the project after it was clearly headed for failure. Intel should have known they had a dog on their hands, and yet they flogged the FUD for years; after billions of dollars they have yet to deploy a compelling technology.
You should also note in your SPEC marks that there's accusations that IBM "cheated" with their submissions.
Thank goodness Intel has never been accused of anything so horrid!
I'm not sure on the details on it, but I was reading parts of it on www.realworldtech.com the other day.
Well if it's on the Internet it MUST be true...
Let me get this straight - because you "heard something" you can't back up I should note that IBM's officially submitted Spec results are faked? How do you figure?
=tkk
Re:Itanium at 1.6 GHz in 2003 ? (Score:2)
I think SPEC needs to release a new benchmark set every 6 months, with about 50 really large programs in it each time. Ok, that's a little silly, but otherwise "gaming" the SPECs is essentially inevitable.
Who cares about GHz... (Score:5, Insightful)
Intel x86 is restricted to 48-bit addressing (with segment registers), and practically 64GB with modern OSes. (http://linux-mm.org/)
If I want more than 64GB of addressable physical memory (which I do for some apps), then who cares if you can give me a 32-bit x86 running at 900GHz, it's not going to do diddly squat for me, since _going over the PCI bus_ for swap is going to kill me vs a 1.6GHz 64-bit processor. And since you need to go over the PCI bus just to get to a pseudo-disk stuffed with RAM, that solution is still bogus.
I see your point that this isn't what Joe Blow's gonna put on his desk. But the improved address space is definitely a big win, and that's assuming that they can't ramp up the clock speed in a hurry.
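The address-space arithmetic behind those figures (the 48-bit and 64GB claims come from the post above; the rest is just powers of two):

```python
# How far each addressing scheme reaches, expressed in GB.
GB = 2**30

print(2**36 // GB)   # → 64: 36 physical address bits (x86 PAE) = 64 GB
print(2**48 // GB)   # → 262144 GB: the 48-bit segment:offset space
print(2**64 // GB)   # → 17179869184 GB for a full flat 64-bit space
```

So even granting x86 its segmented 48-bit reach, a flat 64-bit space is 65536 times larger, and none of it has to live behind the PCI bus.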
Re:Itanium at 1.6 GHz in 2003 ? (Score:2)
different markets. There's nothing on any recent Intel roadmaps that
will have Itanic replacing x86 on the desktop. Conversely, 4ghz of
Hot P4 Action is meaningless to an application that requires more than
4gb of process address space.
A 1.6ghz McKinley ought to be a very competitive performer, especially
on floating-point intensive code.
Peace,
(jfb)
Re:Itanium at 1.6 GHz in 2003 ? (Score:3, Informative)
> will have Itanic replacing x86 on the desktop.
Which is really going to hurt them. The latest version of Everquest recommends 512MB of RAM. High-end gamers are going to need 64-bit addressing in a few years. AMD will be able to supply cheap 64-bit chips, Intel will be playing catch-up at best.
Re:Itanium at 1.6 GHz in 2003 ? (Score:2)
mores!
Yeesh.
(jfb)
Re:Itanium at 1.6 GHz in 2003 ? (Score:3, Funny)
I'm sorry, but I just got the mental image of the geek pr0n site that would use this tagline!
64 bit regs is new? (Score:4, Interesting)
My calculator's [google.com] processor has 64 bit registers. You think I'm trolling? Check it out for yourself:
google search [google.com]
There are a lot more (and more powerful) procs out there, but this one just seems more appropriate for intel bashing
Re:64 bit regs is new? (Score:2)
Re:64 bit regs is new? (Score:2)
Re:64 bit regs is new? (Score:3, Informative)
The Saturn processor is a proprietary HP chip used in many of its calculators. It's generally considered a 4-bit chip (since this is the internal data bus size), but it has four 64-bit registers [zoy.org]. I think the coolest part of the chip is that each instruction can operate on various portions of these registers -- for example, only the upper nibble, or only the lowest 4 nibbles. Since this is a calculator, math is generally done in BCD format. Externally, the chip connects using an 8-bit data bus. The address bus width (and therefore the PC, too) is 20 bits wide, and each address refers to a nibble of data. Maximum addressable memory = 1 meganibble = 512KB. Most of the calculator firmware (such as calculating the sine of a number or matrix manipulation) is interpreted RPL, to allow code reuse (to save time, and to ensure bug-free implementations).
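A rough sketch of the nibble-field idea in Python (the field helpers below are my own simplification for illustration, not HP's actual Saturn instruction encodings):

```python
# A 64-bit register viewed as 16 nibbles, with operations that touch
# only a chosen nibble range - the Saturn-style "field" concept.
def get_field(reg, lo, hi):
    """Extract nibbles lo..hi (inclusive; nibble 0 = least significant)."""
    width = (hi - lo + 1) * 4
    return (reg >> (lo * 4)) & ((1 << width) - 1)

def set_field(reg, lo, hi, value):
    """Write nibbles lo..hi without disturbing the rest of the register."""
    width = (hi - lo + 1) * 4
    mask = ((1 << width) - 1) << (lo * 4)
    return (reg & ~mask) | ((value << (lo * 4)) & mask)

reg = 0x0123456789ABCDEF            # a 64-bit register, 16 nibbles
print(hex(get_field(reg, 0, 3)))    # → 0xcdef: the lowest 4 nibbles only
print(hex(get_field(reg, 15, 15)))  # → 0x0: just the top nibble
reg = set_field(reg, 0, 3, 0x1234)
print(hex(reg))                     # → 0x123456789ab1234
```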
HP did a great job with this calculator, including releasing internal documentation and development tools. More info here [hpcalc.org], or use google.
It's a shame that HP shut down their calculator division. [slashdot.org]
It's the whole retro thing... (Score:2, Funny)
or maybe they're taking the term "big iron" a little too seriously..
Coffee warmer built-in! (Score:3, Funny)
Re:Coffee warmer built-in! (Score:3, Funny)
Now, someone needs to figure out how to mount it on the CD/DVD tray, so the cup-holder will be heated.
Re:Coffee warmer built-in! (Score:2)
For now, maybe. According to an article [theinquirer.net] linked in an above post, the chips will be cooling down
And Intel claims McKinleys in the future will run at
I don't drink coffee, but I was kinda hoping to have something to keep my feet warm.
That's Almost 3 bits per millimetre! (Score:3, Funny)
64bits/21.54mm=2.97 bits/mm
They've GOT to start using smaller wavelengths!
Nothing new here - take a look at the hp-pa 8800 (Score:5, Interesting)
It has a 3Mbyte L1 cache, a 32Mbyte L2 cache, and
a transistor count of 300 million.
To quote:
"The HP PA-8800 L1 cache is probably the biggest L1 that ever existed so far with separate 750 KBytes of data and instruction cache for each core. This results in no less of 4 blocks of ¾ MB density each for a total of an unprecedented 3 MB L1 cache, physically twice as much as the combined L1+L2 on IBM's Power4. Accordingly, the transistor count of the HP-PA8800 is with 300 Million transistors almost twice as high as the 170 Million transistors of the IBM Power4 and results in a die size of 23.6x15.5 mm2 or 361 mm2. The L2 cache of the PA-8800 is off-chip and consists of four 72 Mbit "1 Transistor SRAM" chips developed by Enhanced Memory Systems.
http://www.cpus.hp.com/technical_references/PA-
has a roadmap of the hp-pa and Itanium chips so
really there is nothing new or exciting to report
that hasn't already been said 9 months ago.
Re:Nothing new here - take a look at the hp-pa 880 (Score:3, Informative)
Re:Nothing new here - take a look at the hp-pa 880 (Score:2, Informative)
Yes, but is Itanium going anywhere? (Score:2)
sPh
Re:Yes, but is Itanium going anywhere? (Score:2)
Technically, the Itanium architecture is a great idea, but with no real software available for the CPU (do we really have a native-register Linux distribution for this CPU?) the processor is not going to be very popular.
Re:Yes, but is Itanium going anywhere? (Score:2)
Re:It emulates the 32 bit instruction set (Score:2)
AMD's Response (Score:2)
Maybe they could offer some sort of conversion system, so that consumers can easily convert between centimeters and inches, and understand that AMD's new 1.5"+ chips perform about the same as Intel's 20mm McKinleys...
So... (Score:2)
What I've always wondered ... (Score:2, Interesting)
I've never been able to figure that out.
Re:What I've always wondered ... (Score:2)
Second, dumbness. VLIW probably isn't a good idea for general-purpose microprocessors anyway, and while EPIC tries to address some VLIW shortcomings, it makes for a pretty complex architecture, which negates some of VLIW's proposed benefits. The picture gets still worse if you throw in IA32 compatibility.
Third, they own the design, but once again the best engineers left the company. No self-respecting engineer wants to work for Intel; they have a long history of abusing employees and imposing dumb management decisions on technicians, in their branches all over the world.
So the only sane architectures left with a future on the market are PowerPC and UltraSPARC. Sad but true.
Product names... (Score:2)
(The processor, not her little brother.)
Importance of 64-bit architectures (Score:2)
For most people, 64-bit arithmetic isn't critical - most applications don't deal with ints larger than a billion, though those of us crypto people who do lots of bignum math are happy to get a 4x speedup. Otherwise, the quality of floating point implementations is likely to be more important. So it would be possible to get by with 32-bit arithmetic and 64-bit addresses, like we did with the Motorola 68000's 16-bit ints and 32-bit registers and addresses. But that was also somewhat tacky, and led to *lots* of bugs in code that assumed ints and pointers were the same size - though perhaps we've evolved enough past K&R C that newer software won't make that mistake as often.
A real problem this time around is that the C language and its relatives really do like 32-bit integers, and many of the Unix system calls also assume 32 bits. If you make the native int/pointer sizes 64 bits, there's a lot of stuff that will probably break. What kind of experience have people had running code on DEC Alphas and other real-64-bit chips?
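One quick way to see the hazard from Python: ctypes reports the platform's C type sizes, and on an LP64 Unix (Linux/x86-64, Alpha) int and pointer widths no longer match. The 32/64 split described in the comments assumes such a platform; 32-bit builds and Win64 (LLP64) differ:

```python
# The 68000-era bug class revisited: code assuming sizeof(int) == sizeof(ptr).
import ctypes

int_bits = ctypes.sizeof(ctypes.c_int) * 8
long_bits = ctypes.sizeof(ctypes.c_long) * 8
ptr_bits = ctypes.sizeof(ctypes.c_void_p) * 8

print(int_bits, long_bits, ptr_bits)
# On a 32-bit x86 build all three are 32; on an LP64 Unix they are 32/64/64,
# so C code that stuffs a pointer into an int silently truncates it there.
```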
What about the Alpha? (Score:2)
Tell you what. Since all the apps need to be fixed (or at least recompiled) to work on 64-bit processors anyhow, why not just go the route of porting everything to the Alpha? We could use this to finally get the hell away from Intel's terrible chipset.
And for all of you that think the Alpha will be dying soon, there are plenty of companies other than Compaq with Alpha products that are far better quality than Intel, and will likely be cheaper as well.
http://www.microway.com/products/ws/alpha_21164.h
Re:What about the Alpha? (Score:2)
Re:What about the Alpha? (Score:2)
Who would you like to be your trustworthy source? Someone from Intel? Someone from Alpha? Someone from AMD? Good luck getting a trustworthy estimate of processor performance from anyone.
As far as spreading marketing fud, I haven't seen any marketing on the Itanium, so I really wouldn't know what to spread.
As far as Itanium being better than Alpha, nobody knows what alpha could have done, and Itanium is a processor of the future, a technology that is being hammered out. You have to give Intel credit for trying to build something as advanced and ambitious as the Itanium.
eetimes coverage (Score:2)
The article also talks about other intel innovations disclosed at the International Solid-State Circuits Conference
Re:Large? (Score:2)
Re:Large? (Score:2, Insightful)
Large? HUMONGOUS! Is Intel daft? (Score:2)
You're absolutely right. On top of that, the yield is going to be ridiculously low. See, a wafer has defects. To get a good approximation, imagine a 6 or 8-inch target on which you shoot darts. The best wafer processes give you about half a dozen defects per wafer, and boy are these wafers expensive. Each time you have a defect, the chip that is engraved on that spot will be faulty and be rejected.
You can easily see that for a given defect density, the same wafer will have approximately the same number of bad chips (even if you split hairs with the probability of two defects hitting the same chip). With a small die, you can easily squeeze more good chips around defect spots. One more reason why a small die size is key to yield.
So this chip is going to cost a freaking fortune to manufacture, especially with the bleeding edge process they are boasting.
But wait, it does not stop here. 22 x 22 mm chips, huh? Assume that the clock tree (i.e., the tree-like circuit that distributes the clock signal in the chip) has a longest path of 10 mm. That's already a heck of a skew on the signal. And you can easily increase that longest-path estimate by 30-50%, because signals can't propagate in straight lines; they have to be routed along structures. This alone guarantees the clock speed will never go as high as competing chips' frequencies.
This is a sheer waste of engineering resources. For a processor, such a size is just not practical.
Conclusion: This thing is a demonstrator. It will never fly. It's not meant to. And even for a demonstrator, it's too bulky.
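The dartboard argument above is roughly the classic Poisson yield model, Y ≈ exp(-A·D). The defect density below is a made-up round number for illustration, not any fab's real figure:

```python
# Poisson yield: fraction of defect-free dice at area A and defect density D.
import math

DEFECTS_PER_CM2 = 0.5    # assumed, illustrative defect density

def yield_fraction(die_area_mm2, d0=DEFECTS_PER_CM2):
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * d0)

print(round(yield_fraction(100), 2))   # → 0.61 for a 100 mm^2 die
print(round(yield_fraction(464), 2))   # → 0.1 for a McKinley-sized die
# Same wafer, same defects: the big die throws away several times as many
# parts, which is why the on-die redundancy mentioned elsewhere matters.
```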
Re:largest ever produced? (Score:2)
Re:largest ever produced? (Score:2)
Re:largest ever produced? (Score:2)
Re:Humm... aren't they a bit late? (Score:2, Insightful)
Re:"JESUS, that's big" (Score:2)
Re:Less Logic, More Cache? (Score:3, Interesting)
Nothing Moore (Score:2, Funny)
Through an informative article about a truly massive core,
While I nodded, the newsfeed was slashdotted, suddenly there came a tapping,
As if FedEx gently rapping, rapping at my chamber door,
"Prob'ly FedEx," I muttered, "with boxes of reminders;
reminders of the law of Gordon Moore."
Re:Not reccomended for use at (Score:2)
Re:Not reccomended for use at (Score:2)
Re:Not reccomended for use at (Score:2)
That's like advocating equal rights on Martin Luther's birthday. Sure Martin Luther King was named after him, but still...
Re:Straight from the article... (Score:2)
Re:Wow--- some of the stuff I've seen (Score:2)
Maya 4 runs on x86 and PPC boxes too.. How does the ability to run Maya 4 make the Itanium powerful? I'd be much more impressed if you'd told me that Intel had Maya 4 running on a 500MHz Itanium and it blew the doors off a dual Athlon or dual 1GHz G4 box..
I want to see numbers.. How does the Itanium compare to the Ultrasparc 3 or Power4 processors? Does it play nicely in SMP configurations like the Power4? etc..
Re:Intel is nuts!!!! (Score:2)
Well, that's a rather cold view. After all, where is progress going to come from if people don't try to make products out of research projects? Heck, we would still be using piles of stones to count our bushels of grain otherwise.