
Itanium Update
NegaMaxAlphaBeta writes: "For those of you interested in Intel's Itanium 64-bit processor, EETimes has a nice update article to let us know what's happening with this beast. With an 8-stage pipeline, as opposed to the 20-stage pipeline in the P4, clock frequencies are obviously not as high (~1 GHz). Other notable numbers extracted from the article: 130 watts power consumption, 328 registers, 6 MB of on-chip L3 cache ... quite nice (well, not the power thing). I'm sure many people can appreciate 64-bit integer ops; for me, it means a single-instruction XOR for the 64-bit hash codes used in chess transposition tables."
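For the curious, the hash trick the submitter mentions looks roughly like this Zobrist-style sketch in C (the table shape and names are invented for illustration):

#include <stdint.h>
#include <stdlib.h>

/* One pseudo-random 64-bit key per (piece, square) pair; the 12x64
 * shape and the names are invented for illustration. */
static uint64_t zobrist[12][64];

static void init_zobrist(void)
{
    for (int p = 0; p < 12; p++)
        for (int s = 0; s < 64; s++)
            zobrist[p][s] = ((uint64_t)rand() << 32) ^ (uint64_t)rand();
}

/* Moving a piece updates the position hash with two XORs; on a
 * 64-bit CPU each one is a single instruction. */
static uint64_t hash_move(uint64_t h, int piece, int from, int to)
{
    return h ^ zobrist[piece][from] ^ zobrist[piece][to];
}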
What a dog (Score:1, Flamebait)
Re:What a dog (Score:3, Informative)
It is an interesting solution to the performance problem: Rather than just increase clock speed again, figure out the performance details at compile time and arrange the code to help the processor run it more efficiently.
For example, if you have an if statement and the compiler can determine that 95% of the time the TRUE block will be executed, the code can be arranged so the branch prediction will choose the more frequent route and the pipeline penalty won't need to be paid as often. (This is just a simple case of optimization; the IA64 will require insanely complex optimizations, but that is just an extension of what compiler writers have been doing for years.)
It makes the compiler orders of magnitude more complex, but it could potentially increase execution speed by a couple orders of magnitude too.
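You can already hand a compiler this kind of hint today. A rough sketch using GCC's __builtin_expect; the likely/unlikely macro names are just the usual convention, nothing IA64-specific:

#include <stdio.h>

/* GCC-style static branch hints: tell the compiler which side of a
 * branch is the common case so it lays the code out accordingly. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int sum_valid(const int *buf, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++) {
        if (likely(buf[i] >= 0))          /* the "95% of the time" case */
            sum += buf[i];
        else
            fprintf(stderr, "bad input at %d\n", i);
    }
    return sum;
}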
Re:What a dog (Score:2)
but that is just expanding on what compiler writers have been doing for years.
What they've been doing wrong for years.
Simplicity is the correct answer; Intel clearly didn't understand the question.
TWW
What? A dog? (Score:2)
That's assuming they were listening in the first place.
If Intel had big plans for the long run, they'd create a "simple" processor. Let's take the original Pentium as a bad example:
Add MMX. Customers upgrade.
Change processor form factor. Upgrades galore.
Add SSE. More upgrades.
Change processor form factor again. Upgrade.
Change form factor, add SSE2 and slap on a few marketing terms. Further upgrades.
The advantage is each time you can say the processor is "new and improved" so people will buy new ones. Does it really matter that a Pentium III 600 is more than enough power for 90% of computer owners? Of course not.
What makes me laugh, though, is how Intel switched to the Slot 1 form factor so it would be easier for customers to install processors (how often does that really happen?) and then switched back. I'll bet they were planning it all along.
Re:What? A dog? (Score:2)
Put more cache on the chip (after all, it's the same structure on the die) and get huge gains in performance
(as long as your MMU and cache lines are done right)
but it takes up more die area, so it's more expensive.
Charge an ARM and a LEG.
"Oh come on, you aren't serious?" I hear you cry.
Ask an Intel engineer the diff between a XEON Px and a plain Px.
Answer: cache.
(Yes, I know that designing a decent cache is hard, but compare it to a real change in arch.)
Result: (foolish) customers upgrade.
Fun of the fair.
regards
john jones
Re:What a dog (Score:3, Insightful)
Well, it's fairly obvious that you are an expert on CPU design.
I've programmed about a dozen chips in both the games field and compiler-writing field, I don't design chips any more than Eddie Irvine designs racing cars. But I don't think I'll ever see him getting into a tractor for his qualifying lap.
Raw speed became less important for most applications, so intel added mmx to speed up multimedia.
What planet are you on? MS and Intel have conspired to make raw speed as important as possible for years. I personally have been offered payment by Intel to produce slower software as part of their "everybody must upgrade" roadmap. MMX came as a direct response to the increasing performance of 3D boards, which reduced the need for a faster CPU. Intel fears anything which reduces the need to upgrade, so they tried to fight back with MMX. That fear led to the only significant addition to the instruction set since the 386.
Once a few quality compilers are around this won't even be an issue.
You grossly underestimate the difficulty of this instruction set. I doubt there will ever be more than one (i.e. Intel's) good compiler, and I doubt there will ever be even one which is reliable and predictable.
Re:What a dog (Score:3)
> problem: Rather than just increase clock speed
> again, figure out the performance details at
> compile time and arrange the code to help the
> processor run it more efficiently.
That is neither interesting nor a solution. People (i.e. compiler writers) have been working on this for forty years with some (limited) success.
> the IA64 will require insanely complex
> optimizations, but that is just expanding on
> what compiler writers have been doing for years.
Just because the IA64 demands heroic compiler optimization to make up for its shortcomings doesn't mean that the ability to write such compilers will suddenly spring out of nowhere.
Compiler researchers haven't just been sitting on their butts for the last forty years.
> For example, if you have an if statement and the
> compiler can determine that 95% of the time the
> TRUE block will be executed, the code can be
> arranged so the branch prediction will choose
> the more frequent route and the pipeline penalty
> won't need to be paid as frequently.
This was a bad example. Dynamic branch predictors (such as you find in any modern fast CPU) do a great job in practice, better than any known static predictors.
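For reference, the classic dynamic scheme is a table of 2-bit saturating counters, something like this toy C sketch (table size and names invented; real predictors are far fancier):

#include <stdint.h>

/* Toy dynamic branch predictor: a table of 2-bit saturating counters
 * indexed by branch address. A counter value >= 2 predicts "taken". */
enum { N_ENTRIES = 1024 };
static uint8_t counters[N_ENTRIES];

int predict_taken(uint32_t pc)
{
    return counters[pc % N_ENTRIES] >= 2;
}

void train(uint32_t pc, int taken)
{
    uint8_t *c = &counters[pc % N_ENTRIES];
    if (taken && *c < 3)
        (*c)++;
    else if (!taken && *c > 0)
        (*c)--;
}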
Re:What a dog (Score:1)
Anyway, this thing is not garbage. I've wondered for a long time why chip designers couldn't do what Intel is calling "hyperthreading". It will soon become a reality. I'm excited about it.
Re:What a dog (Score:2)
Re:What a dog (Score:2)
As for compilers, don't discount Intel so easily. They make incredible compilers. The features of ICL for x86 make compiler designers cream their pants. Read this article [intel.com] for some info about Itanium's compiler design.
Re:What a dog (Score:2)
Sure, you have a ton of smart people, but I just have a lack of faith in the whole architecture. You can have the brightest bunch of people in the world, but if you make them cook burgers, does it matter? Not to discredit your post on other grounds, but saying Intel will succeed because it has smart people is kinda silly. They also have stupid people by the same token; does that make them less likely to succeed?
Jeremy
Re:What a dog (Score:2)
Re:What a dog (Score:2)
-- IA64
-- Rambus
-- The home wireless network standard they pushed that got beaten by 802.11
Re:What a dog (Score:2)
-- RDRAM was a mistake, but it wasn't just Intel. Nintendo bought into RDRAM, as did Sony and several graphics card makers. RDRAM fizzling was NOT something that could have been predicted. I kept up with the reporting back when RDRAM was still called nDRAM (as in unknown), and nobody expressed any objections.
Re:What a dog (Score:2)
> you can bet that they've fixed the compiler
> problem.
Your faith is touching. Another possibility is that the Itanium project was way behind schedule and that they had to ship something, anything, because their competitors and the rest of the industry were laughing at them. And so they shipped a CPU with the worst SpecInt number in the industry and even warned their customers that this was really just a development chip and 'real' hardware would have to wait for the next generation.
Re:High performance or what ? (Score:2)
Re:Compiler (Score:5, Informative)
And the compiler has far *more* information than the runtime hardware has. The scheduling hardware is only capable of looking at a few instructions at a time to decide how to enhance ILP, whereas the compiler by its very nature has access to the entire program at once, and can perform optimizations not possible in hardware.
This is further enhanced by a development cycle that includes profiling. As you use the program during development, the compiler can use the same profiling information that is used to "manually" optimize code to perform its own optimizations. With an advanced OS, this becomes extremely powerful, as some of the registers on the processor actually keep track of profile data at runtime. Then, during page swaps to/from virtual memory, the processor has the opportunity to dynamically optimize and recompile the code.
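That profiling loop is roughly what today's profile-guided optimization does. A sketch of the workflow with GCC; the -fprofile-generate/-fprofile-use flags are real, while hot.c and training.dat are placeholder names:

/* Profile-guided build, sketched with GCC (other compilers have
 * analogous options):
 *
 *   gcc -O2 -fprofile-generate hot.c -o hot    (instrumented build)
 *   ./hot < training.dat                       (run to collect counts)
 *   gcc -O2 -fprofile-use hot.c -o hot         (recompile using profile)
 *
 * The second compile sees real branch and call frequencies and can
 * lay out, inline, and schedule code accordingly. */
#include <stdio.h>

int main(void)
{
    long sum = 0;
    for (long i = 0; i < 1000000; i++)
        sum += (i % 7 == 0) ? i : -i;    /* a branch whose bias PGO learns */
    printf("%ld\n", sum);
    return 0;
}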
Re:Compiler (Score:2)
It is dramatically reducing the complexity of the pipeline, thereby increasing throughput by orders of magnitude (see CISC vs. RISC).
Brought to us by the same people that told us the big pipeline would solve all our problems and that RISC was a dead end, that bought up and squashed the ARM, that thought that no one would need more than 8 registers or 640K of memory, and all the other crap Intel has spouted since it invented the 4004 and then proceeded to get everything else wrong.
Intel has spent the last twenty years proving how little it knows and how much it depends on MS for a free ride onto the desktop.
TWW
Re:Compiler (Score:2)
Backwards compatibility did not require the retention of a tiny register set (no general purpose registers - Jesus Christ!) and was a fairly bogus concept anyway when the 386 came in.
The 386 family is a bad design and if you'd ever programmed it you'd know. There is nothing good about the design.
TWW
Re:Compiler (Score:2)
> because it's too difficult to implement. It is
> dramatically reducing the complexity of the
> pipeline, thereby increasing throughput by
> orders of magnitude
That's what they say, yet somehow decreasing the complexity of the pipeline hasn't produced many benefits in practice. The clock speed is low, and the throughput (as measured by benchmarks) hasn't increased by orders of magnitude.
> The scheduling hardware is only capable of
> looking a few instructions at a time to decide
> how to enhance ILP
This is quite false. Modern CPUs can have over 100 simultaneously executing instructions in flight. Furthermore modern CPUs take advantage of hardware such as branch predictors which records information on hundreds or thousands of instructions in order to make better execution decisions.
Profile-based optimization is a cool idea in theory but despite decades of research, it's seldom used. I suspect that one reason why is that (in C programs) reoptimization can reveal bugs in your code that were previously hidden (like an uninitialized variable that, by luck, always happened to be zero when the code was optimized a certain way). People don't like it when their system suddenly starts exhibiting new bugs that no-one else can reproduce.
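A contrived C sketch of the kind of latent bug meant here; the function "works" only as long as the optimizer happens to hand the variable a zeroed stack slot:

int sum_positive(const int *v, int n)
{
    int sum;                 /* bug: never initialized */
    for (int i = 0; i < n; i++)
        if (v[i] > 0)
            sum += v[i];
    return sum;              /* correct only while the stack slot the
                                compiler picks happens to contain zero */
}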
Re:Compiler (Score:2)
> preclude having the same branch prediction and
> memory access data in hardware, as you imply.
> Itanium still has the ability to do branch
> prediction and handle memory latency the same as
> any modern processor.
No, it does not. In the quest for increased scalability they threw out "out of order" execution. All instructions must retire in order. This cripples its ability to tolerate unpredictable memory latencies.
Backyard Foundry - Intel Inside! (Score:2)
if you don't have the cash for the kilowatts,
Dude. 130 watts of power dissipation. My 17" monitor only draws 125 watts. What's the surface area of the packaged chip?
Forget the old 5V Pentiums (P60/66) being nicknamed "coffee warmers". They were known for all sorts of overheating problems, but they only drew 3.2 amps at 5V. P = I x E = 3.2 x 5 = 16 watts of power.
I could use one of these new chips for the heater in my backyard foundry.
There's soon gonna be a boom market for tungsten and ceramic heat sinks.
Sheesh.
328 registers??? (Score:1)
Did I read that right? 328 registers?
If that's what I think it is... that's an AWESOME improvement over previous x86 incarnations :-) Just imagine the extent of freedom your C++ compiler will have with register allocation ... this will cut down memory accesses by at least an order of magnitude!
Of course, this all depends on whether these registers are general purpose. They'd better be, 'cos I can't imagine needing 300+ registers for special purposes while still giving you the klunky ole EAX, EBX, ... & co. registers.
Re:328 registers??? (Score:1)
What's not commonly known is that the P3 and P4 also have dozens if not hundreds of registers. The trick is register renaming: the P3 and P4 speculatively execute instructions as fast as they can, and they assign the results to temporary registers. If the processor needs these results, it reassigns them back to the real registers like EAX, EBX, and so on.
So, overall, I'm not sure where the 328 number comes from. :P
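The renaming trick described above can be sketched in a few lines of C. Purely illustrative; a real renamer tracks a free list and recycles physical registers as instructions retire:

/* Toy register renaming: each write to an architectural register is
 * handed a fresh physical register, so independent reuses of "EAX"
 * stop serializing against each other. */
enum { ARCH_REGS = 8, PHYS_REGS = 128 };

static int rename_map[ARCH_REGS] = { 0, 1, 2, 3, 4, 5, 6, 7 };
static int next_phys = ARCH_REGS;

int rename_dest(int arch)        /* instruction writes register 'arch' */
{
    rename_map[arch] = next_phys;
    next_phys = (next_phys + 1) % PHYS_REGS;   /* naive allocator */
    return rename_map[arch];
}

int rename_src(int arch)         /* instruction reads register 'arch' */
{
    return rename_map[arch];
}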
Re:328 registers??? (Score:1)
Re:328 registers??? (Score:3, Interesting)
A context switch happens once in a blue moon. Fast context switches are not going to make up for sluggish performance on the real work the machine is doing between context switches. Registers are considerably faster than cache; the absolute fastest cache in the world is the P4's L1 cache, which has a load latency of 2 cycles, and on most architectures it is 3 cycles. Putting 128 qwords into registers is an absolutely dramatic speedup for programs with a working set of more than 8 dwords (all that IA-32 gives you).
Re:328 registers??? (Score:1)
Probably 128 integer regs + 128 float regs + 64 branch/predicate regs (note: NOT general-purpose) + miscellaneous regs like the IP, etc.
While the P3/P4 have lots of registers, they aren't registers in the sense most people think about them. They solve the dynamic antidependency problem. The static data allocation problem is a separate beast. Those renaming registers aren't visible to the compiler, so you'll still have the same number of memory operations in the program.
Same deal with SMT/Hyperthreading. More registers are needed, but they aren't the sorts of registers the compiler can use.
It's interesting that a write to R0 is defined to fault. Is this just for Itanium or is it an IA64 architectural decision? If so, it seems like a very poor one to me.
Re:Treat r0 more like /dev/zero (Score:2)
anybody would want to do that
Think NOP. I believe on the Alpha a NOP was implemented as add r0, r0, r0. Possibly on other architectures as well.
assembly language reference? (Score:2)
So where do I download a free reader that runs on Linux for that file of binary garbage?
Re:328 registers??? (Score:2)
My only question is for the OS guys out there. How does an OS handle context switches with 328 registers? With 8 bytes per register, that's more than 2K of data to dump out every context switch!
When will we see some improvements from the Alpha? (Score:2)
Re:When will we see some improvements from the Alp (Score:3, Funny)
Re:When will we see some improvements from the Alp (Score:2)
In 1979, Finis Conner (who later founded Conner Peripherals, which was bought by Seagate) approached Shugart to develop 5 1/4" hard drives, and the two founded Seagate.
I believe Shugart Associates was purchased by Xerox around the time when Seagate was founded.
Re: "HyperThreading" in IA-64 by 2002 (Score:5, Informative)
Right. And there's no indication that something similar will appear in IA64 until at least 2006 (which is the *earliest* that the Alpha team could likely add it to so complex - or, if you prefer, messy - an architecture if the hooks for it weren't already built in).
It's a weak second to SMT. With HT, as I understood it, if a processor happens to have a floating point op and an integer op on hand at the same time, it can run both of 'em at once, instead of sequentially. That's the limit to the HT magic. It can't do two FP or integer ops at once.
Well, real-world server applications could be sped up by 30%, which would mean that HT could execute multiple *non*-FP instructions at once (and the article doesn't say it can't, just that it can't execute two FP ops at once).
It actually seems to look quite a bit like EV8's SMT, except that we don't know if it currently adds more execution units to the P4 architecture and whether all execution units can be applied to service a single thread if multiple threads aren't present. And, of course, it only supports two concurrent threads rather than four.
Intel stole and then implemented Alpha technologies for its Pentium, and only much later did it negotiate with Digital to get the official right to use that stuff.
No: I'm assessing the situation, unlike your propensity for drawing conclusions based on vague speculation and no data.
IA64 has to all appearances been developed with zero attention paid to things like out-of-order execution (in fact, it was developed explicitly to *avoid* out-of-order execution). OOO and SMT are intimately intertwined in EV8's SMT design, and apparently also in HT's. There's no indication that Intel has until now given any thought toward incorporating SMT/HT technology in EPIC, and every indication that it will thus take at least close to 5 years before such IA64 technology hits the street (especially as incorporating it into EPIC will almost certainly involve radically different internal approaches than those used to incorporate it into EV8 and P4).
Pentium 4 Multithreading? (Score:2, Insightful)
Shades of the whole 486SX [ic.ac.uk] debacle?
Re:Pentium 4 Multithreading? (Score:2)
Re:Pentium 4 Multithreading? (Score:2)
Well, This isn't it. (Score:1)
The chipset is the magic... (Score:2)
The thing sucked eggs, and I threw the motherboard in the trash and used the CPU as a paperweight.
At some stage I needed a faster CPU, and a motherboard to go with it, so I made the jump to a Pentium III/450. I needed to revive my firewall, so I bought a decent ASUS 486 mobo at a fair. On a hunch, I put my paperweight AMD 133 in. I was pleasantly surprised, and I only replaced the thing when I got a real cheap 300MHz Cyrix mobo.
Bottom line, it's the motherboard (or rather the chipset on it) that makes or breaks the CPU. I'm now running an ASUS A7V-E with a 1GHz Athlon, and I've been a happy camper. I'm not an overclocker (matter of fact, I underclock some machines just because I don't need CPU power for anything other than video recoding, and some machines are on the other end of the globe, so I don't want to lose sleep over fan failure).
My main gripe with the VIA KT133A chipset is the fact that I have to sign a $#@#%$#% NDA to get decent specs on it. FreeBSD doesn't seem to grok its I2C-based hardware monitoring, and without those docs I'm SOL. Apart from that, it's working great. Even under Carmageddon^WWindows.
On-board OS (Score:1)
The BIOS on all Itanium chipsets (AFAIK) is set up to have a small kernel onboard, i.e. you can boot the system with limited functionality even if there's no floppy, HDD, or other boot medium present. If you do have a filesystem present, the "BIOS boot" will even give you access to it.
Not the biggest feature on the block, but helpful nonetheless.
Re:On-board OS (Score:2)
True, but probably unimportant. OpenFirmware has been around (I think even as an IEEE standard) for ages, but apparently the PC world doesn't care.
If I sound frustrated, it's because I am. OpenFirmware is such a small bone to throw the techies that I think it's criminal it never came about in the PC world. After years of haggling with the BIOS vendors, we now have BIOSes on some machines that can optionally emulate an ANSI terminal for console access on a serial port. Which means you can use Kermit under DOS to manage the machine remotely. Even "tip" on UNIX is suboptimal here, and the best this BIOS will do is emulate the full-screen config menu. Beeeuuurk.
Any support for virtualization? (Score:1)
*idiotic consumer point of view* (Score:2)
"Wow, a 64-bit processor with 6MB? I can finally have a computer more powerful than my N64! I hope it doesn't let little Billy access all of that satanic-internet-porn any faster, though...."
A Pentium-Equivalent Rating for these? (Score:1)
How will they market that? (Score:3, Interesting)
F-bacher
Re:How will they market that? (Score:3, Funny)
Re:How will they market that? (Score:1)
The people who buy McKinleys will know about it first; you'd have to, for something that expensive.
Re:How will they market that? (Score:2)
It is only the consumer market which looks at gigahertz. Which means that Intel will have to make a high megahertz version if it expects Itanium to enter the consumer market.
Diminishing clock speeds (Score:3, Insightful)
We can only hope that this chip helps wean the media away from using clock speed as the primary (often only) measure of performance.
Re:Diminishing clock speeds (Score:1)
Re:Diminishing clock speeds (Score:2)
I wouldn't mind checking it out again if you can point me to a copy.
Re:Diminishing clock speeds (Score:2)
When you buy a machine with $2000-$5000 CPUs, you tend to do real research on the performance of the system you are buying.
Re:Diminishing clock speeds (Score:2)
> tend to do real research on the performance of
> the system you are buying.
Which makes you wonder who would possibly buy an Itanium (especially for non-FPU-intensive servers where Intel's pushing it).
Chick magnet (Score:3, Funny)
Watch where you say that, or you'll be using that nifty Itanium to repel the hordes of women instinctively flocking to you like the salmon of Capistrano.
IA64 is the "heir apparent" (Score:5, Insightful)
It's amazing that ANYONE can field the number of mistakes that Intel has, and get away with it. For some time now, their first-outs have been essentially flops:
Pentium: Remember the 5V room heaters?
Pentium: Then the 3.3V units with floating point bugs?
Pentium Pro: The ancestor of the Pentium II/III line was a good CPU in its own right, and worked well for Unix and OS/2. But it completely missed the market, performing terribly on 16 bit code.
Celeron: DeCeleron, until they put the cache back on. From another point of view, the whole Celeron program has been a disaster, either by its own crippling, or by revealing how overpriced the PII/PIII line is.
Pentium III: CPUID - A 'workstation idea' that once again missed its market. Maybe if they'd found a way to node-lock software that can't be used for machine tracing. Maybe that's not what they were after.
Pentium 4: Let's face it, this CPU is just plain uneven and imbalanced. After a round of redesign to even it out, just like with the others, it could very well be an excellent CPU. Tame the prefetch, expand the trace cache, etc.
Itanium: Didn't even make it out the door before spin-doctoring began. "Just wait for McKinley!" I've already heard one set of rumors that McKinley isn't going to *really* do it either, so just wait for IA64-III.
Is all this any better than the "Just wait for this new release!" that Microsoft keeps pulling? Though I guess Intel does generally get each family right on the second shot.
AMD has a good product, I just wish they were a little less mum, and had a better response than warmed-over P-numbers. I also wish we could hear a bit more noise about the Hammers.
Re:IA64 is the "heir apparent" (Score:1)
Umm... have you ever used a MIPS chip? The R10k and R12k are beautiful processors and very fast. Don't let the low MHz rating fool you. The SGI compilers are also very good -- they do a lot of optimization, and the profiling tools are some of the best around. There are lots of hardware counters on the R10k (32, I believe) that make it easy to find out where in your code all your FLOPs are, the secondary cache misses, branch mispredictions, ...
I wish SGI/MIPS would continue along with these chips. They are a wonderful platform to develop on.
Re:IA64 is the "heir apparent" (Score:2)
Plus, Sun sure hasn't rolled over either; SPARC performance has always been subpar, but they make up for it with a good OS (Solaris) and tons of applications.
Re:IA64 is the "heir apparent" (Score:2)
Re:IA64 is the "heir apparent" (Score:2)
Re:IA64 is the "heir apparent" (Score:2)
Re:IA64 is the "heir apparent" (Score:3, Informative)
I think you're confusing CPUID with the Processor Serial Number (PSN). PSN, IMHO, was a good idea, but the privacy zealots cried foul and ruined an otherwise good way to lock software to a specific individual's CPU. (YES, I know there are work-arounds that pirates can use, from simply hex-editing the instructions that check for the PSN to writing drivers that return false info.) I really wish Intel hadn't backed down on PSN and had included it in the P4 (after all, for those naysayers that don't want PSN, or their identity, revealed to websites or software, you can disable it in the BIOS).
Oh well. Thought I'd clear that up. CPUID is GOOD. PSN is BAD (to the privacy folk, anyway).
CPUID vs PSN (Score:2)
I merely wish they had looked into some PSN-type technique that would let software be node-locked without being usable for tracking. I don't believe PSN must be bad, at least not to anyone other than a fanatical Free Software type who believes NO software should need to be paid for. I'm sure a technique can be found which will not alarm privacy advocates.
Re:IA64 is the "heir apparent" (Score:2)
Quite useful, and it pretty much does away with arcane checks to see what processor the code is really being run on (like the various methods of checking to see if you're running on an 8086 or 286, or 386 vs. 486, for example). =) Unfortunately, if you want to run on those golden oldies, you still have to do the arcane checks, but once you establish that you're working with a Pentium or higher processor, you simply do a CPUID and you're done.
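On a Pentium-class x86 the check boils down to something like this, assuming the cpuid.h helper that GCC-compatible compilers ship (older toolchains needed inline asm instead):

#include <stdio.h>
#include <string.h>
#include <cpuid.h>    /* GCC/Clang x86 helper; assumed available */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 0 returns the highest supported leaf in EAX and the
     * 12-byte vendor string scattered across EBX, EDX, ECX. */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID not supported");
        return 1;
    }

    char vendor[13];
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
    printf("vendor: %s, max leaf: %u\n", vendor, eax);
    return 0;
}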
Re:IA64 is the "heir apparent" (Score:1, Insightful)
Re:IA64 is the "heir apparent" (Score:2, Insightful)
Re:IA64 is the "heir apparent" (Score:2)
Another aspect of Rambus is the untamed prefetch on the P4. It's so aggressive that only Rambus can provide enough bandwidth to keep it running, at least until dual-channel DDR. According to the reviews, most of that bandwidth is simply wasted, but it's needed to keep the processor fully fed.
Re:IA64 is the "heir apparent" (Score:2)
That is, all these companies have had their share of problems.
When the Alpha was first released it ran *HOT*. I had one of the early DEC3000/300s on my desk. DEC had other problems with the Alpha; the CPU itself was denied its future because of the poor-quality boxes it was put into.
I'm not quite as familiar with SPARC or PowerPC, but we shouldn't forget that Sun was having difficulties with the SPARCs found in the E10000 not too long ago. To the companies who had paid millions for these boxes, it was a bigger deal than the Pentium floating point problem.
AMD has had their share of flops. The early 386 and 486 designs were good, but should we all forget the K5 and the early K6?
I had a Cyrix 486DX/50 clone back in '94, and it wouldn't work with a variety of software under Linux such as ghostscript. Cyrix replaced it, reluctantly... I had to argue with them on the phone despite Infoworld articles reporting the problem.
I don't see Intel as having a significantly worse track record than the others. Their products are certainly used in greater numbers, and thus the failures are higher profile.
Re:IA64 is the "heir apparent" (Score:2)
Re:IA64 is the "heir apparent" (Score:2)
Re:IA64 is the "heir apparent" (Score:3, Interesting)
Sure, the single-processor, or even up-to-eight-processor, results are not the greatest thing out there. In the one- through four-processor range Intel beats them, and above that the Power series takes over. What one tends to forget is, for a processor that is designed for SMP, A) scaling to 1024 processors nearly linearly is damn good, and B) it is relatively cheap for a server-class processor. Also, the SPARC line is known to have the fewest hardware bugs of any major processor out there.
Sun really doesn't need a sports car of a chip anyway. Servers and workstations need uptime. They don't need to attack the user market yet. First they seem to be more actively attacking the workstation market with the sub-$1000 SunBlades. With a Sun solution the workstation only needs to be moderately fast, but the server needs to be DAMN fast because the most intensive processes run on the server and display over the network. Small steps.
Re:IA64 is the "heir apparent" (Score:2)
> P4 and Itanium) with helping the compiler writers
> get the most out of the chip.
It's just warmed-over x86 (and I mean that in a good way). It should be dead simple to modify an x86 compiler to target x86-64.
X86-64 (Score:2)
One goal of the protagonists was to have the architecture extensions be clean, and if there was a wart, it would be the legacy part. After this topic came up, I took a quick look at some X86-64 stuff, and it looks as if AMD may have done just that. The 8 new GPRs are really GPRs, and I suspect the whole batch of 16 64-bit GPRs really are GPRs. It may be a cleaner 64 bit machine than it was 32 bit. I hope so.
Actually, I had to learn 8080 pretty thoroughly in college, learned a fair amount of 8086, less 80286, and by the time the 80386 came around, I was pretty well ensconced in HLLs. So I can't speak very authoritatively on that side of it.
power (Score:2)
So, McKinley isn't a properly designed system? (Score:5, Insightful)
(bolding is my emphasis)
To protect against heat-related system meltdowns, McKinley includes a programmable thermal trip that can throttle processor performance by 40 percent to cut power consumption. But the company sees that more as a safety net, not as an answer to thermal issues. "This should never be needed in a properly designed system," said Naffziger.
Re:So, McKinley isn't a properly designed system? (Score:2, Informative)
Re:So, McKinley isn't a properly designed system? (Score:2)
Fat pipe (Score:1, Funny)
20 - 8 means a slower clock??? (Score:1)
What??? That's totally false, not to mention counter-intuitive. The whole reason for the shorter pipeline is to increase throughput. Think of Henry Ford and the classic assembly line. If you have stages that involve scheduling instructions to be fed into different (parallel) pipelines, as opposed to DIRECTLY COPYING instructions from cache into the appropriate pipeline, which do you think should be faster?
328 registers? (Score:2, Interesting)
Will anyone outside of cpu engineers and compiler authors even learn asm on this monster? Or have we truly moved past the point where programmers understand the cpu?
Remember 6502? (Score:2)
Needless to say, this great concept (a zero page the CPU could address with shorter, faster instructions, effectively a bank of cheap registers) had gone to the dogs before the first consumer laid his/her hands on the device. Oblivious to the CPU design, a major manufacturer of operating systems (we called them BASIC interpreters at the time, by the way) decided that most of page zero should be allocated to the OS^WBASIC interpreter. I'll leave it to our hidden conscience to name the perpetrator of this gruesome mistake.
I have long grown out of the idea of using assembly as a faster programming language. The number of times I beat an assembly program with something hacked up in Perl, I don't even want to remember. Not because Perl is the best thing since sliced bread, but because humans are so poor at dealing with complexity. Get it working first, and leave optimization to the compiler. Then, if you have a bottleneck, analyze it, and fix it in a targeted piece of code (whether C, or assembly, or something else).
Re:Remember 6502? (Score:2)
That's good. I'm happy to be reminded every once in a while that whatever half-baked wisdoms I spout, they are usually based on a corporate image of what makes economic sense and what doesn't.
The art of computing wasn't furthered that much by the corporates, I know!
Re:328 registers? (Score:2)
I'm not sure what the other 100+ registers are, though I believe there are 64 "predicate" registers that are 1 bit wide (i.e. set to 1 or 0) and can't be used as generics (and wouldn't be useful even if they could).
130 Watts (Score:2)
Re:130 Watts (Score:3, Funny)
Whew! It's fun to be over your head. (Score:2, Interesting)
What I'm confused about is how it affects programming. Does this mean that everything will need to be optimized for you to take advantage of the wider word size? How will programs that are written for 32-bit systems handle it; can they handle it? How about backwards compatibility?
Do any other people read these sort of threads even though they know that it will be over their heads most of the time?
Re:Whew! It's fun to be over your head. (Score:2, Informative)
Yup, exactly right. It means that the CPU tends to deal mainly with 64-bit (8-byte) chunks of data at a time, instead of the more common 32-bit chunks. As far as programming goes, not everything needs larger integers. For example, to program a user interface, 32-bit integers are quite sufficient for most purposes (unless you have over 4 billion items in a listbox or something). If you only need to store a number from 1 to 10, using 8 bytes instead of 4 is a waste of memory. (This happens a lot.) However, it is useful for many operations, such as multimedia, games, DSP applications, crypto, etc. These applications would run faster on a 64-bit processor because they can use 1 instruction to manipulate a 64-bit number instead of the 2 or more that are necessary to do the same thing on a 32-bit processor.
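To make the "one instruction instead of two or more" point concrete, a trivial C sketch of a 64-bit XOR versus the same operation done in 32-bit halves:

#include <stdint.h>

/* One 64-bit XOR: a single instruction on a 64-bit CPU. */
uint64_t xor64(uint64_t a, uint64_t b)
{
    return a ^ b;
}

/* The same operation on a 32-bit CPU takes two XORs, plus whatever
 * shuffling it costs to keep the halves in registers. */
typedef struct { uint32_t lo, hi; } u64pair;

u64pair xor64_emulated(u64pair a, u64pair b)
{
    u64pair r = { a.lo ^ b.lo, a.hi ^ b.hi };
    return r;
}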
The other reason to use 64-bit processors is that it makes it easier to use 64-bit memory addressing. (For various reasons, it's a little easier to program if memory addresses are the same size as integers.) If you have more than 4 GB of RAM, (or you want more than a 4GB address space more precisely) then you need larger pointers. At the moment x86 programs use 32 bit pointers, but the Pentiums and above actually have 36 address lines, so they can use up to 64GB of RAM. Anyway, a 4 GB address space will be fairly cramped in about 10 years, so it's time they bumped that up a bit.
Intel has an emulation mode in the IA-64 series to allow people to run existing 32-bit programs, but at the moment it's dog slow. (It runs at about the speed of a Pentium 133, if that, when the processor is running at around 700 MHz.) The IA-64 architecture is completely different from the current IA-32 (x86) stuff. I get the impression that the 32 bit emulation doesn't use as many tricks as the existing processors to get programs to run faster. They're also overhauling the motherboard/BIOS stuff that's been around for a long while. (Some of it since the original IBM PC.)
Of course, just because a processor can do 64-bit operations, it doesn't mean that it's actually faster than its predecessors. For instance, IA-64 has a few weaknesses:
email from intel (Score:2, Interesting)
Speed is important. On Monday, Intel launched the Intel® Pentium® 4 processor at 2 GHz. Tuesday, during his keynote at the Intel Developer Forum, Paul Otellini, executive vice president and general manager, Intel Architecture Group, demonstrated a processor operating at fully 3.5 GHz.
But that's not the half of it. Otellini went on to note that the Pentium 4 microarchitecture is expected to scale to a whopping 10 GHz.
Now that's a "Wow!"
But, exciting as speed is, it isn't everything. While it is important, "it is not sufficient to drive the levels of growth and innovation that will allow our industry to prosper," Otellini said.
Speaking before an audience of 4,000 developers, designers, and executives Tuesday, Otellini noted that as the computing industry has grown and new technologies have evolved, purchasing criteria are changing. "We all need to change the pattern of our investments," he cautioned the crowd. "We need to think beyond gigahertz and build substantially better computers."
Buyers now look to a variety of features, noted Otellini: style, form factor, security, power consumption, reliability, communications functions, price, and overall user experience. Combinations of these and other features are driving end-user technology requirements in individual market segments. Intel plans to develop technologies that will help address these changing requirements in each of the key market segments.
Here are just a few of the ways Intel plans to go beyond gigahertz, as Otellini revealed in his keynote address:
It's like multiple processors on a single chip
Otellini introduced the audience to a breakthrough in processor design called hyper-threading. This technology allows microprocessors to handle more information at the same time by sharing computing resources more efficiently. The technology provides a 30 percent performance boost in certain server and workstation applications and will first appear next year in the Intel® Xeon[tm] processor family.
130 Watts. (Score:3, Interesting)
This makes me wonder, how many Crusoe processors could you put in a box (all other components equal) and equal this power consumption? Would the performance of such a box meet or exceed the performance of an Itanium box for real-world servers?
Re:130 Watts. (Score:2)
Translation Time...arrggg... (Score:4, Funny)
This beast has a small wang... it's not the size that counts, but how you use it. (No giggling from the girls, dammit.)
130 Watts power consumption...
Who needs space heaters anyway?
OY! Hold your wallet tight; this is not for the light of bank account!
I'm sure many people can appreciate 64 bit integer ops; for me, it means single instruction xor for the 64 bit hash codes used in chess transposition tables.
Not quite what the Intel boys will be using in their next commercial. However, the wizards in marketing will be stressing the enhanced features of porn browsing. The fourth blue Intel commando will be a scantily clad woman... further emphasizing the need for this processor, which will not just make the Internet faster, but will speed up your favorite pron sites.
130 watts?!?! (Score:2)
For those that don't remember their EE or physics courses: watts = volts * amps. And one amp through your torso is enough to kill just about anybody.
Re:130 watts?!?! (Score:2)
64-bit operations useful...sometimes (Score:2)
Yes, 64-bit operations have a handful of general uses, but when you weigh the benefits against the huge increases in transistor count, power consumption, and memory usage, are they worth it? I argue that they aren't. Doubling the size of almost every unit on the chip is a steep price to pay.
This is outside the x86 realm (Score:2)
For the record, Intel has cooked up x86 "replacements" before, like the i860 and i960.
Re:Yeah but... (Score:1)
Re:Yeah but... (Score:1, Interesting)
; 6 : for(i=0;i<10;i++) j+=1;
00006 c7 45 fc 00 00
00 00 mov DWORD PTR _i$[ebp], 0
0000d eb 09 jmp SHORT $L468
$L469:
0000f 8b 45 fc mov eax, DWORD PTR _i$[ebp]
00012 83 c0 01 add eax, 1
00015 89 45 fc mov DWORD PTR _i$[ebp], eax
$L468:
00018 83 7d fc 0a cmp DWORD PTR _i$[ebp], 10 ; 0000000aH
0001c 7d 0b jge SHORT $L470
0001e 8b 4d f8 mov ecx, DWORD PTR _j$[ebp]
00021 83 c1 01 add ecx, 1
00024 89 4d f8 mov DWORD PTR _j$[ebp], ecx
00027 eb e6 jmp SHORT $L469
$L470:
; 7 : for(i=0;i<10;++i) j+=1;
00029 c7 45 fc 00 00
00 00 mov DWORD PTR _i$[ebp], 0
00030 eb 09 jmp SHORT $L471
$L472:
00032 8b 55 fc mov edx, DWORD PTR _i$[ebp]
00035 83 c2 01 add edx, 1
00038 89 55 fc mov DWORD PTR _i$[ebp], edx
$L471:
0003b 83 7d fc 0a cmp DWORD PTR _i$[ebp], 10 ; 0000000aH
0003f 7d 0b jge SHORT $L473
00041 8b 45 f8 mov eax, DWORD PTR _j$[ebp]
00044 83 c0 01 add eax, 1
00047 89 45 f8 mov DWORD PTR _j$[ebp], eax
0004a eb e6 jmp SHORT $L472
$L473:
is less efficient. Perhaps you need to get a decent compiler.
Re:Yeah but... (Score:2)
Thanks dude, I just sent this to Bjarne Stroustrup (Score:1, Funny)
C++ is just WRONG. The proper way to do it is ++C. The other way is both less efficient, and it makes it seem as if you need the original value of C for some special processing, when in fact you don't.
Re:G4 kicks butt. (Score:2)
"G4 has 128 bits in it! Bits make computer go fast! Bits good!"
Re:PXOR can do xor on 64bit numbers (Score:2)