
Itanium Problems 479
webdev writes "An article in today's NYTimes (free, but registration required) highlights some industry concerns over Itanium. The author suggests the usual line that what's bad for Intel is bad for the computer industry. Does anyone know the power consumption of IBM's 64-bit effort, the GPUL?"
IBM's Processor (Score:4, Interesting)
Re:IBM's Processor (Score:5, Informative)
When's the renaming? (Score:3, Funny)
hrm, somethings amiss, me thinks (Score:5, Insightful)
Skeptical? More like, forget it Chachi, it ain't happening.
I guess the larger companies don't get it. Corporations are struggling. Companies are in holding patterns, waiting for the mess, erm, economy, to level off.
Can I have a job now making millions being a skeptical technologist?
Re:hrm, somethings amiss, me thinks (Score:4, Insightful)
This current bust is mainly just a post-bubble bust, just like "The New Economy" was mainly just a bubble. Companies will eventually start spending again, and eventually they'll even start overspending again, and then cut projects, rinse repeat.
It's not because of the economy (Score:3, Interesting)
But is Itanium a good product? That was the question of this article. Even during a good economy there won't be a big market for Itanium, because Intel just went in the wrong direction with its design (bloatware). At least I believe so. And Intel itself agrees with predictions of a 10% share of the server market.
Even in a good economy, people will just buy from competitors, as Google is going to do (and Google is in good shape economically already). With other x86-compatible processors or platform-independent programming, it's a buyer's market, and Itanium just doesn't seem to be the best buy.
I can applaud the decision to make a break from the old x86 architecture, but why did they design it as structurally complex bloatware?
First they head in the direction of more simplicity (the switch to a RISC core inside the CISC Pentiums), and then they double back into the complexity trap with Itanium.
Humans are just much better at improving simple things than at improving complex things. Why didn't they just go multi-core or something? I guess it's their CISC cultural heritage.
And if I may go slightly offtopic for a bit: I think there's something inelegant about those extremely power-hungry chips. Something just doesn't feel right about the fact that your solid-state chip's continued existence is dependent on the oil on the ball bearings of a spinning bit of plastic, and that it's just a matter of time before your PC/server breaks.
A PC should be as solid-state as possible: just make sure electricity keeps going in and it runs. I think server-farm cowboys/girls agree with me. They have better things to do than replace fans all day.
For this reason I like the Transmeta Crusoe, Via C3 and IBM G3.
However, even though it's power hungry, I do like the Intel Pentium 4's ability to survive the removal of its heatsink and continue running Q3 like nothing happened when you put the heatsink back on. Could you underclock and undervolt a 3GHz P4 to 1.5GHz and run it with a giant heatsink and no fan? I bet you could! At least it would survive.
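A rough back-of-the-envelope sketch, using the usual P ~ V^2 * f approximation for dynamic CMOS power; the wattage and voltage figures below are assumptions for illustration, not measured values:

#include <stdio.h>

/* Rough estimate of what an underclocked, undervolted P4 might dissipate.
 * Dynamic power scales roughly as P ~ C * V^2 * f, so halving the clock
 * and dropping the core voltage cuts power by much more than half.
 * All figures here are assumptions, not measurements. */
int main(void)
{
    double base_watts = 80.0;   /* assumed TDP of a 3 GHz P4          */
    double base_ghz   = 3.0;
    double base_volts = 1.5;    /* assumed stock core voltage         */

    double new_ghz    = 1.5;    /* underclocked                       */
    double new_volts  = 1.2;    /* undervolted (assumed stable point) */

    /* scale P ~ V^2 * f relative to the baseline */
    double scale = (new_volts * new_volts * new_ghz)
                 / (base_volts * base_volts * base_ghz);

    printf("estimated power at %.1f GHz / %.2f V: about %.0f W\n",
           new_ghz, new_volts, base_watts * scale);
    return 0;
}

Something in the mid-20s of watts is the sort of load a large passive heatsink can plausibly handle.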
Re:hrm, somethings amiss, me thinks (Score:2)
But, assuming Intel did their work correctly, IA64 will have long-term advantages over extensions of IA32.
Bear in mind that the original 8086 was for a while supplanted by the 8088 for compatibility reasons, but the 16-bit architecture won in the end.
Re:hrm, somethings amiss, me thinks (Score:2, Informative)
> bear in mind that the original 8086 was for a while
> supplanted by the 8088 for compatibility reasons
It was just a price decision. The 8088 could do everything the 8086 could, except that its memory bus was only 8 bits wide instead of 16. That made for a much cheaper machine to build (fewer wires). The performance difference was not very significant and the software was 100% compatible.
Re:hrm, somethings amiss, me thinks (Score:2)
Re:hrm, somethings amiss, me thinks (Score:2, Insightful)
Many large organizations are spending as much on IT this year as they were spending two years ago. Life goes on. Indeed, in real terms the economy continues to expand rather than contract, and total IT spending is increasing.
Panicky "end of the world stop everything!" thinking is the hallmark of someone who watches a little too much Dateline and 20/20.
Re:hrm, somethings amiss, me thinks (Score:2, Interesting)
There are always certain industries going through upheavals. Right now the
When people don't have jobs, or fear losing their job, they stop spending money. When they stop spending money, companies can't sell goods. It's one of those trickle down things.
No, it's one of those "self fulfilling prophecy" things. When people run around with their heads cut off because pets.com can't sell $3 of kitty litter with $20 shipping, then indeed consumer spending can collapse. But you know what? It hasn't happened. People have gotten wary of the media's constant "next big depression" bullshit (and the pessimists who run around proclaiming it whenever they can), and consumer spending has remained tremendously high. Most people have come to realize that, as mentioned, life goes on. The stock market, a completely ridiculous pyramid scheme that has little bearing on reality, crashed? Big deal. Most of us aren't retiring this year, and it always comes back: Nothing to panic about.
Google is your... (Score:4, Informative)
Re:Google is your... (Score:2)
Re:Google is your... (Score:4, Insightful)
Re:Google is your... (Score:2, Informative)
http://www.nytimes.com/2002/09/29/technology/ci
seems to be a nice bug
Bullshit (Score:3, Interesting)
Besides, all that is being "subverted" is the moronic registration process, something that the NYT willingly gives up for Google News readers.
itanium is a solid chip from what I've seen... (Score:2, Informative)
Re:itanium is a solid chip from what I've seen... (Score:2)
The thing I don't get about VLIW is this... (Score:2, Interesting)
Why not just work on n-way SMP, so that an application can monopolize one or more processors and still have cycles to spare for mundane work?
Re:itanium is a solid chip from what I've seen... (Score:4, Informative)
I know this was a joke, but a lot of people won't understand how silly this comment is. A nuclear reactor can really be quite small... but all it will do for you is get hot.
A lot of people don't seem to realize that a nuclear reactor is really just a fancy steam generator. The nuclear pile gets hot (heat, along with neutrons, is the primary by-product of a fission reaction), and that heat is used to boil water. The steam drives a turbine, which turns a generator to produce electricity.
So a trashcan-sized nuclear reactor isn't such a fanciful idea. But the enormous closed-loop steam turbine generator attached to it may be somewhat unwieldy.
Now, if you want to talk super-high-efficiency fuel cells, you've got my attention.
Re:itanium is a solid chip from what I've seen... (Score:4, Interesting)
Anyway, you don't necessarily need steam either. There are those nuclear batteries used on spacecraft and shit like that. Terribly inefficient, but you get electricity from a nuclear reaction with no moving parts at all. And don't forget gas turbines, which many of the more modern nuclear power station designs use. They can be a lot smaller than comparable steam generator systems. For example, the Pebble Bed Modular Reactor [pbmr.com].
Ironic (Score:5, Informative)
My favorite quote from the article (Score:5, Funny)
He should follow that up by saying, "Here at Microsoft we have proved this time and time again."
I'd go with ... (Score:2)
SPECint / SPECfp vs. POWER4 / US III / P4 (Score:5, Informative)
http://www.hp.com/products1/itanium/performance/architecture/speccpu.html [hp.com]
-Kevin
NEC Scientist Fired Over Itanium/EPIC Criticism (Score:2, Interesting)
I submitted this a couple weeks ago, but I guess it didn't make the grade:
Re:NEC Scientist Fired Over Itanium/EPIC Criticism (Score:2)
Migration path is everything. (Score:5, Interesting)
Itanium flopped before; chances are good it will flop again.
Re:Migration path isn't everything. (Score:2, Informative)
There's something even above compatibility (migration path), namely Moore's law. The #1 goal of a CPU company is staying on Moore's curve. Now the problem with x86 is that it is a f*cked up instruction set architecture, and because of its monstrosities (8 registers? stack-based FP?) it has become a major hurdle to staying on Moore's curve. Good luck to AMD with their 64-bit thing ... I seriously doubt that their 64-bit chip will be any faster than their own Athlon (going from 16-bit to 32-bit registers is a big deal; from 32 to 64, not so much).
The Raven
Re:Migration path isn't everything. (Score:2)
Could have fooled me. It seems like just yesterday that MIPS said they would change the world. Not buying it, this time around.
C//
Re:Migration path isn't everything. (Score:2)
Now the problem with x86 is that it is a f*cked up instruction set architecture, and because of its monstrosities (8 registers? stack-based FP?) it has become a major hurdle to staying on Moore's curve.
Huh, that's really interesting. I'd say Intel and AMD have been doing a pretty good job. If what you say is true, how come we aren't all running RISC computers now? Well, in a way we are. Today's AMD and Intel chips are not truly CISC anymore. You might want to read up on the features of CISC and RISC and then read the specs on a K7 or P4.
Re:Migration path isn't everything. (Score:2)
Non-Reg Link (Score:2)
Pricing problem (Score:3, Interesting)
The article itself doesn't mention any problem with the chip other than electricity usage and heat, which are both a product of the large amount of cache in the current configuration.
Re:Pricing problem (Score:5, Insightful)
Wrong... Intel IS in the compiler business: they have their own compiler called "icc". They could give code to GCC, but they won't because it'll hurt their icc business. You'd think they'd be smart and release their optimizations to GCC to help their processors perform better, but Intel doesn't think this way. They want you to believe their slick marketing that their processors really are better, AND they want you to shell out for their compiler (which may or may not actually get those processors to perform well--you won't know until you pay up and try it out). Of course, how does this help all of us who use open-source software (which includes Google mentioned in this article), compiled by GCC? It doesn't.
Re:Pricing problem (Score:3, Informative)
Re:Pricing problem (Score:3, Insightful)
Re:Pricing problem (Score:2)
Sorry, but this is the way of the "old" world. In the "new" world of VLIW, the compiler can almost be thought of as a part of the chip itself; that is how closely the two are now coupled. The IP that Intel has in the Itanic version of ICC is huge, and represents more of an investment than just a few SSE optimizations or a scheduling trick or two. The fate of the Itanic rests solely on the performance of ICC (and of course MSVC, which you KNOW Intel has given plenty of input into. Wouldn't surprise me a bit if the IA64 version of MSVC just execs icc). So this isn't quite as cut and dried as you may think (i.e. evil Intel holding back info from the "good guys").
Re:Pricing problem (Score:2)
Riiiight. ESPECIALLY NOT those people who build Beowulf clusters out of Alphas. Nope, not them for sure.
Really, your logic isn't very logical, nor does it apply to the real world (at least not in all cases).
Re:Pricing problem (Score:2)
Open source completely dominates in research areas; it's areas like office productivity where closed source is still dominant.
Re:GCC is mediocre (Score:3, Interesting)
Just as a quick point: Compaq offered GEM to Linus for free about 5 years ago. They wouldn't agree on licensing terms, so GEM didn't become the system compiler, but Compaq's willingness to give away a crown jewel to woo the Linux crowd proves I'm not entirely out of line.
Re:GCC is mediocre (Score:2)
Re:GCC is mediocre (Score:2)
There's one test that Tech Report are fond of (Sphinx speech recognition) that's faster on the P4 using a Microsoft compiler, and faster on the Athlon with Intel's.
Re:GCC is mediocre (Score:3, Insightful)
While you may think that GCC should not expect anything from Intel, Intel disagrees; Intel has provided documentation as well as money for Red Hat (and Cygnus before them) to get free software running decently on their hardware. AMD has done the same; it is simply good business.
GCC is a portable compiler; ia64 is a radically new architecture that needs special treatment from compilers. It will take time to get things working well, and problems with compilers may be the factor that makes AMD win in the long run over Intel. If the ia64 is theoretically faster, but compilers generate better code for the less radical AMD 64-bit processor, AMD wins the performance battle. If you have to buy a compiler from Intel to get the same performance you get with AMD with the free compiler, same deal. For that reason, Intel will have a strong financial motivation to help GCC do better, even if this cuts into their compiler business.
Re:GCC is mediocre (Score:2)
Re:Pricing problem (Score:2, Interesting)
Re:Pricing problem (Score:2)
Is Intel doing the right thing? (Score:2)
Even with 220 million transistors in it, that is a lot of power. Intel should consider that big companies and small users don't always want the BEST of the BEST; they want something that is cost effective. As the story mentions, Google might prefer to use a lower-power chip because they could save millions in power costs. This can apply to small users too, as that chip alone could cost you up to $100 a month.
Look on the bright side: during the winter, when you're playing Doom 3, you're also heating the house!
Medevo
Re:Is Intel doing the right thing? (Score:2)
Re:Is Intel doing the right thing? (Score:2)
Re:Is Intel doing the right thing? (Score:2)
Better check your math again. I = P / V = 130/120 = 1.1 Amps.
Re:Is Intel doing the right thing? (Score:2)
Oh, for the love of christ... POWER == CURRENT * VOLTAGE. Even if this were a 130A chip (and it ain't), it would pull only about 3A at the fuse box.
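If anyone wants to check the arithmetic, here is a quick sketch. The 120 V mains figure, the ~1.5 V core voltage, and the $0.10/kWh price are assumptions; the 130 W number is just the figure being batted around in this thread.

#include <stdio.h>

/* Sanity check for the watts-vs-amps confusion above.  A 130 W chip fed
 * from 120 V mains draws roughly an amp at the fuse box (ignoring PSU and
 * cooling overhead), and even a hypothetical "130 A" chip at ~1.5 V core
 * voltage is only about 200 W of heat.  All inputs here are assumptions. */
int main(void)
{
    double chip_watts    = 130.0;   /* figure quoted in the thread */
    double mains_volts   = 120.0;   /* assumed wall voltage        */
    double price_per_kwh = 0.10;    /* assumed electricity price   */

    printf("line current: %.2f A\n", chip_watts / mains_volts);

    double hours_per_month = 24.0 * 30.0;
    double kwh = chip_watts * hours_per_month / 1000.0;
    printf("running 24/7 for a month: %.1f kWh, about $%.2f\n",
           kwh, kwh * price_per_kwh);

    double core_volts = 1.5, core_amps = 130.0;
    printf("a 130 A chip at %.1f V would dissipate %.0f W\n",
           core_volts, core_amps * core_volts);
    return 0;
}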
Err... Uhhh.... (Score:4, Funny)
He's absolutely correct. The most intelligent thing to do is to make insignificant [zdnet.com], incremental [microsoft.com] changes [microsoft.com], and charge customers full price for each of them.
Re:Err... Uhhh.... (Score:2)
So any insignificant upgrade is going to require a new CPU (not counting microcode updates)
Not dead, just new (Score:5, Informative)
As always, the NYT ignored the facts that you'll need the 64-bit address space for large applications, that it has excellent memory bandwidth, and that the customers requiring such a system weren't explicitly interviewed or mentioned. The heat issue is true, and that's its one failing, but as with the Alpha, it will get better in time. (I still remember the pre-release rumors that DEC was going to have to build a liquid-cooled Alpha workstation.)
64 (Score:2)
Re:Not dead, just new (Score:2)
They've definitely cooked up a FUD article here.
Flexibility? Speciality? (Score:2, Insightful)
Of course faster is always better in database mining and protein folding and nuclear explosion modeling, but I wonder if the field isn't ripe for a move away from generalized powerhouse chips to more specialized chips that run at lower clock speeds (perhaps) and have lower power consumption (a must). Personal computing made advances due to cheap general use chips, but as our computers become specialized appliances, a move towards specializing the insides makes sense to me.
Itanium seems to me to be too late to the party. It's an old-school chip, and probably/perhaps a badassed one at that. But computer users, from desktop to database, are likely to appreciate specialized chips in multiprocessor or multimachine configurations that offer that flexibility. I don't know if it's possible, but on the desktop side, rather than one 3 GHz general-purpose chip, maybe two cheaper and less power-hungry 2 GHz chips, each with a unique specialization for certain types of tasks, might perform better. One chip to rule them all is so last century.
Regardless of the feasibility of what I've said, lower power consumption is really cool (no pun intended, honestly). Just because it doesn't have an exhaust pipe port doesn't mean that the computer doesn't pollute.
At 135 watts per chip... (Score:5, Funny)
In fact, I know from a reliable source that tomorrow the president of the USA is going to reveal that the Iraqi army has managed to get hold of 2000 Itanium chips and is threatening to turn them all on and melt the Earth.
RMN
~~~
What is bad for Intel isn't necessarily bad... (Score:4, Insightful)
Bad for Intel probably means good for the industry, as we won't have another half-assed chip shoved down our throats.
I could blind a man (Score:2)
Seriously, is efficiency no longer even considered?
Re:I could blind a man (Score:2)
Itanium Power Consumption (Score:3, Funny)
The beast has a 220V power line coming into it, and we've decided that the reason it's so heavy is that if it were lighter, the fans would propel it across the room like a jet engine.
Migration path, opteron, and stuff... (Score:2, Informative)
I think he's very right. Take SMP, for instance. A single-threaded application running on an SMP system has no advantage over the same app running on a single-processor system.
In the same way, most applications aren't even aware of 64 bits. So they will continue adding, multiplying, and addressing memory in 32 bits -- whether they be binary ports, or actually recompiled versions.
For the lazy man's migration path of using the same apps on a 64 bit system, there will be no advantage whatsoever of using a 64 bit system.
On the other hand, if you are recompiling, you might as well switch to the EPIC instruction set (Itanium) and get a de facto performance boost, even if you don't port the code to be 64-bit aware... that's something you won't get even if you recompile for the 64-bit CISC Opteron.
And last, if you are refactoring, or re-designing your app for 64 bits, there is no migration path per se.
So I think it all boils down to: power consumption (for Google), marketing strategy (i.e. hyping strategy), and the economy.
Re:Migration path, opteron, and stuff... (Score:2)
This isn't quite correct, because, at a minimum, the operating system can arrange for each 32-bit application to _at least_ be given its _OWN_ 32-bit address space (using a sort of virtual segmenting), for 4 gig of addressable memory per application.
Meanwhile, the main advantage isn't that any one older program can or can't get memory, but rather that they all continue to work, and the few you need to upgrade to 64-bit addressing can be upgraded incrementally. This saves you quite a lot of $$$ on software budgets.
C//
Joyous ex-Itanic Designer (Score:2, Interesting)
I went into Merced with all the hope and excitement of a new engineer. I left hating the profession and the management that controls it.
Regardless of how much Intel stock makes up my portfolio, I hope Itanic crashes and burns. I hope Yamhill (64-bit x86, designed in Oregon) succeeds flawlessly. I am way too cynical to believe it'll happen but, I hope the success of Yamhill forces Barrett to realize the uselessness of Santa Clara design, causing him to shut it down and rely on Oregon design to do it right. But, considering that Gary Thomas was "punished" for his failures on Itanic by being given a ton of options and a cushy job in Intel-Folsom, Itanic and Santa Clara "mis-design" will just continue along.
Of course, I am just a bitter old engineer taking cheap shots.
Long live Itanic, Intel's Verdun!
Intel relies on compiler, Turing says it's foolish (Score:4, Interesting)
The Itanium relies heavily on exceedingly good compilers that will perform, for the IA64, the same level of optimization that regular on-the-fly predictive optimization does in RISC chips.
The main obstacle with this method is that Turing's theorem says static compile-time optimization will never work as well as dynamic optimization. This is because, roughly, the only way to know what a program will do with a given set of input data is to execute it with its actual data set. Here is a link [theregister.co.uk] where a reader of The Register addressed this concern in 1999.
Is anyone aware of how well the limits predicted by Turing apply to the compile-time IA64 algorithms?
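For what it's worth, here is a toy C sketch of the static-vs-dynamic point (my own illustration, nothing IA-64-specific): the compiler cannot know which way the branch inside the loop usually goes, because that depends entirely on the input, while run-time mechanisms such as branch predictors or dynamic recompilation can adapt to the data actually seen.

#include <stdio.h>
#include <stdlib.h>

/* The compiler cannot statically know whether the branch below is taken
 * 1% or 99% of the time; only the data decides.  Hardware prediction (or
 * profile feedback from a representative run) can adapt. */
long sum_positives(const int *data, long n)
{
    long sum = 0;
    for (long i = 0; i < n; i++) {
        if (data[i] > 0)        /* direction depends purely on the input */
            sum += data[i];
    }
    return sum;
}

int main(int argc, char **argv)
{
    long n = 1000000;
    int *data = malloc(n * sizeof *data);
    int bias = (argc > 1) ? atoi(argv[1]) : 50;   /* % of positive values */

    for (long i = 0; i < n; i++)
        data[i] = (rand() % 100 < bias) ? 1 : -1;

    printf("sum = %ld\n", sum_positives(data, n));
    free(data);
    return 0;
}

Profile-guided compilation narrows the gap, but only if the profiling run resembles the real workload.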
Re:Dynamic optimization in software (Score:5, Insightful)
it's a bloody pain and keeping software back (Score:3, Insightful)
Itanium is a step backwards for software. It makes the tradeoff of giving you somewhat better performance for a few languages and benchmarks, with complex compilers, while being even harder and more problematic for anything that deviates from the canonical benchmarks. That locks new kinds of software even more into a straitjacket than it already has been.
If Intel sees dynamic compilation as the solution to the complexity of Itanium, they should do the same thing Transmeta does: define a simpler instruction set for compilers to target and make the dynamic compilation and optimization software effectively part of the chip.
INTERGRAPH OWNS THIS PATENT; wins suit against intel (Score:3, Informative)
Intel Corp. said it failed to reach an agreement in a $250 million patent lawsuit by computer-services company Intergraph Corp., which already was paid $300 million by the world's biggest chipmaker to resolve an earlier dispute.
some info can be found here:7 0,146182,00.html
http://www.intergraph.com/intel/legalpic.asp
and
http://straitstimes.asia1.com.sg/money/story/0,18
Today, Intel and Intergraph announced a breakdown in court-ordered mediation to resolve a quarter-billion-dollar patent infringement suit over the Itanium.
In July last year, Intergraph (www.intergraph.com) brought a lawsuit against Intel alleging that the basic design of the Itanium violates at least two patents they had held for ten years. Intergraph alleges that the concept of software-based instruction routing in highly parallel architectures was developed for their C5 (aka Clipper) chip.
Itanium's basic design is based on an HP concept for highly parallel processing in which the order of execution on the chip can actually create race conditions between dependent calculations. This allows performance enhancements and a simplification of the handshaking hardware, since basically the chip does not have to wait for the slowest operations. Instead, the job of preventing race conditions falls to the compiler. The compiler must model how the processor will execute an instruction in the context of the other instructions the chip will be executing in parallel, and then re-order the generated code to prevent erroneous computations.
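As a C-level sketch of the constraint being described (a conceptual illustration only, not actual IA-64 code; the function names are made up): on an EPIC machine the compiler, not the hardware, decides which operations issue together, and it may only pack operations into the same issue group when it can prove they do not depend on one another.

#include <stdio.h>

int independent(int a, int b, int c, int d)
{
    int x = a + b;   /* shares no inputs or outputs with the next add, */
    int y = c + d;   /* so the compiler may schedule them in parallel  */
    return x ^ y;
}

int dependent(int a, int b, int c)
{
    int x = a + b;   /* must complete first...                         */
    int y = x + c;   /* ...because this add reads x, so the compiler   */
    return y;        /* has to separate the two in its schedule        */
}

int main(void)
{
    printf("%d %d\n", independent(1, 2, 3, 4), dependent(1, 2, 3));
    return 0;
}

On the real chip this grouping is expressed through instruction bundles and stop bits, but the dependence analysis is the same idea.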
It would appear the methodology for achieving this was patented by Intergraph for the C5 chip. The C5 project was eventually abandoned and Intergraph partnered with Intel to replace the CPU in their workstations with Pentiums.
We all know that Intel was previously accused of stealing the Alpha processor designs, and that lawsuit was "settled" by Intel buying out the impoverished Alpha business from DEC.
This lawsuit is for 250 million dollars, which is about 5% of the entire 5-billion-dollar development cost of the Itanium. Mediation talks have broken down, so the suit will presumably go ahead. If you are interested, try a Google search; there's lots of info out there, as this trial has dragged on for over a year.
Re:Problems with relying on Turing (Score:2)
So let's see: Gold was $20 an ounce in 1934. Hmmm.... he could have kept the $20 instead of an ounce and would have had all that cash instead of gold, which is worthless per ounce today.
Obviously Turing was a complete idiot.
Re:Intel relies on compiler, Turing says it's fool (Score:3, Insightful)
That's quite true for some architectures. However, note that the PowerPC CPU, for example, does a lot of optimization at execution time with branch caching, speculative execution and other predictive techniques. This is on top of code that has already been somewhat optimized at compilation.
The question is not whether the IA-64 is the only processor to do these compile-time optimizations. The question is whether it's wise to rely mainly on compile-time static optimization when you hope to be a performance leader. Turing says that you cannot, because static optimization, obtained by guessing the execution code path, is always inferior to dynamic optimization generated from the actual code path with the actual data.
Do you have pointers regarding the amount of dynamic optimization in the IA-64? In other words, if the compiler is only run-of-the-mill, can the IA-64 still perform?
The number one value of a 64 bit CPU... (Score:2)
PAE, for those of you who are as yet unaware of it, allows you to access more than 4G of physical RAM by reviving an old technique called "bank selection". It's fairly useless for most of the applications for which you would want more RAM in the first place, since it doesn't increase the allowable size of the kernel or process virtual address space at all; the only thing it lets you do is use RAM instead of swap, and it doesn't let you run lots of applications at the same time, not without a lot of VM changes.
Intel keeps trying to sell us Itanium on performance when, in fact, we don't care. What we care about is the ability to operate on larger data sets.
Intel: just because your delivery of access to larger amounts of physical RAM on 32-bit processors, via PAE, was not welcomed (mostly because it was implemented in a way that was totally useless to software engineers and OS designers), doesn't mean that access to more RAM *by a single kernel or process* will not be the major selling point for Itanium: it will be.
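To put numbers on the address-space point, a quick sketch (just powers of two, with PAE taken as the usual 36-bit physical extension):

#include <stdio.h>

/* The limits being argued about: PAE widens the *physical* address to
 * 36 bits, but each process (and the kernel) still lives in a 32-bit
 * virtual address space.  Only a 64-bit architecture raises the
 * per-process limit. */
int main(void)
{
    double gib = 1024.0 * 1024.0 * 1024.0;

    printf("32-bit virtual address space : %6.0f GiB per process\n",
           4294967296.0 / gib);                  /* 2^32 */
    printf("PAE physical address space   : %6.0f GiB total\n",
           68719476736.0 / gib);                 /* 2^36 */
    printf("64-bit virtual address space : %.0f GiB per process\n",
           18446744073709551616.0 / gib);        /* 2^64 */
    return 0;
}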
Get your crap together, and quit concentrating on clock rates.
-- Terry
Is it just me, or is... (Score:2)
Re:Is it just me, or is... (Score:3, Insightful)
Re:Is it just me, or is... (Score:2)
too much energy for large data center (Score:2, Interesting)
Allegedly, large data centers such as Google are sensitive to power consumption. Of course we are not just talking about the power consumption of the processor. We are also talking about the power needed to keep the boxes cool, as well as the power needed to run the air conditioning that cools the data room at about 20% efficiency. What this means is that several watts of energy must be used to cool each watt used by the computer equipment.
I agree that Intel may have misjudged the market for this chip. If AMD can produce a chip that is almost as good but much more efficient, it may well be more economical to buy three AMD-based machines instead of two Intel-based machines. This becomes even more plausible as a box becomes a single disposable commodity component in a very large networked array. Much like the auto industry, it may be practical to build inefficient cars when energy prices are low, but it is nevertheless a risky venture.
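The shape of that calculation, as a quick sketch; every number in it (per-box wall power, cooling overhead per IT watt, electricity price) is a made-up assumption, purely to illustrate the tradeoff:

#include <stdio.h>

/* Back-of-the-envelope comparison of "three cooler boxes vs. two hot ones".
 * The wattages, the cooling overhead factor, and the electricity price are
 * all assumptions for illustration. */
static double yearly_cost(int boxes, double watts_per_box,
                          double cooling_overhead, double price_per_kwh)
{
    double total_watts  = boxes * watts_per_box * (1.0 + cooling_overhead);
    double kwh_per_year = total_watts * 24.0 * 365.0 / 1000.0;
    return kwh_per_year * price_per_kwh;
}

int main(void)
{
    double overhead = 1.0;    /* assume 1 W of cooling per IT watt */
    double price    = 0.10;   /* assumed $/kWh                     */

    printf("2 hot boxes    (300 W each): $%.0f/yr\n",
           yearly_cost(2, 300.0, overhead, price));
    printf("3 cooler boxes (120 W each): $%.0f/yr\n",
           yearly_cost(3, 120.0, overhead, price));
    return 0;
}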
One Thing I Never Understood... (Score:3, Insightful)
"Benefits" of killing the Alpha and PA-RISC... (Score:5, Interesting)
"There are other benefits for Hewlett-Packard. The Itanium allows the company to eliminate both of its current 64-bit chips -- the H.P. PA-RISC and Compaq Alpha. That alone should save the company $200 million to $400 million annually in development and manufacturing costs, according to Steven M. Milunovich, an analyst at Merrill Lynch."
Yeah, HP and Compaq have been fine stewards of their engineering legacy...
Re:"Benefits" of killing the Alpha and PA-RISC... (Score:2)
HP kills the technology of these two worthy chips by choosing a third option. In doing so, they effectively reduce the aggregate technical knowledge available for use by our society for their own gain.
As a society, we protect intellectual property so that the creators can use it, not so that they can lock it away. This is a case of current law perversely hampering the general welfare.
Intellectual property should only be protected when it is used by its owners.
Use it or lose it. If HP doesn't want to use Alpha or PA-RISC technology, then others should be allowed to do so.
Intel bashers take note: (Score:3, Insightful)
As for Itanium, there are quite a few ways it could succeed. It has the potential for serious performance. The super-wide architecture is perfect for code like scientific processing, image processing, and 3D graphics that is nice, regular, and easy to optimize and parallelize. And what kind of processing do you think is going to be popular in the future?
Re:P4 did flop, for quite awhile (Score:2)
>>>>>>>>>
Um, no. The P4 was initially aimed at the high-performance market, to whom RDRAM's cost really wasn't that much of an issue. The real problem was that even with RDRAM, the P4 was slower than a cheaper Athlon. The RDRAM factor is arguable (given that RDRAM is still the fastest memory for the P4) but the P4 really took off when they jacked up the clock speed and overtook AMD.
The fact is that for the past three years Intel has done a lot more wrong than right, stretching all the way back to the infamous recalled 1.13GHz P3--it's the first time in my memory that a shipping CPU was ever recalled by the manufacturer.
>>>>>>
Wow. Obviously, somebody doesn't remember the fdiv pentium. I'd hardly call the 1.13 GHz P3 infamous. They were so rare that the recall affected all of the five people who actually bought one. Besides that, and the trouble with the P4, which I referred to when I said they have had some initial problems with new products, what else have they done wrong?
In fact, it wasn't until the Northwood P4 2.53GHz variant that Intel started doing some things "right"--and that's been for only a few months now.
>>>>>>>>
Just because AMD was a good competitor doesn't mean that Intel wasn't doing the right things. They were working on jacking up the speeds on the P4, and that'll pay off significantly now that they've got a handle on it.
Everybody knew that the low IPC in the P4 would be made up for, eventually, in sheer clock speed--that wasn't debated as far as I can recall.
>>>>>>
Read up on the
What hardly anyone suspected was that AMD would be able to extend the Athlon architecture so well against Intel's Pentium architectures. Indeed, with a new stepping of the Thoroughbred core which started shipping only last week, the Athlon holds its own against the P4 and will do so up to the 3GHz level and maybe beyond. After that comes Hammer, which supposedly will start shipping at close to the MHz range where the Athlon XP leaves off, ~2.4GHz.
>>>>
Err, most of the stuff I've seen pegs the Athlon XP at around 2 GHz, not 2.4.
Only thing is that Hammer will be at least 25% faster than Athlon XP clock for clock, which makes it considerably faster than Northwood clock for clock, yet it will have no trouble scaling up in MHz.
>>>>>
I doubt it will have "no trouble." Due to the architecture, it simply won't be able to scale to the kind of clock-speeds the P4 will. Intel is gunning for 5+ GHz, real soon now. AMD will have a hard time keeping up.
hehehehe (Score:2, Interesting)
And of course, Apple would love that too, hehe
Leapfroggers take note (Score:2)
Then Intel will go back to their day job of manufacturing chips in incremental 25% improvements. Intel will reach the limits of power consumption before they reach the manufacturing tolerance limit.
itanic was doomed from the start (Score:2)
the continuing campaign is just throwing good money after bad. now is AMD's time to shine.
i'm considering doing my next project closed source just so that i can release it exclusively as opteron-only, because i love being right.
Actually, this ties in w/some stuff for me... (Score:2)
Re:Actually, this ties in w/some stuff for me... (Score:2)
also, low power is nice for running off of 48vdc power, as is required for telco gear. as it turns out, one of the last major industries to still be paying too much money for underperforming sun hardware is the telco/carrier industry (they still move at the same glacial pace they always have and haven't caught onto the fact that sun is a sinking ship )
so in review: low power consumption is good for sun because 1) some of their chips exhibit it 2) the only market they're still relevant in needs it
Why I want Itanium to succeed: (Score:3, Interesting)
Only If... (Score:5, Funny)
Only if you try to overclock it.
Re:Only If... (Score:2, Funny)
Re:last quote... (Score:2, Interesting)
Re:last quote... (Score:2, Insightful)
Doom 3...? (Score:2)
The Itanium is not meant to be a desktop chip. The problem is, it can't seem to cut it as server chip either (too expensive, too power-hungry).
You say there's no demand for 64-bit chips? I wonder why Sun and IBM are still in business, then...
RMN
~~~
Re:Doom 3...? (Score:2)
Hmmmn.... I would not be sure of that. I've seen a fair share of 'enterprise' servers running CS servers and/or the clients after hours. Good thing there is a PCI video card market too. (grin)
Re:Doom 3...? (Score:2)
It was pretty damned sweet!
Re:last quote... (Score:2)
already running x86 Linux boxes with 3.5G of RAM; any more than that and you need 64-bit. Google, for example, uses commodity x86 boxes and keeps their whole internet index in RAM; for that, cheap, big-memory 64-bit boxes would really come in useful.
Re:last quote... (Score:2)
Re:One good reason for 64-bit (Score:2, Interesting)
Re:Planet of the apes... (Score:3, Funny)
Yup, and IPv4, and people will still not buy a PC without a 1.44MB floppy drive, despite the fact that the last floppy disc was finally destroyed in 2589...
Re:Planet of the apes... (Score:2, Insightful)