Intel

Itanium Problems 479

webdev writes "An article in today's NYTimes (free but...) highlights some industry concerns over Itanium. The author suggests the usual "what's bad for Intel is bad for the computer industry" line. Anyone know the power consumption for IBM's 64-bit effort, the GPUL?"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • IBM's Processor (Score:4, Interesting)

    by rice_web ( 604109 ) on Sunday September 29, 2002 @07:17PM (#4355706)
    I'd venture to say that IBM's processor uses little more power than other PowerPC CPUs. Doesn't it sport SOI and other technologies to limit heat production? Heck, for an--albeit moderately poor--example of this, compare IBM's 750FX processor with the P4. At the same clock speed, the 750FX would consume roughly one fourth the power of the P4.
  • by Anonymous Coward on Sunday September 29, 2002 @07:17PM (#4355708)
    Because this thing really should be called the Itanic...
  • by Em Emalb ( 452530 ) <ememalb AT gmail DOT com> on Sunday September 29, 2002 @07:21PM (#4355728) Homepage Journal
    "It has taken an entire decade, an estimated $5 billion and teams of hundreds of engineers from the two companies to bring the first Itanium chip to market. As the struggles and costs mount for the companies, skeptical technologists say Itanium now has the hallmarks of a bloated project in deep trouble. It is already four years behind schedule, emerging just as companies are in no mood to spend money on technology"

    Skeptical? More like, forget it Chachi, it ain't happening.
    I guess the larger companies don't get it. Corporations are struggling. Companies are in holding patterns, waiting for the mess, erm, economy, to level off.

    Can I have a job now making millions being a skeptical technologist?
    • Given that Intel plans a 20 year life for the IA64 technology, they're going to go through a number of business cycles. The way to make money during the boom is to have built good products during the preceding bust, and have them ready to sell once there is a market for them. A poor economy can gut AMD's budget just as much as Intel's, actually improving IA64's long term prospects.

      This current bust is mainly just a post-bubble bust, just like "The New Economy" was mainly just a bubble. Companies will eventually start spending again, and eventually they'll even start overspending again, and then cut projects, rinse repeat.
      • "The way to make money during the boom is to have built good products during the preceeding bust, and have them ready to sell once there is a market for them."

        But is Itanium a good product? That was the question the article raises. Even during a good economy there will not be a big market for Itanium, because Intel just went in the wrong direction with its design (bloatware). At least I believe so. And Intel itself agrees with predictions of a 10% share of the server market.

        Even in a good economy, people will just buy from competitors as Google is going to do (and Google has good economics already). With other X86 compatible processors or platform independent programming, it's a buyer's market and Itanium just doesn't seem to be the best buy.

        I can applaud the decision to make a break from the old X86 architecture, but why did they design it as structurally complex bloatware?
        First they head into the direction of more simplicity (switch to RISC core inside the CISC Pentiums) and then they double back into the complexity trap with Itanium.

        Humans are just much better at improving simple things than they are at improving complex things. Why didn't they just go multi-core or something? I guess it's their CISC cultural heritage.

        And if I may go slightly offtopic for a bit: I think there's something inelegant about those extremely power hungry chips. Something just doesn't feel right about the fact that your solid-state chip's continued existence is dependent on the oil on the ball bearings of a spinning bit of plastic, and that it's just a matter of time before your PC/server breaks.

        A PC should be as solid-state as possible, just make sure electricity keeps going in and it runs. I think server farm cowboys/girls agree with me. They have better things to do than replace fans all day.

        For this reason I like the Transmeta Crusoe, Via C3 and IBM G3.

        However, even though it's power hungry, I do like the Intel Pentium 4's ability to survive the removal of its heatsink, and continue running Q3 like nothing's happened when you put the heatsink back on. Could you underclock and undervolt a P4 3GHz to 1.5GHz and run it using a giant heatsink without a fan? I bet you can! At least it would survive.
    • I guess the larger companies don't get it. Corporations are struggling. Companies are in holding patterns, waiting for the mess, erm, economy, to level off.

      Many large organizations are spending as much on IT this year as they were spending two years ago. Life goes on. Indeed, in actual terms the economy continues to expand rather than contract, and the total IT spending is increasing.

      Panicky "end of the world stop everything!" thinking is the hallmark of someone who watches a little too much Dateline and 20/20.
  • Google is your... (Score:4, Informative)

    by xenoweeno ( 246136 ) on Sunday September 29, 2002 @07:21PM (#4355733)
    ...friend [nytimes.com]!
    • Why do people keep posting that? If that keeps up, NYT may disable the &partner=google account, and we will have destroyed the usefulness of Google News.
      • by xenoweeno ( 246136 ) on Sunday September 29, 2002 @07:58PM (#4355888)
        partner=cmdrtaco [nytimes.com] appears to work just as well. You can use that one instead.
        • it actually works with any word after that... or no word at all. if you don't believe, try it out.

          http://www.nytimes.com/2002/09/29/technology/circuits/29CHIP.html?ex=1033963200&en=3b60e461ca6b0684&ei=5062&partner=

          seems to be a nice bug
      • Bullshit (Score:3, Interesting)

        The nytimes needs google *much* more than google needs the nytimes. Without the nyt, google *still* has thousands of news sources - without google, the nyt loses probably 20 to 30% of the page views they would get otherwise.

        Besides, all that is being "subverted" is the moronic registration process, something that the nyt willingly gives up for google news readers

  • I'm part of a team of people working on a largish supercomputer using Itanium 2. The things are fast fast fast. Much faster than I anticipated. It's special purpose, I think, which is why it defies industry logic.
    • Hrm... SGI hasn't laid you off yet?
    • It's all well and good to be able to execute 4 instructions at once, but most systems spend a large portion of their time in library routines (strlen), function prolog/epilog, and so on. Even assuming that you are running some pretty hard number crunching code that can parallelize the inner loops, you are still starving all of the other threads/processes that could be running.

      Why not just work on n-way SMP, so that an application can monopolize one or more processors and still have cycles to spare for mundane work?
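To illustrate the serial-code point in the comment above: a strlen-style scan carries a dependence from one iteration to the next, so a wide-issue core cannot simply retire several iterations at once. A minimal C sketch (illustrative only, not how any production strlen is actually written):

```c
#include <stddef.h>

/* A strlen-style scan: the decision to look at byte n+1 depends on the
   value of byte n, so the loop is inherently serial.  Extra issue slots
   on a wide core (e.g. a multi-operation EPIC bundle) mostly sit idle. */
size_t my_strlen(const char *s) {
    size_t n = 0;
    while (s[n] != '\0')   /* must see byte n before deciding to fetch byte n+1 */
        n++;
    return n;
}
```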
  • Ironic (Score:5, Informative)

    by sheepab ( 461960 ) on Sunday September 29, 2002 @07:23PM (#4355739) Homepage
    I just read a story on msnbc.com about AMD's 64-bit processor; I close the window, check Slashdot, and there is the story about Intel's Itanium. Anyway, here is the link for MSNBC: http://www.msnbc.com/news/813950.asp?0si=-
  • by wolfgang_spangler ( 40539 ) on Sunday September 29, 2002 @07:23PM (#4355741)
    "Every big computing disaster has come from taking too many ideas and putting them in one place, and the Itanium is exactly that," said Gordon Bell, a veteran computer designer and a Microsoft researcher."

    He should follow that up by saying, "Here at Microsoft we have proved this time and time again."
    • "Every big computing disaster has come from taking too many ideas and putting them in one place, and the Itanium is exactly that," said Gordon Bell, a veteran computer designer and a Microsoft researcher."
      "That's why here at Microsoft we just rip off everyone elses ideas and release OSs every year"
  • by khuber ( 5664 ) on Sunday September 29, 2002 @07:28PM (#4355764)
  • by Anonymous Coward

    I submitted this a couple weeks ago, but I guess it didn't make the grade:
    An anonymous reader writes "
    According to this InfoWorld article [infoworld.com], chief technologist Leonard Tsai [nanocluster.net], of NEC Solutions, has been fired over his criticism of Intel's new Itanium platform. At a conference in July, Tsai said 'that it would take years for engineers to learn the EPIC (Explicitly Parallel Instruction Computing) instruction set used in the Itanium chips, and that this would delay the adoption of the chip,' that it would 'take a massive effort to educate enough people about EPIC and the Itanium processors to make them successful,' and that 'Intel had "bullied" NEC into picking Itanium for its servers and that HP, as co-designer of EPIC, received preferential treatment from Intel.' So much for freedom of speech."


    • Freedom of speech has little to do with badmouthing your own employer (indirectly, maybe, in this case) or divulging confidential information. Freedom of speech simply does not apply here, and any company is within its rights to fire you for it. Depending on the circumstances, and the contracts you signed, you could even get sued for talking too much...
  • by mesozoic ( 134277 ) on Sunday September 29, 2002 @07:31PM (#4355771)
    AMD's x86-64 architecture will allow companies to upgrade individual parts of their software systems to 64-bit without having to replace everything else. That's the key to AMD's future success; it makes the migration path to 64-bit that much easier (and that much cheaper).

    Itanium flopped before; chances are good it will flop again.
    • I agree with you, up to a point ...

      There's something even above compatibility (migration path) - namely Moore's law. The #1 goal of a CPU company is staying on Moore's curve. Now the problem with x86 is that it is a f*cked up instruction set architecture, and because of its monstrosities (8 registers? stack-based FP?) it has become a major hurdle in staying on Moore's curve. Good luck to AMD with their 64 bit thing ... I seriously doubt that their 64 bit chip will be any faster than their own Athlon (going from 16 to 32 bit registers is a big deal, from 32 to 64 not so much)

      The Raven

      • ...and because of its monstrosities (8 registers? stack-based FP?) it has become a major hurdle in staying on Moore's curve...

        Could have fooled me. It seems like just yesterday that MIPS said they would change the world. Not buying it, this time around.

        C//
        Now the problem with x86 is that it is a f*cked up instruction set architecture, and because of its monstrosities (8 registers? stack-based FP?) it has become a major hurdle in staying on Moore's curve.

        Huh, that's really interesting. I'd say Intel and AMD have been doing a pretty good job. If what you say is true, how come we aren't all running RISC computers now? Well, in a way we are. Today's AMD and Intel chips are not truly CISC anymore. Might want to read up on the features of CISC and RISC and then read the specs on a K7 or P4.

      • Hey, at least you get 16 GPRs with x86-64.
  • Pricing problem (Score:3, Interesting)

    by jbolden ( 176878 ) on Sunday September 29, 2002 @07:33PM (#4355780) Homepage
    The only problem with the Itanium 2 is that Intel is only offering it in a high end configuration with lots of cache. The chip itself, when you normalize for cache, costs about as much as the P4. GCC already supports the Itanium, and Intel has great code it could give to GCC in terms of optimization (Intel doesn't make money in the compiler business). Apple is looking for a new chip, and if IBM doesn't work out, this is a great place to go. Grabbing Linux, BSD and Apple would put tremendous pressure on Microsoft.

    The article itself doesn't mention any problem with the chip other than electricity usage and heat, which are both a product of the large amount of cache in the current configuration.

    • Re:Pricing problem (Score:5, Insightful)

      by Grishnakh ( 216268 ) on Sunday September 29, 2002 @07:47PM (#4355839)
      GCC already supports the Itanium, and Intel has great code it could give to GCC in terms of optimization (Intel doesn't make money in the compiler business).

      Wrong... Intel IS in the compiler business: they have their own compiler called "icc". They could give code to GCC, but they won't because it'll hurt their icc business. You'd think they'd be smart and release their optimizations to GCC to help their processors perform better, but Intel doesn't think this way. They want you to believe their slick marketing that their processors really are better, AND they want you to shell out for their compiler (which may or may not actually get those processors to perform well--you won't know until you pay up and try it out). Of course, how does this help all of us who use open-source software (which includes Google mentioned in this article), compiled by GCC? It doesn't.
      • You know, you can download [intel.com] the compiler for evaluation purposes to actually see if there is a speedup in your application. The linux version is even free for non-commercial use.
      • Re:Pricing problem (Score:3, Insightful)

        by jbolden ( 176878 )
        Intel is not Watcom. They sell compilers to sell chips. They've often developed technologies and then given them away for free. To pick a good example, they spent a fortune developing a compiler for the i486/i860 combined systems. These never took off, but Intel did give the code to companies like SCO, Hauppauge and Microway.

      • You'd think they'd be smart and release their optimizations to GCC to help their processors perform better, but Intel doesn't think this way

        Sorry, but this is the way of the "old" world. In the "new" world of VLIW, the compiler can almost be thought of as a part of the chip itself; that is how closely the two are now coupled. The IP that Intel has in the Itanic version of ICC is huge, and represents more of an investment than just a few SSE optimizations or a scheduling trick or two. The fate of the Itanic rests solely on the performance of ICC (and of course MSVC, which you KNOW Intel has given plenty of input into. Wouldn't surprise me a bit if the IA64 version of MSVC just execs icc). So this isn't quite as cut and dried as you may think (i.e. Evil Intel holding back info from the "good guys").
    • Re:Pricing problem (Score:2, Interesting)

      by druiid ( 109068 )
      You state that Apple might be a potential buyer for the Itanic..... well, there are a couple of problems with that. All the jokes made about how hot AMD procs run are NOTHING compared to the Itanium. Literally, you can cook an egg on the things. Apple is commonly known for having systems with low(er) power requirements and low heat output. If you stick a HUGE chip in an Apple system, you've suddenly lost both of these. I doubt Apple is doing anything more than laughing at the Itanium.
      • Every chip with 3 megs of cache runs hot. It's the cache that's making it power hungry and hot (and also really expensive). Reduce the cache and you cut the price, the power needs and the heat.

  • A freaking 130 Amp Chip?
    Even with 220 million transistors in it, that is a lot of power. Intel should consider that big companies and small users don't always want the BEST of the BEST; they want something that is cost effective. As the story mentions, Google might prefer to use a lower power chip because they could save millions in power costs. This can apply to small users too, as that chip alone could cost you up to $100 a month.

    Think on the bright side: during the winter, when you are on Doom 3, you are also heating the house!

    Medevo
  • by gmhowell ( 26755 ) <gmhowell@gmail.com> on Sunday September 29, 2002 @07:40PM (#4355807) Homepage Journal
    "Every big computing disaster has come from taking too many ideas and putting them in one place, and the Itanium is exactly that," said Gordon Bell, a veteran computer designer and a Microsoft researcher.


    He's absolutely correct. The most intelligent thing to do is to make insignificant [zdnet.com], incremental [microsoft.com] changes [microsoft.com], and charge customers full price for each of them.

    • The difference is, of course, that there's no way to patch a CPU.

      So any insignificant upgrade is going to require a new CPU (not counting microcode updates)
  • Not dead, just new (Score:5, Informative)

    by fparnold ( 181906 ) on Sunday September 29, 2002 @07:41PM (#4355814)
    We've ported chemistry simulation code to the pre-release ITA-2, and run benchmarks. There's not much like it, performance-wise, and on a cycles/dollar scale, it's in a class by itself. It smokes US-IIIs, walks away from the Alpha, and keeps pace handily with the Power4, at a more academically tolerable price. It's a good chip in its second incarnation, and has the misfortune to be introduced during a recession.

    As always, the NYT ignored that you'll need the 64-bit address space for large applications and that the chip has excellent memory bandwidth, and the customers requiring such a system weren't explicitly interviewed or mentioned. The heat issue is true, and that's its one failing, but as with the Alpha, it will get better in time. (I still remember the pre-release rumors that DEC was going to have to build a liquid-cooled Alpha workstation.)
    • by akb ( 39826 )
      NYT contrasted I2 with AMD's upcoming 64 bit offering quite prominently.
    • The NYT isn't in the business of interviewing scientists and high-end users. They interview someone who regular readers can relate to, and a 'search engine' company is someone like that.

      They've definitely cooked up a FUD article here.
  • The emergence of the 64-bit chip market is pretty exciting, even to an ignoramus like me, but this article got me thinking about some things. The whole power consumption issue is really undervalued, I think. We've gotten to the point that most chips are fast and powerful enough to do tasks efficiently. But I've heard that specialized chips are more efficient at lower clock speeds and power consumption, yet suffer from their rigidity and restriction to a certain type of processing. Maybe it's time to give specialized chips their due and move flexibility off the chip itself and into multi-proc (using different specialized chips) or even multi-machine situations.

    Of course faster is always better in database mining and protein folding and nuclear explosion modeling, but I wonder if the field isn't ripe for a move away from generalized powerhouse chips to more specialized chips that run at lower clock speeds (perhaps) and have lower power consumption (a must). Personal computing made advances due to cheap general use chips, but as our computers become specialized appliances, a move towards specializing the insides makes sense to me.

    Itanium seems to me to be too late to the party. It's an old-school chip, and perhaps a badass one at that. But computer users, from desktop to database, are likely to appreciate specialized chips in multiprocessor or multimachine configurations that provide that flexibility. I don't know if it's possible, but on the desktop side, rather than have a 3 GHz general chip, maybe two cheaper and less power hungry 2 GHz chips, each with a unique specialization for certain types of tasks, might perform better. One chip to rule them all is so last century.

    Regardless of the feasibility of what I've said, lower power consumption is really cool (no pun intended, honestly). Just because it doesn't have an exhaust pipe port doesn't mean that the computer doesn't pollute.
  • by Rui del-Negro ( 531098 ) on Sunday September 29, 2002 @07:46PM (#4355834) Homepage
    ...can you imagine a beowulf cluster of these?

    In fact, I know from a reliable source that tomorrow the president of the USA is going to reveal that the Iraqi army has managed to get hold of 2000 Itanium chips and is threatening to turn them all on and melt the Earth.

    RMN
    ~~~

  • by Shivetya ( 243324 ) on Sunday September 29, 2002 @07:51PM (#4355852) Homepage Journal
    Heaven knows they have a copy of MS's book on corporate behaviour when it comes to competitors.

    Bad for Intel probably means good for the industry, as we won't have another half-assed chip shoved down our throats.

  • using as many 10 watt bulbs as that thing would light...

    Seriously, is efficiency no longer even considered?
    • I've never seen a 10 watt lightbulb. A run of the mill lightbulb is about 100 watts, not much less than this processor here. And you cannot blind someone with a 100 watt lightbulb except by poking his eyes out with it.
  • by guinan ( 191856 ) on Sunday September 29, 2002 @07:57PM (#4355877)
    We have an early model of the Itanium ( given to us free by HP ;-).

    The beast has a 220V power line coming into it, and we've decided that the reason it's so heavy is that if it was lighter, the fans would propel it across the room like a jet engine.
  • "It may not be as simple as people think it is to take advantage of a 64-bit processor,"

    I think he's very right. Take for instance SMP. A single threaded application running on an SMP system has no advantage over the same app running on a single processor system.

    In the same way, most applications aren't even aware of 64 bits. So they will continue adding, multiplying, and addressing memory in 32 bits -- whether they be binary ports, or actually recompiled versions.

    For the lazy man's migration path of using the same apps on a 64 bit system, there will be no advantage whatsoever of using a 64 bit system.

    On the other hand, if you are recompiling, you might as well switch to the EPIC instruction set (Itanium), and get a de facto performance boost -- even if you don't port the code to be 64-bit aware... that's something you won't get even if you recompile for the 64-bit CISC Opteron.

    And last, if you are refactoring, or re-designing your app for 64 bits, there is no migration path per se.

    So I think it all boils down to: power consumption (for google), marketing strategy (ie. hyping strategy), and economy.

    • In the same way, most applications aren't even aware of 64 bits. For the lazy man's migration path of using the same apps on a 64 bit system, there will be no advantage whatsoever of using a 64 bit system.

      This isn't quite correct, because, at a minimum, the operating system can arrange for each 32-bit application to _at_least_ be given its _OWN_ 32-bit address space (using a sort of virtual segmenting) for 4 gig of addressable memory per application.

      Meanwhile, the main advantage isn't that any one older program can or can't get memory, but rather that they all continue to work, and the few you need to upgrade to 64-bit addressing can be done incrementally. This saves you quite a lot of $$$ on software budgets.

      C//
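A small C sketch of the 32-bit-versus-64-bit point discussed above, assuming the usual ILP32 and LP64 ABIs (the exact type widths are an assumption for illustration, not a statement about any particular vendor's toolchain):

```c
#include <stdio.h>

/* Compiled as a 32-bit (ILP32) binary this prints 4/4/4; rebuilt for a
   typical 64-bit Unix ABI (LP64) it prints 4/8/8.  An unmodified 32-bit
   binary keeps doing 32-bit arithmetic and 32-bit addressing no matter
   what silicon it happens to run on. */
int main(void) {
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    return 0;
}
```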

  • by Anonymous Coward
    As an ex-Itanic designer, I can't help but get a warm fuzzy feeling every time I read bad news about Itanic. I sat there for years and watched upper-middle management screw over the project (and each other) in order to advance their careers. The only escape (especially after they froze internal job transfers) most of us grunts got was a job at a new company.

    I went into Merced with all the hope and excitement of a new engineer. I left hating the profession and the management that controls it.

    Regardless of how much Intel stock makes up my portfolio, I hope Itanic crashes and burns. I hope Yamhill (64-bit x86, designed in Oregon) succeeds flawlessly. I am way too cynical to believe it'll happen, but I hope the success of Yamhill forces Barrett to realize the uselessness of Santa Clara design, causing him to shut it down and rely on Oregon design to do it right. But, considering that Gary Thomas was "punished" for his failures on Itanic by being given a ton of options and a cushy job in Intel-Folsom, Itanic and Santa Clara "mis-design" will just continue along.

    Of course, I am just a bitter old engineer taking cheap shots.

    Long live Itanic, Intel's Verdun!
  • by SysKoll ( 48967 ) on Sunday September 29, 2002 @08:09PM (#4355931)

    The Itanium relies heavily on exceedingly good compilers that will perform for the IA64 the same level of optimization that regular, on-the-fly predictive optimization does in RISC chips.

    The main obstacle with this method is that Turing's theorem says static compile-time optimization will never work as well as dynamic optimization. This is because, roughly, the only way to guess what a program will do with a given set of input data is to execute it with its actual data set. Here is a link [theregister.co.uk] where a reader of The Register addressed this concern in 1999.

    Is anyone aware of how well the limits predicted by Turing can apply to the compile-time IA64 algorithms?

    -- SysKoll
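As a rough illustration of the static-versus-dynamic optimization question raised above (a sketch, not a claim about any specific compiler): the bias of the branch below is a property of the input data, so a compile-time scheduler can only guess at it, while a run-time or profile-guided optimizer can measure it first.

```c
#include <stddef.h>

/* Whether this branch is taken 1% or 99% of the time depends on the
   data set, not on the source code.  A static compiler must pick one
   schedule for all inputs; a dynamic optimizer (or a profiled rebuild)
   can specialize for the distribution actually observed. */
long count_over(const int *v, size_t n, int threshold) {
    long hits = 0;
    for (size_t i = 0; i < n; i++) {
        if (v[i] > threshold)
            hits++;
    }
    return hits;
}
```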
    • by CustomDesigned ( 250089 ) <stuart@gathman.org> on Sunday September 29, 2002 @09:08PM (#4356149) Homepage Journal
      Dynamic optimization is not restricted to hardware. Java Hotspot will do well with Itanium (if Sun survives), and I believe Smalltalk and LISP have dynamic optimization as well. The way I see it, Virtual Machines are the future of high performance computing. And yes, .NET is important for Microsoft to prosper in the non-IA32 world. (Although I hate it when the wicked prosper.)
      • It has taken 20 years to get even the mediocre dynamic optimization that Java offers, and it works only because the Java language is fairly inconvenient and restrictive. Smalltalk and Lisp attempt dynamic optimization, but they fail miserably where it counts: numerical code; for that you have to drop back into a mess of type declarations and unportable hints to the compiler.

        Itanium is a step backwards for software. It makes the tradeoff of giving you somewhat better performance for a few languages and benchmarks, with complex compilers, while being even harder and more problematic for anything that deviates from the canonical benchmarks. That locks new kinds of software even more into a straitjacket than it already has been.

        If Intel sees dynamic compilation as the solution to the complexity of Itanium, they should do the same thing Transmeta does: define a simpler instruction set for compilers to target and make the dynamic compilation and optimization software effectively part of the chip.

    • from bloomberg news service (bloomberg.com) Intel, Intergraph Fail in Mediation of Chip-Patent Dispute
      Intel Corp. said it failed to reach an agreement in a $250 million patent lawsuit by computer-services company Intergraph Corp., which already was paid $300 million by the world's biggest chipmaker to resolve an earlier dispute.

      some info can be found here:
      http://www.intergraph.com/intel/legalpic.asp
      and
      http://straitstimes.asia1.com.sg/money/story/0,1870,146182,00.html

      Today, Intel and Intergraph announced a breakdown in court-ordered mediation to resolve a quarter-billion-dollar patent infringement suit against the Itanium.

      In July last year, Intergraph (www.intergraph.com) brought a lawsuit against Intel alleging the basic design of the Itanium violates at least two patents they have held for ten years. Intergraph alleges the concept of software-based instruction routing in highly parallel architectures was developed for their C5 (aka Clipper) chip.

      The Itanium's basic design is based on an HP concept for highly parallel processing in which the order of execution on the chip can actually create race conditions for dependencies in calculations. This allows performance enhancements and simplification of handshaking hardware, since basically the chip does not have to wait for the slowest operations. Instead, the job of preventing race conditions falls to the compiler. The compiler must model how the processor will execute an instruction in the context of the other instructions the chip will be executing in parallel, and then reorder the code to prevent erroneous computations.

      It would appear the methodology for achieving this was patented by Intergraph for the C5 chip. The C5 project was eventually abandoned and Intergraph partnered with Intel to replace the CPU in their workstations with Pentiums.

      We all know that Intel was previously accused of stealing the Alpha processor designs, and that lawsuit was "settled" by Intel buying out the impoverished Alpha operation (DEC).

      This lawsuit is for $250 million, which is about 5% of the entire $5 billion development cost of the Itanium. Mediation talks have broken down, so the suit will presumably go ahead. If you are interested, try a Google search; there's lots of info out there, as this trial has dragged on for over a year.
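To make the compiler's scheduling burden mentioned above concrete, here is a hedged C sketch (illustrative only): on an explicitly parallel design the compiler may bundle operations from different iterations only if it can establish that the memory accesses are independent.

```c
/* On an explicitly parallel machine the compiler, not the hardware,
   decides which operations may issue together.  It can bundle loads and
   stores from different iterations only if it can prove that dst does
   not alias a or b; the C99 'restrict' qualifier is one way for the
   programmer to hand it that proof. */
void vmul(double *restrict dst, const double *restrict a,
          const double *restrict b, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = a[i] * b[i];
}
```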

  • The number one value of a 64 bit CPU, to my mind, is the ability for it to address more than 4G of RAM, without destroying locality, like the PAE does on 32 bit processors.

    PAE, for those of you who are, as yet, unaware of it, allows you to access more than 4G of physical RAM by reviving an old technique called "bank selection". It's fairly useless for most of the applications for which you would want more RAM in the first place, since it doesn't increase the allowable size of the kernel or process virtual address space at all, so the only thing it lets you do is use RAM instead of swap, and not run lots of applications at the same time, without a lot of VM changes.

    Intel keeps trying to sell us Itanium on performance, when, in fact, we don't care. What we care about is the ability to operate on larger data sets.

    Intel: just because your delivery of access to larger amounts of physical RAM on 32 bit processors, via the PAE, was not welcomed (mostly because it was implemented in a way that was totally useless to software engineers and OS designers), doesn't mean that access to more RAM *by a single kernel or process* will not be the major selling point for Itanium: it will.

    Get your crap together, and quit concentrating on clock rates.

    -- Terry
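A short C sketch of the PAE-versus-64-bit distinction drawn above, under the common assumption that size_t matches the pointer width: PAE only widens physical addresses, so a single 32-bit process still cannot even express a 5 GB allocation, whereas a 64-bit address space can.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void) {
    unsigned long long want = 5ULL << 30;   /* 5 GB */

    if (want > (unsigned long long)SIZE_MAX) {
        /* ILP32 case: size_t tops out below 4 GB, so the request cannot
           even be expressed in one mapping.  PAE does not help, because
           it widens physical addresses, not the per-process virtual
           address space. */
        printf("cannot request %llu bytes in a 32-bit process\n", want);
        return 1;
    }

    /* LP64 case: the request is at least representable; whether it
       succeeds depends on available memory and overcommit policy. */
    void *p = malloc((size_t)want);
    printf("5 GB allocation %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}
```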
  • GPUL the stupidest name for a processor? What were they thinking?
  • According to the article, Intel is not primarily concerned with selling a few machines to number crunchers, although this is where the Itanium is clearly most useful. Rather, they wish to sell hundreds of machines to large data centers.

    Allegedly, large data centers such as Google are sensitive to power consumption. Of course we are not just talking about the power consumption of the processor. We are also talking about the power needed to keep the boxes cool, as well as the power needed to run the air conditioner that cools the data room, at about 20% efficiency. What this means is that several watts of energy must be used to cool each watt used by the computer equipment.

    I agree that Intel may have misjudged the market for this chip. If AMD can produce a chip that is almost as good, but much more efficient, it may well be more economical to buy three AMD based machines instead of two Intel based machines. This becomes even more likely as a box becomes a single disposable commodity component in a very large networked array. Much like the auto industry, it may be practical to build inefficient cars when energy prices are low, but it is nevertheless a risky venture.

  • by RAMMS+EIN ( 578166 ) on Sunday September 29, 2002 @08:45PM (#4356061) Homepage Journal
    There's one thing I never understood about Intel's and AMD's designs for 64-bit CPUs. Intel seems to aim for simplicity, that is, 64-bit code should be clean, as compared to current x86 code. AMD, on the other hand, seems to be mainly concerned about backward compatibility (which is a huge win). But why not have it both ways? The CPU could just start out in 16-bit stone age legacy mode, and then be switched to 64-bit mode, similar to how today's x86en are switched to 32-bit mode. The 64-bit code could then be clean like Intel proposes, and we'd all be happy. Of course, it would effectively mean having two CPUs on one chip, one for legacy code and one for modern code, but isn't that what's happening anyway? Last thing I want to say: clean 64-bit code makes me think MIPS.
  • by dpbsmith ( 263124 ) on Sunday September 29, 2002 @08:49PM (#4356080) Homepage
    Saddest sentence in the whole article:

    "There are other benefits for Hewlett-Packard. The Itanium allows the company to eliminate both of its current 64-bit chips -- the H.P. PA-RISC and Compaq Alpha. That alone should save the company $200 million to $400 million annually in development and manufacturing costs, according to Steven M. Milunovich, an analyst at Merrill Lynch."

    Yeah, HP and Compaq have been fine stewards of their engineering legacy...
    • This is another case of intellectual property ownership having the perverse effect of stifling innovation and the general welfare.

      HP kills the technology of these two worthy chips by choosing a third option. In doing so, they effectively reduce the aggregate technical knowledge available for use by our society for their own gain.

      As a society, we protect intellectual property so that the creators can use it, not so that they can lock it away. This is a case of current law perversely hampering the general welfare.

      Intellectual property should only be protected when it is used by its owners.

      Use it or lose it. If HP doesn't want to use Alpha or PA-RISC technology, then others should be allowed to do so.
  • by be-fan ( 61476 ) on Sunday September 29, 2002 @08:50PM (#4356086)
    I'd give Intel engineers just a bit more credit than the average /. poster. Intel has been right about the trends for a while now. Take the Pentium 4 for example. Everyone thought it would flop cuz it had crappy IPC. It sucked in the first several iterations (less than 2 GHz). But it's quite the speed demon now, ain't it?
    As for Itanium, there are quite a few ways it could succeed. It has the potential for serious performance. The super-wide architecture is perfect for code like scientific processing, image processing, and 3D graphics that is nice, regular, and easy to optimize and parallelize. And what kind of processing do you think is going to be popular in the future?
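As a rough C illustration of the "nice, regular" code the comment above has in mind (a sketch under generic assumptions, not a benchmark): the pixel loop exposes many independent operations that a wide, statically scheduled core can exploit, while the linked-list walk is a serial pointer chase where extra issue width helps little.

```c
/* Regular, independent iterations: easy for a wide EPIC-style core and
   its compiler to software-pipeline and issue several at a time. */
void brighten(unsigned char *pix, int n, int delta) {
    for (int i = 0; i < n; i++)
        pix[i] = (unsigned char)(pix[i] + delta);
}

/* Pointer chasing: each load depends on the previous one, so issue
   width and static scheduling help far less here. */
struct node { int value; struct node *next; };

int sum_list(const struct node *p) {
    int s = 0;
    while (p) {
        s += p->value;
        p = p->next;
    }
    return s;
}
```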
  • hehehehe (Score:2, Interesting)

    by athlon02 ( 201713 )
    What's bad for Intel is bad for the computer industry? Intel may have their fingers in a lot of things, but if Intel (and for that matter MS) disappeared tomorrow, the computer industry would survive. AMD would love that, I'm sure... they would not only be the de facto standard on x86-64, but on x86, in general. And hopefully AMD would hurry up and release a mobile Duron or XP with really low power consumption, enough to be put in a PDA along with plenty of AMD's flash memory too (come on, ya know many of you would love an x86 PDA that you could run windows, freebsd, linux, etc. on with minimal changes)...

    And of course, Apple would love that too, hehe
  • IA64 will have the edge for about 6 months. After that Power4 (next rev) will leap over IA64 with a minimum of disruption because it is already 64 bit.

    Then Intel will go back to their day job of manufacturing chips in incremental 25% improvements. Intel will reach the limits of power consumption before they reach the manufacturing tolerance limit.
  • Google for "Itanic" and you'll begin to see why. The continuing campaign is just throwing good money after bad. Now is AMD's time to shine. I'm considering doing my next project closed source just so that I can release it exclusively as Opteron-only, because I love being right.
  • I always wondered why Sun was putting a great deal of emphasis on power consumption in their new line of processors. In retrospect, I see why. Smaller blade servers let you pack a lot of servers into a small space. And power consumption, if it is very high, eats into the TCO. Oddly enough, it looks like the SPARCs may be playing the game better than you'd think.
    • Sun makes a deal out of low power consumption because they happen to be unable to compete on any other merit w.r.t. processors.

      Also, low power is nice for running off of 48VDC power, as is required for telco gear. As it turns out, one of the last major industries still paying too much money for underperforming Sun hardware is the telco/carrier industry (they still move at the same glacial pace they always have and haven't caught on to the fact that Sun is a sinking ship).

      So, in review: low power consumption is good for Sun because 1) some of their chips exhibit it and 2) the only market they're still relevant in needs it.

  • by sockit2me9000 ( 589601 ) on Sunday September 29, 2002 @10:10PM (#4356429)
    So let me get this straight: the new Intel chips require a complete hardware shift in order to be useful, just like Apple's. Both have 64-bit chips in the works. For the first time, Apple, Sun, IBM et al. will be on a level playing field with Intel. If Intel succeeds with Itanium, then none of the software owned by any company will run, necessitating purchase of a new OS, programs, etc. Doesn't this really put Apple, Sun and IBM in an interesting position? For the first time companies will see a level playing field. I would hope companies see this as a golden time to dump the x86/Intel architecture and go instead towards more open solutions. After all, they have to switch hardware and software anyway. Why not think different?

"If it ain't broke, don't fix it." - Bert Lantz

Working...