Technology

What's Next in CPU Land after Itanium?

"I work for a major research organization. Of late a lot of the normal big computer companies have been visiting and preaching the gospel of Itanium. My question to them, and to the assembled masses here at Slashdot is what happens next when Itanium is real? My world view is that Itanium based systems will become commodity products very quickly after good silicon is available in reasonable volume. At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)? In other words, has Intel finally done in most of their customers by obliterating all the other CPU choices (except IBM Power4 [& friends G4, et al] and AMD Hammer) and turned the remainder of the marketplace into raw commodity goods? Lest you defend the other CPUs... Sparc is dead, Sun doesn't have the money (more than US$1B we'll guess) to do another round. PA-RISC is done, as HP has given away the architecture group. MIPS lacks funding (and perhaps even the idea people at this point). Alpha is gone too (also because of the heavy investment problem no doubt). Most other CPUs don't have an installed base that makes any difference, especially in the high end computing world. So what's next? I don't like the single track future that Intel has just because it is a single track!"
This discussion has been archived. No new comments can be posted.
  • compilers (Score:3, Insightful)

    by avandesande ( 143899 ) on Monday February 18, 2002 @05:32PM (#3028512) Journal
    Itaniums will become commodities when people figure out how to write compilers for them. That will be in about 10 years.
    • Not likely, it would take a couple of weeks max for the first compilers to appear. Wish I could afford one, I'd love to hack away on a compiler for a new machine.
      • Re:compilers (Score:3, Insightful)

        by shitfit77 ( 80494 )
        You seem to miss the point on this a little bit. Although there will be compilers available, there is an extreme difference between a compiler and a good compiler. A compiler works; a good compiler is able to utilize an architecture to its fullest (or at least close).
      • first compilers != useful compilers
      • Re:compilers (Score:3, Insightful)

        by jmv ( 93421 )
        Not likely, it would take a couple of weeks max for the first compilers to appear.

        Sure, but the problem is how long before there are good compilers? That's one of the main problems with architectures like Itanium.
      • Re:compilers (Score:5, Insightful)

        by Zathrus ( 232140 ) on Monday February 18, 2002 @06:38PM (#3028878) Homepage
        Not likely, it would take a couple of weeks max for the first compilers to appear

        You obviously know nothing about Itanium, EPIC, VLIW, or pretty much anything else on this topic.

        The issue isn't whether or not there's a compiler available. The issue is how GOOD the compiler is. In the case of a Very Long Instruction Word (VLIW) CPU like the Itanium, the compiler is the bottleneck for system performance. Why? Because the premise of these CPUs is that while they have a low clockspeed (750-800 MHz for Itanium), they execute many instructions per cycle - 10 or more. So while "slower", they get more done per cycle, resulting in faster overall execution. It's up to the compiler to properly structure the executable machine code to take maximum advantage of this layout and keep all execution units of the CPU busy at all times, as well as reduce disparate memory accesses and so forth.
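
        A rough illustration (a hypothetical C sketch, not from the original comment) of what "keeping the execution units busy" means in practice:

            /* Hypothetical example. Independent iterations: a VLIW compiler
               can unroll this and pack several multiplies and stores into
               each wide instruction word. */
            void scale(float *dst, const float *src, int n) {
                for (int i = 0; i < n; i++)
                    dst[i] = src[i] * 2.0f;
            }

            /* A dependence chain: each add needs the previous sum, so most
               slots in each wide instruction word sit idle. */
            float sum(const float *src, int n) {
                float s = 0.0f;
                for (int i = 0; i < n; i++)
                    s += src[i];
                return s;
            }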

        The initial compilers that are released with these machines do it, but not as well as they could. In fact, compiler writers are still trying to grasp the issues with pipelining on modern CPUs, which have far fewer execution units, and that's without utilizing special instructions that explicitly do non-conflicting operations at once. We're still years away from writing fully optimized compilers for contemporary CPUs. And while there's been a great deal of work done on VLIW already (prior to Itanium), there's even more yet to be done. A decade for a "good" compiler is probably optimistic.

        You may be wondering, what's the point anyway? If VLIW is so damn hard, why bother? Just ramp up that clock speed and get more CPU power! Well, that's nice, but it doesn't work in reality. We're starting to bump up against physical limitations in CPU speeds. Electrons are not magical particles that travel instantaneously. They are limited to slightly under the speed of light, which means roughly 1 cm per nanosecond. This doesn't seem like a big deal until you realize that a 2.0 GHz CPU means each clock cycle is 0.5 nanoseconds. So if you have to fetch an instruction or data from main memory, and that memory is a mere 5 cm away, under optimal conditions you've just sat around for 10 clock cycles waiting on that memory to be fetched. And this is ignoring propagation delays, latch delays, and other things. So go ahead, pump that CPU up to 10 GHz and waste even more clock cycles waiting on data. That, or redesign the entire thing, expect the compiler to do the work and properly feed you data and instructions such that you can do 10x as much in the same amount of time, and all with no wasted CPU instructions.

        That's the theory at least.

        Reality is that not only does the compiler have to properly organize the machine code, it also has to have some idea of what the code is doing in order to do so. Compile the code w/ profiling, run the code against a "realistic" data set, then recompile it, feeding it the profile data. Many compilers can do this now, but it's rarely done: it's hard to guess a "realistic" data set, it's hard to acquire one, how you expect the code to be used and how it actually is used are rarely the same, and there's more development time involved in all of this. So most companies don't bother. And despite what I said above, at 2.0 GHz the CPU still hasn't reached the point where it's sitting on its ass more than it's doing work. Until we start approaching that point there's little incentive to put in the R&D time necessary to switch to a new CPU architecture.
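
        Concretely (an aside on tooling, going by gcc's own documentation of this era): gcc does this round trip with -fprofile-arcs on the first build, which makes the program dump execution-count files as it runs, and -fbranch-probabilities on the rebuild, which reads those counts back to guide branch layout and scheduling.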

        And, of course, on top of all of the above is the issue that Joe Sixpack will invariably see 2 GHz as faster than 750 MHz no matter what. Have fun with that one.
        • Hi,

          The speed of light is 3×10^8 m/s. In a nanosecond (10^-9 s), light travels 30 cm, not 1 cm like you wrote.

  • by wrinkledshirt ( 228541 ) on Monday February 18, 2002 @05:35PM (#3028530) Homepage
    "Anadium"

    That's probably only funny to chem majors.

    Okay, maybe not even chem majors.
  • by Indras ( 515472 ) on Monday February 18, 2002 @05:36PM (#3028534)
    Think for a minute how long we've been using 32-bit processors. If (and when) 64-bit becomes mainstream, I imagine it will be around for a LONG time, as it becomes standardized and slowly takes over a majority of the market. Also, we'll have the other contenders butting in with equivalent and cheaper options, like Cyrix (tried) and AMD (did).

    Just because Intel will pave the way for mainstream 64-bit processors using the Itanium doesn't mean it will monopolize the market until it comes out with a 128-bit processor. No matter what, it will probably be years from now before we have to worry.
    • The only problem with AMD's 64 bit line is that it isn't going to be compatible with the Itanium. That is both good and bad. Good in that it is an alternative, bad in that it is going to cause a lot of confusion.

      I think a lot of people are too overconfident that Itanium is going to be successful, let alone quickly. It is going to require a lot of changes to software in order to take advantage of it because it isn't just a 64 bit x86, it is a whole new architecture, one more closely related to HP PA-RISC than x86. It also may not do a very good job of running existing 32 bit code, which could slow down its acceptance, particularly in desktop systems. The last time Intel made a big push (with the i432) to create a whole new non-x86 processor family, it was less than successful. Although to be fair, the i432 was a radically different proposition and the Itanium with its more proven PA-RISC roots looks a lot more sound.

      AMD's Hammer architecture, on the other hand, is more conservative, being an x86 family processor extended to 64 bit. It should require fewer modifications to existing software to take advantage of it, although an argument could be made that it won't have as much advantage to take, having more legacy issues with the aging x86 architecture. It also may perform a lot better on existing 32 bit code than Itanium. And if AMD's track record holds true, it will probably be significantly less expensive than the Itanium.

      A lot of whether it is Intel or AMD that paves the way for 64 bit mainstream CPUs will probably come down to which of them is first to offer an attractively priced product that runs existing 32 bit software well while being marketable as a 64 bit chip. Unfortunately for AMD, the marketable part is, as always, going to be tough. While AMD has been hugely successful in "white box" sales where customers can choose their CPU, they've had a much more difficult time penetrating the big name PC markets, particularly in higher end systems. This despite the fact that in many cases an Athlon or Duron would offer better performance than a PIII or P4 at a better price.

  • Next? (Score:2, Funny)

    by jgerman ( 106518 )
    In a word, quantum. Or maybe that's two words, actually it might only be a word when you're looking directly at it.
  • Itanium is Titanium without the T, so Anadium is Vanadium without the V.

    I can't wait until they get to Hassium. They could name their chip Assium!

  • by Talonius ( 97106 ) on Monday February 18, 2002 @05:39PM (#3028554)
    AMD's newest chip is supposedly fairly remarkable (don't have specifics, see Tom's Hardware's search engine). What about the Crusoe? VIA's purchase of (I believe) the M3? I wouldn't look at companies that are currently in the business only - I would tend to look at companies that might move into the business, either via investment, startup, or outright purchase.

    I'm not too worried about Itaniums, and I don't see them becoming prevalent for quite a while. While the Pentium II, III, and IV moved through the marketplace fairly rapidly, they all offered compatibility at some level. If I recall correctly, 32 bit programs that are not rewritten for 64 bit run SLOWER on the Itanium than they do on the equivalent Pentium line.

    In essence consider this: it's like a brand new operating system attempting to break into the monopoly that Microsoft has. (Parallels drawn out of necessity.) While it may be better, faster, superior in every way it doesn't have 20+ years of legacy code behind it - and that will end up being what drags it down.

    Only time will tell. Remember the Pentium Pros..

    Talonius
    • If I recall correctly 32 bit programs that are not rewritten for 64 bit run SLOWER on the Itanium than they do the equivalent Pentium line.
      When Apple transitioned from the M68K line to the PPC, they were in the same situation - 68K code would run faster on a 40Mhz 68040 than on a 40Mhz PPC 601. The reason consumers didn't mind was that the PPC 601 started at 60Mhz (approximately the break-even point for the emulation layer), and (to the end user) didn't cost significantly more.

      Until Intel gets the Itanium cost down to the point where they run 32-bit code at equivalent speed to a Pentium at the same cost, Itanium probably isn't ready for the consumer market.

      --
      Damn the Emperor!
        When Apple transitioned from the M68K line to the PPC, they were in the same situation - 68K code would run faster on a 40Mhz 68040 than on a 40Mhz PPC 601. The reason consumers didn't mind was that the PPC 601 started at 60Mhz (approximately the break-even point for the emulation layer), and (to the end user) didn't cost significantly more.

        While that's a valid point, it also bears pointing out that the Pentium IV is at 2200 MHz whereas Itanium is at 800 MHz -- about 1/3rd the clock speed. That ratio is going to remain for a while too -- McKinley will come out at 1000 MHz, while the Pentium IV continues its mad march toward 3000 MHz and beyond. You acknowledge this fact implicitly with your next statement (re: Itanium not viable until approx same speed at approx same cost), but I felt it'd be interesting to point out just how large a gap there is.

        These ratios spell doom for hardware-level emulation of the Pentium on the Itanium. Unless Intel has some serious magic, having a 100% cycle-for-cycle perfect emulation of the Pentium III or even Pentium IV on the Itanium die will never run better than 1/3rd the speed of the real thing, since the fundamental clock rate is so far off. The only real way to get close is to do a software-level translation and get a boost from scheduling for the native hardware.

        It's interesting to note, BTW, that HP's Dynamo [hp.com] project does a software translation of PA-8000 code targeting (guess what) a PA-8000 CPU, and rather than slowing things down, it actually gets 20% speedups! Ars Technica [arstechnica.com] also did a piece on this. Perhaps that's why HP doesn't have hardware-level translation from PA-RISC to Itanium on the die like Intel does -- they (HP) are in a better position to just translate the PA-RISC code to IA-64 when needed. (Also, in the UNIX world, it's just simply less necessary.)

        --Joe
        • While 800/2200MHz is a large difference, you fail to mention something that everyone here should know by now, that clock speed does not equal performance.

          Clock speed does not equal performance. This is a fact of life, especially with 20 stage pipelines and the like. AMD and Apple have been trying to teach this to the world, and on the surface most geeks understand, but they don't believe it in their hearts.

          Now, I'm not saying that the PIV won't be faster than Itanium for a good while here, and I honestly have no idea if it will be or not. We just need to stop using MHz for our comparisons unless we're comparing the same chip.
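
          A quick back-of-the-envelope (my numbers, purely illustrative, not from the parent): performance ≈ clock rate × sustained instructions per cycle. An 800 MHz chip averaging 4 instructions per cycle retires 3.2 billion instructions/sec; a 2200 MHz chip averaging 1 per cycle retires 2.2 billion. The "slower" chip wins -- provided the compiler can keep it fed.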
      • PPC 601 started at 60Mhz (approximately the break-even point to the emulation layer)


        Actually the break-even point wasn't reached until about 100 Mhz or so, I'm not sure. But I do remember when the first PPCs came out they were definitely slower than the old 040's. Still don't know how Apple pulled that one off (selling new computers that were essentially slower than previous models).
      • Well, eventually, that will happen, without a doubt. Moore's law pretty much assures it, in fact. The big question mark is whether or not the Itanium can match the price/performance of the Pentium line before someone else does. Seeing as Itanium is currently running at clock speeds around 800 MHz when it would need about 1600 to be equivalent to a P4, even Intel's not betting on this (hence the Yamhill), and they're seemingly relegating the Itanium to high-end servers (to take over where the Suns and Alphas left off), which seems to be where they're best suited. At least for now, it looks like the x86-64 (Hammer/Yamhill) is the platform of the future, and Itanium will be just another expensive non-consumer platform.

        The luxury Apple had in this situation was control of the operating system, which Intel doesn't have. Ironically, Apple will also be moving to a 64-bit architecture within the year (conservative rumors say Q3/Q4 2002.) The transition is supposed to go very smoothly, as developers are being told to prepare their programs with the 64-bit OS X libs, and 64-bit OS X is being developed concurrently with the 32 bit version. FAT binaries helped immensely in the 68k-PPC transition, and probably will again for the G4-G5 transition.

        Though honestly, if Microsoft gets what they want with the entire .NET plan (not the framework, the entire plan) then architecture will become largely irrelevant. In any case, I doubt that many people will need frequent execution of their old 32-bit apps much more than 2 years after any sort of major switch happens. It happened with Mac OS, and it'll happen with Windows. Linux is irrelevant here, as most Linux software can be easily patched and recompiled.
    • by Locutus ( 9039 ) on Monday February 18, 2002 @06:26PM (#3028813)
      > Only time will tell. Remember the Pentium Pros

      the ONLY reason the Pentium Pro didn't catch on was because Microsoft released a 16bit OS and told everyone it was a 32bit one (Windows 95).

      SCO Unix, OS/2, and to some degree Windows NT ran quite a bit faster on the 32bit optimized PPro when compared with the same clocked Pentium.

      Because of Microsoft's great PR, even Intel was caught off guard and scrambled out a hack called MMX to give the appearance of progress in the CPU market. While the MMX based Pentiums were getting press/air time, Intel was hacking at the Pentium Pro core to get it to run THE 16bit OS (Windows) faster. That was the Pentium II.

      IBM did some speed tests of OS/2 on the PPro and in some cases they saw a 100% speed increase on the 32bit optimized PPro.

      This reminds me of the six-degrees-from-Kevin-Bacon reference. It seems that many failures in the computer industry are only about 3 degrees from Microsoft. And the failure is never due to competition but, more likely, marketing and market control. IMHO.

      The PPro was a darn good CPU. It finally took 32bit-ness seriously, though about 10 years after the 32bit i386 was released. As much as I like the simplicity of RISC, Intel will never get the Titanicium off the ground, and AMD/Hammer will force Intel to follow their lead with an extension of the i86 instruction set into 64bit land.
      IMHO.

      LoB
      • "the ONLY reason the Pentium Pro didn't catch on was because Microsoft released a 16bit OS and told everyone it"

        I wouldn't say ONLY. There was also the slight problem of the double chip package (separate cache and cpu dies mounted on one substrate) being horrendously expensive to produce. Looks like Itanium will have the same problem [slashdot.org].
  • Recurring problem (Score:4, Interesting)

    by colmore ( 56499 ) on Monday February 18, 2002 @05:39PM (#3028558) Journal
    This seems to be a recurring problem in a number of technology based industries. Once you get to a certain level of high-tech, only the (very) big boys can even compete.

    So here's the question: how do you keep competition alive when the initial investment runs into the billions of dollars? For any company smaller than Intel, a single bad product cycle spells complete doom. That's no kind of market to be in.

    Also, wasn't this inevitable? There are a few Beowulf jokes being posted, but that's really what's going on. Increasingly, high performance tasks (Google, render farms, etc.) are using massive arrays of low-power CPUs. It costs a lot of money to develop big iron chips, and if people aren't buying them then there's no point in investing that much money.

    What I'm worried about are the isolated markets that still require massively powerful, low processor number architectures. Not everything splits into nice Distributed.net packages.
    • Also, wasn't this inevitable? There are a few Beowulf jokes being posted, but that's really what's going on. Increasingly, high performance tasks (Google, render farms, etc.) are using massive arrays of low-power CPUs. It costs a lot of money to develop big iron chips, and if people aren't buying them then there's no point in investing that much money.

      The problem is that a massively parallel computer is only useful for certain classes of problem. There are many types of problem where communications load goes up very rapidly with the number of processors, which makes a cluster (with its relatively poor communications bandwidth) impractical. This is what Big Iron is designed to be useful for.
  • Quantum computing, 2007.

    Bet on it. ;)
  • SPARC is dead? (Score:4, Interesting)

    by bconway ( 63464 ) on Monday February 18, 2002 @05:41PM (#3028570) Homepage
    That's news to me. I could swear a friend of mine just jumped in on the UltraSPARC 4 project.
    • Re:SPARC is dead? (Score:4, Informative)

      by Anonymous Coward on Monday February 18, 2002 @05:46PM (#3028612)
      Actually, I was just transferred to the UltraSPARC 4 project at Sun [sun.com] in Burlington, MA. I don't know of the official release date, though I've heard rumors of early 2003. I'm amazed at the quality of FUD in this "article" and that it actually made it to the front page of Slashdot.
    • Go take a look at Sun's sales numbers for 2001 and Q1 2002. Given that they have x86 machines ready to hit the market in June, the chances of Sun being able to convince already reluctant buyers that Sparc systems are still worth the money are rather low, especially now that big iron is being replaced with clusters of cheap systems. Sparc may not be dead, but Sparc's future as a commodity item is dim at best.
  • by ChrisRijk ( 1818 ) on Monday February 18, 2002 @05:41PM (#3028573)
    McKinley is 464mm^2! That's a huge CPU. It will be very expensive to produce, even though Intel will probably be subsidising it with their profits from x86. Current Itanium systems start at about $8000 - I doubt McKinley will be much cheaper. It'll take a long time for volume to build up, especially as it has so little software ported to it. Even if you have Intel's money, you still can't just create a new platform overnight. Intel optimistically expects it to be about 2005 before Itanium has any real market presence.
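
    To see why die size matters so much (a standard back-of-the-envelope yield model; the defect density is my assumption, not from the parent): with randomly distributed defects, yield falls roughly as exp(-defect density × die area). At 0.5 defects/cm^2, a 120mm^2 die yields about exp(-0.6) ≈ 55% good parts, while a 464mm^2 die yields about exp(-2.3) ≈ 10%, and you also get far fewer candidate dies per wafer to begin with. Cost per good die explodes.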

    Also, on what kind of clueless basis do you assume that Sun has little left. Here's what's coming in just the next 2-3 years:
    http://www.aceshardware.com/read_news.jsp?id=55000446 [aceshardware.com]

    Sun's CPU division is 1300+ strong and they're planning to hire another 100-200 in the next 2 years.

    A lot of HP's PA-RISC customers (and Compaq's Alpha customers) are quite unhappy with being forced to change architectures and are jumping ship to Sun and IBM - HP had a 7% drop in Unix sales Q3 to Q4 last year, while Sun had a 10% rise. By 2003 the significant majority of the $100k+ system market will be owned by Sun and IBM. There's very little reason for any of those customers to switch to Itanium, so it'll mostly just eat Xeon sales.
    • If Itanium fails you can be sure Intel will release the Yamhill [slashdot.org], a chip much like AMD's Hammer.

      "It's pretty well understood that Itanium will not provide leadership x86 performance. That's Hammer's great hope, in fact. AMD's strategy depends on Intel mistakenly abdicating its x86 throne leaving Hammer and its descendants the heirs apparent to a software kingdom.

      Would Intel so cavalierly jeopardize its legacy? Not on your life. To no one's great surprise, Intel is rumored to be developing something that will give future Pentium processors--not IA-64 processors--a performance kick. In a perverse reversal of roles, Intel may actually be following AMD's lead in 64-bit x86 extensions. A "Hammer killer" technology, code-named Yamhill, may appear in chips late next year, about the time Hammer makes its debut. It's suggested that Intel's forthcoming Prescott processor will be based on Pentium 4, but with Yamhill 64-bit extensions that coincidentally mimic Hammer's. (Prescott is also rumored to be built on a 0.09 micron process and implement HyperThreading.)

      Naturally, the very existence of Yamhill, if it exists at all, is a diplomatically touchy subject at Intel HQ. The company doesn't want to undermine its outward confidence in Itanium and IA-64, but neither can it afford the possibility of ceding x86 dominance to a competitor. Besides, whether they appear in future Pentium derivatives or not, Intel's 64-bit extensions could appear in future IA-64 processors instead. New IA-64 features plus competitive x86 performance--now that's a compelling product."

      From Extremetech. [extremetech.com]

      Another article on Yamhill at The Register [theregister.co.uk] and Extremetech. [extremetech.com]
    • Will SUN make it? (Score:4, Interesting)

      by rcs1000 ( 462363 ) <<moc.liamg> <ta> <0001scr>> on Monday February 18, 2002 @08:23PM (#3029389)
      The problem with discussions of Intel vs every other chip maker is that they ignore the extraordinary differences in scale between the players.

      Let's compare: Sun is a company that produces operating systems (Solaris), computers, CPUs, motherboards, and a host of peripherals. (Plus it has to invent Java, J2EE, etc.) Its R&D budget was $2.0bn in 2001.

      Intel is 95% CPUs. It spent $3.8bn on R&D in 2001.

      Intel has the world's most productive fabs. Its capex budget is so huge that it can order the lithography companies and the like to build to order inside its factories. Result: its yields are 25% better at the start, and still 10-12% better after 6-9 months.

      It is incredibly difficult for anyone to keep up with the Intel machine. I wish it weren't so; but it is.

      *r
  • AMD's 64-bit K8 (Score:2, Informative)

    AMD has a 64-bit K8 chip in the works right now.

    I searched and managed to find an old comparison of the K8 vs. Itanium and a few other chips.

    The article (page 5 of 5 of a review) is here:
    http://www.sysopt.com/articles/k8/index5.html [sysopt.com]

    EricKrout.com :: A Weblog On Crack [erickrout.com]
  • by AtariDatacenter ( 31657 ) on Monday February 18, 2002 @05:42PM (#3028580)
    Having recently sat through an NDA presentation from Sun regarding the SPARC processor (and even with the knowledge I had walking into the meeting), SPARC is not dead or dying. In fact, I'd say that Sun squarely recognizes it as a strength. Their competition (HP for example), however, is wishing they hadn't knifed their baby.

    As far as money to go another round, remember, Sun doesn't fab CPUs. What Sun does is design them, and they turn it over to Texas Instruments for production. And TI has their own reasons to keep up-to-date with the latest production technologies, so Sun doesn't eat that cost.

    BTW: I really wish that I could talk about the SPARC presentation. I liked it a whole lot better than the NDA I attended with HP talking about their Itanic future.
    • BTW: I really wish that I could talk about the SPARC presentation. I liked it a whole lot better than the NDA I attended with HP talking about their Itanic future.

      Itanic. That's really funny.
      • Itanic rituals and sacrifices
      • Itan worshippers
      then I ran out of ideas and had to search for 'satanic' on google
      • Itanic Sysadmins
      • The First Itanic Church
      • The Itanic Verses
      • Itanic Hampster Dance
      ...

      (this post is obviously the set-up. now I just need someone to supply the punchline)
    • by Anonymous Coward
      I heard SPARC chips are so fucking scared of the multi-GHz x86 clones that they are running their instructions out of order! Some of the Sparc instructions think they can even hide in a delay slot (under a jump) so the x86 clones won't find them and kick their sorry out-of-date asses!
  • Itanium (Score:4, Insightful)

    by crumbz ( 41803 ) <[moc.liamg>maps ... uj>maps_evomer> on Monday February 18, 2002 @05:44PM (#3028596) Homepage
    Given the tremendous capital requirements of building a state of the art fab, along with the incredible amount of engineering man-hours required to leap to the next level, I think we are seeing a situation similar to the one for airliners: Airbus or Boeing. They are the only two that matter because the cost of entry into the airliner market is so prohibitive. This does not necessarily apply to Microsoft and its OS monopoly, as the Linux community has illustrated. Mindshare and marketshare are not always linked.

    I have hopes for Intel producing the world's best microprocessors, as that would benefit us all. Simply advocating a move to Itanium for marketing reasons or to meet revenue targets does a disservice to the computer industry.

    Then again, they are in business to make $$$....

  • by RobL3 ( 126711 ) on Monday February 18, 2002 @05:46PM (#3028606)
    The Unobtainium

    Its release will follow the distribution pattern established by Transmeta.
  • by Brian Stretch ( 5304 ) on Monday February 18, 2002 @05:47PM (#3028614)
    The huge die size of the Itanium and its upcoming successor make the chip far more expensive than the Pentium series, so I would not expect Itanium machines for $2K. So far, the CPUs alone are several thousand dollars. I also haven't seen where its performance is that impressive. x86 code performance, since it's emulated, is poor. Recompile or else. Intel has sold, what, 500 Itanium CPUs?

    The upcoming AMD Hammer series, OTOH, is supposed to be about 30% faster clock-to-clock than the current Athlon XP series (which is considerably faster clock-to-clock than the Intel P4) and start at 2GHz. Sun's recent announcement of Linux x86 platform support, with details to come midyear, suggests that they'll be moving to the Hammer (to ship Q4). Sun would certainly love to take a swipe at Intel, and Sun has made positive comments about AMD's x86-64 Hammer architecture.

    Speculation: Intel gets Hammered in the second half of this year.
  • Just... (Score:2, Funny)

    by JonWan ( 456212 )
    name it P-51 and use the 'nickname' Mustang.
  • No, no, and no. (Score:4, Insightful)

    by hotsauce ( 514237 ) on Monday February 18, 2002 @05:50PM (#3028629)

    No, Itanium will not become commodity as soon as you foresee because compilers and software do not exist to make good use of it (some argue nothing can make good use of it [derogatory]).

    No, Intel has not killed the competition. AMD is alive and well. The PowerPC family is on the verge of The Next Big Thing (G5). And the reports of Sparc's demise have been greatly exaggerated.

    No, other vendors are not irrelevant. Hitachi makes killer chips for big iron, and looks set to increase that trend. If anything, the CPU market is looking less and less like a monopoly than before.

  • by S-prime ( 550519 ) on Monday February 18, 2002 @05:50PM (#3028631)
    Now that the G4 has finally gotten past the 1GHz mark, and Apple has a brand spanking new Unix based OS running on it (and if you don't like it you can run others), this opens a whole new choice for the researcher looking for a new platform.
  • by eyefish ( 324893 ) on Monday February 18, 2002 @05:51PM (#3028637)
    It is my opinion that once Microsoft makes its Common Language Runtime a forced de facto standard, and once they manage to implement it on other CPU architectures, they'll essentially have a hardware-independent Windows platform. Once that happens Microsoft will have sole leverage on the PC business. That means that Intel will NOT be needed at all for running future versions of Windows-compatible programs. Who knows, maybe this could spell a revival of new and innovative CPU architectures, since they all will now be able to run the CLR. Side note: We *could* do this today with Java, but sadly Sun doesn't have the leverage Microsoft's monopoly does on the PC business.
    • ... that a runtime environment where "Hello World" will require, let's say, several GB of disk, a few hundred MB of RAM, continuous online updating (also requiring continuous hardware updating), and hundreds of old and newly-arriving security holes and exploits, is going to "take over the world."

      Granted, it's going to be popular for a while. But isn't what's popular *always* sucky?
    • They already tried that. Guess what? NT was supposed to be multiplatform! And geez do you see any of the non-X86 versions out there? Nope....

      In fact, NT was developed on MIPS. And M$ is in no way interested in having the CLR running on non windows based platforms. CLR is not designed to make code machine-independent, but rather location-independent. M$ still wants you to be using Windows, it just wants to have a tighter grip on you no matter where you go.

      Why would anyone even think about adopting .NET is beyond me.
  • First, you are assuming that Itanium will succeed and drive all other choices from the market. At the moment, this is far from clear, and even Intel is said to be hedging their bets with a P4 follow-on.

    Second, what will drive the price of the Itanium down? Historically, Intel have announced that their latest superchip is "targeted at servers, not desktops" about a week before releasing a flood of them into the desktop marketplace (usually the ones that didn't pass spec at the higher speed level), thus driving down the price of the server chips to where no one else could compete. What will be the driver this time? Businesses aren't buying desktops, and when they do start buying again it will be pure commodity: there is zero appeal for Itanium on a business desktop. And treble for home desktops.

    Which leaves high-end servers. I don't think that any datacentre manager worth his pay is going to pull out $100,000 HP N-Class boxes in favor of $2,000 Intel clones. There's a bit more that goes into a server than the CPU.

    sPh

  • Dead? I doubt it. (Score:5, Interesting)

    by BlackStar ( 106064 ) on Monday February 18, 2002 @05:52PM (#3028648) Homepage
    SPARC dead? I'm not sure where you come across that idea. Having listened to a few talks down at JavaOne and chatted briefly with Marc Tremblay (head chip dude down there, father of MAJC and one designer of SPARC) they've already got design down on the next two levels of SPARC as the IV is experimental, and the V is the next production level as I understand it. MAJC seems to be the experimental platform they are using for smaller implementations and alternative ideas to be tried, based on some of Tremblay's theories.

    I may be off base on some of the details, but Sun has a unified approach from top to bottom, from tools to silicon for the systems they plan to deliver. I doubt it will just throw in the towel. Ultimately, Sun ships iron, and they lead the market in their segment.

    I don't see the basis for your assertion, and where you pulled 1B out of for cost I also don't know.

    Alpha is AMD now, as that's where a good chunk of the people went. MIPS is still kicking, with the 14000 so far, but I won't speak to the future of that chip line. There's a lot of chip heads on this site with much better info than I on many of the lines.

    One decent, although dated summary is here [f9.co.uk]

    Please tell me there's more information you're basing this on than consumer workstation marketshare....

  • Before everybody starts saying (too late, I'm sure) there is no 64bit software to support this chip, I'd like to point them here [microsoft.com].
  • by Kerne ( 42289 ) on Monday February 18, 2002 @05:53PM (#3028656) Homepage
    A fast CPU is nice, but how about upgrading the rest of the standard PC architecture and peripherals to the same level?

    Weren't we all supposed to be using high-speed serial connections by now instead of a cocktail of SCSI (1/2/3, wide, fast, hold the mayo), IDE (ATA-33/66/100), parallel, 8 bit serial, USB, Firewire, PS/2, PCI, ISA (which is finally disappearing), etc.? Heck, I'd be happy if the motherboard ran at even half to a third the speed of the cpu. :P

    Using a 20 year old peripheral port on last weeks multi-gig cpu is like sucking a McDonalds shake through a coffee stirrer!
    • Agreed. We're disproportionately favoring the CPU when the real gains would be seen in high speed interconnects, especially with storage devices. Most of my CPU's time is spent waiting for instructions, and even then it could stand to receive them both faster and in greater numbers.
    • Yes. In one of my CS classes we were told a statistic (which was probably made up, and I've since forgotten) about how long it takes to read the entire contents of the hard drive. If current trends keep up, it'll soon take us weeks just to read everything we can store on one hard drive. Anyone have "hard" figures?
    • A fast CPU is nice, but how about upgrading the rest of the standard PC architecture and peripherals to the same level?

      Weren't we all supposed to be using high-speed serial connections by now instead of a cocktail of SCSI (1/2/3, wide, fast, hold the mayo), IDE (ATA-33/66/100), parallel, 8 bit serial, USB, Firewire, PS/2, PCI, ISA (which is finally disappearing), etc.? Heck, I'd be happy if the motherboard ran at even half to a third the speed of the cpu. :P


      The good news is that USB is well on its way to completely replacing serial and parallel ports, and that PCI has been the One True Bus for the past couple of years now. Everything south of the southbridge is slowly fading away.

      IMO, if we'd switched to 66 MHz 64-bit PCI years ago, we'd have no further problems on this front. In practice, PCI-X may finally be pushed through by Intel, and that will serve most internal communications needs. Motherboard chipsets are modular enough that it doesn't really matter what flavour of IDE/SCSI/firewire your drive is hanging off of; the drive controller is just another PCI device to the processor. You have enough bandwidth and DMA functionality on PCI bus to handle it.
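
      Rough numbers (my arithmetic, not the parent's): plain PCI moves 32 bits × 33 MHz ≈ 133 MB/s, shared by every device on the bus; 64-bit/66 MHz PCI moves ≈ 533 MB/s; 64-bit/133 MHz PCI-X reaches ≈ 1 GB/s. For comparison, a gigabit NIC wants about 125 MB/s each way, which is exactly why plain PCI starts to pinch.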

      The only peripherals that are currently bottlenecks are RAM and the video card. RAM is handled by upgrading the memory bus every couple of years. This is easy to do, because peripherals don't care what happens on the other side of the northbridge. The video card was handled adequately by the hack that is AGP (64-bit 66 MHz PCI would have been a much better idea, but that wouldn't have given Intel its nice AGP port to license).

      The only peripheral that *might* be a problem in the future will be the network card (when gigabit cards finally come into vogue), and that will probably be what forces motherboard makers to put wider/faster PCI on to midrange boards and not just high-end boards.

      In summary, this is less of a problem than it first appears to be.

      The only serious bottleneck for performance is RAM latency, and that's not because of legacy peripherals.
  • by Animats ( 122034 ) on Monday February 18, 2002 @05:53PM (#3028659) Homepage
    My own guess for the desktop is that NVidia will put a CPU core, probably from AMD, in the next generation of their nForce part. That puts CPU, graphics, networking, sound, disk control, and the motherboard logic on a single chip. Their current nForce part already has all of that but the CPU.

    If you look at the transistor counts, NVidia's graphic chips already are more complicated than most CPU parts. This is quite do-able.

    • if you look at the transistor counts, NVidia's graphic chips already are more complicated than most CPU parts. This is quite do-able.

      There's more to [CG]PU complexity than transistor count. Look at the 512Mbit memory cells that run for only a couple dollars a chip.

      The trick is inter-related logic complexity. To my understanding the existing GPUs have no issues with backward compatibility (so much of the x86 overhead is avoided), and the core itself is pipelined and modular, so the complexity is spread out across the whole chip (independent teams can work on their own components with little concern for sister components, whereas every ounce of performance is being squeezed out of x86s, which requires complete coordination). Further, graphics acceleration is simply the application of graphical algorithms in silicon. While I'm not quite sure which algorithms there are, the possibilities are endless. Imagine a fast Fourier transform implemented as a SIMD floating point instruction. You create an array of floating point logic units and interconnect them. The floating point unit is pretty much a common off-the-shelf design, so the only real logic you apply is the interconnectivity.

      I'm not saying that GPUs are easy to design, I'm just saying that hardware filters are designed this way all the time, and I wouldn't be surprised if a large percentage of the nVidia chips were stock logic modules.
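
      As a toy model of that "array of floating point units" idea (hypothetical C, purely illustrative, not anything nVidia actually ships):

          /* Toy model of a 4-lane SIMD add: four identical FP adders plus
             trivial interconnect - replicated stock modules, as described. */
          typedef struct { float lane[4]; } vec4;

          vec4 vadd(vec4 a, vec4 b) {
              vec4 r;
              for (int i = 0; i < 4; i++)
                  r.lane[i] = a.lane[i] + b.lane[i];  /* each lane is an independent unit */
              return r;
          }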

      -Michael
  • Sure, build your own box for $2k instead of buying one for three times that much -- if you don't mind being fired.

    You don't pay $6k or $8k for a server just because there's high markup on the parts. A lot of it is due to tighter tolerances required for high-availability or high-reliability equipment. There's greater consideration for issues of heat, RF, power consumption and stability -- and then there's the built-in redundancy for many components (power supplies, fans, etc).

    It's not as simple as you think.

    Twoflower
  • You talk a lot about Sparc, MIPS, and Alpha in that question of yours. Yes, those are all relatively low volume products, and yes, they do cost a lot of money. However, the Itanium is almost like Intel's version of those products, done in a slightly different way. Even though they are made in lower volumes they are still profitable, because the people buying them will pay a lot more for a system. Sun can sell a 64-processor UltraSPARC III system for in the realm of a million dollars and more. If you don't think they are making a nasty profit off of that you are nuts. That is why they keep advancing the technology.

    People love to throw around buzzwords like 64-bit vs. 32-bit and stuff like that, but when it comes down to it, what do you need on your desktop? If you are using your PC for basic development or coding there is not much to be gained from a 64-bit core at all. You don't really need any more precision. If you are talking about scientific applications then maybe you do need the 64-bit core.

    I am not saying that desktop PCs won't eventually go to 64-bit cores. However, even if you were to get a cheap Itanium right now it would perform no better, and possibly worse, than your high end AMD and Intel x86 processors, because few of your applications would take advantage of the core.

    This question will be better asked when Intel puts a processor that utilizes IA-64 technology on their desktop timeline.
  • Umm... Given how well Sun is entrenched in the financial world, I think you saying the platform is dead is just plain FUD. Check with the IT department at any major financial company and ask them how many 4500 or better systems they have. (I know, I used to work for one) And yes, a lot of them are upgrading to the new UltraSparc III machines.

    And for those folks doing hard research (or special effects companies with lots o' money) SGI is still king. Despite what nvidia would like us to believe, SGI's not going anywhere anytime soon for big 3d rendering projects.
  • At the moment, Itanium systems are worth their money only if you have large address space requirements. Intel seems to focus on optimizing the Pentium 4 compiler, and not the Itanium compiler. I doubt that the Itanium architecture will surpass IA32/x86 on the desktop (where 4GB is enough for everyone ;-) anytime soon.

    That's why I doubt that we are going to see affordable IA64 systems soon. After all, the transition is quite rough, thanks to Itanium's abysmal IA32 emulation (performance-wise), so there isn't even much market demand.

    In the future, Intel may well decide to switch to the IA64 instruction set before it is really time for it, just to make things a bit more complicated for AMD.
  • You have a very valid question, but your statement,

    "At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)?"


    is missing something. HP, Compaq and Dell provide more than the hardware. They provide services that go along with the HW. They use the hardware to suck you into using their services. While small companies can build these systems on their own for cheaper, the larger companies are the ones that need to outsource some of the things that HP, Compaq and Dell's services provide.

    Also, it's kind of silly to think that these IA-64 systems will be able to be built for $2k each, given the cost of similarly performing Sparcs and IBMs. Intel is hoping for their backwards compatibility and clout to push ISVs into programming for their systems. Once they have those vendors in their camp, the chip and server prices will go up again.

    And finally, most people that would need a 64bit solution will probably need multiproc systems. OEMs will be able to provide the small systems, but once you go past the 4-8 way space, there really isn't a cheap way of scaling up any higher (and, btw, clustering is really only a solution for tasks that don't involve time-sensitive sharing of large amounts of data between processors). Which is where HP, Compaq, Fujitsu, NEC, and IBM will be with their high-end systems. I doubt I will ever see Dell release a system with more than 8 IA-64 processors.

    Of course only time will tell what will happen next. Oh, one last thing. The guy who posted should be informed that HP did not sell any processor guys; they sold some chipset guys to Intel. I'm surprised that someone who is in a processor research group would not know this. Check out:
    http://slashdot.org/comments.pl?sid=22319&threshold=0&commentsort=3&tid=118&mode=thread&cid=0
  • My 2c (Score:4, Interesting)

    by UTPinky ( 472296 ) on Monday February 18, 2002 @06:05PM (#3028727) Homepage
    I had a professor last semester who worked at Intel, and several things he told me reminded me of something: it's still a business. In my opinion Intel will not make any huge move until they KNOW that they will profit off of it. This means that they won't make any major move until the consumer market is there. For example, he was telling us that there have been times where they have come up with ideas that would in fact increase performance; HOWEVER, due to their wonderful job of brainwashing the entire public into thinking that clock speed is THE measure of performance, they scrapped the ideas because they noticed that they would cost too much to implement and would result in no frequency increase. (Thanks Intel)

    I also think that while AMD has shown that they can provide honest competition in terms of performance, it is going to be stuck following Intel's every move, for the mere reason that Intel is "sleeping with" so many big OEMs (*cough* Dell *cough*), leaving it as the CPU for the hobbyist.

    Well, anyways, that's just my 2c...
  • by orz ( 88387 )
    You're not going to be getting an Itanium based system for $2000 anytime soon.

    First of all, Intel has said ever since the Itanium's much-delayed release that it couldn't really compete and was primarily released to get some infrastructure ready for when the McKinley is ready (IIRC, it's scheduled for about 3 months from now...).

    Secondly, the die size for the McKinley is HUGE. On today's top-of-the-line .13 micron process, the manufacturing costs are likely to be too high for this chip to make it into high-end workstations, let alone $2000 consumer computers.

    Thirdly, the competition isn't dead yet. Sparc and PA-RISC may be dead, but Sun offers competition, and IBM's Power4 will be a decent competitor. Alpha does indeed look to have disappeared, but I thought I heard something about some Japanese company buying rights to some Alpha stuff, and planning on a big die shrink and integrating a large cache (which is all the Alpha really needs to compete, for the near future).

    Fourth of all, the performance of even the McKinley is questionable. Compilers for its IA64 instruction set are still quite poor, with little sign of the anticipated improvements. Its predecessor, the Merced/Itanium, was dog-slow at most tasks (though good at floating-point). The most recent benchmarks show the McKinley's 32-bit performance as terrible, though its floating-point performance is supposed to be stellar, and its integer performance decent (when combined with an enormous on-die L3 cache...).

    Anyway. Intel just likes the Itanium because the instruction set is sufficiently complex that the prohibitive cost of designing a compatible chip would raise the cost of entry to the market enough to give them a more secure monopoly for the next decade.
  • by camusatan ( 554328 ) on Monday February 18, 2002 @06:12PM (#3028760) Homepage
    The implicit assumption that the author is making here is that 64-bit CPUs such as Itanium will be the 'next big thing'. I'm not sure - 64-bit CPUs are really only necessary for machines that need more than 4 GB of VM space - and with various x86 addressing extensions, some IA32 CPUs can address up to 16 GB (I think).

    Now don't get me wrong - 64-bit filesystems are great, and necessary - being limited to 2GB or 4GB files is terrible. But no 64-bit CPU is necessary for that kind of thing; the filesystem just has to be written as 64-bit (which is easier said than done, and could easily sacrifice backwards-compatibility with various APIs, but I digress...).
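
    For instance (a minimal C sketch assuming a Unix system whose libc has Large File Support; details vary by platform), a 32-bit program can work with >4GB files just by asking for 64-bit offsets:

        /* Hypothetical example: 64-bit file offsets on a 32-bit CPU.
           The #define asks the C library for the Large File Support
           interface and must come before any #include. */
        #define _FILE_OFFSET_BITS 64
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("big.dat", O_RDONLY);
            if (fd < 0) return 1;
            off_t end = lseek(fd, 0, SEEK_END);   /* off_t is 64 bits here */
            printf("%lld bytes\n", (long long)end);
            close(fd);
            return 0;
        }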

    That being said - Intel might very well be moving down the wrong path - the Itanium is a huge, expensive, hot, completely new chip. Even Intel is hedging its bets [theregister.co.uk] on whether or not Itanium will take off - and AMD is poised to eat Intel's lunch with their new Hammer design [com.com].

    Who knows, perhaps all CPUs from now on will be compatible with x86 IA32, and innovation will be in the various processing units that sit behind the instruction-set decoder. Take a look at AMD or Transmeta for examples of that, already.

  • Just look at the auto industry. GM, Ford, Chrysler began the North American market by consolidating all the smaller auto companies and dominated for years. Then along came Honda, Toyota, Nissan and now they have made huge gains.

    The fact is that even though it looks impossible to overcome Intel at this point, someday someone will.

  • by Kjella ( 173770 ) on Monday February 18, 2002 @06:14PM (#3028768) Homepage
    Rewriting standard applications to take advantage of the Itanium is one thing. However, companies that need a $10k+ server usually have programs that are specialized. After 20 years of the x86 standard there's a large codebase, albeit one given a few improvements along the way. If you read the FreeDOS article a little while back, companies were still running DOS in production systems, because it *works*. Porting that to Itanium will be a lot worse than porting it to x86-64 and Hammer. Let's face it, the hardware cost is usually minimal today. Software programmers, however, are not cheap.

    Kjella
  • by Anonymous Coward
    You won't see anybody building an Itanium for $2K, since the chips cost more than that when you buy 1000 of them at a time.

    Maybe 10 years from now, but that's too far off.

    1) HP's PA-RISC is as dead as Intel's x86

    2) Alpha should regain the speed crown with the EV7 for a while, so they aren't dead yet. They've just announced they'll be dead in a few years :)

    3) IBM's POWER4 is the current speed king and is likely to be around for a long long time.

    4) MIPS... Aren't these the most popular RISC chips in the world due to their embedded use? (N64, Playstation, networking) At 500Mhz in SGI's machines they are pretty dead, but various MIPS chips are doing quite well in emerging areas. In fact, AMD just bought a MIPS company.

    5) Sparc has never been that great CPU vs CPU against the other companies, but I expect them to be around for a fairly long time still, just based on their installed base. Their customers never really bought on performance (otherwise Alpha would still be around!), but on service and reliability. As long as they can provide good enough performance they'll be around.

    The next Itanium is HUGE, making it very expensive to produce (meaning you won't ever build a system for under $2K with one!), it requires a LOT of optimization in software to get acceptable performance (meaning it'll suck unless you run active profiling optimizations, and I doubt most game companies will even do that), it uses a lot of power and creates a lot of heat (it makes the Athlon/P4 look like embedded chips!), and it isn't really compatible with existing software. Nobody is going to run Win98, WinXP, or even GNU/Linux on it on the desktop.

    The next Itanium will be more popular than the last, but it won't even register on people's radars as it won't provide the best performance, it won't have a bunch of software written for it, and it'll be expensive. Apple will sell more iBooks than Intel sells Itaniums for the next few years.
  • There is little compelling need for desktop users (the ones that create the volume for commoditization) to move to 64 bit systems.

    Until there is a breakthrough brought on by computing speed, we will see a stall in computer upgrading, as we have seen in the past.

    I expect we will see more things like the iMac (very cool computers) before we see a press for new computers for speed.

    The two things I think will create the next-level breakthrough:

    Real time CGI imaging at Toy Story/Monsters Inc./FF level of quality. We can probably predict precisely WHEN that will be possible by mapping the development speed of 3d hardware, memory, software breakthroughs, and polygon density to date, and where the predictable bottlenecks will appear. (My suspicion is that we are 5-8 years away).

    The other breakthrough which I think would do it (right now it is very difficult to predict when it will happen, but I suspect that adoption would be pretty rapid) is real time voice interaction that is 5 9's accurate. This is likely to appear after a certain speed level of computers, and a breakthrough understanding/algorithm for speech recognition.

    However, I suspect the AMD x86-64 solution may be adopted much faster than the Itanium solution. Likely there is an app out there that may have a large enough niche to require 64 bit apps, and the rest of the apps on the computer would be 32 bit. I suspect that the app will be imaging or video related, and that will create an adoption around the AMD solution, before the Itanium moves out of the server market to the desktop market where it will be commoditized.
  • except IBM Power4 [& friends G4, et al]

    While the Power4 will no doubt compete with the Itanium in the server space, since many people are talking about when 64-bit chips will hit the desktop, you should note that its "friend" the G4, which has been out since before the P4, is by no means meant to compete with new Intel offerings; the Goldfish PowerPC 8500 ("G5") is aimed squarely at dominating the desktop space before Intel can get to it with 64-bit chips. Its ability to run 32-bit code at much better speed than the other 64-bit offerings makes it much more appealing to people looking to transition to 64-bit on the desktop, and if they can pull off the .13 SOI, 500MHz RapidIO bus, etc., it should reassert A.I.M.'s competitiveness in high-end desktops. Now when it will actually ship, how much of this will get implemented, and what frequency it starts at is anyone's guess.

  • A boring movie titled "Itanic", starring an effeminate man-boy and a chubby love interest.
  • by sirwired ( 27582 ) on Monday February 18, 2002 @06:53PM (#3028973)
    No, you can't build something like a Netfinity (oops. er - xSeries eServer) in your garage for $2k. Built into a high-level xSeries is:

    1) Hot-pluggable power supplies, drives, and PCI slots.
    2) Built-in hot-plug SCSI
    3) Integrated service processor for diagnostics (essentially a computer within a computer)
    4) Extremely well-tested box. (Very important to do integration testing on high-end units.)
    5) Very nice, serviceable, rack-mount chassis
    6) Crap-load of PCI slots
    7) Light-path diagnostics. (Lets somebody without training figure out what's broke.)
    8) IBM Director
    9) Well-designed cooling that would be impossible to achieve with a garage box. (Do you know how to do airflow modeling?)
    10) Support.

    The list goes on...

    Yes, they will become a commodity, in that you will be able to get them from multiple major manufacturers, but don't expect to build it yourself in your basement anytime soon.

    SirWired
  • by BadlandZ ( 1725 ) on Monday February 18, 2002 @08:34PM (#3029428) Journal
    Sparc is dead, Sun doesn't have the money (more than US $1B we'll guess) to do another round

    Someone remind me to post a link back to this story in a month or two when Sun announces their faster processors with the ecache problem solved...

  • by spinlocked ( 462072 ) on Monday February 18, 2002 @08:34PM (#3029431)
    Fud, fud, fud. I can't speak for the other companies, but Sun can easily afford to fund R&D on the next generation SPARC chip: they've got $6 billion cash in hand [sun.com], not counting investments, and have had for over 2 years. BTW the current generation is UltraSPARC III; UltraSPARC IV is just a fabrication improvement. Work is already underway on UltraSPARC V's design. Sun's crown jewels are SPARC/Solaris; when Sun stops working on their own OS/CPU/Server platform it's time to stop investing in them.
  • by guacamole ( 24270 ) on Tuesday February 19, 2002 @04:45AM (#3030727)
    My world view is that Itanium based systems will become commodity products very quickly after good silicon is available in reasonable volume. At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)?

    When people start buying Itanium systems in volume, then the prices will drop on Itanium systems. The reason they're expensive is not that the chips are hard to come by but that no one wants to buy them right now.

    However, this comment alone makes me wonder about the poster's cluelessness. He obviously hasn't worked in any real production environment. You people should realize that you simply can't build the kind of systems that Dell, HP, etc. sell -today- out of commodity components. Take a look at a typical high-end SMP Dell server: proprietary OEM motherboard, proprietary case, hot-swap hard drives, hot-swap redundant power supplies and cooling, LOM support, etc. All components have been carefully designed to work together to produce a reliable and scalable server system. You will never ever build the same kind of system on your own, and if you do, it's not going to be cheaper than buying one. Plus you don't get the vendor support.

    The comment about SPARC being dead is completely astonishing at a time when Sun is -THE- Unix market leader. SPARC CPUs were never faster than the competition, but that didn't worry Sun users as long as they were up to par with the competitors. The reason people buy Sun hardware is not the CPUs (a CPU alone is useless) but Solaris, which is THE enterprise class OS, and its applications, Sun's excellent support, the massive multiprocessor scalability of Sun systems, massive I/O bandwidth, etc.

    Sun's current chip (the UltraSPARC III) is not bad at all, and Sun is working on the UltraSPARC V.
