Linus Has Harsh Words For Itanium

Anonymous Coward writes "As a follow up to the earlier story "Intel: No Rush to 64-bit Desktop"... In words that Intel are likely to be far from happy with, the Finnish luminary has stuck the boot into Itanium. His responses to some questions on processor architecture are sure to be music to AMD's ears. Linus, in an Inquirer interview concludes: "Code size matters. Price matters. Real world matters. And ia-64... falls flat on its face on ALL of these."" Of course, Linus works for a chip maker ;)
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by More Karma Than God ( 643953 ) on Monday February 24, 2003 @09:15PM (#5375351)
    Not to mention the fact that most home users won't see a 2X performance boost from 64 bits.
    • by Waffle Iron ( 339739 ) on Monday February 24, 2003 @10:52PM (#5375956)
      Not to mention the fact that most home users won't see a 2X performance boost from 64 bits.

      Most home users are going to see a performance drop from 64 bits. 64-bit code needs 8 bytes to hold every pointer. This will serve to eat up more cache and memory bandwidth, which are already major bottlenecks for any CPU.

      Unless you have a program that actually needs to work on more than 2G of data at one time, 64 bits buys you nothing but extra time waiting to move around millions of extra zeroed-out upper bytes.

      Some people need that much data in memory at once, but the average home user today doesn't. People typically mention video, but if there's one thing that's easy to stream in and out of a smaller memory space, it's multimedia data.

      • by KewlPC ( 245768 ) on Monday February 24, 2003 @11:41PM (#5376262) Homepage Journal
        64-bit code needs 8 bytes to hold every pointer. This will serve to eat up more cache and memory bandwidth, which are already major bottlenecks for any CPU.

        The only thing this eats up is cache; because the system has a correspondingly wider data bus, there isn't a hit in memory bandwidth (unless the designers are trying to be cheap bastards and give a 64-bit CPU the same data bus width you'd use for a 32-bit CPU). And most 64-bit CPUs have a lot of cache.

        And as for what kind of applications you potentially need several gigabytes worth of memory for, there's scientific processing and the like.
        • by Waffle Iron ( 339739 ) on Tuesday February 25, 2003 @12:05AM (#5376386)
          The only thing this eats up is cache; because the system has a correspondingly wider data bus, there isn't a hit in memory bandwidth (unless the designers are trying to be cheap bastards and give a 64-bit CPU the same data bus width you'd use for a 32-bit CPU)

          Ever since the 8086/8088 duo, the bus width of a CPU has been decoupled from its word size. For a long time, the external bus width of (non Rambus) 32-bit CPUs has been wider than 32 bits. This works because the memory unit fetches entire cache lines. The CPU designers could be less cheap bastards today and bring out 32-bit CPUs with 256-bit wide busses if they wanted to.

          And most 64-bit CPUs have a lot of cache.

          You could put a lot of cache in a 32-bit CPU. You could put a small cache in a 64-bit CPU. In fact, the biggest difference between high-end and low-end CPUs is just the size of their caches.

          To be fair, the current Itanium has an enormous cache that takes up the vast majority of the die and dictates its price and power consumption. Its logic core really isn't that big. If you embedded an x86 core in all of that cache, you'd get a very fast chip. If you paired an Itanium core with a Celeron-sized cache, you'd get Celeron-level performance. 64 bits has little to do with it; you're mostly paying for cache and bandwidth when you buy high-end CPUs.

      • News flash...

        The Pentium architecture has been loading 64 bits of memory at a time since the PII. It has to, because that is the only way the RAM has a chance in hell of keeping up with the processor. Basically it loads two instructions at once and executes them at double the speed of the RAM. (That's also part of why you get such a kick in the pants when you optimize with the -mcpu=i686 flag in gcc.)

  • Obsessive (Score:3, Insightful)

    by snack-a-lot ( 443111 ) on Monday February 24, 2003 @09:15PM (#5375354)
    Not only does he work for a chip maker, he's like totally obsessed with the i386 architecture. I guess it's what he cut his teeth on, and he's going to stick with it. But to think that no-one else has a use for it is very short-sighted.

    It'll probably still make it into the kernel, though. I mean, alpha and sun architectures are in there, so ...
    • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Monday February 24, 2003 @09:57PM (#5375628) Homepage
      Linus isn't saying he won't let it in. He's simply saying that he thinks it's not a good arch based on technical merit. He'll let it in. He never said he wouldn't. He's just saying he doesn't like the way the chip was designed (what choices they made, etc).
    • Re:Obsessive (Score:5, Interesting)

      by Pharmboy ( 216950 ) on Monday February 24, 2003 @10:23PM (#5375771) Journal
      Not only does he work for a chip maker, he's like totally obsessed with the i386 architecture. I guess it's what he cut his teeth on, and he's going to stick with it. But to think that no-one else has a use for it is very short-sighted.

      He works for a company that doesn't build chips with the i386 architecture. It's emulated in firmware; "code morphing" is what they call it. It's slightly slower than hardware, but it's worth the trade for power consumption.

      I am betting he has worked with plenty of morph code, creating virtual CPUs that are subsets of the i386 chip, or completely different. This is akin to designing hardware, in software.

      I can't see how him working for Transmeta hurt his understanding of processors. Seems like it would actually enhance his understanding.
  • by YOU ARE SO FIRED! ( 635925 ) on Monday February 24, 2003 @09:16PM (#5375362) Journal
    This is from the Linux-Kernel mailing list, not an Inquirer interview. Here [iu.edu] is the post.
  • Linus too Harsh (Score:4, Insightful)

    by enigma32 ( 128601 ) on Monday February 24, 2003 @09:18PM (#5375383)
    Now, we all know that the Itanium isn't everything it's cracked up to be, and I think none of us are wrong in blaming Intel for coming out with a lousy product....

    But, isn't one of those situations he mentions in the interview (namely, running a large database server) what this chip is designed to be doing?

    As I recall, the IA64 isn't designed for the desktop user... In fact, desktop users probably don't even need 64-bit processing for a number of years still....

    Yet we're attacking Intel for making the chip to fit its niche?

    Perhaps we need to be more fair in the context of the usefulness of the chip, instead of considering it in all contexts and criticizing it based on that?
    • Re:Linus too Harsh (Score:5, Interesting)

      by Dynedain ( 141758 ) <slashdot2NO@SPAManthonymclin.com> on Monday February 24, 2003 @09:36PM (#5375498) Homepage
      As I recall, the IA64 isn't designed for the desktop user... In fact, desktop users probably don't even need 64-bit processing for a number of years still....

      I need more than 4GB RAM (3.5 if I want it stable) for video editing and 3D rendering.

      AMD is developing their 64-bit chips with the desktop market in mind, as well as the server market. Intel has forgone the desktop, which will turn out to be a huge blunder, especially since it's already a determined fact that the 32-bit emulation mode on AMD's line slaughters the 32-bit mode on the Itanium.
      • by angst_ridden_hipster ( 23104 ) on Monday February 24, 2003 @09:54PM (#5375611) Homepage Journal
        Good God, man, haven't you ever heard of polygon reduction? Bump mapping? Image mapping?

        It's hard to believe you *really* need all of that RAM. Then again, I haven't done 3D in years.

        When I was a CG guy, we dreamt of bus speeds above 66MHz. We couldn't even imagine having more than 32M RAM. And we thought it was reasonable to wait two days for a 2k image to render...

      • by Frobnicator ( 565869 ) on Tuesday February 25, 2003 @01:09AM (#5376697) Journal
        You say you need more than 4GB for video editing and 3D rendering?

        Sorry while I rant, but you just stomped on one of my nerves. (Unless your comment about needing that much RAM was a complaint about Adobe or their direct *cough* competitors -- sucks to be you.)

        <Old Geezer Mode> In one case, not long ago, a fellow lab-rat, Eric Mortenson [byu.edu], had sold his research and tools to Adobe, but part of the poorly-written agreement said that he couldn't upgrade his workstation. So he finished his Ph.D. on a 386 with 32 MB of RAM, while the rest of us in the lab were using Pentium 3's, DEC Alphas, and various SGI boxes. Eric's algorithms ran great on the newer PCs even though he couldn't develop them on the new boxes. Others at Adobe (NOT on that web site, interestingly enough) needed the DEC Alphas (64-bit machines) with scads of memory and much more running time to do a similar implementation of Eric's algorithms. </Old Geezer Mode>

        3D rendering doesn't take that much RAM. As a 3D graphics researcher and developer, I have worked with models where individual objects were multi-gigabyte (meshes+textures and volumes), but even then, having 1GB of RAM was more than enough for us to reach 20-30 FPS realtime on a box with NT4 and first- and second-generation 3D cards. Software rendering with very realistic detail was a little slower (3-5 fps) but was fine for writing movies. Progressive geometry & texture transmission, continuously calculated view-dependent detail levels, and other current and not-so-current research would solve the memory problems in 3D. Don't believe me? Go to Visualization 2003 [computer.org] and see if the leading researchers are finding RAM to be their primary bottleneck. It is a bottleneck of course, but processing speed, caches, and the system bus limitations are far more troubling.

        As for video editing, you only need enough memory for the tools, a few frames, and whatever operations you are performing. In every case where I've had to do video editing, I've seen two classes of tools -- those that take gobs of memory, try to copy the entire video clip into RAM, and end up thrashing for memory -- and those that intelligently figure out what is needed and use only the memory needed for the app.

        An example of the first: Adobe After Effects, rendering a simple math function over time, was only able to render 30 seconds because it wanted to buffer the AVI file in memory and ran out of RAM (2GB) after several hours of rendering. An example of the second: a simple home-brew compositor that used the Windows multimedia API to write the AVI to disk -- on the same machine, the same set of images required about 45 minutes to render the entire clip.

        So instead of saying:

        I need more than 4GB RAM (3.5 if I want it stable) for video editing and 3D rendering.

        I would suggest you say " I need to buy tools that are properly designed and implemented for my class of computer. "

        Frob.

    • Re:Linus too Harsh (Score:5, Interesting)

      by Amiga Trombone ( 592952 ) on Monday February 24, 2003 @09:39PM (#5375513)
      But, isn't one of those situations he mentions in the interview (namely, running a large database server) what this chip is designed to be doing?

      Sure, but it doesn't really do it significantly better than some of the more common RISC architectures (Sparc, Power, Alpha), and it's a lot more expensive.

      As I recall, the IA64 isn't designed for the desktop user... In fact, desktop users probably don't even need 64 processing for a number of years still....

      Probably not, but a lot more desktops get sold than high-end servers. If AMD manages to get a toe-hold on the desktop with their 64-bit solution, the chances are a lot better x86-64 will migrate up the food chain than ia64 will migrate down.

      Perhaps we need to be more fair in the context of the usefulness of the chip, instead of considering it in all contexts and criticizing it based on that?

      Well, that's the point. How useful is it really? What compelling reasons are there for using it in place of a x86-64 on the low end, or something like Power or Sparc on the high-end? All things considered, it really isn't a bad chip. But it is a solution in search of a problem.
  • wow (Score:5, Funny)

    by BigBir3d ( 454486 ) on Monday February 24, 2003 @09:19PM (#5375384) Journal
    Linus being opinionated and brash? Never!
  • by lingqi ( 577227 ) on Monday February 24, 2003 @09:20PM (#5375398) Journal
    the story ends with Of course, Linus works for a chip maker which just does not seem finished. I think they all mean to say that "... a chip maker that has been getting the thorough shaft from Intel."

    in the article AMD was said to be "reading between the lines" for "X86-64 is the way to go." I think it's really more like "please hire me AMD."

    *ducks*

    • by UberLord ( 631313 ) on Monday February 24, 2003 @09:28PM (#5375449) Homepage
      a chip maker that has been getting the thorough shaft from Intel

      Probably not for much longer. My company recently got some Compaq Tablet PCs in to demonstrate our product. They had (IIRC) 900MHz Transmeta processors in them, and they ran our product really well - especially when it activated a machine via Bluetooth and collected its data :)

      But the thing about a tablet is power - they've gotta be carried around and used for a fair portion of a working day. As such, the Transmeta chips are a god-send due to very low power consumption.

      I see good things for Transmeta in this market segment :)
      • by lingqi ( 577227 ) on Monday February 24, 2003 @10:17PM (#5375738) Journal
        Yeah, but honestly, to be fair, Intel's new Pentium M (the Centrino thing) chips sport about the same performance as a P4-M but last about twice as long on the same battery. (Tom's Hardware has a review.) This would probably give Transmeta some shivers.

        Moreover, a PDA can do pretty much everything a tablet PC can, at 1/4 the size and three times the battery longevity. Okay, so the screen resolution is not as high - but I think the other parts of the equation (and did I mention they cost (a lot) less and are generally more rugged?) more than balance it out.

        I mean, don't get me wrong - transmeta has cool stuff and I would love to see them succeed, but damn I just can't imagine myself plopping down that kind of money for their products, especially since there are alternatives.

        p.s. small side note on batteries: construction workers have problems with their tools running out of batteries too - that's why they get two and have one charging while using the other. In most places where tablet PCs are trying to market themselves (hospitals, say), this is a perfectly acceptable strategy for remaining indefinitely mobile. Heck, Apple can do a battery swap during standby - I don't see why PCs can't implement the same thing if somebody just went out and did it.
        • The form factor on the tablet PCs is really suited for a lot of the things we now use PDAs for. The larger screen size makes it a lot easier to have productivity apps, too. When the price comes down a bit, we'll probably re-deploy the PDA apps we're developing now on tablet PCs. Battery life is still an issue, even if you swap them (I'd love to see tablet PCs with hot-swappable batteries; it should be possible, even without a full-size and -weight second battery) - for example, you want your working time to be at least as long as the recharge time.

          Now, I'm no zealot, and I don't really care what kind of chips are in the tablet PCs, but Transmeta is a good company that makes a great product, and I'd hate to see Intel kill them purely because of greater market power and brand recognition.

        • by EvilTwinSkippy ( 112490 ) <{yoda} {at} {etoyoc.com}> on Tuesday February 25, 2003 @12:28AM (#5376488) Homepage Journal
          Twice as long as a Pentium IV? Boy is that a hard limbo stick to crawl under! I'm actually writing this on a Sony Vaio with a PIIIm. I get about 2 hours on a battery charge, and I'm happy with that.

          1 hour and this puppy would be in a box back to the store.

          My biggest beef is the fact that they don't have a side-mounted heat sink so I could use it to keep my coffee cup warm. This puppy is only 800MHz, and the fan never shuts up. I can't imagine this thing with a faster processor!

          The advantage of the Transmeta chip over other designs is that it is a) smaller and b) smarter. Smaller means that you aren't powering millions of gates that aren't being used. Smarter in that it can shut down what few parts it isn't using.

          As a result it uses a LOT less power, and it runs cooler.

  • by UberLord ( 631313 ) on Monday February 24, 2003 @09:20PM (#5375399) Homepage
    Code size matters

    It's what you do with the code, not how small it is ;)
  • How to improve x86 (Score:5, Interesting)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Monday February 24, 2003 @09:22PM (#5375403) Homepage
    Now I'm no programming guru, but it seems to me that the x86-64 architecture is a great one. In fact, the only thing that I could see being done to improve it would be to add more general purpose registers. I believe that the new registers are all GP (IIRC), but I think that making them ALL GP (even the older ones) would be good, and maybe bring up the number of registers to a good round 32 or something. Am I missing something glaringly wrong? If you're going to toss out all of the x86 stuff (like ia-64), I think you should be able to emulate it in hardware about as fast as current x86 processors can. When Apple switched to PPC, couldn't they emulate 68k code about as fast as (or at least faster than 1/2 the speed of) the fastest 68k chips?
    • by _typo ( 122952 ) on Monday February 24, 2003 @09:37PM (#5375506) Homepage
      but I think that making them ALL GP (even the older ones) would be good, and maybe bring up the number of registers to a good round 32 or something. Am I missing something glaringly wrong?

      Well, the only reason why the other registers aren't GP on x86 is that there are instructions that use them implicitly. If you don't care about these instructions you can use them as regular registers.

      As an example the EDI register is used by the SCAS* instructions as a pointer to memory. If you don't care about the instructions that use this register like that you're free to do regular operations on the EDI register, it has no limitations on what you can do with it.

      You're right to say that there are few registers though. Before I learned x86 I learned MIPS and there you got all the glory of 28+ GP registers. In the simple examples we did I never needed to push and pop from the stack.

    • by VAXman ( 96870 )
      Did you read the source for the Inquirer article [realworldtech.com]?

      Linus spends a lot of time debunking the "more registers is better" myth. X86 implementations have been addressing this issue for a long time, both by register renaming and by having extremely fast L1 data caches (esp. on P4). Adding more registers will not help speed up code much at all - and anyways requires a recompile and won't help improving legacy code.
    • by be-fan ( 61476 ) on Monday February 24, 2003 @10:00PM (#5375652)
      What you're missing is that x86 chips have a ginormous amount of internal rename registers (128 in a P4). The bump to 16 *visible* registers in the Athlon-64 is to allow the compiler optimizer to give more information to the CPU about variable usage. I'm guessing that AMD found that more than 16 visible GPRs really didn't help the compiler's allocation routines any.
      • by PeterM from Berkeley ( 15510 ) <petermardahl@@@yahoo...com> on Tuesday February 25, 2003 @12:20AM (#5376453) Journal
        I attended an information session by someone from AMD at UCB. It was my understanding from his presentation that the tricks they were using to get up to 16 registers without compromising the ability to run existing 32-bit code made it impossible to get past 16 registers.

        They would've liked to have 32 registers, but it simply couldn't be done in a backward-compatible way.

        If you want more information on this, and more than a guess, AMD has much information up on its website.
  • by fozzy(pro) ( 267441 ) on Monday February 24, 2003 @09:22PM (#5375406)
    The best architecture is still VAX. Clearly string operations at the processor level are what any processor needs to be the best and fastest ;}
    • by snStarter ( 212765 ) on Monday February 24, 2003 @10:43PM (#5375875)
      For the expensive memory environment for which it was designed the VAX was fabulous. And it was designed to be scalable as well.

      You can snicker at the CISC VAX architecture, but it ran multi-user in less RAM than many processors today have CACHE. Remember 2 MB of RAM was a lot when the 11/780 was introduced. 600 MB drives were considered HUGE and were the size of washing machines.

      Its scalable architecture let a copy of VMS from the lowliest processor be physically mounted on the most capable and boot just fine.

      It had BCD instructions too, not just string.

      But Gordon Bell got a lot more right than he got wrong. And the compact and orthogonal instruction set of the VAX looks pretty good today.
  • by banky ( 9941 ) <gregg@neur[ ]shing.com ['oba' in gap]> on Monday February 24, 2003 @09:23PM (#5375415) Homepage Journal
    Worse is better [naggum.no]

    although the original essay talks about Unix and the LISP machines, it just keeps being true. Linus talks about the "charming oddities", well there you go: worse is better. Try for perfection, and the real world will eat you alive.

    I also think he's right about the masses being what matter; I think Intel is still thinking about the data centre, not Joe Sixpack, with Itanium.
    • by On Lawn ( 1073 ) on Monday February 24, 2003 @09:41PM (#5375527) Journal

      One of my favorite all-time quotes from a flame war was about this topic.

      "While they sit in ivory towers, the mongols are multiplying in the hills. Soon, the towers will lay waste and the hordes will have moved on victorious."

      Of course he was talking about the ivory tower of PERL, and how TCL was going to become the dominant force in scripting languages. But I've loved the allegory ever since.

      --------------------
      OnRoad [onlawn.net]: Becuase hacking funner with a hacksaw.
      • Of course he was talking about the ivory tower of PERL

        PERL is an ivory tower? Wow, that must've been some flame war. Last I checked, Perl stood for "Practical Extraction and Report Language", and I'm pretty sure that "Practical" is not a member of the set "Ivory Tower". It's a great quote, I just would never guess it was about Perl and Tcl. x86 vs. RISC, maybe. But not Perl, unless you consider it one of the hordes. :-)
      • by Fnkmaster ( 89084 ) on Tuesday February 25, 2003 @01:50AM (#5376857)
        Geez, some people really take things too literally. If they looked at Perl and said "we can make something worse than this", they are some sick, sick fucks.


        Perl IS the "worse is better" language.


        Tcl goes so far to the "worse" end, it comes back around the circle at "utterly fucking miserable".

  • by d00dman ( 653178 ) on Monday February 24, 2003 @09:23PM (#5375418)
    As much as we depend on Intel to push CPU manufacturing techniques to new heights, they have fallen down in the desktop market anyway. I've lost count of how many new units they've added for poor low-level optimizers to keep up with. This, plus the slap in the face of reduced instructions per tick in the P4 so they could juice up the multiplier and sell "faster" MHz CPUs at double the price, is more than enough for me to stop watching them. I'm far more interested in the new Power5 coming out of IBM for a 64-bit architecture to pay attention to. BTW, whatever happened to the Alpha 21364? Is a 64-bit CPU really newsworthy?
  • Chip Maker (Score:5, Funny)

    by LongJohnStewartMill ( 645597 ) on Monday February 24, 2003 @09:23PM (#5375422)
    Of course, Linus works for a chip maker

    And if trends continue, it could be Old Dutch.
  • by StevenMaurer ( 115071 ) on Monday February 24, 2003 @09:25PM (#5375430) Homepage

    So he is more likely to know what he's talking about.

    Personally, I'm getting a bit tired of all the inane cynicism that passes for reflective commentary in modern society. While it's true that the world has its villains, it is more true that people often just hold opinions irrespective of their economic interest. I, for one, trust that Linus is among these favored many.

    (Not joking this time)

  • by Billly Gates ( 198444 ) on Monday February 24, 2003 @09:26PM (#5375442) Journal
    This is the problem with gcc and the architecture in general. It's very hard to optimize code for it. Even with Intel's compiler, optimized code is only marginally faster, if at all, compared to a top-of-the-line P4. Gcc will obviously perform a lot worse than Intel's own compiler, and it's the only one Linux can compile with.

    Sun has an interesting (biased) piece [sunmicrosystems.com] on Itanium. If I were buying a server I would avoid Itanium like the plague. It is possible that Intel could even cancel the whole project and leave customers high and dry. Not to mention that software availability is a problem.

    I prefer the RISC architecture. I like the idea of keeping things simple and efficient, which is a lot like structured programming. VLIW does not follow this ethic.

    • by MtViewGuy ( 197597 ) on Monday February 24, 2003 @09:58PM (#5375638)
      I think the problems with the Itanium boils down to this:

      1. The CPUs are insanely expensive. They make the majority of x86-architecture Intel Xeon CPUs look like a bargain.

      2. Where are the server applications that take advantage of the Itanium CPU? They're not exactly widely available, to say the least.

      3. Programming for Itanium is still a somewhat iffy proposition.

      Meanwhile, AMD's Athlon 64/Opteron offers these advantages:

      1. The CPU will definitely NOT be insanely expensive to purchase.

      2. Programming for the AMD x86-64 architecture is not going to require kiboshing a bunch of legacy programming tools and starting from scratch--it is a straightforward process to convert today's programming tools to take full advantage of the x86-64 native mode.

      3. Because the programming tools are so readily available, both operating systems and applications for the Athlon 64/Opteron will be widely available by the time the new AMD CPUs are finally released for sale. Already, UnitedLinux is porting Linux to run in x86-64 native mode, and Microsoft is very likely readying versions of Windows XP Home/Professional and Windows 2003 Server that will run in x86-64 native mode.

      Meanwhile, Intel supposedly has a 64-bit x86-architecture CPU codenamed Yamhill that it has developed. However, given that we don't know how Yamhill implements 64-bit x86 instructions, Intel will have to do some VERY serious convincing to get Linux kernel programmers and Microsoft to write Yamhill-native code--and Intel is far behind the AMD efforts.
      • by TFloore ( 27278 ) on Tuesday February 25, 2003 @12:37AM (#5376541)
        It is still a full port, if you want to get the benefits of the 64-bit architecture. If you want to keep running 32-bit x86 code, don't even bother recompiling. But don't make the mistake of thinking that switching 32-bit x86 code over to x86-64 is a simple re-compile.

        It is still a port, with all that is included in that awful word.

        Do you understand how little 64-bit safe code there is that runs on 32-bit x86 systems? Most of the Linux kernel is already 64-bit safe, because it has been ported to so many other 64-bit architectures already. And it still wasn't a simple "just recompile it".

        Speaking specifically to C programs here, porting from 32-bit to 64-bit is not a fun process. A variable declared as "int" switches in allocation size. This is good and bad.
        fread(&var, sizeof(int), 1, fp); // (And notice that I'm a bad programmer: I didn't check the return value.)
        Congratulations, you just killed all your existing data files. And if you happened to read a 32-bit pointer from that data file (any structure that you write directly that contains a pointer writes a pointer... you'll throw the pointer value away when you read the structure back in, but you still have to read the proper data size), and then assign a pointer to it... Oh, you're going to have all sorts of fun playing with that.

        Yes, this may only be an issue with "bad" C code that assumes it will ever only run on a 32-bit platform... That probably covers 99% of all x86 C code out there, for any OS you care to name.

        Don't pretend it will be easy moving from 32-bit x86 to x86-64. For most programs, I assure you, it will be non-trivial. Anything that does direct memory allocation will have to be checked very carefully. Anything that does binary file I/O will have to be checked very carefully. Oh, and anything that uses "magic" numbers will have to be checked... Have you ever used an if conditional for an int of the form
        if (i == 0xFFFFFFFF)
        congrats, you just assumed 32 bits for your architecture.

        64-bit clean code is the exception, not the rule.
  • by caouchouc ( 652238 ) on Monday February 24, 2003 @09:29PM (#5375458)
    The Inquirer.com isn't exactly a bastion of responsible reporting.

    It doesn't look like an interview took place at all. It looks like they took some choice quotes out of context from the kernel development mailing list to spur some pageviews.
  • by grub ( 11606 ) <slashdot@grub.net> on Monday February 24, 2003 @09:30PM (#5375467) Homepage Journal

    Netcraft confirms it: Itanium is dying.

    One more crippling bombshell hit the already beleaguered Itanium community when Slashdot confirmed that Linus thinks Intel dropped the ball with Itanium. Itanium now powers 0.00% of all servers. This comes on the heels of a Netcraft survey which plainly states that Itanium has gained absolutely NO market share. This reinforces what we've known all along: Itanium is collapsing in complete disarray.

    You don't need to be a Kreskin to predict Itanium's future. The writing is on the wall: Itanium faces a bleak future, in fact there won't be any future at all because Itanium is dying. Intel has dumped millions into Itanium, red ink flows like a river of blood.

    All major surveys show that Itanium has steadily held its ground at 0.00% use while millions of other processors are produced daily. If Itanium is to survive at all it will be among CPU dilettante dabblers and hangers-on. Nothing short of a miracle could save Itanium at this point in time. For all practical purposes, Itanium is dead.
  • by TheGratefulNet ( 143330 ) on Monday February 24, 2003 @09:36PM (#5375496)
    look here:

    pricewatch [pricewatch.com]

    almost $3000 for the chip. Wow, and for so many MHz, too...

  • by 1nv4d3r ( 642775 ) on Monday February 24, 2003 @09:39PM (#5375516)
    Code size matters. Price matters. Real world matters

    If only on-chip instruction set morphing mattered...

    (sorry, but it's true...he's living in a glass house on this one.)

  • by mrm677 ( 456727 ) on Monday February 24, 2003 @09:47PM (#5375566)
    Check the latest SPEC CPU benchmarks [spec.org]. The Itanium2 has the fastest floating-point score and is no slouch in the integer tests either. It will improve. Linus will eat his words in a few years.
    • by dpletche ( 207193 ) on Monday February 24, 2003 @10:36PM (#5375837)
      SPEC scores tell me almost nothing useful. The code to run SPEC benchmarks is emitted by tricked-out compilers whose whole purpose is to emit hand-crafted assembly code specifically tuned to run those SPEC benchmarks. It doesn't tell me anything about how well common programs and subsystems perform at common tasks. You might as well buy a family car based on the quarter-mile time at the racetrack for a like-model car with a supercharger and dangerously-tweaked ignition timing, burning 120 octane racing fuel.

      In five years, if the Itanium isn't a huge success, will you eat your words?
  • by NullProg ( 70833 ) on Monday February 24, 2003 @09:56PM (#5375622) Homepage Journal
    The article at theinquirer.net gets it all wrong, and so does the Slashdot story line: what Linus wrote does not state at all what they imply. Here is the link to what Linus actually wrote:

    http://www.ussg.iu.edu/hypermail/linux/kernel/0302.2/1909.html

    Now, I agree with Linus on the PPC MMU issue. Can anyone tell me what he means by "baroque instruction encoding"? I have been doing x86 and 68k assembler for a long time, and I have never heard of this.

    Enjoy,

    • by leroybrown ( 136516 ) on Monday February 24, 2003 @11:36PM (#5376230) Homepage
      Can anyone tell me what he means by "baroque instruction encoding"?

      well, you know what they say: if it ain't baroque, don't fix it.

      i am going straight to hell for that one.

    • by cbiffle ( 211614 ) on Tuesday February 25, 2003 @12:03AM (#5376375)
      Probably (IANLT) he's referring to the various prefix encodings and variations for instructions. From my x86 manual, "Machine language instructions...vary in length from 1 to as many as 13 bytes. ...There are over 20,000 variations."

      Now, granted, that rather large number probably includes different target registers, but compared to (to use your example) the 68k, the x86's encoding format is just -weird-:
      16-Bit:
      -An opcode. Either 1 or 2 bytes.
      -Some flags and/or target register. 1 byte, optional
      -Displacement. 0-2 bytes.
      -Immediate. 0-2 bytes.

      32-Bit:
      -Optional address size prefix byte.
      -Optional operand size prefix byte.
      -Like above, but with 0-4-byte displacement/immediate and optional scaled index byte.

      Now consider the fact that many opcodes implicitly reference registers. Decoding this instruction set by hand would be a royal bitch, and it's exactly the sort of configuration that RISC targeted for demise.
      However, Linus makes a good point in the e-mail, which is (paraphrased) that the x86 encoding is basically a very good compression algorithm for its code. While the RISC machines that use 32 or 64 bits for every instruction may be more regular, their code does tend to be larger.

      The ironic thing, in my mind, is that the IA-64's encoding is in many ways -more- baroque than the x86's! Instructions in bundles, bundles in groups (or is it the other way around? I never remember), flags at the end to specify how to interpret the instructions before -- it's an interesting take on VLIW, in that it doesn't specify the number of execution units, but YUCK. :-)
  • by AxelTorvalds ( 544851 ) on Monday February 24, 2003 @10:35PM (#5375828)
    The Feb 17th issue broke it down nicely. You can read it on their web site. Basically, the conventional wisdom is that there are exactly two players in the 64-bit arena: IBM and Intel. IBM isn't jumping on the Itanic either, at least not in any big way other than building some low-end servers with it.

    AMD is the wildcard. If x86-64 is the bomb and takes off like AMD is betting it will, Intel has lost the 64-bit war for many years. IBM and maybe even Sun will quietly (well, Sun doesn't do jack shit quietly) push x86-64 for the low end while IBM's POWER4, and POWER5 and POWER6 down the road, run the big end.

    Basically Intel needs something like Sun to jump on IA64 to really give it some credibility, and they don't sound real eager to. IBM sounds like they are down for the fight. Alpha, MIPS, and PA-RISC are all pretty dead, long term and relatively speaking. Meanwhile, if Intel doesn't get on this quick then they'll have to support x86-64 too, and that's the real death blow to IA64.

  • by a-freeman ( 147652 ) on Monday February 24, 2003 @10:36PM (#5375838)
    Back when the Pentium Pro was released, it was roundly maligned for offering shitty performance for Win95 users. "Buy a Pentium 233MMX," all the magazines screamed.

    Well, the PPro turned out to be one of the best chips of its day, and the 200MHz version performed within 5% of the 300MHz Pentium IIs that were released 18 months later. I still have a dual-PPro system running my CVS/MP3/print/etc. server.

    Linus may be a god in the linux software universe, but I wouldn't discount Intel on this just yet.
  • by Pemdas ( 33265 ) on Monday February 24, 2003 @10:39PM (#5375857) Journal
    x86 -- it's cheap, fast, available, and compatible. That's why it consistently wins in the marketplace.

    That doesn't mean it's the best solution. Merely the one that's going to win. Architecturally speaking, x86 is one of the biggest loads of crap to come along since...well...hmmm...I can't think of anything crappier off the top of my head.

    Extreme register pressure. Segmentation models that make you want to retch. Hacks (PAE, anyone?) that leave any sane designer gibbering incoherently.

    If you read the thread, Linus' main argument seems to be "to get good performance, all the other architectures have had to do complex things in hardware, so there's no real hardware simplification in going with a 'better' architectural design. Plus variable length opcodes are a natural cache optimization!"

    I respect Linus a great deal, but he's talking out of his ass here. I agree that IA-64 may be best relegated to some academic's wet dream, but just about any of the major RISC architectures are big wins over x86. Intel and AMD have worked miracles with x86 to get it to run fast, but at a staggering engineering cost. The teams working on RISC chips tend to be a fraction of the size yet still come out with a high-performance chip. If the RISC houses had an engineering team of comparable size (and access to the same bleeding-edge lithography processes) it would easily be worth an extra 25% in performance, minimum.

    If you look in the embedded world, just about anything that requires serious embedded performance is RISC based (MIPS/ARM, mostly), simply because it decreases the engineering work involved by an order of magnitude. Plus, writing low level software for just about any RISC chip is loads easier than for x86.

    Unfortunately, x86 is here to stay for the foreseeable future. Intel killed Alpha, not by buying it, but by doing a great job of pushing cheap x86 performance to the same level as Alpha, often surpassing it in later years. The same thing is happening to the other workstation-class RISC vendors, and, honestly, to Itanium, too. I don't see any reason to believe the march to x86 hegemony outside the embedded world is likely to slow anytime soon.

  • by Yankovic ( 97540 ) on Monday February 24, 2003 @10:48PM (#5375912)
    The second-highest-rated TPC box in the world is running Itaniums...

    http://www.tpc.org/tpcc/results/tpcc_perf_results.asp?resulttype=noncluster
  • by jbischof ( 139557 ) on Monday February 24, 2003 @10:53PM (#5375958) Journal
    because he knows.

    Linux made him ... oh wait nevermind.

    Transmeta makes a lot of ... oops there I go again.

    Intel is a company that time and time again proves it knows how to make money. It may not always support the crowds it should (like /. readers and superusers) but they are still making money.

    Sure there are lots of difficulties going to a new ISA. Especially at the server level. And yes Itanium has had some performance problems, especially in its first revision, but then again when was the last time you saw a company produce a 1st generation microprocessor and have it do well?

    IA64 offers tons of advanced ILP concepts and OS concepts that, when correctly implemented, can increase performance drastically (if you're looking for examples: data speculation, control speculation, predication, registers with kernel-only access, rotating register files, a much larger register set, etc).

    The problem may be that it puts a lot of complexity into the compilers, and compiler technology isn't good enough for Itanium yet.

    But then again, what do I know, Linus has made more money than I have. I just like arguing the other side while everyone else screams about how the Itanium will die.

  • by Edmund Blackadder ( 559735 ) on Monday February 24, 2003 @11:12PM (#5376096)
    First of all, it is not very smart to try to reduce code size by putting complicated instructions in the processor architecture.

    A successful architecture may be used for 20 years, and there is no way you can know which complex instructions will be most useful or popular several years out. And when you start making upgraded chips for a design, these complex instructions will be a real pain in the ass.

    The x86 architecture is a perfect example: it is a mess, and many of its instructions are not used at all. The x86 is successful because of the way history played out: it was put on the first PCs, and the incredible number of processors sold allowed Intel to put more development money into that architecture than anybody else was able to put into theirs. And large initial investments and large sales numbers mean that individual chip prices can be lower.

    Nevertheless, the Alpha and some of Sun's chips can still compete with Intel in the server environment, with much smaller investments and worse production technology. That basically shows the weakness of the x86 architecture.

    When you have multiple pipelines and multiple stages per pipeline, the size of your chip grows rapidly with the number and complexity of your instructions. Eventually adding more pipelines becomes pointless, and then you are reduced to adding cache as the only way to improve your architecture.

    For a RISC architecture, multiple pipelines cost less overhead and more of them can be used. Processor performance can be increased by adding more pipelines without having to increase clock speed.

    Intel has the money and the clout to make a successful RISC architecture. It is brave of them to do it, but from an engineering point of view it is the only right thing to do.

    AMD will support x86 because they do not have the clout to force a new architecture on the world. It is a completely understandable policy, but then again it will result in worse performance (unless their engineers are somehow much more brilliant than Intel's).

    Of course the real world matters, and in the real world almost everyone uses x86. But if someone can change that, it is Intel.

  • by RelliK ( 4466 ) on Tuesday February 25, 2003 @12:05AM (#5376387)
    Without Windows for x86-64, AMD is dead. No, Linux will not save it. However, the moment Microsoft releases Windows for x86-64, Itanic is history. The market will overwhelmingly favour x86-64 because of the much lower price (I expect at least 3-4 times lower, considering that the Itanic CPU alone sells for over $3000) and perfect backwards compatibility. Itanic's ia32 support is so pathetically slow that it may as well not exist, so a move to Itanic requires you to replace _all_ your software, which ain't cheap, while x86-64 allows you to do incremental upgrades. So, taking simple economics into account, Itanic will go the way of that ship and AMD will emerge the winner... provided there is a version of Windows for x86-64. Without that there is no point in talking about a "64-bit desktop" market because it just won't exist. So what is Microsoft doing?
  • by imnoteddy ( 568836 ) on Tuesday February 25, 2003 @12:18AM (#5376447)
    From the article:
    Torvalds wrote that Intel had made the same mistakes "that everybody else did 15 years ago" when RISC architecture was first appearing.

    RISC first showed up on the commercial radar screen almost twenty years ago, when MIPS Computer Systems [pmc-sierra.com] was formed. But people at Stanford (and Berkeley, IIRC) had been publishing papers about RISC for four or five years before that, and people at IBM were working on it even before that.

    And the CDC 6600 was a RISC machine in the 1960s. If you don't believe me, ask Cray's Chief Scientist Burton Smith [cray.com].

    In seeking the unattainable, simplicity only gets in the way. -- Alan Perlis

  • by demachina ( 71715 ) on Tuesday February 25, 2003 @01:51AM (#5376868)
    The key point about Itanium is that it is a horrible general-purpose processor but a serious contender to be a very good processor for supercomputing. It has very good floating-point performance, and the EPIC architecture is designed to do very well on Fortran, especially the vectorizable Fortran that is very prevalent in HPC applications. What Linus said is correct in the context of Itanium as a general-purpose processor, but it doesn't give Itanium the credit it's due as a floating-point supercomputer, which is the only place it's going to sell and is what it was designed for.

    It will probably never be very good for most C and C++ apps. Pointer aliasing in particular will give the Itanium compiler fits. Unless you manually tell the compiler that no two pointers access the same memory, the compiler can't safely or effectively pack parallel instructions into the VLIW bundles, and that is essential to good performance in VLIW.

    You really do have to question the sanity of some execs at Intel and HP for spending the staggering sums they've spent on Itanium. Supercomputing just isn't a big enough market for them to have any chance to recoup their investment in our lifetime, and they aren't going to sell it into the mass market, as Linus said.

    For a general-purpose 64-bit processor to run existing C and C++ applications, AMD is going to win hands down. But as many have noted, it's not likely most people are going to really need a 64-bit processor anytime soon, so Intel will probably do just fine selling 32-bit x86 processors for a while.
