
AMD, Transmeta Edge Up In Market Share

prostoalex writes "The new Mercury Research report on the microprocessor market is out, and it looks like the little guys are gaining ground. AMD now owns 15.7% of the market, instead of 15.6% a year ago, while Transmeta and other manufacturers went from 1.7% to 1.8% in a single year. Intel owns 82.5% of the market instead of 82.8% a year ago. News.com.com also notices: 'The competition between the two companies will shift into high gear over the remainder of the year. On Sept. 23, AMD will release the Athlon64, a new desktop chip that can run 32-bit and 64-bit software.'"
  • Hrmm (Score:4, Interesting)

    by acehole ( 174372 ) on Monday August 04, 2003 @06:27AM (#6604459) Homepage
    If AMD are releasing their 64-bit chip early, does Intel have any plans to? Or are they still insisting that desktop users aren't ready for 64-bit chips?

    • Re:Hrmm (Score:5, Interesting)

      by Bloodmoon1 ( 604793 ) <`moc.liamg' `ta' `noirepyh.eb'> on Monday August 04, 2003 @06:47AM (#6604514) Homepage Journal
      Actually, Intel does have the 64-bit Itanium [intel.com] processor for "enterprise solutions". Though, based on the last half of your post, you were asking about desktop processors, in which case the answer probably goes something to the effect of, "Apple has had the G5 [apple.com] for about a month now, AMD will have the Athlon 64 [amd.com] in a month and a half, and we have nothing. Better up the P4 [intel.com] clock rate to 5 GHz in the next 6 months and pray Joe Idiot still thinks it's faster." Just my guess at the next Intel marketing move.
      • Re:Hrmm (Score:4, Insightful)

        by Juanvaldes ( 544895 ) on Monday August 04, 2003 @06:52AM (#6604532)
        I'm a Mac user and all, but really, you will probably be able to get an Athlon 64 at the same time you can walk into an Apple store and buy a G5.
      • Re:Hrmm (Score:5, Insightful)

        by dpilot ( 134227 ) on Monday August 04, 2003 @07:47AM (#6604654) Homepage Journal
        Well, so far Intel is 82.5% right with their strategy, but that's down from being 82.8% right a year ago.

        But speaking of benchmarketing, it would be REALLY fun to see some sort of CPU shootout, *all done with gcc*. Most of us either buy applications, or compile them ourselves, using gcc.

        Really, Spec means very little to us. Quake, Unreal, etc. fps are meaningful to those of us who play those games. To the Linux crowd, at home, at businesses, and at universities, gcc is how we get executables.

        Apple recently got a black eye for using gcc for benchmarking, though perhaps undeservedly. Intel does wonders on benchmarks, but I hear rumblings that they have Spec-tuned compilers that may not yield results as good on things that don't look like Spec.

        When the masses, such as we are, compile, we use gcc. (I agree that most masses just buy Quake, Unreal, Photoshop, etc.) But I argue that a small subset compiles, and a smaller subset yet forks over for commercial compilers.
      • Re:Hrmm (Score:3, Interesting)

        by miketang16 ( 585602 )
        Ha... I would hardly call Itaniums true 64-bit. Look at their first attempt: 48 bits. =p I haven't read up much on the Itanium 2, and although I believe it is 64-bit, it's not the best standard; AMD holds that with their x86-64 arch.
        • If you are referring to address space, AMD's first implementation has a 40-bit physical address space and a 48-bit virtual address space. Their spec states that later implementations will have up to a 52-bit physical address and a 64-bit virtual address.

          I can't speak for G5 or IA64, but they probably have similar limitations, either specific implementation or general design.

      • Apple has had the G5 for about a month now, AMD will have the Athlon 64 in a month and a half, and we have nothing. Better up the P4 clock rate to 5 GHz in the next 6 months and pray Joe Idiot still thinks it's faster.

        Well, what if it actually IS faster? The benchmarks I've seen of the G5 and AMD's 64-bit CPU are not that much higher than the current P4. I happen to think either of those 64-bit chips running Linux (or MacOS, if it has proper 64-bit support) will be super cool, but it's hard to put my fin

      • Ugh. 64-bit != faster
        In fact there is no empirical evidence that any 64-bit implementation is faster at doing <=32-bit calculations.

        What 64-bit means is a radical architectural shift, which affords (both to compiler writers and to the CPU designers' management) an excuse to add other radically new technology that is independent of bit depth. Namely: we're spending the money, might as well pack the punch.

        AMD64, for example, uses a new CPU mode which gives them carte blanche to deprecate / add instruc
      • "Better up the P4 clock rate to 5 GHz in the next 6 months and pray Joe Idiot still thinks it's faster"

        If Joe Idiot thought that it would be faster, he'd probably be right. I think the important question is why you assume that the Athlon 64 will automatically be faster than a P4 at 5GHz. What does "Joe Idiot," doing word processing and browsing the web, need a 64-bit CPU for? Is he creating massive databases?

        'Course, Joe Idiot doesn't need all the power of a 5GHz P4 either. But he should decide between th
    • Re:Hrmm (Score:2, Funny)

      by borgdows ( 599861 )
      or are they still insisting that desktop users aren't ready for 64-bit chips?

      Bill Gates (1985) - "No one needs more than 640kb of memory"

      Intel (2003) - "No one needs more than 32-bit processors"
      • Re:Hrmm (Score:5, Interesting)

        by finallyHasANickname ( 559395 ) on Monday August 04, 2003 @07:09AM (#6604572) Journal
        Not to put too fine a point on it, but don't such questions ultimately redound to philosophy? Who needs a widget? Before scoffing/flaming/shrugging, gimme just a coupla extra sentences' worth.

        I paid more than $100 for the extra 2 megabytes of RAM necessary to get Turbo C++ 3.0 for DOS working on my 10 MHz Cyrix-based AT clone (i.e., i80286, 80286, '286, 286, depending when you "label"). It was worth every penny.

        The thing that might most merit your attention here is something I learned very quickly after getting just the first few programs to work. The permutations of what I could program might as well be considered infinite. Get this: It is difficult to completely rein in (or even fully to comprehend) the vast and diffuse capabilities of a 10 MHz beige box limited to the 80286 instruction set and the bend-over-backwards-in-Protected-Mode 16 MB physical RAM ceiling. This weak, piddly hardware has--I said has, not had--more capability than I could explore in ten lifetimes as a creator of software. When the companies continue to crank out traincar loads of what (for now, in the "Pre Palladium Rollout Era") is still pretty general-purpose hardware, "limitations" are matters of philosophy of science, which is where I started, come to think of it. I guess my age is showing, but I think (that is, when I think well) it is all (literally) awesome, and it has been thus for about a half century and counting.

  • Surely? (Score:5, Interesting)

    by Black Parrot ( 19622 ) on Monday August 04, 2003 @06:28AM (#6604466)


    Surely those 0.1% differences are below the threshold of noise in the marketplace, if not in the sampling methodology?

    BTW, I thought I had heard on the news that AMD was really hurting these days. Again. Anyone know?

    • Re:Surely? (Score:5, Insightful)

      by Sleeper ( 7713 ) on Monday August 04, 2003 @06:32AM (#6604473)
      I guess that means that nothing really changed. That is, the little guys at least stay in the game. Which is probably good news at the moment.
    • Re:Intel (Score:2, Informative)

      by ftvcs ( 629126 )
      I don't see Transmeta and the other manufacturers kicking Intel's ass soon, because they are targeting a smaller set of users with their ~1GHz processors.

      Not yet that is.
    • For Transmeta, that 0.1-point increase in market share is roughly a 6% increase in sales (0.1/1.7 ≈ 5.9%). Granted, it's only significant because their market share is so small, but they definitely have more reason to celebrate than AMD does.
    • Nowhere in the article do they mention if the market share numbers are for "unit volume" or "revenue share". There's a huge difference.

      You could have 50% market share on a unit-volume basis, but if all you're selling are money-losing Durons, then that really doesn't help you much.

      AMD sells their product for substantially less than Intel. You heard right, they are hurting these days. They're bleeding money and have no product that is currently able to command the higher prices of a Pentium 4. They've pr
  • WTF??? (Score:4, Informative)

    by gowen ( 141411 ) <gwowen@gmail.com> on Monday August 04, 2003 @06:28AM (#6604467) Homepage Journal
    0.1 of a percentage point? What's the betting that's *well* inside the bounds of sampling error?

    Nothing to see here, move along.
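
    (For scale: if a share p really were estimated from a random sample of n machines, the standard error would be about sqrt(p(1-p)/n). At p = 0.157, you'd need a sample of roughly half a million machines before a 0.1-point move exceeded two standard errors. Though, as replies below point out, these figures are compiled from reported shipments rather than a sample.)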
    • Re:WTF??? (Score:3, Insightful)

      by sporty ( 27564 )
      That kinda depends. If the data is taken from door to door, yeah. But if it's taken from sales records, no.
      • Re:WTF??? (Score:3, Insightful)

        by gowen ( 141411 )
        But if it's taken from sales records, no
        Whose sales figures? The chip manufacturers can only tell you what they've shipped to PC manufacturers, which isn't the whole story. The PC manufacturers can tell you their sales, but there are rather too many of them to get everyone's figures.

        So, even on sales figures, there are sampling effects.
        • Probably AMD's and/or Intel's. After all, you can't wind up with an AMD or Intel chip on your desktop w/o it originating from them.

          If they cook the books, yeah, you'll have a huge error, but their sales records must be kept proper, w/ no rounding errors. If they make errors, they are paying too much tax and whatnot to the gov't... or too little.

          I doubt the rounding errors on numbers THAT big would be even 1%; they'd be far less significant. But there are lies, bigger lies, and statistics :)
  • Umm (Score:4, Redundant)

    by Michael's a Jerk! ( 668185 ) on Monday August 04, 2003 @06:31AM (#6604472) Homepage Journal
    This increase is tiny - it's not statistically significant. It's smaller than the sampling error.

    That said, I've just bought a Dev Kit [transmeta.com] from Transmeta, and I love it.
  • by MosesJones ( 55544 ) on Monday August 04, 2003 @06:33AM (#6604475) Homepage

    This is what gets me about Transmeta: claiming that because a category called "other" increased by 0.1%, Transmeta must be up...

    How? Transmeta don't have enough sales to get a category of their own; they may have DECREASED their market share but another minor player could have increased theirs, thus making the overall sector go up.

    I know that here at Slashdot we must all bow to the altar of Transmeta, because their processor approach is all open source and they own no patents and follow the OSS way so purely... oh wait, they don't? You mean they do have patents and they don't release their architecture? Oh, it must be because Linux is their primary OS... nope again. No, it's because they gave Linus a job.

    The story here is that Intel remains the massive player, AMD has made some minor inroads but is still not gaining market share the way they would really like, and the figures actually represent a quarter-on-quarter DROP in sales percentage for AMD.

    In other words, AMD have LOST nearly 1% of share over 3 months, which isn't so positive.

    But hey, if we can bash Intel and bump Transmeta, why let the facts get in the way?
    • "they may have DECREASED their marketshare but another minor player could of increased theirs thus making the overall sector go up"

      My guess: VIA.
    • by silvaran ( 214334 ) on Monday August 04, 2003 @07:31AM (#6604620)
      I know that here at Slashdot we must all bow to the altar of Transmeta, because their processor approach is all open source and they own no patents and follow the OSS way so purely... oh wait, they don't? You mean they do have patents and they don't release their architecture? Oh, it must be because Linux is their primary OS... nope again. No, it's because they gave Linus a job.

      Holy chill there, Batman. Take a look at the article, will you? This isn't editorializing or /. elitism or anything you seem to imply; this is paraphrasing. RTFA:

      Other manufacturers, a grouping that includes Transmeta, increased their collective market share from 1.7 percent to 1.8 percent.

      The slashdot summary, meanwhile, says the same thing:

      While Transmeta and other manufacturers went from 1.7% to 1.8% in a single year.

      Tit for tat -- this is the only mention of Transmeta. You read waaaaay too much into it. Take your allegations elsewhere.
  • by oakad ( 686232 ) <oakad@yahoo.com> on Monday August 04, 2003 @06:33AM (#6604476)
    Isn't it terrible that a 30-year-old, not very good architecture has now gained a pass into the 21st century? Was it not enough to extend the 8085 first to 8086, then to 80286, then to 80386, and now to x86-64? When will this end?
    • by BrainInAJar ( 584756 ) on Monday August 04, 2003 @06:38AM (#6604485)
      As soon as users stop caring about their software investments.
      • It would be much better to take an advanced RISC core (like PPC) and add an additional (legacy) instruction decode unit to it (x86 does instruction recoding internally anyway). Simply adding stupid extensions to the old instruction set is not the best policy for anybody.
        • by Malor ( 3658 ) * on Monday August 04, 2003 @07:07AM (#6604565) Journal
          (warning: I'm just tossing this out from memory without doing any double-checking on it first, so read with caution and pay attention to replies.)

          I believe that's basically what they're already doing.

          If I understood what I read correctly, the "X86" CPUs on the market aren't really X86 CPUs anymore. Instead, they are essentially a super-fast hardware emulator of an instruction set. The real instruction set of these chips doesn't resemble X86 *at all*; the chip decodes on the fly from the X86 macro-ops down to the chip's native micro-ops, which are smaller and simpler and easier to track when running in parallel across several execution units.

          That's part of why most software emulation is so slow -- you are in essence comparing generalized software solutions to incredibly well-engineered hardware solutions.

          If we had a different instruction set, would we really benefit? For the vast majority of us, even the Slashdot crowd, no. The compiler guys would probably like it a lot, but very few programmers work in anything lower than C. The actual "machine language" is mostly unimportant. And it's not even REALLY the machine language of the chip anymore!

          Even assembly coders, these days, are writing in a form of interpreted language. The "bare metal" guys aren't REALLY at the bare metal anymore; even they are working at a level of abstraction.

          • by afidel ( 530433 ) on Monday August 04, 2003 @07:38AM (#6604639)
            Not only that, but x86-64 gets rid of most of the really annoying parts of x86 anyway. There are more registers, they are more sanely laid out, and there are multiple sets of them available. All the people moaning about the cruft buildup of x86 living on haven't looked at what AMD did with x86-64. If they are capable of understanding it, they should go and look at the AMD whitepapers; if they aren't, they should stop whining, because it doesn't affect them anyway =)
          • by Ninja Programmer ( 145252 ) on Monday August 04, 2003 @08:42AM (#6604829) Homepage
            If I understood what I read correctly, the "X86" CPUs on the market aren't really X86 CPUs anymore. Instead, they are essentially a super-fast hardware emulator of an instruction set. The real instruction set of these chips doesn't resemble X86 *at all*; the chip decodes on the fly from the X86 macro-ops down to the chip's native micro-ops, which are smaller and simpler and easier to track when running in parallel across several execution units.
            x86 instructions are just the architectural instructions and are not called macro-ops. Intel's notation for their internal instructions is to call them micro-ops. AMD's K6 notation was RISC86 ops, and AMD's Athlon notation was to call them macro-ops.

            However, it is very important to point out that they don't resemble RISC instructions either. Although they have many of the same properties, they can generally be over 150 bits in length, for example. These instructions also don't exist at any code address per se, and thus could not really be considered a full instruction set in and of themselves.

            Another thing that should be pointed out is that modern post-RISC, out-of-order-executing RISCs themselves are also forced to have some kind of alternative internal instruction representation (since some of them perform complex operations, such as the PowerPC's double-write instructions, or any "test-and-set" kind of instruction, and these are stored in internal reorder buffers).
          • So does that mean everything will run faster if, say, we port GCC to compile to the chip's "native" instruction set and then recompile the kernel and all apps?
            • So does that mean everything will run faster if, say, we port GCC to compile to the chip's "native" instruction set and then recompile the kernel and all apps?

              No. The "native instruction set" isn't available directly. The CPU is essentially hard-wired as an x86 emulator. This may sound inefficient, but in reality it works quite well. The real instruction set is essentially designed to take the crufty x86 code and siphon off the bathwater, leaving mostly just the baby; it's not meant for direct programming

          • This is absolutely true; the AMD Athlon and K6, and the NexGen Nx586 with its AMD K5 branding, were all RISC processors with a translation unit.

            The Nx586 actually had the ability to switch from 386 mode to its non-standard RISC instruction set, and there was some talk of making it PowerPC compatible back in the days of OS/2 PowerPC Edition.

            To my knowledge, AMD removed such functionality from the design after they acquired NexGen...
          • If we had a different instruction set, would we really benefit?

            My limited understanding is that some of the architecture/instruction set of x86 makes it difficult to virtualize. Better virtualization could really benefit us--the Slashdot geek crowd--today. Look at VMware.
      • As soon as users stop caring about their software investments.

        But, with the CPU power there is now, why does this have to be an issue anymore? If AMD can make a chip that is 32-bit backwards compatible, why can't there be an in-between chip that moves us to a new architecture? (Yes, yes, I know that having the transistors for a fully backwards-compatible architecture and having those for a new architecture is not the same thing, but don't tell me that it can't be done.)

        And even failing a full
        • Even assuming it _is_ just a recompile that's needed (and, especially for OSes, that is far from the case), you still need to convince those holding the source to actually do it. If MS doesn't see a profitable enough market, they will not "recompile for a different platform", and you are sans Windows for your platform. If Oracle doesn't want to bet on the new architecture, you won't have your database available. The same goes for much of the rest of mainstream computing today.

          OSS is interesting, as it - li
        • by afidel ( 530433 ) on Monday August 04, 2003 @07:15AM (#6604582)
          Intel actually is TRYING to break from x86 with the Itanium line; they invested billions and billions of dollars to do so. They had a hardware x86 emulator tacked on that is so anemic that it is outperformed by a chip two generations older at the same clock speed (a P3 running at Itanium speeds trounces it for legacy 32-bit code); throw in the fact that it is WAY behind the current 32-bit chips in clock speed, and you get a not-so-impressive product if the majority of your code is legacy. Then they decided their software tech was good enough that they could get better performance out of a software translator; they did, but only about 30% faster on average than the hardware unit -- still too slow. Compare this to AMD with the Athlon64/Opteron, which runs 32-bit code at least as fast clock-for-clock as the previous generation (usually faster due to larger cache), and is running at about the same clock speeds. As for software emulation as part of a platform switch, it has been done twice: once by Apple with the 68k->PPC transition (quite successful), and once by DEC and the Alpha team with FX!32, a software translation layer that would dynamically recompile x86 NT4 programs to native Alpha code; it didn't work all that well despite the Alpha being vastly superior to anything Intel made at the time.
        • One of the biggest problems with porting to another instruction set is that ALL drivers would have to be rewritten to get the full potential of the hardware (that is the big deal with AMD64 and the 64-bit edition of Windows). Not that 32-bit drivers would not work on an AMD64 chip, but that they would not work as well. Change that to a completely different instruction set, and you get big problems.

          Just look at the Itanium: big, expensive, LOUSY to program for.

          I think the PPC and AMD64 will merge sometime down the road,

    • > Was it not enough to extend the 8085 first to 8086, than to 80286, than to 80386 and now to x86-64? When will this end?

      With the x86-640KB, if a famous prediction attributed to Bill Gates is true.

    • by turgid ( 580780 ) on Monday August 04, 2003 @06:55AM (#6604537) Journal
      When will this end?

      As soon as there is no longer any money to be made.

    • Aren't the modern chips RISC underneath anyway? The underlying architecture hasn't stayed the same; it's just a compatibility interface. Yes?
      • by maraist ( 68387 ) * <{michael.maraist ... mail.n0spam.com}> on Monday August 04, 2003 @10:46AM (#6605822) Homepage
        Aren't the modern chips RISC underneath anyway?

        A) There is an impedance mismatch between the compiler and the CPU when using x86 assembly.

        A.1) The compiler can have a tremendous understanding of how the code can most efficiently be run under most architectural circumstances, yet it has to assume the most common, dumbest implementation (e.g. should it trust hyper-threading, should it trust AMD's or Intel's number of virtual/renaming registers). Yes, you can recompile DLLs/.so's for each projected architecture, but this is rare.

        A.2) Compilers must masquerade assembly to trick the CPU into operating more efficiently; this requires very CPU-version-specific coding.

        A.3) Newer generations of a CPU will react differently to the masqueraded code, and thus the number of CPU-specific DLLs becomes undesirable.

        B) Extra effort on the compiler/developer side is justifiable (Q3 DLLs for each modern CPU, for example). But there is also effort on the CPU side. This effort exists as extra propagation delays (or worse, clock ticks) spent guessing how best to translate antiquated x86 code into a form that facilitates modern processing techniques. The stack-based floating-point unit, for example, has explicitly documented backward-compatible tricks that tell it how to act more like a register file. There are also data-dependency calculations the CPU must do so that more than the 4/8 architectural registers can be used.

        C) There are enormous losses involved in the memory alignment of instructions. One of the most important aspects of RISC is that all instructions are the same size, so no clocks are wasted figuring out where the next instruction starts (to say nothing of the next 3 parallel instructions). Having a "RISC-like core" is somewhat meaningless if you still have to have the instruction-alignment stage.

        D) Like the I-align, there are wasted propagations/clocks decoding old x86 multi-step instructions. AMD and Intel both refer to vectored instructions: those so complex that they are special-cased, and whose performance is sacrificed for the benefit of simpler instructions. No modern compiler should ever produce these instructions (since they're rather well known), BUT the CPU must still check for them.

        E) Even though the compiler can masquerade code such that the CPU can allocate dozens of registers, there are certain compilation techniques that can only work when you have a large number of addressable registers. Loop unrolling, for example (see the sketch after this comment)... This is where you have, say, a nested loop whose innermost loop is pretty tight. If you have dozens of explicitly addressable registers, and the code doesn't have data-dependency issues, then you can have the inner loop require only a single clock tick per iteration, performing all calculations in parallel into differing registers.

        Modern x86 CPUs can automatically register-unroll only the most trivial loops (memory copies and some slightly harder things whose data dependencies don't span too many instructions). Often a nested loop is written one way for clarity, but a compiler can determine that the nesting can and should be reversed for performance. But if there just aren't enough registers, it is not worth doing so. The CPU cannot make such a dramatic translation behind the scenes.

        F) Calling conventions: easily the biggest hit in performance, since this constitutes an ever-bigger percentage of modern programming (think VMs, where every op-code requires multiple function calls). Larger explicit register sets allow for more optimal setup/tear-down. Some techniques, like Sun's or Itanium's rolling register windows, can also be incredible for diverse-but-not-deep coding styles (again, VMs). Even the simple Alpha/SGI MIPS fixed register sets with dedicated in/out registers are enormously helpful in avoiding memory accesses.

        The x86 with its original 4 regs (with 1 dedicated out) requires stack manipulation. Yes, L0/L1 cache concepts help this, but we still have push/pop stack-management overhead. Pe
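
        A minimal C sketch of the loop-unrolling point in (E), with invented names, just to make it concrete:

          /* Four independent accumulator chains. With x86's 4-8 general
             registers, these chains plus the pointers spill to the stack;
             with 16+ addressable registers they all stay register-resident,
             so the multiplies can issue in parallel. */
          float dot(const float *a, const float *b, int n)
          {
              float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
              int i;
              for (i = 0; i + 4 <= n; i += 4) {
                  s0 += a[i]   * b[i];
                  s1 += a[i+1] * b[i+1];
                  s2 += a[i+2] * b[i+2];
                  s3 += a[i+3] * b[i+3];
              }
              for (; i < n; i++)   /* leftover elements */
                  s0 += a[i] * b[i];
              return (s0 + s1) + (s2 + s3);
          }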
        • A.1) The compiler can have a tremendous understanding of how the code can most efficiently be run under most architectural circumstances, yet it has to assume the most common, dumbest implementation

          That's true for any two chips that run the same architecture; why is it better to have to compile for x86 and MIPS than to compile for x86 and (possibly) x86-64 and have the choice of running the x86 binary on the x86-64?
    • It's not as bad as it seems. As much as it sounds like another extension, x86-64 really is innovative: the memory controller is built into the CPU now, there are tons of new instructions, and 64-bit registers are something every programmer longs for.

      I think introducing some radically different architecture will never work out (Intel kind of proved that); AMD is going in the right direction, innovating inside the box.
      • I think introducing some radically different architecture will never work out (Intel kind of proved that); AMD is going in the right direction, innovating inside the box.

        You can say that again. What plagued the Itanium CPU was that in order to take full advantage of it you had to essentially write code from scratch, which is an extremely expensive investment, to say the least. It didn't help that Itanium pricing is somewhere out in the stratosphere, too. =( Small wonder why it took quite a while
    • Just make sure that as you try to replace the 30-year-old horror known as X86, you're not getting yourself in even deeper. Philosophically, Intel appears historically to have placed hardware squarely in the driver's seat. Perhaps that has even been the right move, as X86 has won out over more software-elegant competition.

      IA64 (or IPF, if you prefer) continues that tradition of designing hardware, and forcing software to accomodate. In this case, they've gone even further in exposing the intimate d
    • Itanium is basically a new design, and it doesn't run much better, does it?

      Motorola and IBM are ahead of Intel when it comes to RISC and 64-bit.
      • Itanium is basically a new design, and it doesn't run much better, does it?

        Actually, the Itanium is a beautiful CPU design (from a compiler standpoint). It is very radical, though I wouldn't say ideal or even best-in-class. My only beef, really, is that it wastes time and effort, and embarrasses itself, by trying to be x86 compatible. There are actually assembly codes in the official documentation devoted to this obviously failed task.

        I don't feel like double checking, but the Itanium was able to achieve rema
    • While people moan about all the architectural defects of the x86 instruction set, they don't want the disruption that recode and recompile requires for the state of the art to evolve to a new architecture.

      Itanium's EPIC (explicitly parallel instruction computing), which requires the development tools to stage instruction organization externally, is a radical departure from conventional CISC/RISC design and generates performance that consistently exceeds virtually all RISC processors.
      (Check the actual benchmarks and you
    • " Is not this terrible that 30 years old, not very good architecture now gained a pass into the 21'st century? Was it not enough to extend the 8085 first to 8086, than to 80286, than to 80386 and now to x86-64? When will this end?"

      When you design a better architecture which can run almost every application made over the past 20 years, and which can be implemented in such a way as to fit the average consumer's computer budget of about $1000 for a complete system while still leaving room for profit.

      Good lu
    • terrible that a 30-year-old, not very good architecture has now gained a pass into the 21st century

      Yeah. Those monolithic *NIX kernels have got to go.

      I'd say more, but I've got to disconnect and pull all the copper wire out of my house right now.

  • by sonicattack ( 554038 ) on Monday August 04, 2003 @06:34AM (#6604478) Homepage
    As far as I understand, the kinds of applications most likely to benefit from going 64-bit are mostly database apps, where access to a 64-bit address space helps when working with huge datasets, and applications doing a lot of integer computation (cryptography?).

    Could anyone point out for me a list of benefits for going 64-bit on the "desktop" too?

    Regards
    • by Nicolas MONNET ( 4727 ) <nicoaltiva@gmai l . c om> on Monday August 04, 2003 @06:36AM (#6604484) Journal
      The next-gen Linux thread library will benefit significantly from having a 64-bit address space, as I remember from reading the whitepaper. Just an example. (On 32-bit Linux, a ~3GB user address space with 8MB default thread stacks caps you at a few hundred threads; 64-bit addressing removes that ceiling.)
    • by afidel ( 530433 ) on Monday August 04, 2003 @07:19AM (#6604591)
      Anything that needs to access files larger than 4GB, or that can use more than 4GB of RAM, will benefit. Desktop programs that fall in this category include anything dealing with video, multitrack audio work, CAD/CAM, rendering, and others. It also makes software design somewhat simpler, because you don't have to worry about paging nearly as much on a 64-bit system.
    • Could anyone point out for me a list of benefits for going 64-bit on the "desktop" too?

      Let's say you're in your cubicle in the year 2015, and someone tells you to write a software application to help manage the virtual DVD player for all the quaint movies. (Who knows? Maybe there will come a day soon when the sum total of, say, AOL/Time/Warner's content can be bought in a boxed set with the box weighing an ounce in its predominant storage media while the content owners will gripe at the "whole farm" be

    • by gfody ( 514448 ) * on Monday August 04, 2003 @07:38AM (#6604641)
      I mainly write asm-optimized graphics routines (DSP filters/analysis/occasional special effects) and I can't wait to get my hands on a 64-bit CPU. The basic strategy behind writing efficient filters is to process a register at a time (32-bit register = 4 8-bit pixels, 2 16-bit pixels, 1 and 1/3 24-bit pixels, or 1 32-bit pixel).

      MMX gives you some 64-bit registers, but you can only use a handful of instructions with them. With 64-bit registers I should be able to double the performance of any filter that isn't already saturating the memory bandwidth (and cut CPU cycles in half regardless), not to mention the new instructions. Anyway, what I'm getting at is that 64 bits will be an extreme improvement in anything DSP-ish (fft/mpeg encoding/streaming music/video/photoshop/filters/effects/etc/etc), but not instantly: most of this stuff is already hand-optimized for 32-bit MMX/SSE and will need to be reoptimized for 64-bit. I doubt recompiling some C++ with a 64-bit compiler is going to get you any free performance... maybe on database apps.
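
      A rough C sketch of that register-at-a-time strategy (SWAR-style; a toy example of mine, not gfody's actual code): averaging two rows of 8-bit pixels, eight per 64-bit word instead of four per 32-bit word.

        #include <stdint.h>

        /* Per byte: floor(x/2) + floor(y/2) + carry of the two low bits.
           The masks keep shifted bits from crossing byte lanes. */
        void average_rows(const uint64_t *a, const uint64_t *b,
                          uint64_t *dst, int nwords)
        {
            const uint64_t low7 = 0x7f7f7f7f7f7f7f7fULL;
            const uint64_t ones = 0x0101010101010101ULL;
            int i;
            for (i = 0; i < nwords; i++)
                dst[i] = ((a[i] >> 1) & low7)
                       + ((b[i] >> 1) & low7)
                       + (a[i] & b[i] & ones);
        }

      The same routine written with uint32_t touches half as many pixels per operation; that factor of two is the doubling described above.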
    • Audio/video editing (Score:5, Interesting)

      by wowbagger ( 69688 ) * on Monday August 04, 2003 @07:47AM (#6604653) Homepage Journal
      If you do any audio/video editing, 64bits is a godsend.

      Consider something relatively simple: transcoding a DV file into an MPEG4 file. For a medium length file you are talking 2-6GB of data.

      Now, for a 32-bit program, the programmer must write his code either to a) process the file as a stream, with little or no memory (which means multiple passes over the file, with a log file to record frame-size data from pass to pass), or b) work through a small window into the file, loading and reloading that window as needed. Neither approach is really friendly to the file system's buffer cache.

      In a 64-bit addressing system, the programmer can simply mmap() the file into his process memory space and let the OS's VM system handle faulting the pieces of the file in and out (a sketch follows this comment). As a result, the OS's buffer cache logic can better manage what parts of the file are cached. Also, from the programmer's perspective the code gets much simpler (and simpler code is better!) - if he wants to access 2 parts of the file at once (for interframe compression, say), he just has 2 pointers. If he wants to seek forward, he increments a pointer. Simple. Easy.

      And lest you say "But that's not something that Joe Average does" - consider the current crop of DV camcorders, DVD burners, and video editing software. Joe Average might not do this yet, but Joe (Average+2*sigma) does, and the threshold is moving downward.

      I expect that when 64 bit Macs and 64 bit MacOS become available, the video editing software on the Mac will become the platinum/iridium standard for the industry.
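
      A bare-bones sketch of the mmap() approach (POSIX; error handling omitted and the file name invented):

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("capture.dv", O_RDONLY);   /* hypothetical DV capture */
            struct stat st;
            fstat(fd, &st);

            /* With 64-bit addressing, the whole multi-GB file fits in one
               mapping; the OS buffer cache pages pieces in and out on demand. */
            const unsigned char *base = mmap(NULL, st.st_size, PROT_READ,
                                             MAP_PRIVATE, fd, 0);

            const unsigned char *cur = base;               /* current frame */
            const unsigned char *ref = base + (1L << 30);  /* a frame ~1GB away */

            /* ... interframe work is now plain pointer arithmetic ... */
            (void)cur; (void)ref;

            munmap((void *)base, st.st_size);
            close(fd);
            return 0;
        }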
      • I think people are starting to realize that audio/video editing has extremely high demands on CPU time.

        This is going to become even more important by 2010 because I actually expect people by then to be burning high-capacity optical discs with HDTV data (720p/1080i uncompressed video) on home machines, and gawd will THAT need a huge amount of CPU processing power.
    • call it me being cheap, but i can't wait for AMD to release 64-bit procs to the desktop. why? cause after a quarter or 2, the cost of Athlon XPs will drop and i'll have a cheaper chip to drop in my mobo.

      that being said, 64-bit processing must be good for desktops, or why would apple have gone with it? the fact that they run a BSD-based os is a Good Thing(TM) because we already know BSDs support 64-bit procs (and winders has no plans to support it till longhorn, IIRC) such that open source will be
      • There will be an AMD64 port of Windows before Longhorn; it is expected in early 2004. Also IA64 and AMD64 versions of the .NET CLR and libraries, so existing .NET applications can be ported with minimal changes, if any.
    • Could anyone point out for me a list of benefits for going 64-bit on the "desktop" too?

      CAD, video recording/editing, 3D games, various scientific applications, software development, etc.

      I am old enough to remember when 32-bit PCs were just coming out; people had exactly the same questions and scepticism. "Who could possibly need 32 bits," and "16-bit processors are faster at the moment anyway."

      There was a small number of wise and insightful people who adopted 32-bits early. The rest of us had egg on our

    • All programs will benefit. 64 bits allows for more than 4 gigs of RAM without the nasty paging crap. There are more registers in the CPU, so programs compiled for it will run much faster. Obviously, it's good for computationally intensive tasks (such as video encoding or 3D graphics). There's a reason why most GPUs are 256 or 512 bits these days -- it really does help.
  • by arcanumas ( 646807 ) on Monday August 04, 2003 @06:35AM (#6604481) Homepage
    Well, it seems that AMD will be doing some serious damage to Intel with its new Opteron. From what I read, sales haven't yet reached their peak and we might expect a new change to these statistics.
    From what I understand, AMD is moving very aggressively right now and Intel has yet to produce a sign of a response.
    One cannot help but wonder what the future will hold...
  • by danormsby ( 529805 ) on Monday August 04, 2003 @06:39AM (#6604488) Homepage
    Are these numbers in dollars or processors?

    I guess Intel would increase its market share if we got stats on the number of transistors sold.

  • Spooky (Score:5, Funny)

    by Anonymous Coward on Monday August 04, 2003 @06:45AM (#6604503)
    If you rearrange the letters in "amd transmeta athlon", you get "a short talent madman"... And here I thought Bill Gates had nothing to do with this.
    • Re:Spooky (Score:5, Funny)

      by CausticWindow ( 632215 ) on Monday August 04, 2003 @08:52AM (#6604886)

      Well, if you rearrange it further you get:

      "Amd not thermal satan"

      How does that fit into your theory?


      • Your theory works and is cool... cool until I get

        "Then... A Mad Anal Storm"

        Which is probably due to too many goatse.cx hidden links clicked in my humble life.
      • Do-It-Yourself Kit (Score:3, Informative)

        by MyHair ( 589485 )
        $an "amd transmeta athlon"

        This assumes you have "an" installed. Debian puts it in /usr/games.

        I got 1,495,995 combinations! Unfortunately you have to weed through them to see what might make sense:

        Rot Manhattan damsel
        Damn anal thermostat
        Matt marshaled no ant
        Toad rant at helmsman
        Tenth NASA marmot lad

        Now to make sure AMD gets in there:

        $ an -c amd "amd transmeta athlon"

        Re: AMD lost Manhattan
        Last 10, AMD Marathon
        AMD Earthman lost tan!
        No Hamlet rants at AMD
        AMD harlots met an ant

        Darn, not enough "s"es to ma
        • Gentoo!! Noooooooooo! You've let me down....
          gentoo root # emerge an
          Calculating dependencies
          emerge: there are no masked or unmasked ebuilds to satisfy "an".

          !!! Error calculating dependencies. Please correct.
          gentoo root #

          I will never be able to look my Debian using friends in the eye again... Please make this your top priority!
  • volumes ? (Score:5, Interesting)

    by mirko ( 198274 ) on Monday August 04, 2003 @06:45AM (#6604505) Journal
    What about sales volume?
    Why do we only have percentages?
    What does this survey count?
    It looks like they forgot ARM's half a billion units, or Motorola's and IBM's increased sales of G[345] procs.

    This 0.1% increase/decrease is insignificant, and this article is as noisy as these meaningless figures.
  • by TrancePhreak ( 576593 ) on Monday August 04, 2003 @06:45AM (#6604506)
    "Floating Point Error found in method to calculate market share." It could happen!
  • by Stubtify ( 610318 ) on Monday August 04, 2003 @06:45AM (#6604508)
    Going from 1.7% to 1.8% is a 6% increase!
  • Light on details.. (Score:5, Interesting)

    by wfberg ( 24378 ) on Monday August 04, 2003 @06:49AM (#6604520)
    Is this market share in units or dollars? AMD's prices are lower, so they may ship more units per percentage point than Intel does. Also, Intel may ship the same number of processors, or even more, but lose a few bucks because people decide against buying bleeding edge and go for Celerons etc.

    Also, which market are we talking about? Xboxes count, but other console chip manufacturers such as Hitachi are not included. Or maybe they're just too cheap and included in the 'other' category?

    Also note that a 0.1-point change doesn't mean anything. 45.63241% of convincing-sounding statistics are too accurate to be true (margin of error 41.553%).

    You'd be better off just looking at the fundamentals of the companies (or their divisions): SEC filings, quarterly results, etc. If you add up all the numbers of the competitors you've compared, hey presto, you can determine their relative market shares in the market comprised of their aggregate customer base.

    Lies, damn lies, and then this!
    • Is this marketshare in units or dollars?

      Regardless of which measurement is used, that same report [extremetech.com] showed that AMD's market share is much lower than in Q3 2001. While it's nice that their situation seems to have stabilised, what counts is whether x86-64 takes off. If it doesn't, they're screwed.
  • by KixXaSs ( 693063 ) on Monday August 04, 2003 @06:57AM (#6604543)
    "the little guys are gaining ground" well my next processor will be one of those "little guys". I especially like the fact that -for example- the VIA C3 generates LESS much heat than amds or intels, which is a good thing for silent computing. For day-to-day work those CPUs should be enough. Maybe more ppl think like me and thats why the smaller chip producers gain ground. :) just my .2 cent
  • by arekusu ( 159916 ) on Monday August 04, 2003 @07:01AM (#6604550) Homepage
    Wait, what market are we talking about?

    Oh, right: "Mercury's numbers include so-called x86 processors shipped for inclusion in desktops, notebooks, servers and Xboxes."

    So, these numbers don't tell us anything about the chips in Macs, Suns, SGIs, mainframes, Crays, PlayStations, Palms, VCRs, cars, vacuum cleaners, or toaster ovens. Just the Wintel stuff.
  • Alternatives (Score:4, Interesting)

    by maroberts ( 15852 ) on Monday August 04, 2003 @07:17AM (#6604587) Homepage Journal
    So what alternatives are there to Intel? I'm obviously aware of AMD, but what other contenders are there?

    I'd be particularly interested in anything which can provide approximately Athlon XP 1800+ performance with low heat output and comparable cost, since I'd like to build a PVR that is as silent as possible.

    Obviously low-noise fans are needed, but I suppose the other alternative is to water-cool it.
  • by Junior J. Junior III ( 192702 ) on Monday August 04, 2003 @07:27AM (#6604613) Homepage
    Ok, these are, technically, gains in market share for AMD and Transmeta, but they're so small that they are statistically insignificant, aren't they? Why is this article not saying that market share is more or less stagnant?
    • What's significant is that these companies have survived one of the worst tech retractions in memory. Not to mention that AMD was selling old, non-competitive technology. I think at this point there is nowhere to go but up.
    • but they're so small that they are statistically insignificant, aren't they?

      No, because the numbers given are not statistical estimates based on a sample. Measures of statistical significance only apply to results that are obtained by sampling a population and then using statistical methods to draw conclusions about the population as a whole. In this case the figures are based on the total sales reported by various companies. No sampling was involved.

      Of course you might argue that 0.1% of a given market
    • Ok, these are, technically, gains in market share for AMD and Transmeta, but they're so small that they are statistically insignificant, aren't they?

      I don't think market-share figures are statistics. They are derived from sales figures. Statistics implies a sample; that's where the error comes from. There's no error if you're going off 100% of the data.

  • by shoppa ( 464619 ) on Monday August 04, 2003 @07:55AM (#6604673)
    Because "Market share" is by total dollars sold, and not by numbers of processors sold, Intel gets a very real boost in these figures.

    OTOH, the low-end sellers (like VIA and Transmeta, who target set-top boxes and embedded devices) end up underrepresented, because their processors are so cheap (or in some cases not even sold at retail).

    Now clearly, this is a business report, so only those who make big bucks count there. I'm just pointing out that the methodology, by design, ignores the trend towards lower-cost pervasive computing.

  • by adzoox ( 615327 ) * on Monday August 04, 2003 @08:18AM (#6604744) Journal
    Okay, first of all, skewed stats come out all the time claiming that Apple only has 3-4% market share, when that is quarterly sales of a MUCH larger pie than 10 years ago, when market share was in the 20s. Actual SALES volume of Apple computers has remained relatively flat or even increased. Actual MARKET share of Apple (installed base) hovers at around 11%. --- Do you honestly think only 3% of the USA is buying 8.3 million iTunes songs?

    So by this report, IBM & Motorola have a 0% market share, because the total adds up to 100. Moto and IBM make LOTS of CPUs for computers OTHER than Apple's as well. This is another statistic probably paid for and sponsored by Intel, just as the billionth-processor news was.

    • The view of this (and other reports) is towards the "big fish in the big pond" - in other words, Intel. Anything not Intel-related is simply not on their radar.

      It sucks, but that's the way it is.

      In total units sold, by far the biggest-selling microprocessors are 8051 derivatives; there are literally billions sold every year. But these aren't 80x86 compatible, so they don't even know how to classify those sales.

  • How long till "classic x86 DOS" shows up when we search for "emulator" and "romz"? I miss Prince of Persia...
  • Hmmm...

    Looks to me like all these numbers say is that Intel's market share dropped by 0.4% of its total over the last year. That's not much of a loss. AMD's market share went up by one tenth of a percentage point, for a relative increase of 0.6%. That's not much of a gain. Considering that AMD is supposed to be offering better chips at a more reasonable cost, it seems to me that it must be doing something wrong to have overall growth that's so lousy. At this rate, it will take over a thousand years for Intel to g
  • .1% a year. Heh...

    At this rate, AMD only needs another 668 years ((82.5 - 15.7) / 0.1 points per year) to get to where Intel is right now.

    Not bad, at least we're making progress...'gaining ground', as they call it.

    Too bad the snails in my backyard gain ground even faster.
  • Occasionally, the markets operate in a way that defies the observations of conventional pundits. Conventional wisdom says that the primary competitor of the Athlon64 will be the Deerfield (an Itanium chip). Both are 64-bit chips, and both target the same desktop market.

    However, IBM's recent entry, the PPC970, has radically altered the desktop landscape. The new Apple computers powered by the PPC970 are genuine workstations sold as desktops. The Ars Technica article [arstechnica.com] indicates that the SPEC2000 perform
