AMD, Transmeta Edge Up In Market Share 206
prostoalex writes "The new Mercury Research report on the microprocessor market is out, and it looks like the little guys are gaining ground. AMD now owns 15.7% of the market, instead of 15.6% a year ago, while Transmeta and other manufacturers went from 1.7% to 1.8% in a single year. Intel owns 82.5% of the market instead of 82.8% a year ago. News.com.com also notices: 'The competition between the two companies will shift into high gear over the remainder of the year. On Sept. 23, AMD will release the Athlon64, a new desktop chip that can run 32-bit and 64-bit software.'"
Hrmm (Score:4, Interesting)
Re:Hrmm (Score:5, Interesting)
Re:Hrmm (Score:4, Insightful)
Re:Hrmm (Score:5, Insightful)
But speaking of benchmarketing, it would be REALLY fun to see some sort of CPU shootout, *all done with gcc*. Most of us either buy applications, or compile them ourselves, using gcc.
Really, Spec means very little to us. Quake, Unreal, etc fps are meaningful to those of us that play those games. To the Linux crowd, at home, business, and universities, gcc is how we get executables.
Apple recently got a black eye for using gcc for benchmarking, but perhaps undeservedly. Intel does wonders on benchmarks, but I hear rumblings that they have Spec-tuned compilers that may not yield results as good on things that don't look like Spec.
When the masses, such as we are, compile, we use gcc. (I agree that most masses just buy Quake, Unreal, Photoshop, etc.) But I argue that a small subset compiles, and a smaller subset yet forks over for commercial compilers.
Re:Hrmm (Score:3, Interesting)
Re:Hrmm (Score:2)
I can't speak for G5 or IA64, but they probably have similar limitations, either specific implementation or general design.
Re:Hrmm (Score:2)
Well, what if it actually IS faster? The benchmarks I've seen of the G5 and AMD 64 bit CPU are not that much higher than the current P4. I happen to think either of those 64 bit chips running Linux (or MacOS, if it has proper 64 bit support) will be super cool, but it's hard to put my fin
Re:Hrmm (Score:2)
In fact there is no empirical evidence that any 64bit implementation is faster at doing <=32bit calculations.
What 64bit means is a radical architectural shift, which affords (both to compiler writers and to the CPU designers' management) an excuse to add other radically new technology that is independent of bit-depth. Namely, we're spending the money, might as well pack the punch.
AMD64, for example, uses a new CPU mode which gives them carte blanche to deprecate / add instruc
Re:Hrmm (Score:2)
If Joe Idiot thought that it would be faster, he'd probably be right. I think the important question is why do you assume that the Athlon 64 will automatically be faster than a P4 5GHz? What does "Joe Idiot," doing word processing and browsing the web, need a 64 bit CPU for? Is he creating massive databases?
Course, Joe Idiot doesn't need all the power of a 5GHz P4 either. But he should decide between th
Re:Hrmm (Score:2, Funny)
Bill Gates (1985) - "No one needs more than 640kb of memory"
Intel (2003) - "No one needs more than 32-bit processors"
Re:Hrmm (Score:5, Interesting)
I paid more than $100 for the extra 2 megabytes of RAM necessary to get Turbo C++ 3.0 for DOS working on my 10 MHz Cyrix-based AT clone (i.e., i80286, 80286, '286, 286, depending when you "label"). It was worth every penny.
The thing that might most merit your attention here is something I learned very quickly after getting just the first few programs to work. The permutations of what I could program might as well be considered infinite. Get this: It is difficult to completely rein in (or even fully comprehend) the vast and diffuse capabilities of a 10 MHz beige box limited to the 80286 instruction set and bend-over-backwards-in-the-Protected-Mode 16 MB of RAM physical ceiling. This weak piddly hardware has--I said has, not had--more capability than I could explore in ten lifetimes as a creator of software. When the companies continue to crank out traincar loads of what (for now in the "Pre Palladium Rollout Era") is still pretty general-purpose hardware, "limitations" are matters of philosophy of science, which is where I started, come to think of it. I guess my age is showing, but I think (that is, when I think well) it is all (literally) awesome, and it has been thus for about a half century and counting.
Surely? (Score:5, Interesting)
Surely those 0.1% differences are below the threshold of noise in the marketplace, if not in the sampling methodology?
BTW, I thought I had heard on the news that AMD was really hurting these days. Again. Anyone know?
Re:Surely? (Score:5, Insightful)
Re:Intel (Score:2, Informative)
Not yet, that is.
no time for champagne -- break out the water! (Score:2)
Re:Surely? (Score:2)
You could have 50% market share on a unit volume basis, but if all you're selling are money losing Durons, then that really doesn't help you much.
AMD sells their product for substantially less than Intel. You heard right, they are hurting these days. They're bleeding money and have no product that is currently able to command the higher prices of a Pentium4. They've pr
WTF??? (Score:4, Informative)
Nothing to see here, move along.
Re:WTF??? (Score:3, Insightful)
Re:WTF??? (Score:3, Insightful)
So, even on sales figures, there are sampling effects.
Re:WTF??? (Score:2)
If they cook the books, yeah, you'll have a huge error, but their sales records must be kept proper, w/ no rounding errors. If they make errors, they are paying too much in taxes and whatnot to the gov't... or too little.
I doubt the rounding errors on numbers THAT big would even be 1%; more likely far less significant. But there are lies, bigger lies, and statistics.
Umm (Score:4, Redundant)
That said, I've just bought a Dev Kit [transmeta.com] from Transmeta, and I love it.
Other and Transmeta... (Score:5, Insightful)
This is what gets me about Transmeta: claiming that they increased their share when a category called "other" went up 0.1% hardly means that Transmeta is up...
How? Transmeta don't have enough sales to get a category of their own. They may have DECREASED their market share while another minor player increased theirs, thus making the overall sector go up.
I know that here at Slashdot we must all bow to the altar of Transmeta because their processor approach is all open sourced and they own no patents and follow the OSS way so purely... oh wait they don't ? You mean they do have patents and they don't release their architecture ? Oh it must be because Linux is their primary OS... nope again. No its because they gave Linus a job.
The story here is that Intel remains the massive player, AMD has made some minor in-roads but is still not gaining market share in the way they would really like, and the figures actually represent a quarter-on-quarter DROP in sales percentage for AMD.
In other words, a way to say this is that AMD have LOST nearly 1% of share over 3 months, which isn't so positive.
But hey, if we can bash Intel and bump Transmeta why let the facts get in the way.
Re:Other and Transmeta... (Score:3, Insightful)
My guess, VIA
Re:Other and Transmeta... (Score:3, Informative)
My guess, VIA
Seconded. Their C3 &c chips on the mini-itx boards are going from strength to strength
Re:Other and Transmeta... (Score:5, Informative)
Holy chill there batman. Take a look at the article, will you? This isn't editorializing or
Other manufacturers, a grouping that includes Transmeta, increased their collective market share from 1.7 percent to 1.8 percent.
The slashdot summary, meanwhile, says the same thing:
While Transmeta and other manufacturers went from 1.7% to 1.8% in a single year.
Tit for tat -- this is the only mention of Transmeta. You read waaaaay too much into it. Take your allegations elsewhere.
Hint (Score:3, Interesting)
Re:Other and Transmeta... (Score:2)
x86-64 - horror strikes again (Score:4, Interesting)
Re:x86-64 - horror strikes again (Score:5, Insightful)
Re:x86-64 - horror strikes again (Score:3, Insightful)
Re:x86-64 - horror strikes again (Score:5, Insightful)
I believe that's basically what they're already doing.
If I understood what I read correctly, the "X86" CPUs on the market aren't really X86 CPUs anymore. Instead, they are essentially a super-fast hardware emulator of an instruction set. The real instruction set of these chips doesn't resemble X86 *at all*; the chip decodes on the fly from the X86 macro-ops down to the chip's native micro-ops, which are smaller and simpler and easier to track when running in parallel across several execution units.
That's part of why most software emulation is so slow -- you are in essence comparing generalized software solutions to incredibly well-engineered hardware solutions.
If we had a different instruction set, would we really benefit? For the vast majority of us, even the Slashdot crowd, no. The compiler guys would probably like it a lot, but very few programmers work in anything lower than C. The actual "machine language" is mostly unimportant. And it's not even REALLY the machine language of the chip anymore!
Even assembly coders, these days, are writing in a form of interpreted language. The "bare metal" guys aren't REALLY at the bare metal anymore; even they are working at a level of abstraction.
Re:x86-64 - horror strikes again (Score:5, Informative)
Re:x86-64 - horror strikes again (Score:4, Interesting)
However, it is very important to point out that they don't resemble RISC instructions either. Although they have many of the same properties, they generally can be over 150 bits in length, for example. These instructions also don't exist at any code address per se, and thus could not really be considered a full instruction set in and of themselves.
Another thing that should be pointed out is that modern post-RISC out-of-order executing RISCs themselves are also forced to have some kind of alternative instruction set representation as well (since some of them perform complex operations, such as the PowerPC's double write instructions, or any "test-and-set" kind of instructions, and they are stored in internal reorder buffers)
Re:x86-64 - horror strikes again (Score:2)
Re:x86-64 - horror strikes again (Score:2)
No. The "native instruction set" isn't available directly. The CPU is essentially hard-wired as an x86 emulator. This may sound inefficient, but in reality it works quite well. The real instruction set is essentially designed to take the crufty x86 code and siphon off the bathwater, leaving mostly just the baby; it's not meant for direct programming
Re:x86-64 - horror strikes again (Score:2)
The Nx586 actually had the ability to switch from 386 mode to its non-standard RISC instruction set, and there was some talk of making it PowerPC compatible back in the days of OS/2 PowerPC Edition.
To my knowledge, AMD removed such functionality from the design after they acquired NexGen...
Re:x86-64 - horror strikes again (Score:2)
My limited understanding is that some of the architecture/instruction set of x86 makes it difficult to virtualize. Better virtualization could really benefit us--the Slashdot geek crowd--today. Look at VMware.
Re:x86-64 - horror strikes again (Score:3, Interesting)
But, with the CPU power that there is now, why does this have to be an issue anymore? If AMD can make a chip that is 32-bit backwards compatible, why can't there be an in-between chip that moves us to a new architecture? (Yes yes, I know that having the transistors for a fully backwards-compatible architecture and having those for a new architecture is not the same thing, but don't tell me that it can't be done.)
And even failing a full
Re:x86-64 - horror strikes again (Score:3, Informative)
OSS is interesting, as it - li
Re:x86-64 - horror strikes again (Score:5, Insightful)
One word - DRIVERS (Score:2)
Just look at the Itanium: big, expensive, LOUSY to program for.
I think the PPC and AMD64 will merge sometime down the road,
Re: x86-64 - horror strikes again (Score:2, Funny)
> Was it not enough to extend the 8085 first to 8086, than to 80286, than to 80386 and now to x86-64? When will this end?
With the x86-640KB, if a famous prediction attributed to Bill Gates is true.
Re:x86-64 - horror strikes again (Score:4, Funny)
As soon as there is no longer any money to be made.
Re:x86-64 - horror strikes again (Score:2, Informative)
Re:x86-64 - horror strikes again (Score:4, Informative)
A) There is an impedance mismatch between the compiler and the CPU when using x86 assembly.
A.1) The compiler can have a tremendous understanding of how the code can most efficiently be run under most architectural circumstances, yet has to assume the lowest-common-denominator implementation (e.g. should it trust hyper-threading, should it trust AMD's or Intel's number of virtual/renaming registers). Yes, you can recompile dll's/.so's for each projected architecture, but this is rare.
A.2) Compilers must masquerade assembly to trick the CPU into operating more efficiently; this requires very CPU-version-specific coding.
A.3) Newer generations of a CPU will react differently to the masqueraded code, and thus the number of CPU-specific DLL's becomes undesirable.
B) Extra effort on the compiler/developer side is justifiable (Q3 DLL's for each modern CPU, for example). But there is also effort on the CPU side. This effort exists as extra propagation delays (or worse, clock ticks) that are spent guessing how best to translate antiquated x86 code into a form that facilitates modern processing techniques. Stack-based floating point, for example, has explicitly documented backward-compatible tricks which tell the CPU how to act more like a register file. There are issues with data-dependency calculations in the CPU before more than 4/8 general registers can be used.
C) There are enormous losses involved in memory alignment of the instructions. One of the most important aspects of RISC is that all instructions are the same size, so no clocks are wasted figuring out what the next instruction is (to say nothing of the next 3 parallel instructions). Having a "RISC-like core" is somewhat meaningless if you still have to do the instruction-align.
D) Like the I-align, there are wasted propagations/clocks decoding old x86 multi-step instructions. AMD/Intel both refer to the vectored instructions: those that are so complex that they are special-cased and whose performance is sacrificed to the benefit of simpler instructions. No modern compiler should ever produce these instructions (since they're rather well known), BUT the CPU must still check for them.
E) Even though the compiler can masquerade code such that the CPU can allocate dozens of registers, there are certain compilation techniques that can only work when you have a large number of addressable registers. Loop unrolling, for example... This is where you have, say, a nested loop and your inner-most loop is pretty tight. If you have dozens of explicitly addressable registers, and the code doesn't have data-dependency issues, then you can have the inner loop require only a single clock tick per iteration, performing all calculations in parallel and into differing registers.
Modern x86 CPU's can automatically register-unroll only the most trivial loops (memory copies and some slightly harder things whose data dependencies don't span too many instructions). Often a nested loop is written one way for clarity, but a compiler can determine that the nesting can and should be reversed for performance. But if there just aren't enough registers, it is not worth doing so. The CPU cannot make such a dramatic translation behind the scenes.
F) Calling conventions: easily the biggest hit in performance, since this constitutes an ever bigger percentage of modern programming (think VM's where every op-code requires multiple function calls). Larger explicit register sets allow for more optimal setup/tear-down. Some techniques like Sun's or Itanium's rolling windows can also be incredible for diverse-but-not-deep coding styles (again VMs). Even simple Alpha/SGI MIPS constant reg-sets with dedicated in/out registers are enormously helpful in avoiding memory access.
The x86 with its original 4 regs (with 1 dedicated out) requires stack manipulation. Yes, L0/L1 cache concepts help this, but we still have push/pop stack management overhead. Pe
Re:x86-64 - horror strikes again (Score:2)
That's true for any two chips that run the same architectures; why is it better to have to compile for x86 and MIPS than to compile for x86 and (possibly) x86-64 and have the choice of running the x86 binary on the x86-64?
Re:x86-64 - horror strikes again (Score:3, Informative)
I think introducing some radically different architecture will never work out (Intel kind of proved that); AMD is going the right direction, innovating inside the box.
Intel Itanium is not really a success. (Score:3, Interesting)
You can say that again. What plagued the Itanium CPU was that in order to take full advantage of the CPU you had to essentially write code from scratch, which is an extremely expensive investment, to say the least. It didn't help that the Itanium CPU pricing is somewhere out in the stratosphere, too. =( Small wonder why it took quite a while
Re:x86-64 - horror strikes again (Score:2)
IA64 (or IPF, if you prefer) continues that tradition of designing hardware and forcing software to accommodate. In this case, they've gone even further in exposing the intimate d
Re:x86-64 - horror strikes again (Score:2)
Motorola and IBM are ahead of Intel when it comes to RISC and 64-bit.
Re:x86-64 - horror strikes again (Score:2)
Actually, the Itanium is a beautiful CPU design (from a compiler standpoint). It is very radical, though I wouldn't say ideal or even best-in-class. My only beef, really, is that it wastes time, effort, and embarrassment by trying to be x86 compatible. There are actually assembly codes in the official documentation devoted to this obviously failed task.
I don't feel like double checking, but the Itanium was able to achieve rema
Re:x86-64 - horror strikes again (Score:2)
Itanium's EPIC (explicitly parallel instruction computing), which requires development tools to stage external instruction organization, is a radical departure from conventional CISC/RISC design and generates performance that consistently exceeds virtually all RISC processors.
(Check the actual benchmarks and you
Re:x86-64 - horror strikes again (Score:2)
When you design a better architecture which can run almost every application made over the past 20 years, and which can be implemented in such a way as to fit the average consumer's computer budget of about $1000 for a complete system while still leaving room for profit.
Good lu
Re:x86-64 - horror strikes again (Score:2)
It's terrible that this 30-year-old, not very good architecture has now gained a pass into the 21st century
Yeah. Those monolithic *NIX kernels have got to go.
I'd say more, but I've got to disconnect and pull all the copper wire out of my house right now.
Re:x86-64 - horror strikes again (Score:2)
Yes there really was a 80186 but it was never used officially by IBM in any PC model, hence very few clone makers used it either.
An ex-colleague had an 80186 PC in that small timeframe before the 80286 was available. He had been a contractor at the time, and just had to have the latest bit of kit. He used the 186 box for a few months and then it went into the back of his garage when the 286 came out. When he told me this about six years ago I thought he was bullshitting me, but he dug the machine out to
64-bit apps/CPU on the desktop (Score:4, Interesting)
Could anyone point out for me a list of benefits for going 64-bit on the "desktop" too?
Regards
Re:64-bit apps/CPU on the desktop (Score:5, Interesting)
Re:64-bit apps/CPU on the desktop (Score:5, Informative)
Re:64-bit apps/CPU on the desktop (Score:2)
Re:64-bit apps/CPU on the desktop (Score:2, Interesting)
Let's say you're in your cubicle in the year 2015, and someone tells you to write a software application to help manage the virtual DVD player for all the quaint movies. (Who knows? Maybe there will come a day soon when the sum total of, say, AOL/Time/Warner's content can be bought in a boxed set with the box weighing an ounce in its predominant storage media while the content owners will gripe at the "whole farm" be
Re:64-bit apps/CPU on the desktop (Score:5, Informative)
mmx gives you some 64bit registers but you can only use a handful of instructions with these. with 64bit registers I should be able to double the performance on any filter that isn't already saturating the memory bandwidth (and cut cpu cycles in half regardless). not to mention the new instructions.. ah, anyways what I'm getting at is 64bits will be an extreme improvement in anything dspish (fft/mpeg encoding/streaming music/video/photoshop/filters/effects/etc/etc) but not instantly. most of this stuff is already hand optimized for 32bit mmx/sse and will need to be reoptimized for 64bit. I doubt recompiling some c++ with a 64bit compiler is going to get you any free performance.. maybe on database apps
Audio/video editing (Score:5, Interesting)
Consider something relatively simple: transcoding a DV file into an MPEG4 file. For a medium length file you are talking 2-6GB of data.
Now, for a 32 bit program, the programmer must either a) process the file in a stream, with little or no memory (which means multiple passes over the file with a log file to record frame size data from pass to pass), or b) work through a small window into the file, loading and reloading that window as needed. Neither approach is really friendly to the file system buffer cache.
In a 64 bit addressing system, the programmer can simply mmap() the file into his process memory space, and let the OS's VM system handle faulting the pieces of the file in and out. As a result, the OS's buffer cache logic can better manage what parts of the file are cached. Also, from the programmer's perspective the code gets much simpler (and simpler code is better!) - if he wants to access 2 parts of the file at once (for interframe compression, say), he just has 2 pointers. If he wants to seek forward, he increments a pointer. Simple. Easy.
And lest you say "But that's not something that Joe Average does" - consider the current crop of DV camcorders, DVD burners, and video editing software. Joe Average might not do this yet, but Joe (Average+2*sigma) does, and the threshold is moving downward.
I expect that when 64 bit Macs and 64 bit MacOS become available, the video editing software on the Mac will become the platinum/iridium standard for the industry.
Re:Audio/video editing (Score:2)
This is going to become even more important by 2010 because I actually expect people by then to be burning high-capacity optical discs with HDTV data (720p/1080i uncompressed video) on home machines, and gawd will THAT need a huge amount of CPU processing power.
Re:64-bit apps/CPU on the desktop (Score:2, Insightful)
that being said, 64bit processing must be good for desktops or why would apple have gone with it? the fact that they run a BSD based os is a Good Thing(TM) because we already know BSD's will support 64bit procs already (and winders has no plans to support it till longhorn, IIRC) such that open source will be
Re:64-bit apps/CPU on the desktop (Score:2, Interesting)
Re:64-bit apps/CPU on the desktop (Score:2)
CAD, video recording/editing, 3D games, various scientific applications, software development etc.
I am old enough to remember when 32-bit PCs were just coming out; people had exactly the same questions and scepticism. "Who could possibly need 32 bits," and "16-bit processors are faster at the moment anyway."
There was a small number of wise and insightful people who adopted 32-bits early. The rest of us had egg on our
Re:64-bit apps/CPU on the desktop (Score:2)
Re:64-bit apps/CPU on the desktop (Score:2)
Amd has the Opteron Weapon. (Score:4, Interesting)
From what I understand, AMD is moving very aggressively right now and Intel has yet to produce a sign of response.
One cannot help but wonder what the future will hold....
Transistor stats (Score:4, Funny)
I guess Intel would increase market share if we get stats on number of transistors sold.
Spooky (Score:5, Funny)
Re:Spooky (Score:5, Funny)
Well, if you rearrange it further you get:
"Amd not thermal satan"
How does that fit into your theory?
Re:Spooky (Score:2)
Your theory works and is cool, cool until I get
"Then... A Mad Anal Storm"
Which is probably due to too many goatse.cx hidden links clicked in my humble life.
Do-It-Yourself Kit (Score:3, Informative)
This assumes you have "an" installed. Debian puts it in
I got 1,495,995 combinations! Unfortunately you have to weed through them to see what might make sense:
Rot Manhattan damsel
Damn anal thermostat
Matt marshaled no ant
Toad rant at helmsman
Tenth NASA marmot lad
Now to make sure AMD gets in there:
$ an -c amd "amd transmeta athlon"
Re: AMD lost Manhattan
Last 10, AMD Marathon
AMD Earthman lost tan!
No Hamlet rants at AMD
AMD harlots met an ant
Darn, not enough "s"es to ma
Re:Do-It-Yourself Kit (Score:2)
gentoo root # emerge an
Calculating dependencies
emerge: there are no masked or unmasked ebuilds to satisfy "an".
!!! Error calculating dependencies. Please correct.
gentoo root #
I will never be able to look my Debian using friends in the eye again... Please make this your top priority!
Re:Do-It-Yourself Kit (Score:2)
I found it by searching Debian's packages for "anagram". It doesn't appear to be in Freshmeat.
volumes ? (Score:5, Interesting)
Why do we only have percentages ?
What does this survey count ?
It looks like they forgot ARM's half a billion units, or Motorola's and IBM's increased sales of G[345] procs.
This 0.1% increase/decrease is insignificant, and this article is as noisy as these meaningless figures.
x86 processors (Score:2)
(Trying to pull up Cyrix to see if they still make anything x86, but the page isn't loading.)
In Other News (Score:5, Funny)
Wow Transmeta's popularity is through the roof!! (Score:5, Funny)
Light on details.. (Score:5, Interesting)
Also, which market are we talking about? Xboxes count, but other console chip manufacturers such as Hitachi are not included. Or maybe they're just too cheap and included in the 'other' category?
Also note that a 0.1%point change doesn't mean anything. 45.63241% of convincing sounding statistics are too accurate to be true (margin of error 41.553%).
You'd be better off just looking at the fundamentals of the companies (or their divisions), like SEC filings, quarterly results etc. If you add up all the numbers of the competitors you've compared, hey presto, you can determine their relative market shares in the market comprised of their aggregate customer base.
Lies, damn lies, and then this!
Re:Light on details.. (Score:2)
Regardless of which measurement is used, that same report [extremetech.com] showed that AMD's market share is much lower than in Q3 2001. While its nice that their situation seems to have stabilised, what counts is whether X86-64 takes off. If it doesn't, they're screwed.
Re:Light on details.. (Score:2)
It may well be significant. Let's rewrite these in terms of actual numbers sold. At a guess, say 10,000,000 chips sold. Now 1.7% of 10 million is 170,000, and 1.8% is 180,000. Would you say the difference between 170,000 and 180,000 is significant? As another post said, that's a 6% increase, enough to put a smile on Linus's face.
Percentages are not a golden sta
different CPUs, different appliances (Score:5, Interesting)
Not just good for silent computing. (Score:2)
"Microprocessor Market"? (Score:5, Insightful)
Oh, right: "Mercury's numbers include so-called x86 processors shipped for inclusion in desktops, notebooks, servers and Xboxes."
So, these numbers don't tell us anything about the chips in Macs, Suns, SGIs, mainframes, Crays, Playstations, Palms, VCRs, cars, vacuum cleaners, or toaster ovens. Just that Wintel stuff.
Alternatives (Score:4, Interesting)
I'd be particularly interested in anything which can provide approximately Athlon XP1800 performance with low heat output and comparable cost, since I'd like to build a PVR which is as silent as possible.
Obviously low noise fans are needed, but I suppose the other alternative is to water cool it.
How is 0.1% significant? (Score:5, Insightful)
Re:How is 0.1% significant? (Score:2)
Re:How is 0.1% significant? (Score:2)
No, because the numbers given are not statistical estimates based on a sample. Measures of statistical significance only apply to results that are obtained by sampling a population and then using statistical methods to draw conclusions about the population as a whole. In this case the figures are based on the total sales reported by various companies. No sampling was involved.
Of course you might argue that 0.1% of a given market
Re:How is 0.1% significant? (Score:2)
I don't think marketshare figures are statistics. They are derived from sales figures. Statistics implies a sample; that's where the error comes from. There's no error if you're going off 100% of the data.
"Market share" favors big-bucks Intel processors (Score:5, Interesting)
OTOH the low-end sellers (like Via and Transmeta who target set-top and embedded devices) end up underrepresented because their processors are so cheap (or in some cases not even sold at retail).
Now clearly, this is a business report so only those who make big bucks count there. I'm just pointing out that the methodology, by design, ignores trends towards lower-cost pervasive computing.
So, by this information Apple has 0% Market share (Score:5, Informative)
So by this report, IBM & Motorola have a 0% market share, because the total adds up to 100%. Moto and IBM make LOTS of CPU's for computers OTHER than Apple as well. This is another statistic probably paid for and sponsored by Intel, just as the billionth-processor news was.
Re:So, by this information Apple has 0% Market sha (Score:2)
It sucks, but that's the way it is.
In total units sold, by far the biggest selling microprocessors are 8051 derivatives, there are literally billions sold every year. But these aren't 80x86 compatible so they don't even know how to classify these sales.
Anyone wanna take a guess? (Score:2)
Gaining ground? (Score:2)
Looks to me like all these numbers say is that Intel market share dropped by 0.4% of its total over the last year. That's not much of a loss. AMD's market share went up by one tenth of a percent for a percentage increase of 0.6%. That's not much of a gain. Considering that AMD is supposed to be offering better chips at a more reasonable cost, it seems to me that it must be doing something wrong to have an overall growth that's so lousy. At this rate, it will take over a thousand years for Intel to g
Awesome! Just another 66.8% to go... (Score:2)
At this rate, AMD only needs another 668 years to get to where Intel is right now.
Not bad, at least we're making progress...'gaining ground', as they call it.
Too bad the snails in my backyard even gain ground faster.
Athlon64 will be Crushed by PPC970, not Deerfield (Score:2)
However, IBM's recent entry, the PPC970, has radically altered the desktop landscape. The new Apple computers powered by the PPC970 are genuine workstations sold as desktops. The Ars Technica article [arstechnica.com] indicates that the SPEC2000 perform
Re:noise, heat and damage.... (Score:5, Interesting)
If yours have been overheating like this then you've installed it incorrectly, simple as that. The current retail (read, cheap) heatsink/fan combos AMD ship with are already quiet - and plenty of aftermarket quieter ones are available if you want near-silent.
I've had 1700's overclocked to 2200 speeds running in a normal mini tower with only a single case fan to ventilate the case and they typically hit mid-fifties *at the most* under load, well within normal specs. They also work fine up into the 80s if you really want to push them.
If you want to get really paranoid about heat, make sure your case is well vented and stick a zalman flower passive cooler on it.
Re:AMD's so called "performance". (Score:2)
To me the real question is how fast c