More Details Emerge on AMD's Hammer 396
Diabolus writes "Anandtech has more information on AMD's upcoming Hammer processors. Talking with several engineers who are in the know about it, the Hammer looks pretty frickin' amazing. Itanium will have a run for its money, I suspect."
Whoa! (Score:1)
MHz Myth.... (Score:1)
Re:MHz Myth.... (Score:1)
Re:MHz Myth.... (Score:2)
/Brian
I am a bit fearful (Score:1, Funny)
FUDpacker... (Score:4, Funny)
What I wanna know (Score:2, Funny)
[/sarcasm]
Re:What I wanna know (Score:2)
Dual 1600 * 1200 displays would be quite nice -- so I can play Quake 3 in one and a Divx movie in the other...
Fire (Score:1)
An AMD clause: "If AMD CPUs are within the perimeter of the house, you aren't insured (Act of God?)."
AMD's Future (Score:4, Funny)
Re:AMD's Future (Score:3, Insightful)
/Brian
Re:AMD's Future (Score:2)
For the record, before I got my used P2 I was in the market for an Athlon, and AMD would still be my first choice (mind you I'd check the heat sink out of the box to make sure it was in place). But I do know what the facts are.
/Brian
NUMA ... shudder .... (Score:2)
Re:NUMA ... shudder .... (Score:2)
SGI actually has a 64-node (128-proc) NUMA system working on their Origin MIPS line; you might want to check out http://oss.sgi.com/projects/numa/ I think SGI is leading this charge, and as long as SGI can stay alive long enough they'll have a good implementation. There's one thing I can say about SGI: their scalable NUMA tech is almost beyond reproach (too bad I can't get squat for 3rd party Irix apps).
Here's the link to the cat /proc/cpuinfo output from SGI's 128-proc Linux NUMA system:
http://oss.sgi.com/projects/LinuxScalability/do
Re:NUMA ... shudder .... (Score:2)
Between this and Hyperthreads, new OS designs should be able to take advantage of at least multiple processors, even on the desktop. Of course, the Pentium was supposed to be the first CPU to enable SMP For The Rest Of Us, so we'll just have to see what happens.
Re:NUMA ... shudder .... (Score:2)
However, from what I understood of the description, memory access should all be taken care of in hardware with no OS support. The CPU interconnects are supposed to make even remote memory transactions very, very fast, with not much more latency than directly attached memory.
Linux would therefore "need" no explicit NUMA code, and could improve things a bit by setting a process's CPU affinity to the processor that holds the process in local memory, very similar to the affinity code already in place for keeping a process on the processor that has its data in the CPU cache....
Maybe someone else who knows more can weigh in on this, but to me it looks like a small issue.
PeterM
Backwards compatibility big advantage (Score:2, Redundant)
From the look of it, both the Hammer and the G5 can run old, 32-bit code natively. This means that today's apps will continue to run at top speed on the new chips, because the instructions still exist in hardware. This is definitely good for people with lots of older apps (i.e., almost all of us). However, a lot of the reports on the Itanium seem to indicate that, in making a completely clean break, it is forced to emulate older 32-bit instructions, resulting in an actual -slowdown- for many programs. Eventually, Intel's clean break might give it some advantage, and that advantage might come quickly in the big-metal server market. However, it seems that AMD will be able to win out on the desktop. Of course, here we are comparing rumors about a rumored chip to a different unreleased chip; only Bob knows exactly what will happen between now and release time...
Re:Backwards compatibility big advantage (Score:4, Informative)
x86 code on Itanium (Score:2)
Re:Backwards compatibility big advantage (Score:2)
Yes. Amazingly, though, it runs x86 slower than a software-emulation package on competitors' RISC chips.
Well, yes, Itanium runs x86 at the speed of a P100 (Score:2)
Are you willing to spend over $3000 for P100 speeds for your x86 code?
Neither is anybody else. The emperor has no clothes.
Re:Backwards compatibility big advantage (Score:2)
Initially, the Alpha's speed was due to "leaving complexities out": its minimalist approach to assembly (including a fuzzy FPU which was very fast if you didn't need IEEE precision). But it definitely didn't leave emulation out in the cold. If I'm not mistaken, the Alpha had a huge side ROMish type of thing that allowed complex VAX instruction translation lookups.
FX!32 worked fast because it incrementally translated x86 to native Alpha code. Drivers and OS libraries were already native, so only a moderate fraction of your code ever ran under emulation (given a long enough lifetime).
The reason hardware compatibility is an issue is that if you don't have the R&D to port to multiple platforms, you choose the one that'll make the most money. It's rarely your problem that things run too slowly, especially if your uppitiest customer is willing to shell out for a maxed-out, state-of-the-art x86 (with proprietary motherboards that use faster memory, etc.).
But even the Alpha has legacy problems, as they're violating their "minimalist" approach by introducing out-of-order execution in their latest processors...
Oh, how the Alpha is missed... I cheered the K7 because if I saved up enough money, I could get the Alpha version.
-Michael
Re:Backwards compatibility big advantage (Score:2)
/Brian
Re:Backwards compatibility big advantage (Score:2)
OK -- where's the software support? Where's Windows/AMD64? Where's the need for 64-bit desktop chips?
I like AMD's strategy in theory; however, it will be marketed like a box of Cheerios that says "NEW - Now With More Bits!!" with nothing really to back it up.
(I should note that Apple has a similar problem with the G5, except they will ship native OS support, and it's conceivable that a 64-bit CPU will have an advantage for media applications, which is pretty much their only market. Will 64-bit Quake or 64-bit OpenGL drivers help that much?)
Re:The need will come. (Score:2)
That's exactly the problem I'm talking about:
Intel: Here's a cool 32-bit chip, somebody write some software.
Microsoft and IBM: We don't need to support 32-bit for the next 10 years, so you get a bunch of crappy compatibility hacks and spurious "out of memory" errors. The hacks will make it _more_ difficult to support 32-bits in the future. Enjoy!
Now, the exact same thing is going to happen all over again for 64-bit chips. And I'm supposed to be excited?
Re:Backwards compatibility big advantage (Score:2)
No, the present is 64bit+. The peecee is the only type of workstation or server still shipping with 32-bit CPUs. Sun killed their last 32-bit workstation in 1998. Alpha's been 64-bit forever. SGI has shipped 64bit CPUs since the Indy/Indigo2 and has been running 64bit IRIX by default on everything since last year (the holdout? O2, interestingly enough, because of bugs). I could go on... The reality is that you can already buy 64bit workstations running 64bit OSs with good performance for less than $1000, and in some cases only a few hundred.
The reality is that the peecee is way behind the times.
Therefore, how each of these processors runs legacy code is important.
Very true. Unfortunately neither is really getting it right. For examples of how to support mixed 32bit and 64bit binaries and even OSs on the one 64bit CPU, see the MIPS3 documentation. For a cleaner transition that required changes at the OS level only, take a look at the SPARC V8 -> V9. Can you say "seamless?" I knew you could.
The trick, naturally, is to design a proper instruction set to begin with. Then you can extend and enhance it easily without having to break backward compatibility. Too bad Intel didn't realize that.
Re:Backwards compatibility big advantage (Score:3, Interesting)
The SPARC and many other RISCs had a "seamless" 32 -> 64 bit transition mostly by doing two things.
There is no reason Intel/AMD couldn't make new 64 bit load and store instructions, and redefine all references to EBX (and the other 3 registers) to be 64 bits. That would work just fine.
The part that would suck is that Intel and AMD do not own the OS, or even the bootloaders that run on their CPUs! MS and a handful of BIOS makers do. They would have to be convinced it is worth it to do anything.
NOTE: I'm not saying the x86 instruction set is anything close to well designed. It is a shambling horror, but extending it to 64 bits is not really harder than extending the SPARC to 64 bits. In fact, if you look at what AMD did, it is a pretty easy change (and I think the article is wrong: you can use the 4 new GPRs without having to do any 64-bit stuff, but the OS still needs to be changed to save and restore the extra registers).
Intel merely decided the 32-bit to 64-bit change seemed like a good time to make a play for the high-end market, and to do that with a new instruction set. That might have even been a good idea if they hadn't screwed it up badly enough that the Itanium earned the nickname "Itanic"...
ISA bus (Score:2)
The limited number of PCI slots (on home systems) vs ISA slots makes it an issue for people who want to have a system like this
Yes, I know people who would do things like that. Ultimately this one guy will have his capabilities spread over two systems, because he cannot fit it all into one, not without a major balancing act.
ISA = Instruction Set not the ISA bus (Score:2, Informative)
Motherboard makers are free (or not) to put an ISA bus on the board. I'd be surprised to see such a board by the time of Hammer, though.
Re:ISA bus (Score:2)
Sorry but your example only holds water for people stuck in the stone age of motherboards. Some motherboards have good integrated peripherals. People who want everything on a card can buy two or three systems as far as I'm concerned. Who cares about the few nimrods who want to do this?
Re:ISA bus (Score:2)
Perhaps you could tell me how much I can upgrade a decent 10/100 NIC or an ATA-100 IDE controller.
Re:ISA bus (Score:2)
You missed the question - how much can these components be upgraded? ATA-100 is pretty much the top for most systems (unless you get a board with IDE RAID) and a decent 10/100 net card is as much as you're likely to ever need. My point is that mature stuff goes on the motherboard.
Re:ISA bus (Score:2)
The limited number of PCI slots (on home systems) vs ISA slots makes it an issue for people who want to have a system like this
1. PCI SCSI
2. PCI Modem
3. PCI Firewire
4. PCI IDE Accelerator
5. PCI NIC
6. PCI Sound Card
7. etc
I presume the video is AGP.
Gee, I forgot what it meant not to own an Apple PowerMac. All those items you mentioned are stock on my Dual G4/500 motherboard, excluding my Adaptec SCSI PCI card. I feel for you, man; I would hate to be saddled with ISA slots. What a waste.
Re:ISA bus (Score:2)
And NetBSD already runs on it (Score:3, Informative)
Hammer will rock! (Score:3, Insightful)
Each feature of the Hammer taken alone is evolutionary, but the overall effect should be revolutionary (at least with regard to Intel server market share;).
AMD stock is looking like quite a bargain at around $10/share... :-)
299,792,458 m/s...not just a good idea, its the law!
Re:Hammer will rock! (Score:2, Informative)
Re:Hammer will rock! (Score:2)
is there catch phrase going to be... (Score:2)
Re:is there catch phrase going to be... (Score:2)
Re:is there catch phrase going to be... (Score:2)
I guess no one knows yet if this will run at egg-frying temps like past AMD chips.
Re:is there catch phrase going to be... (Score:2)
My Pentium 450 runs at ~40°C.
Itanium, etc. (Score:4, Interesting)
The ability to build a desktop workstation that can run all my old x86 crap, fast, and move into 64-bit software, also fast, is highly attractive. Athlon or P4 will undoubtedly be the choices for the next year, but when AMD gets the Hammer out into the mainstream at a mainstream price, Intel watch out.
Lastly, Microsoft, last I read, didn't indicate any interest in doing a version of XP for the Hammer. Perhaps that hasn't changed. If not, there's a potential hole through which someone may exploit Microsoft's disinterest. Linux, sure. AOL? Hmmm. You know there's a mean fight going on between Reston, VA and Redmond, WA; if the Hammer is attractive to home users, don't be surprised if AOL chooses to support it. It's entertaining to think about, anyway, however you feel about the combatants.
Re:Itanium, etc. (Score:2, Interesting)
Itanium isn't just for the server market now. IBM [ibm.com], SGI [sgi.com] and several others are marketing Itanium technical workstations. Intel has also stated that it sees Itanium making it to the desktop at some point in the future, replacing x86.
Hammer, on the other hand (specifically Clawhammer) has always been targeted at the desktop from the get-go (along with server and workstation). Check it out on the AMD processor roadmap [amd.com] (which I just managed to find again;).
Another point to keep in mind is that the ability to compete in the server marketplace is a key for AMD. It will provide them with the same ability as Intel to subsidize desktop processors with expensive server processors. Right now Intel can sell P4s at a loss and still turn an overall profit, while AMD suffers. Once Hammer ships, the dynamic will change quite a bit... ;-)
Perhaps McKinley (the joint project with HP) is Intel's idea of the post P4 desktop processor, as I've seen elsewhere that Itanium's x86 emulation makes a PIII look attractive.
I thought McKinley was just the .13 micron version of Itanium, perhaps with more cache. Does it have an enhanced ability to do IA32?
The ability to build a desktop workstation that can run all my old x86 crap, fast, and move into 64-bit software, also fast, is highly attractive. Athlon or P4 will undoubtedly be the choices for the next year, but when AMD gets the Hammer out into the mainstream at a mainstream price, Intel watch out.
I couldn't agree more!
Lastly, Microsoft, last I read, didn't indicate any interest in doing a version of XP for the Hammer. Perhaps that hasn't changed. If not, there's a potential hole through which someone may exploit Microsoft's disinterest. Linux, sure. AOL? Hmmm. You know there's a mean fight going on between Reston, VA and Redmond, WA; if the Hammer is attractive to home users, don't be surprised if AOL chooses to support it. It's entertaining to think about, anyway, however you feel about the combatants.
I think Linux will be a strong presence on the Hammer, along with potentially (wild prediction here) MacOS X. Microsoft will support it as soon as it begins to take marketshare, like the US Rangers taking Omar's palace (not that I particularly care if Microsoft supports it). As for AOL, it should just get busy porting its interface to Java like it said it would a year or so ago. That alone would be a big blow to Microsoft, and it would simplify software development quite a bit for AOL as well as widening the number of AOL platforms substantially.
299,792,458 m/s...not just a good idea, its the law!
Re:Itanium, etc. (Score:4, Interesting)
McKinley is a whole mess of add-ons, not least of which is the idea that it can issue more EPIC instructions per clock than the Itanium. The original idea was that Itanium would champion the instruction set, but would be an unwieldy beast with all its new features. Still, it would be enough to transition the marketplace (too bad its practical performance sucked). McKinley would then be the knock-out punch that fully utilized its potential (though at greater cost due to an increased number of components). From there, Itanium would be a low end that allowed "entry-level servers". Then they'd have time to go redesign new features for their next [incremental] generation... Their EPIC instruction set has templates, so adding whole new classes of functionality "should" be trivial.
Course, I don't think they expected having to relegate Itanium to a "pilot" CPU with embarrassingly low frequency ratings (but MHz is all that matters, right, Intel?). Doesn't sound like the P4 guys are under the same marketing department as the Itanium guys (GM in the making?).
-Michael
Re:Itanium, etc. (Score:2)
It seems to me that power users and businesses would have most of the interest in using 64-bit processors.
AOL's target market probably has more modest requirements and maybe AOL should be looking into buying up XBoxes, loading them up with Linux and Mozilla, and selling them as set-top surfer boxes.
Re:Itanium, etc. (Score:4, Funny)
Actually, they could just distribute millions of CDs that do that.
Re:Itanium, etc. (Score:2)
And you need access to 16 exabytes (or 8 w/ signed pointers) of address space in your desktop applications because....? (not total memory, but memory per application as you can have more than 4 gigs of memory on a x86 processor in a single machine.)
I don't know where this idea that 64 bit memory addressing makes programs run faster came from, but there is nothing inherent about 64 bit addressing that would make it faster for your average integer based desktop applications.
Of course, I guess it all depends on your definition of a "64 bit" chip architecture. I tend to define it as one where the registers, data bus and ALU are all 64 bits wide.
I don't know about you, but unless I need more than 4 gigabytes of memory per process or I'm doing some heavy floating point where I need 64 bits of precision, I don't particularly want my data structure heavy applications using up to twice the memory they used to.
Of course that's just my opinion; I could be wrong.
Re:Itanium, etc. (Score:2)
Re:what they would do (Score:2)
What do you think they are doing with this whole AOL interface?
A few years back I was in a discussion with some guy with blinders on who stated that no home user would ever need a system with 1 gig of memory. The old 640k-should-be-enough-for-anyone quote mis(?)attributed to Mr. Gates is dredged up as an example of shortsighted thinking. Same for this fellow, as he had no concept of where sound and video would go, and the subsequent demands on memory. OK, maybe you have a 1.8GHz P4 or a 1.5GHz Athlon smoking through your sound/video/apps/whatever, but, as I've learned over the years, no architecture remains fast for long. Eventually applications come along, ones written off as impractical or impossible before, and tax the resources to the max.
Imagine AOL viewing Hammer-based systems as the thing with enough horsepower to provide some service while Microsoft views it as beneath their dignity to do a port of XP. If it draws customers you'll see some real change in the thinking in Redmond. I think the Hammer is another excellent move by AMD, as it's likely to hit the consumer market, perhaps not first, but with a lot of force when it does.
Re:what they would do (Score:2)
Oh come on... An operating system == AOL interface? I understand where you are going with your idea but I still doubt that AOL is going to want to support an entire operating system.
A few years back I was in a discussion with some guy with blinders on who stated that no home user would ever need a system with 1 gig of memory. The old 640k-should-be-enough-for-anyone quote mis(?)attributed to Mr. Gates is dredged up as an example of shortsighted thinking. Same for this fellow, as he had no concept of where sound and video would go, and the subsequent demands on memory. OK, maybe you have a 1.8GHz P4 or a 1.5GHz Athlon smoking through your sound/video/apps/whatever, but, as I've learned over the years, no architecture remains fast for long. Eventually applications come along, ones written off as impractical or impossible before, and tax the resources to the max.
Where are you going with this? I think anyone with half a clue on
Imagine AOL viewing Hammer-based systems as the thing with enough horsepower to provide some service while Microsoft views it as beneath their dignity to do a port of XP. If it draws customers you'll see some real change in the thinking in Redmond. I think the Hammer is another excellent move by AMD, as it's likely to hit the consumer market, perhaps not first, but with a lot of force when it does.
Windows XP should run just fine in 32-bit mode on the Hammer, like Linux runs in 32-bit mode on some 64-bit chips. The whole point of Hammer is that it is so backwards compatible compared to Itanium that upgrading won't be a big pain for the end user. Anyone have any proof that XP is going to have a hard time running on Hammer? If I were running Microsoft, I'd have someone keep up to speed on Hammer. Then, if it is released as expected and sells well, I'd make sure we support the processor in 64-bit mode. It shouldn't be too hard, after all, because Microsoft is already working on Itanium support. What advantage does Microsoft get by supporting Hammer right now?
What I don't see is how you think Hammer suddenly makes possible for AOL a number of things that aren't possible today. I think the big breakthroughs will come with inexpensive and highly accessible bandwidth. The bandwidth will make the difference for AOL, not the CPU speed. In either case, high-speed CPUs will be here no matter what...
What about the pricing? (Score:2)
Does the fact that it is new technology, and that it's a big (or bigger) die size, automatically mean it's going to be very high priced? I remember when the P2 came out; I paid CAD $1200 for the 300MHz part, about 2 weeks after it was released. Now the highest-speed P4 (2GHz?) costs about the same (although a bit less than that).
So my question is this: will this processor be affordable (somewhere between a top-of-the-line Athlon and a P4), or is it going to be much more? I think it's a very safe bet to assume that it will cost more than the Athlon.
If somebody has a real answer for this, please reply. It would be interesting to hear some opinions from the more knowledgeable.
Re:What about the pricing? (Score:2)
Expect AMD's Hammer chips to cost much less than Intel's Itanium CPUs. Intel spent several years and likely billions to develop the Itanium, whereas AMD needed only about one year for the Hammer. The first model will be the Sledgehammer, targeted at servers, so those won't be exactly cheap. But the second model, the Clawhammer, will be for workstations/desktops and probably comparable in price to current high-end Athlons.
Re:What about the pricing? (Score:2)
AMD doing good, but (Score:2)
I have yet to see an AMD commercial, and word of mouth (yes, even mine) only carries so far.
AMD processors are simply incredible, IMO, but how do you get the word out? Marketing, commercials and ads.
It is a simple question, really: what is the point of having such a great processor if no one knows about it?
I think a simple commercial like this would work wonders:
Open on a little TV playing the P4 "Blue Man Group" commercial... have a "sledge hammer" and a "claw hammer" (both with big AMD stickers) smash the TV into oblivion.
(fade to black with the AMD logo and a "well known voice")
The AMD Hammer series and XP series: smashing "you know who's" higher numbers.
Power is *sexy*, AMD.
Or, as a demo, use the ending of "The Fast and the Furious" car race.
AMD would be the black Tornado(?) and Intel the Honda(?)... raw horsepower vs. high RPM and technology + "cheats" (inflated GHz = NOS, perhaps).
Essentially, it was a tie.
Draw your own conclusions, or come up with something better.
Moose, out.
Point of view from an electronics/computing engineer (Score:2, Interesting)
Currently, 2/3 of a CPU is used to analyse/understand/reschedule the code sent to the CPU. This part is very important, and AMD seems to be better at this game than Intel. The code has to be rescheduled so that the different parts of the CPU that can work at the same time are efficiently loaded.
OK, let's stop right now: why isn't the code already efficient? Because the compiler does NOT care about the inner structure of the CPU, so the CPU has to do all the real work.
By sticking with the "good old architecture", AMD is trying to do in hardware, in real time, what software (let's say a compiler) can do much more easily given a very long time. And a CPU can't see more than a few operations ahead, whereas the compiler can see the WHOLE code.
So, by removing all the optimisation crap from the CPU and showing the compiler what's really inside, Intel is on the right track. In current CPUs, you have more than 40 registers, but you can access only 8 of them, and the CPU has to "guess" what the best use of them could be.
So, I think Intel's approach is the right one. Just recompile all your software: to run old stuff, use old hardware.
I have datasheets and documents to comment on this, and I would gladly do so.
Re:Point of view from a electronic/computing engin (Score:2)
> what a software (let's say a compiler:)) can do
> much more easily in a very long time.
NO. It is very hard for a compiler to accurately predict what will happen at run time (for example, which loads will hit in the cache and which will miss). It is much easier for the CPU to collect, predict and use this information at run time.
IA64 pushers talk all the time about how smart the compiler "can" be, but they don't actually have any such smart compiler. That is why their performance sucks.
Furthermore compilers are not going to get much smarter in the near future; just because the technology is needed does not mean it will suddenly appear. Compiler researchers aren't stupid and they haven't been sitting on their hands for the last forty years.
> And a CPU can't see more than a few operations
> ahead whereas the compiler can see the WHOLE
> code.
... until the program makes a call into a shared library that was compiled by someone else.
> Just recompile all your software : to run old
> stuff, use old hardware.
Uh huh. So every single time a new chip comes out, Microsoft et al are going to release new compiled versions of all their software. I don't think so.
Re:Point of view from a electronic/computing engin (Score:2)
New instruction for branch? (Score:2)
Suppose you have a branch on checking the error code returned by a function. That is what the article called a "static" branch: it almost always branches one way, assuming the function rarely fails. The Hammer will try to detect static branches, but might it be useful to let the compiler use different instructions, the static branch instructions, to tell the branch prediction hardware to assume a certain branch is static?
I guess I don't have a good handle on how difficult it is for the branch prediction hardware to sort out static branches vs. the other kind. Would the new instructions help enough to be worth the costs of extra instructions?
steveha
Re:New instruction for branch? (Score:2)
I received an email telling me that the IA64 already has this: you can specify a static branch that is likely to be taken, a static branch that is unlikely to be taken, and dynamic branches in both likely/unlikely flavors.
Also, even on x86, there are some tricks worth doing. The Linux kernel hackers have started using likely() and unlikely() macros around some branches in the kernel source. GCC can arrange the generated code somewhat differently and it will do some good.
steveha
Two problems... (Score:2)
For a really viable chip, you need the support of the mainstream, and like it or not, that's Microsoft. If they don't support it with their OSes and compilers, then this will be the death of AMD. I'd hate to see that, but those are the facts.
Re:Two problems... (Score:2)
It would HELP if MS had their OS and compiler support Hammer's extensions, but even if MS sits on its ass, that huge legacy market will belong to AMD.
And what about the server market? Well, the server market is *much* more accepting of non-MS operating systems.
I do not see lack of MS support as a certain sign of doom for AMD....
surprised no one's mentioned this yet (Score:2)
An AMD sponsored web site with the goal of porting free/open-source software to x86-64. Self-serving publicity stunt? Maybe, but it's nice anyway, and certainly more than we can ever expect to see from Intel.
Thoughts on the 64-bit architecture split (Score:2)
AMD is taking the incremental improvement route, which makes a lot of sense. But can the non-standard x86 extensions--practically a whole new processor in itself--ever be more than a niche? The 3DNow! extensions were more a novelty than anything else. Some drivers used them, most programs didn't. It's difficult as it is to support all the different computers running similar chips without getting into extensions that only work on a certain percentage of them. Is it worth shipping 64-bit Hammer code just for one market segment? It's not just a recompile; it's an entirely separate QA cycle. Thinking about hobbyists: Will they have both Itaniums and Pentiums around for testing?
And then there's the nagging doubt that we're talking about chips that are already so fast that no one cares--except a certain fanboy crowd--so now we're talking about the difference between 10x more speed than I know what to do with and 20x more speed than I know what to do with. Sure, games and some crazy high-end airflow simulation, but this raises the question: is it worth overturning the entire PC market just for those two minorities?
Re:Thoughts on the 64-bit architecture split (Score:2)
Re:Thoughts on the 64-bit architecture split (Score:2)
Sigh. I am a software developer. I write applications in Lisp. I make heavy use of graphic arts tools like Corel Draw. I also use 3D modelling packages. What machine do I do all this on? A 333MHz Pentium II that I bought new in 1998.
Do I have _any_ speed complaints at all? None. It is a zippy system. I can recompile the Lisp system I use--which is written in Lisp--in twenty seconds. I also do a lot of work in Delphi and I've never had a perceptible compile time yet (read "for all intents and purposes, compile time is instantaneous"). Corel Draw just zips along. The 3D modeller is more dependent on the video card than anything, so I put in a GeForce 2 and haven't had any--and I mean *any*--issues with speed.
People who talk of using the power of their 1.4 GHz processor don't have a clue. They like to think that they are a power user of some sort, and in all honesty they don't want to hear otherwise.
Re:Thoughts on the 64-bit architecture split (Score:2)
I run a dual Xeon 1.6 GHz machine, and it isn't enough. If we could afford a 6-CPU Alpha AXP box, it wouldn't be enough either. My customers use render farms of 100+ CPUs @ 1+ GHz each, and even that still takes days, nay, weeks to render the hundreds of layers of globally-illuminated 3D that they use. Sure, I can compile adequately fast (though a full build of our whole software tree still takes hours), but to test my image processing code on a sequence of 200 MB film-resolution images requires considerable patience.
Just because your needs don't require anything more than last year's gfx card and last decade's CPU does not mean others are happy to sit around and wait for their more complex tasks to complete. More CPU power means more possibilities. That's why we can now produce visual effects like Final Fantasy, Swordfish and SW:TPM instead of Tron and Wargames, to pick examples from just my industry out of hundreds.
In 1980, I read a column in an early computer magazine wondering why people were so keen on the newfangled 16 bit CPUs, with awesomely powerful 32 bit CPUs on the horizon too! He felt that his 4 MHz Z-80 ran his CP/M word processor & spreadsheet quite adequately, thank you very much. Perhaps you too would be happy with that setup for your current line of work?
State of the Linux port (Score:2, Informative)
Back in march we saw the first printf ("Hello World\n") succeed in the simulator. This is quite a big thing because it needs a working compiler, binutils, glibc and kernel. Since then we have steadily improved the system. By now we're running a full fledged Linux system in the simulator. The system is partly 64 bit and partly 32 bit. We will use the native 32 bit capabilities of the chip to use 32 bit binaries when that makes the most sense (who needs a 64 bit ls when a 32 bit ls does 64 bit filesystems fine).
By now gcc (C and C++ support), binutils, glibc, gdb, the kernel, ncurses, bash, util-linux, vim etc. have all been ported almost completely. And X runs happily in 64 bit too. Now we need the desktop systems, apache, databases etc.
Shameless plug: I'm giving a one-hour talk about Linux on x86-64 at Linux World Frankfurt next Tuesday, October 30th. There I'll show the system running, give an overview of what porting Linux involves, and describe the new features for Linux that we have implemented.
Bo Thorsen,
SuSE Labs.
Itanium isn't about 64bit (Score:2)
Then, looking at the Hammer: AMD offers 64 bit as its main new feature, but keeps the fat x86 instruction set. Nice, but not a product that will survive for another 10 years: a quick buck now, but a slow death in the long run...
About 64-bit ISA and Hammer's MP support... (Score:2)
By the way, does anybody know if I can run Hammer in 32 and 64 bit modes "simultaneously", so that some of my apps are fully 64 bit and my legacy apps stay 32 bit? Does it seriously hurt my task-switching performance that the processor is running different processes in different modes? Will Hammer be faster for 32-bit or 64-bit code? If I don't need a 64-bit address space, should I compile my code for 32-bit instead, for better performance? I would guess that even though 64-bit instructions are a bit harder to execute due to the 2x memory requirements, the increased register count would balance things out.
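The "2x memory requirements" worry above is really about pointer-heavy data structures, and you can sketch the overhead with back-of-the-envelope arithmetic (an illustration of pointer bloat in general, not a measurement of Hammer itself):

```python
import struct

# A hypothetical binary-tree node: two child pointers plus a 4-byte key.
# The "<" prefix forces standard sizes with no padding, so the math is exact.
node32 = struct.calcsize("<IIi")   # 32-bit pointers: 4 + 4 + 4 = 12 bytes
node64 = struct.calcsize("<QQi")   # 64-bit pointers: 8 + 8 + 4 = 20 bytes

# Pointers double in size but the payload doesn't, so the node grows
# by ~1.7x, not 2x; cache pressure grows accordingly.
print(node32, node64)
```

So the memory penalty depends on how pointer-dense your data is, which is one reason a 32-bit build can still be the right call when you don't need the address space.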
According to the article, when Hammer is working in an MP system each CPU handles part of the memory; should the OS be able to send an application to a specific processor according to the physical memory it has allocated, instead of the current load of each processor, for best performance? If so, does any OS currently support this kind of arrangement? How hard would it be to make Linux support this?
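The scheduling policy being asked about can be sketched as a toy model: place a task on the CPU that owns the memory node holding most of its pages, falling back to load only as a tiebreaker. This is purely illustrative (real NUMA schedulers also weigh cache warmth, migration cost, etc.), and the one-CPU-per-node mapping is an assumption matching Hammer's integrated memory controller:

```python
# Toy memory-affinity placement: assume CPU i owns memory node i.
# task_pages maps node -> pages the task has allocated there;
# cpu_load maps cpu -> current load (lower is better).

def pick_cpu(task_pages, cpu_load):
    # Prefer the node holding the most of the task's memory;
    # break ties by choosing the less loaded CPU.
    return max(task_pages,
               key=lambda node: (task_pages[node], -cpu_load[node]))

pages = {0: 10, 1: 500, 2: 3}        # most of this task's memory is on node 1
load = {0: 0.9, 1: 0.2, 2: 0.1}
print(pick_cpu(pages, load))         # node 1's CPU, despite node 2 being idler
```

The interesting policy question in the post is exactly this tradeoff: whether memory locality should outrank load balance when the two disagree.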
Need direct access to the RISC Core (Score:2)
What I think needs to be done in the next couple of generations of Athlons is to allow programs to bypass the x86 decode stage and access the RISC core directly. This would allow the chip to run legacy x86 executables, as well as new RISC executables, in a completely transparent manner. After a couple of years, the x86 decoder could be phased out of the primary product line. This would reduce cost significantly, considering that (IIRC) about 20% of the transistor count on the Athlon is dedicated to the x86 decoder.
Re:Corporate monopolies stopping progress (Score:2, Informative)
I remember in the late 50's and the 1960's, when computing technologies were dominated by the Universities and the public ethos was uppermost. Freedom of information reigned, and thousands of little computing groups competed to bring the new era.
What the hell are you talking about? Can you say "IBM"? That was the era of "you can have any color you want as long as it's blue", unless you went with one of the seven dwarfs. Universities didn't contribute jack to anything. IBM invented just about everything during that time.
Unix, Multics, CP/M, Hard Drives, the Mouse, CRT displays, all these and more were made during this time.
...by corporations. Perhaps you've heard of AT&T (Unix, Multics)? Hard drives -- IBM. CRT -- who knows. Mouse -- this might have actually been invented at a university, I can't remember.
The socialist control of the means of production of hardware will allow for innovation in that realm, just as the socialist control of the means of production in software has, thanks to the GNU license.
Yeah, I know this proves it was a troll, but just in case anyone was going to believe any of that historical bullshit.
Re:Wise Intel (Score:2)
Yup, and it would have worked too (if it wasn't for you pesky kids) had the chip come out when it was supposed to. Two, maybe three years ago, with the current level of performance.
Part of what pisses me off about this whole IA-64 thing is that it was actually quite a good idea.
Dave
Re:Wise Intel (Score:5, Insightful)
I feel that at some point the best thing to do is walk away from the old architecture and make a fresh start with a new one. Commodore did this when they went from the C-64 to the Amiga. Users grumbled for a while, but I think that in hindsight it turned out to be the right choice - once people began to exploit the capabilities of the new platform, compatibility with the old one became irrelevant. And there's always software emulation for those cases when you really do need to preserve the old stuff.
Note that I don't actually know how much "legacy" x86 code is in the Hammer, but even the article's little picture of the register structure makes me think the answer is "too much". Anyway, when did a lack of factual knowledge ever stop someone from ranting on Slashdot?
Re:Wise Intel (Score:2)
Re:Wise Intel (Score:2)
So did DEC (later Compaq) with the Alpha. It was pretty much the fastest single CPU for floating point over most of its life span (sometimes a new CPU would come out and beat it, but normally there would be a new Alpha within a month or two to smash it). Similarly for integer performance, though not quite as decisively (for example, the fastest P4 systems have been beating the dead bloated corpse of the Alpha for a while in integer, but still lose out in FP). If ditching the old in favor of the new works, why are we not running Alpha machines now?
Personally I hate the x86 instruction set. I really do. I also think AMD's choice of doing the x86-64 rather than Intel's choice of doing the iTanic is a great business choice, even though it dooms us to spend another decade with the crappy 8086-compatible instruction set. Gack.
I'll spare the "look where it got them" bit, and just go for...nah, just look where it got them.
Of course, as a counterpoint, we have the Mac and its total incompatibility with the Apple II... unless you count sharing of the ImageWriter...
Re:Wise Intel (Score:2)
I for one think that it's cool that we are using a vestige of the first microprocessor at 5 orders of magnitude faster speeds. It's a tribute to the human ability to create a good kludge. I wouldn't want it any other way.
Re:Wise Intel RE:Do we need to carry on x86? (Score:2)
Wise AMD (Score:2)
If you're trying to advance technology, no we don't need that.
If you're trying to sell a product and make money, yes, you definitely need that.
Intel and Microsoft have proven it over and over and over: the market does not want progress. The market will only accept incremental evolutionary change.
Somehow Intel has forgotten this, and they are going down the road to technology instead. Meanwhile AMD is going to "out-Intel" them and get all of Intel's customers.
Yes, but you're a damn fool idealist who likes computers and wants to see them run well. You're not trying to sell chips. So while Intel goes off to recreate the marketing success that Commodore had in the 90s, AMD will go off to recreate the marketing success that Intel had in the 90s.
Then go buy an Alpha while you still can. (Score:2)
Abandoning a user base is an extremely dangerous thing to do.
DEC orphaned a whole platform (MIPS DECStation) with a long stream of broken promises when Alpha was brought out. The seeds of Alpha's destruction were sown the moment of its birth. If DEC had been wise enough to develop an FX!32 for MIPS and an ability to run Ultrix binaries under OSF/1|Digital UNIX|Tru64, then the end of Alpha might have been a very different story indeed.
And now Intel/HP/DEC/Compaq has aspirations of repeating this sad history.
If AMD can deliver on even half of their promises, then Itanium is finished.
Re:Wise Intel (Score:2)
Probably 90% of all consumers. Ever hear of Windows Millennium? That newfangled OS that I don't yet need to upgrade to? It still supports all those ugly 16-bit DOS features. Sure, they did away with the DOS boot process, but DOS is most certainly still there. And until DOS is gone (a la NT / XP), CPU manufacturers still have to support it. Never mind the fact that even Linux relies on an initial x86 boot process (though obviously it's not tied to it, given its multi-platform support). But out-of-the-box x86 Linux wants 16-bit x86 support.
Sure, Win9x is "mostly" 32 bit, if not entirely. But it most assuredly supports the sort of legacy x86 features that both software and HARDWARE developers take advantage of.
The AH, AL 8-bit registers you see are essential to calling the CPU an x86 anything, if for no other reason than IO support (I don't remember the exact instructions; it's been a while since I've read an 8086 assembly book). Note that IO is pretty much unchanged in the Athlon (since so little actually uses it anymore, being relegated to Windows drivers and shared memory regions). Interrupts also use these 8-bit registers. In fact, pretty much anything relating to the hardware drivers (minus AGP) depends on them.
I think the loss of the ISA slots should help ease the transition. PCI with plug and play shouldn't be too hard to port to whichever technology supersedes it. But my point is that there isn't an absence of current-market vendors that still depend on these legacy features.
Aside from hardware, x86 has lots of macro-instructions, such as using CX as a 16-bit counter, and SI, DI for string operations. I'm sure these are micro-op vectors in the Pentium onward, but they still need to be emulated and debugged somehow, thus the register set still needs to be intact. The real question is whether they make 64 bit the fast path (requiring an extra logic propagation for 32 bit), or whether 64 bit is considered the exception.
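The sub-register aliasing that makes all this legacy support necessary can be modeled with plain integer arithmetic: AL is bits 0-7 of EAX, AH is bits 8-15, and AX is bits 0-15, all views of the same storage. This is a model of the architectural aliasing only, not real register access:

```python
# Model of x86 sub-register aliasing: 8- and 16-bit legacy code and
# 32-bit code all read and write slices of the same 32-bit register.

def al(eax): return eax & 0xFF          # low byte
def ah(eax): return (eax >> 8) & 0xFF   # next byte up
def ax(eax): return eax & 0xFFFF        # low word

EAX = 0x12345678
assert al(EAX) == 0x78
assert ah(EAX) == 0x56
assert ax(EAX) == 0x5678
```

This is why the register file can't simply drop the old names: a write to AH must land in the middle of EAX (and, on Hammer, of RAX), so the aliasing has to be wired in no matter how wide the registers get.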
Aside from that, I agree with you that "staging out" is the way to go. XP should (sadly) help most consumers get rid of any remaining ties to the hardware (via hardware abstraction layers, assuming that's still there). But MS has no vested interest in making the same OS for servers as for consumers. They'd love to have a win3k that only runs on expensive hardware (where they can charge a premium), with their win4Suckers running on a legacy platform that allows them to boast over 1 trillion apps served. You can't buy that sort of marketing. Heck, their current strategy is to not even acknowledge that other OSes exist. When was the last time you saw an MS commercial advocate themselves over someone else? (Like AOL still has to do: "No wonder we're number 1".)
Sure, Linux'll support whatever and whenever; that's one of its trademarks. But 64 bit has a couple of downsides (including memory/cache requirements), and having a 64-bit timestamp or file descriptor just isn't going to impress the other 99% of the code enough to run faster. The key is going to be end-user benchmarks and/or raw MHz. That's what draws people's attention. And people's attention is what draws MicroSoft. And as we all know, MicroSoft rules the world (well, its own world at any rate).
-Michael
Re:Wise Intel (Score:2)
One of the nicest things about using Linux on a non-x86 platform is that you often get to use a much more advanced bootloader. E.g. on the (now defunct) StrongARM-based Netwinder, you could do a diskless boot (TFTP+NFS), specify the name of the kernel image you wanted to run (dynamically, instead of having to put it in a conf and run 'lilo'), get full serial-console support, etc. Similarly for the Mac's "Open Firmware".
The only reason x86 Linux uses the "16 bit" cruft is because it has to.
As for the Windows market, they're moving to a "subscription" model anyway in order to get a more continuous revenue stream. Once consumers are in the habit of updating all their software every (x) months whether they need it or not, it becomes easier to switch the underlying architecture. You'd use a software emulator or 'virtual machine' model to support the "legacy" software. Sure it would slow down the old apps a lot, but that's what the manufacturers want anyway so they can sell you a new chip / application with 'go faster stripes'.
Interesting points about how all the registers are used... I've never actually been brave enough to get into x86 assembly. I have a Motorola background, so I'm used to things like a flat 4G address space, "data" registers and "address" registers, and memory-mapped IO. My brain just balked at the x86 world of "memory segments", "al/ah/ax/eax", etc.
Re:Wise Intel (Score:2)
Well, simply put, it makes for faster CPUs (unless most of the time you're physically interchanging between GPRs and address registers). The Digital Alpha, for example, went even further and utilized two completely separate register sets for GPRs. I don't remember if the programmer was required not to perform operations that would pull from both register sets (e.g. was it just a caching localization, or were the bottom half and top half of the register address space physically separate).
The main advantage is a minimization of ports on the register set and a reduction in the number of buses. Each execution unit typically requires one write port on the register file. If you have 6 integer execution units, then that's 6 write ports (and probably something like 6 read ports, but in theory 12 read ports). Each port requires an address decoder and extra levels of propagation in the register fetch stage.
Back in the old days, when we didn't see heavy pipelining (especially in the first-generation 68K), this was expensive and slow. The 68K was clean in many ways, which included separation of dissimilar functionality into segregated addresses and buses (and possibly execution units). Since there's no contention between addressing and general ALU operation, it's closer to true divide and conquer. Mix in the fact that the 68K CISC core could utilize op [Mem] = [Mem], [Mem], and the load on address registers and logic was pretty heavy (at least in comparison to RISC architectures).
I once did a simple CPU design project which unified the FP regs and the int regs. The focus was on interchangeability of data types and simplicity of design. But what I quickly found was that in almost all cases (except register exchange) things were worse off. The large register set had to have extraneous fields to handle the various datatypes (even if they weren't used 99% of the time). That logic took extra propagation layers. Additionally, the number of address bits in the assembly code went up (since FP ops couldn't assume a separate address space from int ops). Plus I found that the number of ports I had was horrendous.
Arguably, address calculation more regularly requires utilization of integer units, and thus there will be a significantly higher percentage of swapping between GPRs and ARs than between FPRs and GPRs. Nonetheless, Motorola found it advantageous to do it that way.
Once load/store architectures became popular (as with the PowerPC), the benefits of separate addressing fell off. (The number of memory accesses per instruction was now well below one.)
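The port arithmetic behind the argument above can be made concrete with toy numbers, assuming two read ports and one write port per execution unit (a common rule of thumb, not a claim about any specific chip):

```python
# Rough cost model for register-file ports: each execution unit needs
# up to 2 read ports and 1 write port on whichever file it talks to.
# Splitting the file (GPR vs FPR, or the 68K's data vs address registers)
# divides the port burden between smaller, faster structures.

def ports(units):
    return {"read": 2 * units, "write": units}

unified = ports(6 + 2)         # 6 integer + 2 FP units sharing one big file
split = [ports(6), ports(2)]   # the same units spread over two files

print(unified)                 # one file carrying every port
print(split)                   # two files, each with far fewer ports
```

Since port count drives decoder complexity and wire load roughly quadratically in real designs, the split files win even though total port count is the same, which matches the experience described in the post.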
So What's a generation? (Score:2)
Actually, I think Intel should rot in hell for putting the CPU vectors at the top of the memory space at 1 Meg and working down, instead of the more logical bottom working up.
As a PS, the only reason I have a Windows partition is to run one Win16 application.
Re:So What's a generation? (Score:2)
Hey, I still have a 80186 that's been one of my best purchases. It's on an old Intel SatisFAXion 14.4 fax modem. It really was able to receive a fax in the background without slowing down the system even on an old 486.
Re:So What's a generation? (Score:2)
Re:Wise Intel (Score:2)
The Amiga is dead primarily for marketing/mismanagement reasons, but also in part because it tied its OS very tightly to custom hardware. This gave it an early advantage, blowing almost everyone else out of the water in terms of graphics, sound, multitasking, etc. However it became a liability as time went on and the competing hardware improved - certain parts of the Amiga were still tightly tied to the old custom chipset. I do not believe that "inability to run C64 programs natively" was a significant factor in the ultimate demise of the Amiga.
As for why the C64 itself died, mainly just because it reached the end-point of its evolution, and the rest of the world moved on. It was an 8-bit machine, and that imposed certain fundamental limitations on it. Yes they could've clocked it up to 25 MHz, strapped on big-ass heatsinks, and added more and more bank-switched RAM, but it just wasn't worth it. Sometimes you have to walk away and start from a clean slate.
Nowadays, both the C64 and the Amiga can be emulated in software. I don't remember what capabilities the Amiga had for emulating the C64. Quite frankly, I didn't really care anymore after I had had the Amiga for a while.
Re:Wise Intel (Score:2)
In other words: Some diehard fans actually found it worth it...
While the C64 and Amiga scenes may be mere shadows of what they were in the past, they still exist.
Re:Wise Intel (Score:2, Funny)
But this point mmontour was trying to make could have been better made with the transition from Apple's ][ series to the Macintosh architecture. Other than a few hardware interfaces, there was almost no backwards compatibility, and Apple planned it that way.
The Amiga was not developed by Commodore as a break from their venerable C-64, rather, the Amiga was a distinct machine from a failing company which Commodore bought, and then championed as superior to their previous offerings. Unfortunately, they just succeeded in carrying on the Amiga curse.
I never had an Amiga... I couldn't betray my Commodore 64 by dating its sexy cousin like that. Instead, I later ended up skulking around with some skanky PC I picked up at CompUSA's red light district. I'm sure fond of that slinky Mac, and PCs can keep my attention by parading around in NetBSD, or some indecent Linux rags. But even in the face of a new 64 bit whore of a PC, my true love will always be my Commodore.
I dream in 8 bits.
Re:Not really (Score:2)
Also, Apple's software emulator ran the 68k code on PPC at speeds that were roughly equivalent to the 68k. Itanium, when running 32bit and 64 bit programs at the same time, performs very poorly. Itanium also does not have the benefit of the MacOS engineering team that did a remarkable job making the transition seamless...
I hope that Intel finds a way to reduce the power consumption of their 64bit chips.
Re:The Underdogs (Score:2)
Re:The Underdogs (Score:2)
Because the success of the underdogs splits the industry and makes it less committed to any one party. Right now, the 386's instruction set is king of binaries. But in a future world of two mutually incompatible descendants of the 386 duking it out, software companies will be less able to commit to one or the other.
And if they don't/can't commit to a single instruction set, then they're going to have to deal with the problem some way. Scrapping the idea of native binaries is one way of dealing with it: ship source that the user has to compile, or ship some kind of intermediate pcode or Java bytecode that is cross-platform. Once the need for binary conformity is broken, then you can buy a real computer and run mainstream software on it.
Or maybe stay with binaries, but accept that you have to deal with more than one. Computer dudes only know three numbers: Zero, one, and many. You can get away with telling your customers "We only support one architecture and if you don't like it, then your money is no good here. We don't want the expense and complexity of dealing with more than one." But once you have to handle the situation of more than one, then you can also handle three or ten. Surely you can see where that could lead...
So back the underdog. AMD's success (and I think they will flourish with this CPU) will hold back progress for a while, but as long as it doesn't completely clobber Intel, and instead they end up splitting the market between them, it could lead, long-term, to progress.
Yes, and they have it: legacy speed. Gee whiz, you think AMD's overheating problem is really a big deal? Consumers have long tolerated silly things like that. If people cared about heat, most people would be running PPC or MIPS right now. They're not. If people cared about the short lifetimes of computers, then most would be running something other than MS Windows. They're not.
But you're right, those things matter a little, I guess, so Intel will have some customers. Good. If there's no clear winner, then the winner is us.
Re:The Underdogs (Score:2)
Re:Integrated Northbridge (Score:2)
With regards to locking a processor into a particular memory architecture, that shouldn't be a huge issue. For one, most processor architectures stay with the same memory architecture in the chipsets for a useful span of time. So a non-issue that way, IMHO.
Now, about changing CPUs and getting a better memory architecture, that's not extremely likely. A newer memory architecture will probably have different shielding/terminating/etc. requirements. The l33t motherboard manufacturers will probably build in enough headroom that their boards might be able to take the new memory architectures.
But that's virtually impossible to make work everywhere. If it works on my buddy's p1mp ASUS motherboard, then if I have a cheapie bargain-basement motherboard I'll expect it to work too. Except that the cheapie motherboard wasn't designed with headroom.
AMD nets one happy customer and one very pissed off customer. So they will probably change things or put configuration pins in there so that the first crop of DDR333 motherboards will do a maximum of DDR333, no matter what.
Plus, most rational people upgrade processor and motherboard at the same time anyways.
So it's probably a non-issue. I personally think the integrated northbridge has been a good idea for a while. I want a 4 or 8 CPU Hammer.
Re:Integrated Northbridge (Score:2)
Re:Just another extension? (Score:2)
If the Hammer cleans up, the K9 will build on it, leaving any possibility for a whole new platform to the K10. If the Itanic architecture starts to gain speed, the K9 will probably be an IA64 machine.
I think the key thing is that the instruction sets are mattering less. You can put optimizers in hardware that convert the messy x86 architecture to a nice RISC one. Think of the x86 architecture as a compression format for nice RISC opcodes. Or you can do various kinds of software morphing, which are getting more advanced as time goes on. The only real advantage of IA-64 is the likelihood of allowing the compiler to make better optimizations that will leverage the processor more.
Re:Applications (Score:2)
Re:Double-check your assumptions (Score:2)
> rest (such as those used to handle BCD
> arithmetic, hardly used today)
In fact, in Hammer's 64-bit mode, the BCD instructions (and some others) are not supported.
> the drawbacks are evident: higher complexity,
> power dissipation, etc.
Check out the heat dissipation on Itanium! One guy I know has a box that puts out 120W per CPU.
A simpler architecture is a nice thing, but experience seems to have shown that it doesn't matter that much in practice.
Re:A Better Topology (Score:2)
FUD (Score:2)
McKinley is the day after tomorrow.