AMD

More Details Emerge on AMD's Hammer

Diabolus writes "Anandtech has more information on AMD's upcoming Hammer processors." Talking with several engineers who are in the know about it, the Hammer looks pretty frickin' amazing. I suspect Itanium will be given a run for its money.
This discussion has been archived. No new comments can be posted.

  • I just knew the Hammer was up to it, but now I'm totally convinced. x86 isn't dead yet, and IA-64 will never live!
  • The Itanium is also going to face competition on clock speed, considering that it will be a much "slower" (MHz-wise) processor. As far as I know, AMD's Hammer chips will not be clocked lower than the Athlon XPs. This is one more area where AMD can gain a little more ground on Intel.
    • I really hope those in the market for Itanium machines are smart enough to look beyond MHz. On the other hand, most of them will probably be looking to install Windows, so my optimism may be unfounded.
  • It will probably work well until your fan happens to go out, then it catches your house on fire. Are they selling fire extinguishers with these things? Or case smoke detectors?
  • Has AMD unfairly optimized the processor for Quake 3?

    [/sarcasm]
    • Exactly.. If they have, then I'll probably get it just for that feature.

      Dual 1600 * 1200 displays would be quite nice -- so I can play Quake 3 in one and a Divx movie in the other...
  • And coming soon to a home insurance contract near you.

    An AMD clause: "If AMD CPUs are within the perimeter of the house, you aren't insured (Act of God?)" ;D
  • by Renraku ( 518261 ) on Wednesday October 24, 2001 @05:39PM (#2474638) Homepage
    Why is AMD making these things so sensitive to heat? I'll bet they're also sensitive to vibration, electricity, and just about anything else that its competitors handle every day. Most hammers can resist hundreds of degrees before they melt/disintegrate.
    • Re:AMD's Future (Score:3, Insightful)

      by connorbd ( 151811 )
      It's not that they're sensitive to heat per se; they just lack the safeguards Intel chips have. It's all on board on the P4, for example.

      /Brian
  • I know Linus has been talking about NUMA for 2.5 - looks like there's more and more reason for it... still, historically it's been a hard nut to crack well
    • Why shudder? It works sweetly on my IRIX boxes and not too shabbily on our Sequent cluster.

      SGI actually has a 64-node (128-processor) NUMA system working on their Origin MIPS line; you might want to check out http://oss.sgi.com/projects/numa/ . I think SGI is leading this charge, and as long as SGI can stay alive long enough, they'll have a good implementation. There's one thing I can say about SGI: their scalable NUMA tech is almost beyond reproach (too bad I can't get squat for third-party IRIX apps).

      Here's the link to the cat /proc/cpuinfo output from their 128-processor Linux NUMA system:
      http://oss.sgi.com/projects/LinuxScalability/download/mips128.out
      • With respect to the Hammer, it still works more like an SMP model than a NUMA model. It's not entirely SMP, but enough like SMP that the optimizations aren't as hard.

        Between this and Hyperthreads, new OS designs should be able to take advantage of at least multiple processors, even on the desktop. Of course, the Pentium was supposed to be the first CPU to enable SMP For The Rest Of Us, so we'll just have to see what happens.
    • Hardware-wise, multi-CPU Hammers will indeed resemble NUMA. Each CPU will be directly controlling its own set of DIMMS.

      However, from what I understood of the description, memory access should all be taken care of in hardware with no OS support. The CPU interconnects are supposed to make even remote memory transactions very, very fast, with not much more latency than for directly attached memory.

      Linux would therefore "need" no explicit NUMA code. It could still improve things a bit by setting a process's CPU affinity to the CPU that holds that process's memory locally, much like the affinity code already in place for keeping a process on the processor whose cache holds its data...

      Maybe someone else who knows more can weigh in on this, but to me it looks like a small issue.

      PeterM
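
      For reference, here's a minimal sketch of the kind of affinity pinning being described, assuming a current Linux/glibc that provides sched_setaffinity(); it's an illustration only, not anything from the article. It binds the calling process to CPU 0 so that, under a first-touch allocation policy, the memory it touches stays local:

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t mask;

            CPU_ZERO(&mask);        /* start with an empty CPU set      */
            CPU_SET(0, &mask);      /* allow this process on CPU 0 only */

            /* pid 0 means "the calling process" */
            if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
                perror("sched_setaffinity");
                return 1;
            }

            printf("pinned to CPU 0; pages touched from here on should be local\n");
            return 0;
        }
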
  • It looks like the future of CPUs is definitely 64bit+. The Itanium, Hammer, and G5 are all 64bit processors. However, it will be a long time before a lot of applications are rewritten to take advantage of 64bit architectures. In addition, some applications won't actually benefit at all, and are therefore unlikely to be recoded for quite a while. Therefore, how each of these processors runs legacy code is important.


    From the look of it, both the Hammer and the G5 can run old, 32-bit code natively. This means that today's apps will continue to be able to run at top speed on the new chips, because the instructions still exist in hardware. This is definitely good for people with lots of older apps (i.e., almost all of us). However, a lot of the reports on the Itanium seem to indicate that, in making a completely clean break, it is forced to emulate older 32-bit instructions, resulting in an actual -slowdown- for many programs. Eventually, Intel's clean break might give it some advantage, and that advantage might come quickly for the big-metal server market. However, it seems that AMD will be able to win out on the desktop. Of course, here we are comparing rumors about a rumored chip to a different unreleased chip; only Bob knows exactly what will happen between now and release time...

    • by jmauro ( 32523 ) on Wednesday October 24, 2001 @05:55PM (#2474695)
      Itanium can run unmodified x86 binaries, and in certain cases unmodified PA-RISC binaries as well. Look at the specs; there was no clean break. Intel learned with the i960 and the 8080 that clean breaks are not liked at all by those designing the systems. The x86 stayed around and will continue to stay around for as long as Intel does. Intel will have nothing else.
      • Yeah, I've seen the benchmarks. An 800 MHz Itanium is trounced by a 133 MHz Pentium when it comes to running x86 code. This hardly passes for backward compatibility.
      • Itanium can run unmodified x86

        Yes. Amazingly, though, it runs x86 slower than a software-emulation package on competitors' RISC chips.
      • Are you willing to spend over $3000 for P100 speeds for your x86 code?

        Neither is anybody else. The emperor has no clothes.

    • However, it seems that AMD will be able to win out on the desktop

      OK -- where's the software support? Where's Windows/AMD64? Where's the need for 64-bit desktop chips?

      I like AMD's strategy in theory; however, it will be marketed like a box of Cheerios that says "NEW - Now With More Bits!!" with nothing really to back it up.

      (I should note that Apple has a similar problem with the G5, except they will ship native OS support, and it's conceivable that a 64-bit CPU will have an advantage for media applications, which is pretty much their only market. Will 64-bit Quake or 64-bit OpenGL drivers help that much?)
    • It looks like the future of CPUs is definitely 64bit+

      No, the present is 64bit+. The peecee is the only type of workstation or server still shipping with 32-bit CPUs. Sun killed their last 32-bit workstation in 1998. Alpha's been 64-bit forever. SGI has shipped 64bit CPUs since the Indy/Indigo2 and has been running 64bit IRIX by default on everything since last year (the holdout? O2, interestingly enough, because of bugs). I could go on... The reality is that you can already buy 64bit workstations running 64bit OSs with good performance for less than $1000, and in some cases only a few hundred.

      The reality is that the peecee is way behind the times.

      Therefore, how each of these processors runs legacy code is important.

      Very true. Unfortunately neither is really getting it right. For examples of how to support mixed 32bit and 64bit binaries and even OSs on the one 64bit CPU, see the MIPS3 documentation. For a cleaner transition that required changes at the OS level only, take a look at the SPARC V8 -> V9. Can you say "seamless?" I knew you could.

      The trick, naturally, is to design a proper instruction set to begin with. Then you can extend and enhance it easily without having to break backward compatibility. Too bad Intel didn't realize that.

      • The trick, naturally, is to design a proper instruction set to begin with. Then you can extend and enhance it easily without having to break backward compatibility. Too bad Intel didn't realize that.

        The SPARC and many other RISCs had a "seamless" 32 -> 64 bit transition mostly by doing two things.

        1. They added 64-bit load and 64-bit store instructions (the existing loads and stores remained 32 bits). All the other stuff (register-to-register instructions) went to 64 bits.
        2. They made large (incompatible!) changes to the supervisor mode. This only matters to the OS and boot loader, and Sun owned the dominant OS on the SPARC boxes, SGI owned the dominant OS on the MIPS boxes, and they made all the changes to the OS as needed.

        There is no reason Intel/AMD couldn't make new 64 bit load and store instructions, and redefine all references to EBX (and the other 3 registers) to be 64 bits. That would work just fine.

        The part that would suck is that Intel and AMD do not own the OS, or even the boot loaders that run on their CPUs! MS and a handful of BIOS makers do. They would have to be convinced it is worth it to do anything.

        NOTE: I'm not saying the x86 instruction set is anything close to well designed. It is a shambling horror, but extending it to 64 bits is not really harder than extending the SPARC to 64 bits. In fact, if you look at what AMD did, it is a pretty easy change (and I think the article is wrong: you can use the 4 new GPRs without having to do any 64-bit stuff, but the OS still needs to be changed to save and load the extra registers).

        Intel merely decided the 32-bit to 64-bit change seemed like a good time to try to make a play for the high-end market, and to do that with a new instruction set. That might have even been a good idea if they hadn't screwed it up enough that the Itanium earned the nickname "Itanic"...

  • There is some criticism of the Hammer chip regarding its ISA support. There is a more basic problem here, caused by the limits on what you might want to add on versus what you want integrated into the motherboard. The problem of what to do with the old ISA bus is mostly an issue of the old installed base. But it is still useful for some basic cards.

    The limited number of PCI slots (on home systems) vs ISA slots makes it an issue for people who want to have a system like this

    1. PCI SCSI
    2. PCI Modem
    3. PCI Firewire
    4. PCI IDE Accelerator
    5. PCI NIC
    6. PCI Sound Card
    7. etc
    I presume the video is AGP.

    Yes, I know people who would do things like that. Ultimately this one guy will have his capabilities spread over two systems, because he cannot fit it all into one, not without a major balancing act.

    • This has nothing to do with what bus is supported. Hammer is continuing and expanding on the x86 instruction set. It has nothing to do with the old ISA (Industry Standard Architecture bus).

      Motherboard makers are free (or not) to put an ISA bus on the board. I'd be surprised to see such a board by the time Hammer ships, though.
    • Or he could buy a good motherboard with decent onboard IDE, NIC, Sound, even Modem (or get an external modem), and Firewire. Then all he has to do is put in a SCSI board. Even that can be integrated on an expensive board...

      Sorry but your example only holds water for people stuck in the stone age of motherboards. Some motherboards have good integrated peripherals. People who want everything on a card can buy two or three systems as far as I'm concerned. Who cares about the few nimrods who want to do this?

    • The limited number of PCI slots (on home systems) vs ISA slots makes it an issue for people who want to have a system like this

      1. PCI SCSI
      2. PCI Modem
      3. PCI Firewire
      4. PCI IDE Accelerator
      5. PCI NIC
      6. PCI Sound Card
      7. etc
      I presume the video is AGP.


      Gee, I forgot what it means not to own an Apple PowerMac. All those items you mentioned are stock on my Dual G4/500 motherboard, excluding my Adaptec SCSI PCI card. I feel for you, man; I'd hate to be saddled with ISA slots. What a waste.
    • You're confused. In this context ISA means Instruction Set Architecture not ISA bus. It is the job of an IO controller chip (traditionally the South Bridge) to provide IO buses. The CPU has nothing to do with it unless it's an embedded or system-on-a-chip type of thing.
  • by jacexpo069 ( 521719 ) on Wednesday October 24, 2001 @05:50PM (#2474676) Homepage
    Even before the processor is out, NetBSD already runs on it. See here [netbsd.org]
  • Hammer will rock! (Score:3, Insightful)

    by Glock27 ( 446276 ) on Wednesday October 24, 2001 @05:56PM (#2474704)
    Linux has already been ported to the simulator, and supports 511 GB of memory per process. That should do for a start!

    Each feature of the Hammer taken alone is evolutionary, but the overall effect should be revolutionary (at least with regard to Intel server market share;).

    AMD stock is looking like quite a bargain at around $10/share... :-)

    299,792,458 m/s... not just a good idea, it's the law!

  • Itanium, etc. (Score:4, Interesting)

    by ackthpt ( 218170 ) on Wednesday October 24, 2001 @06:06PM (#2474745) Homepage Journal
    While the thought of Itanium duking it out with Hammer may encourage visions of one company stomping another, plus heated discussions, flame wars, and so on, my interest has always been in having a 64-bit desktop. Intel indicated some time back that the Itanium was targeted exclusively at the server market, but is likely rethinking that point. Perhaps McKinley (the joint project with HP) is Intel's idea of the post-P4 desktop processor, as I've seen elsewhere that Itanium's x86 emulation makes a PIII look attractive.

    The ability to build a desktop workstation that can run all my old x86 crap, fast, and move into 64-bit software, also fast, is highly attractive. Athlon or P4 will undoubtedly be the choices for the next year, but when AMD gets the Hammer out into the mainstream at a mainstream price, Intel watch out.

    Lastly, Microsoft, last I read, didn't indicate any interest in doing a version of XP for the Hammer. Perhaps that hasn't changed. If not, there's a potential hole through which someone may exploit Microsoft's disinterest. Linux, sure. AOL, Hmmm, you know there's a mean fight going on between Reston, VA and Redmond, WA; if the Hammer is attractive to home users, don't be surprised if AOL chooses to support it. It's entertaining to think about, anyway, however you feel about the combatants.

    • Re:Itanium, etc. (Score:2, Interesting)

      by Glock27 ( 446276 )
      While the thought of Itanium duking it out with Hammer may encourage visions of one company stomping another, plus heated discussions, flame wars, and so on, my interest has always been in having a 64-bit desktop. Intel indicated some time back that the Itanium was targeted exclusively at the server market, but is likely rethinking that point.

      Itanium isn't just for the server market now. IBM [ibm.com], SGI [sgi.com] and several others are marketing Itanium technical workstations. Intel has also stated that it sees Itanium making it to the desktop at some point in the future, replacing x86.

      Hammer, on the other hand (specifically Clawhammer) has always been targeted at the desktop from the get-go (along with server and workstation). Check it out on the AMD processor roadmap [amd.com] (which I just managed to find again;).

      Another point to keep in mind is that the ability to compete in the server marketplace is a key for AMD. It will provide them with the same ability as Intel to subsidize desktop processors with expensive server processors. Right now Intel can sell P4s at a loss and still turn an overall profit, while AMD suffers. Once Hammer ships, the dynamic will change quite a bit... ;-)

      Perhaps McKinley (the joint project with HP) is Intel's idea of the post-P4 desktop processor, as I've seen elsewhere that Itanium's x86 emulation makes a PIII look attractive.

      I thought McKinley was just the .13 micron version of Itanium, perhaps with more cache. Does it have an enhanced ability to do IA32?

      The ability to build a desktop workstation that can run all my old x86 crap, fast, and move into 64-bit software, also fast, is highly attractive. Athlon or P4 will undoubtedly be the choices for the next year, but when AMD gets the Hammer out into the mainstream at a mainstream price, Intel watch out.

      I couldn't agree more!

      Lastly, Microsoft, last I read, didn't indicate any interest in doing a version of XP for the Hammer. Perhaps that hasn't changed. If not, there's a potential hole through which someone may exploit Microsoft's disinterest. Linux, sure. AOL, Hmmm, you know there's a mean fight going on between Reston, VA and Redmond, WA; if the Hammer is attractive to home users, don't be surprised if AOL chooses to support it. It's entertaining to think about, anyway, however you feel about the combatants.

      I think Linux will be a strong presence on the Hammer, along with, potentially (wild prediction here), MacOS X. Microsoft will support it as soon as it begins to take market share, like the US Rangers taking Omar's palace (not that I particularly care whether Microsoft supports it). As for AOL, it should just get busy porting its interface to Java like it said it would a year or so ago. That alone would be a big blow to Microsoft, and it would simplify software development quite a bit for AOL, as well as substantially widening the number of platforms AOL runs on.

      299,792,458 m/s... not just a good idea, it's the law!

      • Re:Itanium, etc. (Score:4, Interesting)

        by maraist ( 68387 ) <michael.maraistN ... m ['AMg' in gap]> on Wednesday October 24, 2001 @07:43PM (#2475189) Homepage
        I thought McKinley was just the .13 micron version of Itanium, perhaps with more cache. Does it have an enhanced ability to do IA32?

        McKinley is a whole mess of add-ons, not least of which is that it can issue more EPIC instructions per clock than the Itanium. The original idea was that Itanium would champion the instruction set but be an unwieldy beast with all its new features; still, it would be enough to transition the marketplace (too bad its practical performance sucked). McKinley would then be the knock-out punch that fully utilized the instruction set's potential (though at greater cost, due to the increased number of components). From there, Itanium would be the low end that allowed "entry-level servers." Then they'd have time to go design new features for their next [incremental] generation... Their EPIC instruction set has templates, so adding whole new classes of functionality "should" be trivial.

        'Course, I don't think they expected having to relegate Itanium to a "pilot" CPU with embarrassingly low frequency ratings (but MHz is all that matters, right, Intel?). It doesn't sound like the P4 guys are under the same marketing department as the Itanium guys (GM in the making?)

        -Michael
    • AOL, Hmmm

      It seems to me that power users and businesses would have most of the interest in using 64-bit processors.

      AOL's target market probably has more modest requirements and maybe AOL should be looking into buying up XBoxes, loading them up with Linux and Mozilla, and selling them as set-top surfer boxes.
    • my interest has always been in having a 64-bit desktop.

      And you need access to 16 exabytes (or 8 with signed pointers) of address space in your desktop applications because...? (Not total memory, but memory per application, as you can have more than 4 gigs of memory on an x86 processor in a single machine.)

      I don't know where this idea that 64 bit memory addressing makes programs run faster came from, but there is nothing inherent about 64 bit addressing that would make it faster for your average integer based desktop applications.

      Of course, I guess it all depends on your definition of a "64 bit" chip architecture. I tend to define it as an architecture whose registers, data bus and ALU are all 64 bits wide.

      I don't know about you, but unless I need more than 4 gigabytes of memory per process or I'm doing some heavy floating point where I need 64 bits of precision, I don't particularly want my data structure heavy applications using up to twice the memory they used to.

      Of course that's just my opinion; I could be wrong.
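
      To put a rough number on the memory-bloat point, here's a tiny, hypothetical C illustration (not from the article): compile it with gcc -m32 and again with -m64 and compare the size of a pointer-heavy structure.

        #include <stdio.h>

        /* a toy linked-list node: mostly pointers, a little payload */
        struct node {
            struct node *next;
            struct node *prev;
            char        *name;
            int          value;
        };

        int main(void)
        {
            printf("sizeof(void *)      = %zu\n", sizeof(void *));
            printf("sizeof(long)        = %zu\n", sizeof(long));
            printf("sizeof(struct node) = %zu\n", sizeof(struct node));
            /* typically 16 bytes with -m32 vs. 32 bytes with -m64 (padding included) */
            return 0;
        }
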
      • One nifty thing about 64-bit memory addressing that is often missed is that it makes OS design tons easier. First, you can just contiguously map all of physical RAM permanently instead of dynamically mapping in needed regions (like Linux highmem). Even on many desktop machines, people are running up against Linux's 1GB kernel-space address limit and having to use the more complex highmem code. Also, the 4GB address space of 32-bit processors can become extremely limiting when you have to deal with memory-mapping large files and such. Lastly, library management becomes tons easier. Usually, libraries on 32-bit systems have to be relocated because the bases of their compiled images can conflict with those of another library. On a 64-bit arch, it is feasible to assign each library a unique base address and never have to relocate after the first time.
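
        The "unique base address per library" idea is easy to play with. Here's a rough sketch (the base addresses are made up for illustration and nothing here comes from the article): on a 64-bit Linux box there is enough address space that mmap() hints placed terabytes apart are normally honored, while on a 32-bit box they can't even be expressed.

          #define _GNU_SOURCE
          #include <stdio.h>
          #include <sys/mman.h>

          int main(void)
          {
              /* hypothetical per-library base addresses, placed far apart */
              void *base_a = (void *)0x100000000000UL;   /* ~16 TB */
              void *base_b = (void *)0x200000000000UL;   /* ~32 TB */

              /* map two 1 MB "library images" at their assigned bases
                 (plain hints, no MAP_FIXED; error checking elided) */
              void *a = mmap(base_a, 1 << 20, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              void *b = mmap(base_b, 1 << 20, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

              printf("asked for %p, got %p\n", base_a, a);
              printf("asked for %p, got %p\n", base_b, b);
              return 0;
          }
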
  • There isn't any mention in the article about the expected prices of the Hammer, so I thought I'd ask here. What are the price expectations for a processor like this? I mean from the specs alone (with so much stuff integrated into the die), it's going to be a fairly big beast.

    Does the fact that it is new technology, and that it has a big (or bigger) die size, automatically mean it's going to be very highly priced? I remember when the P2 came out: I paid CAD $1200 for the 300 MHz part, about 2 weeks after it was released. Now the P4 costs about the same (although a bit less than that) for the highest speed (2 GHz?).

    So my question is this: will this processor be affordable (somewhere between a top of the line Athlon and a P4), or is it going to be much more? I think it's a very safe bet to assume that it will cost more than the Athlon.

    If somebody has a real answer for this, please reply. It would be interesting to hear some opinions from the more knowledgeable.

    • What are the price expectations for a processor like this? I mean from the specs alone (with so much stuff integrated into the die), it's going to be a fairly big beast.

      Expect AMD's Hammer chips to cost much less than Intel's Itanium CPUs. Intel spent several years and likely billions to develop the Itanium, whereas AMD needed only about a year for the Hammer. The first model will be the Sledgehammer, targeted at servers, so those won't be exactly cheap. But the second model, the Clawhammer, will be for workstations/desktops and probably comparable in price to current high-end Athlons.
    • The price will be dictated by the market. The first Hammer release will likely be the server version, SledgeHammer, which will be priced to be competitive with Itanium and P4 Xeons. The desktop version, ClawHammer, will start out pricey as AMD looks to clear existing Athlon stock. It'll probably still be at a similar or slightly lower price than the top-of-the-range P4s at the time. It is meant to be a replacement for the Athlon, so it will eventually need to be priced accordingly (i.e., cheap) to succeed.
  • someone has to say the "M" word...marketing.

    I have yet to see an AMD commercial, and word of mouth (yes, even mine) only carries so far.

    AMD processors are simply incredible, IMO, but how do you get the word out? Marketing, commercials and ads.

    It is a simple question, really. What is the point of having such a great processor, if no one knows it?

    I think a simple commercial like this would work wonders:
    Open on a little TV playing the P4 "blue man group" commercial... have a "sledge hammer" and a "claw hammer" (both with big AMD stickers) smash the TV into oblivion.
    (fade to black with the AMD logo and a "well known voice")
    The AMD Hammer series and XP series, smashing "you know who's" higher numbers.
    Power is *sexy*, AMD.

    Or, as a demo, use the ending of the car race from "The Fast and the Furious".
    AMD would be the black Toranado(?) and Intel the Honda(?)... raw horsepower vs. high RPM and technology+"cheats" (inflated GHz = NOS, perhaps).

    Essentially, it was a tie.

    Draw your own conclusions, or come up with something better.

    Moose, out.

  • I had to do a review of IA-64, and I wanted to know what AMD's response to Intel's 64-bit CPU was and what was behind the "old" generations.

    Currently, 2/3 of a CPU is used to analyse/understand/reschedule the code sent to the CPU. This part is very important, and AMD seems to be better at this game than Intel. The code has to be rescheduled so that the different parts of the CPU that can work at the same time are efficiently loaded...

    OK, let's stop right now: why isn't the code already efficient? Because the compiler does NOT care about the inner structure of the CPU, so the CPU has to do all the real work.
    By keeping the "good old architecture", AMD is trying to do in hardware and in real time what software (let's say a compiler :)) can do much more easily, with as much time as it needs. And a CPU can't see more than a few operations ahead whereas the compiler can see the WHOLE code.

    So, by removing all the optimisation crap from the CPU and showing the compiler what's really inside, Intel is on the right track. In current CPUs, you have more than 40 registers, but you can access only 8 of them, and the CPU has to "guess" what the best use of them could be.

    So, I think Intel's approach is the right one. Just recompile all your software: to run old stuff, use old hardware.

    I have datasheets and documents to comment on this, and I would gladly do it.
    • > AMD is trying to do in hardware and in real time
      > what software (let's say a compiler :)) can do
      > much more easily, with as much time as it needs.

      NO. It is very hard for a compiler to accurately predict what will happen at run time (for example, which loads will hit in the cache and which will miss). It is much easier for the CPU to collect, predict and use this information at run time.

      IA64 pushers talk all the time about how smart the compiler "can" be, but they don't actually have any such smart compiler. That is why their performance sucks.

      Furthermore compilers are not going to get much smarter in the near future; just because the technology is needed does not mean it will suddenly appear. Compiler researchers aren't stupid and they haven't been sitting on their hands for the last forty years.

      > And a CPU can't see more than a few operations
      > ahead whereas the compiler can see the WHOLE
      > code.

      ... until the program makes a call into a shared library that was compiled by someone else.

      > Just recompile all your software: to run old
      > stuff, use old hardware.

      Uh huh. So every single time a new chip comes out, Microsoft et al are going to release new compiled versions of all their software. I don't think so.
    • And the next time you change the internal structure of your CPU, everyone with binaries optimized for the older structure is screwed unless they recompile...
  • Reading the discussion of improvements to the branch prediction, I had an idea: might it be useful to add some new branch instructions, which serve as hints to the branch prediction hardware?

    Suppose you have a branch on checking the error code returned by a function. That is what the article called a "static" branch: it almost always branches one way, assuming the function rarely fails. The Hammer will try to detect static branches, but might it be useful to let the compiler use different instructions, the static branch instructions, to tell the branch prediction hardware to assume a certain branch is static?

    I guess I don't have a good handle on how difficult it is for the branch prediction hardware to sort out static branches vs. the other kind. Would the new instructions help enough to be worth the costs of extra instructions?

    steveha
    • might it be useful to add some new branch instructions[?]

      I received an email telling me that the IA64 already has this: you can specify a static branch that is likely to be taken, a static branch that is unlikely to be taken, and dynamic branches in both likely/unlikely flavors.

      Also, even on x86, there are some tricks worth doing. The Linux kernel hackers have started using likely() and unlikely() macros around some branches in the kernel source. GCC can arrange the generated code somewhat differently and it will do some good.

      steveha
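
      As a concrete illustration of those macros (a minimal, standalone sketch, not the kernel source; the read_config() function is made up for the example), they boil down to GCC's __builtin_expect(), which lets the compiler move the rarely-taken error branch off the hot path:

        #include <stdio.h>
        #include <stdlib.h>

        /* same idea as the kernel's likely()/unlikely() macros: tell GCC which
           way the branch almost always goes, so the common case becomes
           straight-line code */
        #define likely(x)   __builtin_expect(!!(x), 1)
        #define unlikely(x) __builtin_expect(!!(x), 0)

        static int read_config(const char *path)
        {
            FILE *f = fopen(path, "r");

            if (unlikely(f == NULL)) {   /* the "static" error branch: almost never taken */
                perror("fopen");
                return -1;
            }

            /* ... common path: parse the file ... */
            fclose(f);
            return 0;
        }

        int main(void)
        {
            return read_config("/etc/hostname") ? EXIT_FAILURE : EXIT_SUCCESS;
        }
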
  • OSes and compilers. If MS doesn't support the Hammer architecture in its OSes and compilers, then AMD is screwed. You can talk all you want about "here's a great chance for Linux to hit the desktop." Ain't gonna happen. Look, I love Linux as much as the next guy, but it's not ready for the desktop. The people that run Linux are primarily programmers and geeks.

    For a really viable chip, you need the support of the mainstream, and like it or not, that's Microsoft. If they don't support it with their OSes and compilers, then this will be the death of AMD. I'd hate to see that, but those are the facts.
    • Actually, Hammer doesn't need OS and compiler support from MS. Hammer runs 32-bit code and existing software fast.

      It would HELP if MS had their OS and compiler support Hammer's extensions, but even if MS sits on its ass, that huge legacy market will belong to AMD.

      And what about the server market? Well, the server market is *much* more accepting of non-MS operating systems.

      I do not see lack of MS support as a certain sign of doom for AMD....
  • http://www.x86-64.org/ [x86-64.org]

    An AMD sponsored web site with the goal of porting free/open-source software to x86-64. Self-serving publicity stunt? Maybe, but it's nice anyway, and certainly more than we can ever expect to see from Intel.
  • On the one hand you have Intel, who is trying to move into *completely* new territory, at least as far as breaking with the x86 past. Scary? Very. When Apple transitioned from the 68K to the PowerPC it was rough going for a long time. The PowerPC was much better for native applications, but those took their time in showing up. And it was much, much slower than a real 68K machine when it was emulating older code.

    AMD is taking the incremental improvement route, which makes a lot of sense. But can the non-standard x86 extensions--practically a whole new processor in itself--ever be more than a niche? The 3DNow! extensions were more a novelty than anything else. Some drivers used them, most programs didn't. It's difficult as it is to support all the different computers running similar chips without getting into extensions that only work on a certain percentage of them. Is it worth shipping 64-bit Hammer code just for one market segment? It's not just a recompile; it's an entirely separate QA cycle. Thinking about hobbyists: Will they have both Itaniums and Pentiums around for testing?

    And then there's the nagging doubt that we're talking about chips that are already so fast that no one cares--except a certain fanboy crowd--so now we're talking about the difference between 10x more speed than I know what to do with and 20x more speed than I know what to do with. Sure, games and some crazy high-end airflow simulation, but that raises the question: is it worth overturning the entire PC market just for those two minorities?
    • You and all the other idiot whiners who think they have "too much power" need to stop and think about what you are saying. Can you do realtime JPEG2000 encoding of 1920x1080 at 60fps on your computer? I didn't think so. Can you even do realtime DivX encoding at 320x240? No, you cannot. If you could, you would be much better suited for video conferencing at a higher quality. 3D rendering (non-realtime) will not have enough speed for at least the next 20 years, probably more.

      Some people said the same thing about 486s, Pentiums, and everything else, until MP3s came about, until DivX and MPEG-2 (which still use dedicated hardware) came about, until emulation came about, until desktop publishing came about, until digital video editing came about, until GUIs came about. JPEG2000 creates an image about a quarter the size of a JPEG with better quality, but it is very slow to decompress and compress. It will hopefully replace JPEGs, and then your web pages will load slower.

      Just because you don't use your computer's power doesn't mean that there aren't other people pushing the envelope with every extra bit of power they get. I am getting tired of answering posts like this just because you can't think more than 6 months down the road.
      • You and all the other idiot whiners who think they have "too much power" need to stop and think about what you are saying.

        Sigh. I am a software developer. I write applications in Lisp. I make heavy use of graphic arts tools like Corel Draw. I also use 3D modelling packages. What machine do I do all this on? A 333MHz Pentium II that I bought new in 1998.

        Do I have _any_ speed complaints at all? None. It is a zippy system. I can recompile the Lisp system I use--which is written in Lisp--in twenty seconds. I also do a lot of work in Delphi and I've never had a perceptible compile time yet (read "for all intents and purposes, compile time is instantaneous"). Corel Draw just zips along. The 3D modeller is more dependent on the video card than anything, so I put in a GeForce 2 and haven't had any--and I mean *any*--issues with speed.

        People who talk of using the power of their 1.4 GHz processor don't have a clue. They like to think that they are a power user of some sort, and in all honesty they don't want to hear otherwise.
        • Sorry, I have to agree with donglekey here. I too am a software developer, in the digital content creation market, and yes, I (and my customers) want every CPU cycle I can get my grubby hands on.

          I run a dual Xeon 1.6 GHz machine, and it isn't enough. If we could afford a 6-CPU Alpha AXP box, it wouldn't be enough either. My customers use render farms of 100+ CPUs @ 1+ GHz each, and even that still takes days, nay, weeks to render the hundreds of layers of globally-illuminated 3D that they use. Sure, I can compile adequately fast (though a full build of our whole software tree still takes hours), but to test my image processing code on a sequence of 200 MB film-resolution images requires considerable patience.

          Just because your needs don't require anything more than last year's gfx card and last decade's CPU does not mean others are happy to sit around and wait for their more complex tasks to complete. More CPU power means more possibilities. That's why we can now produce visual effects like Final Fantasy, Swordfish and SW:TPM instead of Tron and Wargames, to pick examples from just my industry out of hundreds.

          In 1980, I read a column in an early computer magazine wondering why people were so keen on the newfangled 16 bit CPUs, with awesomely powerful 32 bit CPUs on the horizon too! He felt that his 4 MHz Z-80 ran his CP/M word processor & spreadsheet quite adequately, thank you very much. Perhaps you too would be happy with that setup for your current line of work?

  • by thorsen ( 9515 )
    I have been working for SuSE Labs on the X86-64 port for about a year now, and I thought you might be interested in hearing about the state of the port.

    Back in March we saw the first printf("Hello World\n") succeed in the simulator. This is quite a big thing, because it needs a working compiler, binutils, glibc, and kernel. Since then we have steadily improved the system. By now we're running a full-fledged Linux system in the simulator. The system is partly 64-bit and partly 32-bit. We will use the native 32-bit capabilities of the chip and run 32-bit binaries where that makes the most sense (who needs a 64-bit ls when a 32-bit ls handles 64-bit filesystems fine?).

    By now gcc (C and C++ support), binutils, glibc, gdb, the kernel, ncurses, bash, util-linux, vim etc. have all been ported almost completely. And X runs happily in 64 bit too. Now we need the desktop systems, apache, databases etc.

    Shameless plug: I'm giving a one-hour talk about Linux on X86-64 at Linux World Frankfurt next Tuesday, October 30th. There I'll show the system running, give an overview of what porting Linux involves, and describe the new features for Linux that we have implemented.

    Bo Thorsen,
    SuSE Labs.
  • It has 64-bit support just because Intel thought it would be great to put it in, but the MAIN point of the Itanium is the EPIC instruction set: move back to simple RISC-like instructions and let the compiler do all the math about branch prediction, etc. For example: when you have a program compiled with a good EPIC compiler, you'll have 8 instructions executed PER CLOCK, thus in theory running your program on 8 CPUs at once. It's 64-bit too, but that's just a "nice feature", not the main issue.

    Then, looking at the Hammer: AMD offers 64-bit as its main new feature, while keeping the fat x86 instruction set. Nice, but not a product that will survive the next 10 years; it brings in a quick buck now, but means a slow death in the long run...
  • If x86-64 succeeds, would it be possible to get rid of x86-32? Could SSE and SSE2 be used to get rid of x87 entirely? It would be much easier to compile code for a CPU with 16 64-bit integer registers and 16 128-bit FPU/SIMD registers with direct access, without stack or similar kludges. Once the OS and most of the apps we use support x86-64, AMD could sell a "crippled" Hammer that would be missing x86-32 support, including x87 - you'd have to emulate x86-32, but hopefully you'd only need that for old 32-bit apps, so performance would be good enough. You would need a new BIOS that doesn't require 16-bit or 32-bit instructions to boot, but your CPU could be cheaper. I don't know if Hammer enforces the use of only "new" instructions when in x86-64 mode, but I would hope so. The question is how much of a burden the unneeded part of the ISA is in the end.

    By the way, does anybody know if I can run Hammer in 32- and 64-bit modes "simultaneously", so that some of my apps are fully 64-bit and the rest are legacy 32-bit apps? Does it seriously hurt my task-switching performance that the processor is running in different modes for different processes? Will Hammer be faster for 32-bit or 64-bit code? If I don't need a 64-bit address space, should I compile my code for 32-bit instead for better performance? I would guess that even though 64-bit instructions are a bit harder to execute due to 2x memory requirements, the increased register count would balance things out.

    According to the article, when Hammer is working in an MP system each CPU handles part of the memory; should the OS be able to send an application to a specific processor according to the physical memory it has allocated, instead of the current load of each processor, for best performance? If so, does any OS currently support this kind of arrangement? How hard would it be to make Linux support this?
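
    On the last question: today's Linux exposes something close to this through the separate libnuma userspace library (an assumption: libnuma is installed and linked with -lnuma; none of this comes from the article). A rough sketch of allocating memory on a specific node and keeping the process on that node's CPUs:

      #include <numa.h>      /* libnuma; link with -lnuma */
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          if (numa_available() < 0) {
              fprintf(stderr, "no NUMA support on this kernel\n");
              return 1;
          }

          int node = numa_max_node();   /* pick the highest-numbered node */

          /* keep the buffer and the process that touches it on the same node */
          char *buf = numa_alloc_onnode(64 << 20, node);   /* 64 MB on that node */
          if (buf == NULL) {
              fprintf(stderr, "numa_alloc_onnode failed\n");
              return 1;
          }
          numa_run_on_node(node);       /* restrict this process to that node's CPUs */

          memset(buf, 0, 64 << 20);     /* all accesses stay node-local */
          numa_free(buf, 64 << 20);
          return 0;
      }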

  • In case you didn't know it, Athlon processors don't run x86 code natively. They decode the variable-length x86 instructions into an internal fixed-length RISC format. The CPU core then executes these RISC instructions.

    What I think needs to be done in the next couple of generations of Athlons is to allow programs to bypass the x86 decode stage and access the RISC core directly. This would allow the chip to run legacy x86 executables, as well as new RISC executables, in a completely transparent manner. After a couple of years, the x86 decoder could be phased out of the primary product line. This would reduce cost significantly, considering that (IIRC) about 20% of the transistor count on the Athlon is dedicated to the x86 decoder.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...