AMD

If I Had a Hammer

adpowers writes: "Anandtech is running an article about their preview of AMD's Hammer. They had one machine running 32-bit Windows and the other running 64-bit Linux. The Linux machine had a 32-bit program and an identical program compiled for 64-bit processor support. Both processors were less than 30 days old and running without any crashes, though not yet at full speed." We ran one Hammer story a day or two ago, but several more have been posted since then (wild guess: the NDA expired). Tom's Hardware has a story; so does Gamespot.
  • by tempmpi ( 233132 ) on Thursday February 28, 2002 @06:16AM (#3083156)
    The extended paging bug wasn't a simple CPU bug; it was a complex interaction between the CPU, chipset, and video card. Because the Hammer has a very different I/O architecture than the current Athlon, the parts of the CPU and chipset that caused the bug should be new designs anyway.
    AGP seems to be a problem on the first samples, as all of the demonstration systems were running without AGP video cards.
  • by XBL ( 305578 ) on Thursday February 28, 2002 @06:22AM (#3083167)
    Umm, first of all, it's hard enough to engineer a 64-bit CPU with related components. Then there are the manufacturing details, etc., etc. From that standpoint, it's not economical to try to do a 128-bit CPU now.

    Second, there is no point in 128 bits for software right now. We are going to have a hard time even writing software that requires a 64-bit processor. If we were stuck on 32-bit processors for another 5 years (yet with increasing speed), I really doubt that we would be much further behind.

    I am no expert, but I can't even begin to see the need for 128-bit processors right now. It's better to focus on making the current designs faster.
  • by Anonymous Coward on Thursday February 28, 2002 @06:29AM (#3083182)
    As you double the width, you increase memory consumption without necessarily also doubling the performance. Going from 8 to 16 bits and from 16 to 32 gave you better instruction set maps; going from 32 to 64 didn't offer much more, and going to 128 bits is, for general-purpose processors, more costly than beneficial.

    For instance: when you switch tasks you have to save the old registers. Numerous and huge register spills (as this is called) cost a lot of bandwidth and time and cut into your latency.
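
    A rough illustration of that cost in C (a minimal sketch with made-up register counts, not any real CPU's register file): doubling the register width doubles the bytes every task switch has to move.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical register files: 16 general-purpose registers each. */
    struct ctx32  { uint32_t gpr[16]; };    /* 32-bit machine:  64 bytes */
    struct ctx64  { uint64_t gpr[16]; };    /* 64-bit machine: 128 bytes */
    struct ctx128 { uint64_t gpr[16][2]; }; /* 128-bit machine: 256 bytes */

    int main(void) {
        /* Each task switch stores one context and loads another, so
           wider registers cost memory traffic on every switch. */
        printf("32-bit context:  %u bytes\n", (unsigned)sizeof(struct ctx32));
        printf("64-bit context:  %u bytes\n", (unsigned)sizeof(struct ctx64));
        printf("128-bit context: %u bytes\n", (unsigned)sizeof(struct ctx128));
        return 0;
    }
    ```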

    For graphics processors, 128-bit datapaths can make sense, yet 128-bit instructions are enormous, even for VLIW. For microcontrollers, 8 bits is still very much in use. For DSPs you also see funny bit lengths such as 24, 48, 56, and 96 bits.

    These are common topics in news:alt.arch, which is nominally about computer architecture, though it usually looks more like computer archaeology. Current topics include the PDP-10 (almost always), the VAX, and the M68000.
  • by Anonymous Coward on Thursday February 28, 2002 @06:51AM (#3083231)
    Heh... why do I only equate that with laughter, with my engineering degree in mind? That link is the wrong image; try this [anandtech.com] one instead.
  • by ToLu the Happy Furby ( 63586 ) on Thursday February 28, 2002 @06:58AM (#3083242)
    > 64 bits should be enough for anyone.
    >
    > No really, I mean it.


    Clever, Ed. For those who don't get it, he's quite right: 64 bits *will* be enough for anyone.

    For those still stuck in mid-90s video game wars, "bit-edness" in the real world refers (technically) to the size of your general-purpose integer registers, which, for most intents and purposes, determines how much memory you can easily and quickly address. 32-bit addressing tops out at 4GB, a value which is often too small for e.g. large databases, which thus tend to live on 64-bit big iron machines. (MS has a hack to give x86 processes access to 36 bits of space, but it requires OS intervention.)

    64 bits, on the other hand, works out to 16 billion GB. (That's 16 exabytes, IIRC.) For reference, that's roughly 40 times as much memory capacity as the worldwide annual DRAM production (of all types, for all markets) at this January's rate [dramexchange.com].

    I don't have the figures on hand for hard drive production, but as a first approximation I would guess that 16 billion GB is not quite equal to the total amount of digital storage of all kinds manufactured throughout computing history up until today. (I'd guess it's too small by a factor of 3 or so.)

    In other words, it's quite a lot. Presumably computing will have run into some very different paradigm (wherein the bit-edness of the "CPU" is no longer an applicable term) before any computer has a use for >64-bit addressing.

    (FWIW, today's 64-bit processors don't offer all 64 bits of addressing yet, because no one needs more than 40-something bits, so that's what they offer.)
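
    The arithmetic behind those figures, as a small self-contained C sketch (the shift counts are the address widths discussed above; "GB" here means 2^30 bytes):

    ```c
    #include <stdio.h>

    int main(void) {
        /* 32-bit addressing: 2^32 bytes = 4 GB. */
        printf("2^32 = %llu GB\n", (unsigned long long)1 << (32 - 30));
        /* The 36-bit paging hack mentioned above: 2^36 bytes = 64 GB. */
        printf("2^36 = %llu GB\n", (unsigned long long)1 << (36 - 30));
        /* The 40-odd bits current 64-bit CPUs implement: 2^40 = 1024 GB. */
        printf("2^40 = %llu GB\n", (unsigned long long)1 << (40 - 30));
        /* Full 64-bit addressing: 2^64 bytes = 2^34 GB, i.e. about
           16 billion GB = 16 exabytes. (2^64 itself would overflow a
           64-bit integer, so count in GB instead.) */
        printf("2^64 = %llu GB\n", (unsigned long long)1 << (64 - 30));
        return 0;
    }
    ```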
  • by tempmpi ( 233132 ) on Thursday February 28, 2002 @06:59AM (#3083247)
    > There's a lot of difference between 32-bit-optimized code compiled for 64 bit, and code written and optimized for 64 bit and compiled for 64 bit.

    That might be true if the only things that changed were the register, address space, and ALU sizes, but AMD also removed many flaws of the x86 instruction set. x86 CPUs have only 7 general-purpose registers (EAX, EBX, ECX, EDX, ESI, EDI, EBP); other CPUs have many more, and the lack of registers makes it very hard for compilers or assembler programmers to write efficient code for superscalar CPUs. AMD added more registers, and also made a more efficient FPU. You can really get a nice performance boost from these changes with just a rebuild of your software.

    > Applications need to be programmed and optimized to make use of the extra registers, extra info paths, and extra instructions available on the new platform. Without that, the application speeds can't be compared, even though the base code and output is the same.

    That isn't true: almost all programs, even games, are now written in C(++). (Or something like Java or Perl, but those programs don't matter here.) The compiler can use the extra registers and better FPU without any aid from the programmer (OK, maybe a compiler switch). Things like the "register" keyword in C aren't really needed, as good C compilers are better than most programmers at choosing which variables to keep in registers.
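
    As a hedged illustration (the function is invented, but the register names are the ones gcc's x86-64 port uses for the first integer arguments): the same plain C gets faster on x86-64 simply because arguments arrive in registers and the compiler has twice as many GPRs to allocate.

    ```c
    /* Plain C; nothing x86-64-specific in the source. */
    long dot(const long *a, const long *b, long n) {
        long sum = 0;
        long i;
        /* On i386 the three arguments arrive on the stack and only ~7
           GPRs exist, so values spill to memory inside the loop. On
           x86-64 (e.g. "gcc -m64" on a biarch toolchain) a, b, and n
           arrive in rdi, rsi, and rdx, and sum, i, and the loaded
           values can all live in registers, r8-r15 included. */
        for (i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }
    ```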

    You also compared the transition from x86 to x86-64 to the transition from PSX to PS2. That is something very different. The PS2 is hard to code for because the design of the graphics subsystem and vector CPUs makes it very fast on the one hand but also very hard to use to its full potential. The PS2's CPUs are also hard to use because the caches are too small.

    > Put it in perspective... why don't 16-bit games recompiled for 32 bit give a "major" performance boost... unless optimised code is included...??

    When the 386 was introduced, things like games were coded in assembler, at least the performance-critical parts. Something that is coded in assembler can't be recompiled. Now even games are coded in high-level languages.
  • by Mike Connell ( 81274 ) on Thursday February 28, 2002 @08:14AM (#3083378) Homepage
    No need to wait.

    There are already applications that could use more than 64 bits of address space. While 16 exabytes might sound like a BIGNUM for RAM, it isn't that much of a bignum for large-scale disk arrays.

    At the moment there is an addressing disparity between RAM and storage, but there shouldn't be. Ideally you should be able to memory-map everything you need, including the entire filesystem. If you have a filesystem with 64-bit addresses to 512-bit blocks, or something larger, you might already need a bigger address space.
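
    A back-of-the-envelope check of that example in C (using the block size named above): 2^64 block addresses times 64-byte (512-bit) blocks is 2^70 bytes, more than a 64-bit byte address can span.

    ```c
    #include <stdio.h>

    int main(void) {
        int block_addr_bits = 64; /* 64-bit block addresses */
        int block_size_bits = 6;  /* 512-bit blocks = 64 = 2^6 bytes */
        /* Byte-addressing the whole array needs 64 + 6 = 70 bits,
           six more than a 64-bit pointer provides. */
        printf("byte addresses needed: %d bits\n",
               block_addr_bits + block_size_bits);
        return 0;
    }
    ```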

    Of course 64 bits sounds like more than we'll ever need, but a bit of imagination is all that's needed to see possible uses for a >64-bit space today. If you can think of needing to do it now, it's fairly safe to say that it will be done in the future.

    Modulo one fact: maybe we won't have >64-bit addressing. Maybe we'll have XX-qubit addressing instead ;-)

    0.02, etc.
  • by Chris Burke ( 6130 ) on Thursday February 28, 2002 @10:20AM (#3083804) Homepage
    > One thing to note is that when you have 64-bit addressing, you only get 2^63 worth of storage. Why? Because it's a signed int so you can express a negative offset from current location.

    Wrong. Are you perhaps thinking of the displacement that is used in address calculation? Or perhaps, by your reference to "current location", you are thinking of branch offsets, which are relative to the current IP (or PC, but this is an x86 article)? Regardless, the resulting address is 64 bits and unsigned, and the base register (the rax in an instruction like "mov rbx, [rax + 40]") is an unsigned 64-bit integer.
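
    A minimal C sketch of that address calculation (mirroring the behaviour described above, not quoting any manual): the displacement is a signed 32-bit value that gets sign-extended, but the effective address it produces is an ordinary unsigned 64-bit number.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Effective address = base + sign-extended displacement, as in
       "mov rbx, [rax - 8]". The result is plain unsigned 64-bit. */
    uint64_t effective_address(uint64_t base, int32_t disp) {
        /* The cast sign-extends disp to 64 bits; the add then wraps
           in ordinary unsigned arithmetic. */
        return base + (uint64_t)(int64_t)disp;
    }

    int main(void) {
        uint64_t rax = 0x0000000100000000ULL; /* an above-4GB address */
        printf("[rax - 8]  = %#llx\n",
               (unsigned long long)effective_address(rax, -8));
        printf("[rax + 64] = %#llx\n",
               (unsigned long long)effective_address(rax, 64));
        return 0;
    }
    ```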

  • by thorsen ( 9515 ) on Thursday February 28, 2002 @10:33AM (#3083851) Homepage
    You're wrong in this.

    I have been working for SuSE on porting gcc and binutils to x86-64 for over a year now, and it has been pretty painless. After we had the basic system running, I ported a full-blown but small Linux system to it (sysvinit, linux-utils, vim, etc.), and the only thing I had to do was make the configure scripts grok the x86_64-unknown-linux architecture.

    If you take a look at the design papers on x86-64.org or amd.com, you will find that the architecture is very easy to port to. It's basically an Athlon with 64-bit addressing modes on top (a very simplified way of looking at it). What AMD has done is the exact same transition that Intel did from i286 to i386 - 16 to 32 bit.

    The new architecture is impressively easy to handle, and gcc can by now optimize almost as well for x86-64 as for i386. It's really just a matter of recompiling.

    And if you don't want to do that, run the 32-bit binary. The x86-64 architecture includes running i386 binaries at native speed. This is no marketing crap; it really is the same as you would expect from an Athlon.

    Of course, if your application has assembler in it, you have to port this. But take a look at the docs again, and you'll feel very much at home there. Actually the extra registers will give you a warm fuzzy feeling inside :-) But my point here is that there is no change in the way you think - no change in the coding philosophy.

    I appreciate your point, because for a lot of platforms it would be true. But on this one it simply isn't.

    Bo Thorsen,
    SuSE Labs.
  • by Amazing Quantum Man ( 458715 ) on Thursday February 28, 2002 @02:33PM (#3085575) Homepage
    Sorry, my bad. You're partially right and I'm partially wrong.

    I took a closer look. The architecture goes up to 52-bit physical addresses. In the page-table entry there are 12 bits "available" for the OS to use, 12 state bits, and 12 Reserved (must-be-zero) bits, leaving 28 bits currently defined above the offset, for 40-bit addressing (28-bit page base + 12-bit offset). When you add in those MBZ bits, you get a 52-bit address (40-bit page base + 12-bit offset).
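
    As a sketch of that arithmetic in C (the field widths are the ones quoted above, not a full decoding of the real page-table entry format):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12 /* 4 KB pages: 12-bit offset */

    /* Compose an address from a page base and an offset, as the PTE
       layout above implies. */
    uint64_t phys_addr(uint64_t page_base, uint64_t offset) {
        return (page_base << PAGE_SHIFT) |
               (offset & ((1ULL << PAGE_SHIFT) - 1));
    }

    int main(void) {
        /* 28-bit page base + 12-bit offset = 40-bit address... */
        printf("max 40-bit addr: %#llx\n",
               (unsigned long long)phys_addr((1ULL << 28) - 1, 0xfff));
        /* ...and with the 12 MBZ bits reclaimed, 40 + 12 = 52 bits. */
        printf("max 52-bit addr: %#llx\n",
               (unsigned long long)phys_addr((1ULL << 40) - 1, 0xfff));
        return 0;
    }
    ```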
