If I Had a Hammer 208
adpowers writes: "Anandtech is running an article about their preview of AMD's Hammer. They had one machine running 32-bit Windows and the other running 64-bit Linux. The Linux machine had a 32-bit program and an identical program compiled for 64-bit processor support. Both processors were less than 30 days old and running without any crashes, but they weren't at full speed." We did one Hammer story a day or two ago, but there have been several more posted since then (wild guess: the NDA expired). Tom's Hardware has a story, so does Gamespot.
Re:How was the test performed on linux (Score:4, Informative)
AGP seems to be a problem on the first samples, as all of the demonstration systems were running without AGP video cards.
Re:Two transition periods? (Score:5, Informative)
Second, there is no point in 128 bit for software right now. We are going to have a hard time even writing software that requires a 64-bit processor. If we were stuck on 32-bit processors for another 5 years (yet with increasing speed), I really doubt that we would be much further behind.
I am no expert, but I can't even begin to see the need for 128-bit processors right now. It's better to focus on making the current designs faster.
Re:Two transition periods? (Score:2, Informative)
For instance: when you switch tasks you have to save the old registers. Numerous and huge register spills (as this is called) cost a lot of bandwidth and time and cut into your latency.
For graphics processors, 128-bit datapaths can make sense, yet 128-bit instructions are enormous, even for VLIW. For microcontrollers, 8 bits is still very much in use. For DSPs you also see funny bit lengths such as 24, 48, 56 and 96 bits.
These are common topics in news:alt.arch, which is nominally about computer architecture, though it usually looks more like computer archaeology. Current topics include the PDP-10 (almost always), VAX and M68000.
Re:Who says AMD don't have a sense of humour (Score:2, Informative)
Re:Two transition periods? (Score:5, Informative)
No really, I mean it.
Clever, Ed. For those who don't get it, he's quite right: 64 bits *will* be enough for anyone.
For those still stuck in mid-'90s video game wars, "bit-edness" in the real world refers (technically) to the size of your general-purpose integer registers, which, for most intents and purposes, determines how much memory you can easily and quickly address. 32-bit addressing tops out at 4GB, a value which is often too small for e.g. large databases, which thus tend to live on 64-bit big iron machines. (MS has a hack to give x86 processes access to 36 bits of space, but it requires OS intervention.)
64 bits, on the other hand, works out to 16 billion GB. (That's 16 exabytes IIRC.) For reference, that's roughly 40 times as much memory capacity as there currently is DRAM produced (of all types, for all markets) worldwide in a year, at this January's rate [dramexchange.com].
I don't have the figures on hand for hard drive production, but I would guess as a first approximation that 16 billion GB is not quite equal to the total number of bits of digital storage of all kinds manufactured throughout computing history up until today. (I'd guess it's too small by a factor of 3 or so.)
In other words, it's quite a lot. Presumably computing will have run into some very different paradigm (wherein the bit-edness of the "CPU" is no longer an applicable term) before any computer has a use for >64 bit addressing.
(FWIW, today's 64-bit processors don't implement all 64 address bits yet, because no one has a need for more than 40-something, so that's what they offer.)
Re:Compiled for 64 Bit...and Programmed for 64 Bit (Score:5, Informative)
You also compared the transition from x86 to x86-64 to the transition from PSX to PS2. That is something very different. The PS2 is hard to code for because the design of the graphics subsystem and vector CPUs makes it very fast on the one hand, but also very hard to use to its full potential. The PS2's CPUs are also hard to use because the caches are too small.
When the 386 was introduced, things like games were coded in assembler, at least the performance-critical parts. Something that is coded in assembler can't simply be recompiled. Now even games are coded in high-level languages.
Re:Two transition periods? (Score:3, Informative)
There are already applications that could use > 64 bits of address space. Whilst 16 exabytes might sound like a BIGNUM for RAM, it isn't that much of a bignum for large-scale disk arrays.
At the moment there is an addressing disparity between RAM and storage, but there shouldn't be. Ideally you should be able to memory map everything you need, including the entire filesystem. If you have a FS with 64-bit addresses to 512-bit blocks, or something larger, you might already need bigger address spaces.
Of course 64 bits sounds like more than we'll ever need, but a bit of imagination is all that's needed to see possible uses of >64-bit space today. If you can imagine needing it now, it's fairly safe to say that it will be done in the future.
Modulo one fact: maybe we won't have >64-bit addressing. Maybe we'll have XX-qubit addressing instead.
0.02, etc.
Re:Two transition periods? (Score:4, Informative)
Wrong. Are you perhaps thinking of the offset that is used in address calculation? Or perhaps by your reference to "current location" you are thinking of branch offsets, which are relative to the current IP (or PC, but this is an x86 article)? Regardless, the resulting address is 64 bits, and unsigned. And the base register (as in the instruction "mov rbx, [rax + 40]") is an unsigned 64-bit integer.
Re:Compiled for 64 Bit...and Programmed for 64 Bit (Score:2, Informative)
I have been working for SuSE on porting gcc and binutils to x86-64 for over a year now, and it has been pretty painless. After we had the basic system running, I ported a full-blown but small Linux system to it (sysvinit, util-linux, vim etc.) and the only thing I had to do was to make configure scripts grok the x86_64-unknown-linux architecture.
If you take a look at the design papers on x86-64.org or amd.com, you will find that the architecture is very easy to port to. It's basically an Athlon with 64-bit addressing modes on top (a very simplified way of looking at it). What AMD has done is the exact same transition that Intel did from i286 to i386 - 16 to 32 bit.
The new architecture is impressively easy to handle, and gcc can by now optimize almost as well for x86-64 as for i386. It's really just a matter of recompiling.
And if you don't want to do that, run the 32-bit binary. The x86-64 architecture supports running i386 binaries at native speed. This is no marketing crap; it really is the same as you would expect from an Athlon.
Of course, if your application has assembler in it, you have to port that. But take a look at the docs again, and you'll feel very much at home there. Actually the extra registers will give you a warm fuzzy feeling inside.
I appreciate your point, because for a lot of platforms it would be true. But on this one it simply isn't.
Bo Thorsen,
SuSE Labs.
Re:Two transition periods? (Score:3, Informative)
I took a closer look. The architecture goes up to 52-bit physical addresses. In a page-table entry there are 12 "available" bits that OSes can use, 12 bits are state bits, and 12 bits are Reserved (MBZ), leaving 28 bits currently defined for 40-bit addressing (28-bit page base + 12-bit offset). However, when you add in those MBZ bits, you get a 52-bit address (40-bit page base + 12-bit offset).