If I Had a Hammer

adpowers writes: "Anandtech is running an article about their preview of AMD's Hammer. They had one machine running 32-bit Windows and the other running 64-bit Linux. The Linux machine had a 32 bit program and an identical program that was compiled for 64-bit processor support. Both processors were less than 30 days old and running without any crashes, but they weren't at full speed." We did one Hammer story a day or two ago, but there have been several more posted since then (wild guess: the NDA expired). Tom's Hardware has a story, so does Gamespot.
  • So it will be a few years until we all have 64-bit PC's with applications written for them. I don't understand why the development work wasn't put into 128-bit processors in the first place. Wouldn't this avoid the next transition period when most applications are written for 64-bit machines?

    Maybe I'm oversimplifying it.

    • by Ed Avis ( 5917 ) <ed@membled.com> on Thursday February 28, 2002 @06:18AM (#3083160) Homepage
      64 bits should be enough for anyone.

      No really, I mean it.
      • by ToLu the Happy Furby ( 63586 ) on Thursday February 28, 2002 @06:58AM (#3083242)
        64 bits should be enough for anyone.

        No really, I mean it.


        Clever, Ed. For those who don't get it, he's quite right: 64 bits *will* be enough for anyone.

        For those still stuck in mid-90's video game wars, "bit-edness" in the real world refers (technically) to the size of your general purpose integer registers, which, for most intents and purposes, refers to how many memory addresses you can easily and quickly address. 32 bit addressing tops out at 4GB, a value which is often too small for e.g. large databases, which thus tend to live on 64-bit big iron machines. (MS has a hack to give x86 processes access to 36 bits of space, but it requires OS intervention.)

        64 bits, on the other hand, works out to 16 billion GB. (That's 16 exabytes, IIRC.) For reference, that's roughly 40 times as much memory capacity as there currently is DRAM produced (of all types, for all markets) worldwide in a year, at this January's rate [dramexchange.com].

        I don't have the figures on hand for hard drive production, but I would guess as a first approximation that 16 billion GB is not quite equal to the total number of bits of digital storage of all kinds manufactured throughout computing history up until today. (I'd guess it's too small by a factor of 3 or so.)

        In other words, it's quite a lot. Presumably computing will have run into some very different paradigm (wherein the bit-edness of the "CPU" is no longer an applicable term) before any computer has a use for >64 bit addressing.

        (FWIW, today's 64-bit processors don't offer all 64 bits of data addressing yet, because no one has a need for more than 40-something, so that's what they offer.)
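
        (If you want to sanity-check the arithmetic above, here's a throwaway C sketch of the two limits; binary units, nothing CPU-specific.)

        ```c
        #include <stdio.h>

        int main(void)
        {
            /* 32-bit pointers top out at 2^32 bytes = 4 GiB. */
            unsigned long long limit32 = 1ULL << 32;

            /* 2^64 itself overflows a 64-bit integer, so express it as 16 * 2^60. */
            unsigned long long one_eib = 1ULL << 60;              /* 1 EiB */
            double limit64_gib = 16.0 * (double)(one_eib >> 30);  /* 2^64 / 2^30 */

            printf("32-bit address space: %llu bytes (4 GiB)\n", limit32);
            printf("64-bit address space: 16 EiB ~= %.1e GiB\n", limit64_gib);
            return 0;
        }
        ```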
        • Then do tell me why ipv6 haz 128 bitz fer de address?
          Zooner or lader ve vill find use fer all of those 64 address lines. Sooner than one would expect, I presume. So really, why not push those barriers 'a lot' further. Why not abandon 'bittedness' altogether?
          I'd bet there's some decent research on computing sans bittedness. Even perl does that. Damn, some ancient LISP machine could do that.

          Well, hell yes, now that all processors hurl at GHz+, we'll get more and more interpreted languages; it's just so much simpler.
          • If it gets to a point where processors need to do arithmetic on 128-bit IPv6 addresses for a large part of the time, and these operations have to be speed critical, and for some really odd reason you have to store things at an address pointed to by a 128-bit number (which would be an unfeasibly large address space), then yes, 128-bit CPUs might be handy. Until then, any integer quantity anyone wants to handle fits in 64 bits. Floating point is a different matter...
            • The point wasn't about handling ipv6 addresses, but about the reason behind choosing 128 bits for _addressing computer interfaces_. There's not much extrapolation in the idea that one day a computer will have so much memory that 64 bits would not be enough. Yes, it may take many years, but it's still not that far away.
              Anyway, there's plenty of use for 128-bit arithmetic; we're just not using things to their full potential.
              • I'm not so sure that people will ever need even the full 64 bits, let alone 128 or more. You start getting to the point where every atom in the known universe could have its own video diary and you'd still have used only a fraction of the space.

                If you're talking about just a large address space rather than a large memory then it's more reasonable (eg every TCP port on every IPv6 address, that's about 80 bits of space), but there's no pressing reason why the processor itself should have a 128-bit wide memory bus just for that reason. It's just a waste of silicon.

                And as for 128-bit *integer* arithmetic: can you give some examples? Certainly there are integer computations going on with more than that accuracy using bignums (often to represent rational numbers as a pair of large integers), but 128 bits won't be enough for those either, you'll still have to use bignums. At best a 128-bit CPU could do those calculations twice as fast as a 64-bit one.
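
                (For the curious, this is roughly what synthesizing a 128-bit add out of 64-bit operations looks like; the u128 type and helper below are made up purely for illustration.)

                ```c
                #include <stdio.h>
                #include <stdint.h>
                #include <inttypes.h>

                /* Illustrative 128-bit unsigned integer built from two 64-bit halves. */
                typedef struct { uint64_t lo, hi; } u128;

                /* One 64-bit add per half, plus a carry check on the low half. */
                static u128 u128_add(u128 a, u128 b)
                {
                    u128 r;
                    r.lo = a.lo + b.lo;
                    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low word */
                    return r;
                }

                int main(void)
                {
                    u128 a = { 0xFFFFFFFFFFFFFFFFULL, 0 };   /* 2^64 - 1 */
                    u128 b = { 1, 0 };
                    u128 s = u128_add(a, b);                 /* expect hi=1, lo=0 (2^64) */
                    printf("hi=%" PRIu64 " lo=%" PRIu64 "\n", s.hi, s.lo);
                    return 0;
                }
                ```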
                • > I'm not so sure that people will ever need even the full 64 bits, let alone 128 or more. You start getting to the point where every atom in the known universe could have its own video diary and you'd still have used only a fraction of the space.

                  Step 1: Build a 128-bit CPU.
                  Step 2: ????^H^H^H^HConvince every atom in the known universe to keep a video diary.
                  Step 3: Profit!!!!

                  When do we get funding?

        • I'm no expert in CPUs whatsoever, but I don't think addressable memory is the only issue. There's computation also.

          128-bit CPUs would allow 128-bit computations to run faster than on 32- or 64-bit processors, I think. And in fields where extreme precision is important and speed is also important, this might be an issue.

          Then again, I'm no expert and Errare humanum est...

          Just my 0.02

        • No need to wait.

          There are already applications that could use > 64 bits of address space. Whilst 16 exabytes might sound like a BIGNUM for RAM, it isn't that much of a bignum for large scale disk arrays.

          At the moment there is an addressing disparity between RAM and storage, but there shouldn't be. Ideally you should be able to memory map everything you need, including the entire filesystem. If you have a FS with 64bit addresses to 512bit blocks, or something larger, you might already need bigger address spaces.

          Of course 64 bits sounds like more than we'll ever need, but a bit of imagination is all that's needed to see possible uses of >64bit space today. If you can think about needing to do it now, it's fairly safe to say that it will be done in the future.

          Modulo one fact: maybe we won't have >64bit addressing. Maybe we'll have XX qubit addressing instead ;-)

          0.02, etc.
          • by foobar104 ( 206452 ) on Thursday February 28, 2002 @09:55AM (#3083681) Journal
            Whilst 16 exabytes might sound like a BIGNUM for RAM, it isn't that much of a bignum for large scale disk arrays.

            Actually, it is a very large number for disk arrays.

            I'm unaware of a filesystem that can scale as large as XFS [sgi.com]; there may be others, though. XFS uses 64-bit addressing, allowing the filesystem to scale to 18 million terabytes (or 18 exabytes, if you prefer). No filesystem in the world has ever remotely approached that size. According to this [berkeley.edu] nifty site, total worldwide disk drive production for the year 2000 only totalled 2.5 million terabytes. So to build a filesystem that's 18 million TB big, you'd have to commandeer all hard drive production, worldwide, for about 12 years.

            They estimate that the total amount of data stored on hard drives in the entire world is only about 4 million TB. That means you could theoretically put all the data in the world that is currently stored on hard drives-- all the pr0n, all the MP3s, all the source code, all the PowerPoints, everything-- on one server with one big filesystem, and only use about 1/4 of the filesystem's capacity. Mount it under /earth and set the permissions to 700, please.

            Of course, this fact fails to address your basic premise, which seems to be that assigning unique integer addresses to every byte that a computer can access would be a reasonable thing to do.

            Even if there were a reason to do such a thing, don't forget that increasing your pointer size decreases your cache efficiency; you can fit twice as many 32-bit pointers in your cache as you can 64-bit pointers, which results in fewer cache misses and overall better performance. (How much better depends on how cache-friendly your task is in the first place, but 32-bit will never be less cache-friendly than 64-bit.)
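
            (To put the cache point in concrete terms - a toy example only; the 64-byte line size is an assumption, and real parts vary.)

            ```c
            #include <stdio.h>

            int main(void)
            {
                /* Assume a 64-byte cache line; actual sizes differ by processor. */
                const int line_bytes = 64;

                printf("32-bit pointers per line: %d\n", line_bytes / 4);  /* 16 */
                printf("64-bit pointers per line: %d\n", line_bytes / 8);  /*  8 */

                /* Pointer-heavy structures (lists, trees) therefore take roughly
                 * twice the cache footprint when built with 64-bit pointers. */
                return 0;
            }
            ```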
            • Actually I think I would prefer the permissions to be 777, or at least 770 if I was lucky enough to be a part of the earth group. Since I really don't think I'm the user. :)
            • by bhurt ( 1081 )
              Actually, assuming Moore's Law (in its debased form) continues, we can calculate how long it will be before we need to go to 128 bits. The calculation is easy: 18 months of growth for every extra bit.

              So to go from 8 bits to 16 bits took a mere 12 years. Say 1966 to 1978. Going from 16 to 32 took a little bit longer- 24 years (1978-2002). Going from 32 to 64 bits will take 48 years- it'll be 2050 before we outgrow 64 bits.

              And it'll be 2242 before we outgrow 128 bits, 2626 before we outgrow 256 bits, 3394 before we outgrow 512 bits, etc.

              Of course, there are slight physics problems with Moore's law continuing for the next 1,392 years. But that's for a different post...
              • "slight physics problems" is right; and how!

                I'm very doubtful that 128-bit machines will *ever* be built; though only a fool would say they definitely won't be built, this early in the game.

                32-bit CPU's still take large chunks of silicon, and their features are approaching 1E-7 meters in size. 64-bit machines will not be severely limited until they are trying to manage about 10 orders of magnitude (1E10 times - well over 2^32 times) more circuit elements. If circuits are still basically planar in physical layout, this implies circuit features approaching 1E-12 meters (1E-7 / sqrt( 1E10 ))...

                Since silicon atoms are roughly 2.5E-10 meters across, there might be a slight problem with building circuit features this small. ;-)

                Put another way, the realistic limit for further process shrink is about 2 more orders of magnitude (the circuits would be just a few atoms across) - only 4 more orders of magnitude in total number of circuit elements, not 10.

                So I really have a hard time seeing how a computer built with *chips*, that is smaller than a skyscraper, would ever need more than 64 address bits.
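
                (The same estimate, re-run as a throwaway sketch; the 1E10 factor and the atom size are the figures used above.)

                ```c
                #include <stdio.h>
                #include <math.h>

                int main(void)
                {
                    /* ~1e10x more circuit elements on a planar die means the linear
                     * feature size must shrink by sqrt(1e10) ~ 100,000x. */
                    double feature_now = 1e-7;                   /* ~100 nm, circa 2002   */
                    double needed      = feature_now / sqrt(1e10);
                    double si_atom     = 2.5e-10;                /* rough Si atom size, m */

                    printf("required feature size: %.1e m\n", needed);   /* ~1e-12 m */
                    printf("silicon atom size:     %.1e m\n", si_atom);
                    printf("atoms per feature:     %.3f\n", needed / si_atom);
                    return 0;
                }
                ```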
                • Others have mentioned this, but I'll say it anyway. 64-bit addressing has no limitations except for the physical number of pins coming out of the CPU, and that can be overcome by simply serializing 2 x 32. As for caching (which others have brought up), obviously a given architecture would have to use segmented regions (say 4 GB max cacheable, etc). The main advantage of full 64-bit addressing would be proprietary motherboard / PCI cards that allow NUMA-style memory architectures, etc. It's not that the full 64 bits are needed, but that an arbitrary memory segmentation might be desired where the high bits divide the different nodes, etc.

                  Further, 64-bit numbers are -extremely- common in database architectures (primarily as IDs or what-have-you). Hell, we're seeing a lot of 128-bit numbers already, and 32-bit architectures have to resort to slow big-num algorithms. This sort of operation requires 64-bit INTs as opposed to void*'s, but the two typically coincide within a given CPU.

                  -Michael
              • Where'd you pull those numbers/dates from? Somewhere dark, I imagine.

                If we confine the discussion to microprocessors (reasonable, since there were 60-bit mainframes in the 1960s) we don't even start counting until the 8-bit 8008 circa 1972 (although we could start with the 4-bit 4004 in 1971 -- giving a four-bit increase in one year. Oops). (And actually, the picture is more confusing than that -- most of the early 8-bit chips supported 16-bit addressing, although some of them multiplexed the address lines).

                If we stick with the Intel line, the (also 8-bit) 8080 appeared in 1974, the 16-bit 8086 in 1978, and the 32-bit 80386 in 1985. At that rate, doubling in bit width roughly every 7 years, Intel is ten years late with its 64-bit chip, and should have already introduced a 128-bit chip.

                If we ignore the 8008, we get a width increase of about 2 bits/year, which actually works out closer to the real numbers -- the 64-bit x86 successor appearing circa 2001/2002. At that rate, look for a 128-bit Intel chip circa 2034, or just a few years short of the Unix clock rollover date (but by then we'll all be on 64-bit time).
            • A few corrections...

              So to build a filesystem that's 18 million TB big, you'd have to commandeer all hard drive production, worldwide, for about 12 years.

              That isn't true: HDD capacity is growing at (at least) ~60% PER YEAR. Even with the conservative figures here [dell.com] (HD size for 2002 given as 36GB), that means selling average home-user drives of 1.5TB in 2010 (starting from 60GB disks in 2002, you come up with 2.5TB disks in 2010).

              The UCB site grasps this; in general, the conceptual failure is to imagine that there is some linearity in information growth. There isn't. Chart anything you like in this area and the graph will be a big x-squared type of affair with a scary-looking rate of growth at the end. Hold on tight! As the UCB site says, "shipments of digital magnetic storage are essentially doubling every year" -- doubling!

              Q. What does every extra bit in an address give you?
              A. Double the address space.
              Q. How many more bits are there in 64 bit addressing from 32?
              A. 32 bits
              Q. Which means...?
              A. We have 32 years until we're back to where we are today regarding information size vs addressing space.

              Oh yeah, 32 years is a real long time. Y2K anyone?

              Of course, this fact fails to address your basic premise, which seems to be that assigning unique integer addresses to every byte that a computer can access would be a reasonable thing to do.

              I think you misread me. I don't advocate the need to be able to mmap the web; but local devices? Surely. This has been a problem in 32 bits for a long time. There's nothing magic about 64 bits. It's just bigger. It too will fall.

              And your last comment about cache misses! Are you joking? If you need more than 64 bits of space, it doesn't matter how much better a 64-bit address system's cache would work, because it can't do the job. Even with the "cache misses", a >64-bit system is infinitely faster, because it will work. If you need the space, you need the space.

              0.02
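
              (The 60%-per-year projection above, worked through as a quick sketch; the 60 GB starting point and the 8-year horizon are the figures quoted above.)

              ```c
              #include <stdio.h>
              #include <math.h>

              int main(void)
              {
                  /* ~60% per year capacity growth, starting from a 60 GB drive in 2002. */
                  double gb_2002 = 60.0;
                  double growth  = 1.60;
                  double gb_2010 = gb_2002 * pow(growth, 8.0);   /* 8 years out */

                  printf("projected average drive in 2010: ~%.0f GB (~%.1f TB)\n",
                         gb_2010, gb_2010 / 1000.0);

                  /* Address-space side: storage doubling every year eats one address
                   * bit per year, so 64 - 32 = 32 years of headroom over today. */
                  printf("years of headroom: %d\n", 64 - 32);
                  return 0;
              }
              ```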
              • Everyone assumes that "Moore's Law" is infallible, when this simply is not true. It's more of a pattern than a law, and it'll come screeching to a halt within the next decade. As soon as we hit 1-atom-wide circuits, that's it, we can't get any smaller. Currently we're about 40 atoms across, though it's hard to count that with 100% accuracy.

                Same thing with hard drives. A single atom can't hold more than one charge at once (well, it can, but that's a whole different concept), and we've gotten pretty small on the hard drives as well. The only way to increase capacity is to increase the number of platters or the diameter (and the diameter is limited as well; I'd hate to see what centrifugal force would do to a 5' wide iron disk spinning at 7200 RPM... or the amount of power required to spin it).

                Of course, there are always quantum computers, holographic storage, etc, so then I could just be totally wrong. But with our current technology, we're close to maxing out. We should probably dedicate more time and effort to these new fields rather than just extending and band-aiding what we already have.
            • Buy a petaserver from Sony. You use (say) 10Tb of cache for the tape robots, and store up to a petabyte of data.

              It may not be the fastest fileserver in the world, but nonetheless there are inodes which uniquely identify every file. The filesystem we used was SAM-FS, running on 64-bit solaris.

              Simon
              • Buy a petaserver from Sony. You use (say) 10Tb of cache for the tape robots, and store up to a petabyte of data.

                A petabyte is only 1024 TB. The sizes we're talking about are about four orders of magnitude larger than that.

                Unless, of course, you have 18,000 Sony Petasites at 1000 TB each.

                Do you have 18,000 Petasites?
            • "all the pr0n, all the MP3s, all the source code, all the PowerPoints, everything-- on one server with one big filesystem"

              Isnt that what .NET is for?
          • There are already applications that could use > 64 bits of address space. Whilst 16 exabytes might sound like a BIGNUM for RAM, it isn't that much of a bignum for large scale disk arrays.

            I hope I'm not the only one looking at that and thinking 'What the hell kind of media besides HDD am I going to back this up on?'

            I recently purchased two 120GB IDE drives to hold my MP3 collection ripped from my CD collection.

            I've been ripping for about 5 days now, and I'm in the C's. (320 kbps encoding; an Athlon 1.33 running RH Linux is doing the ripping, about 8 hrs a day.)

            I started looking for a backup method besides HDD. Tapes are at best at 110/220GB with SuperDLT. But for home use spending about $5000 for a single tape drive when a hard drive of that size is $200 is out of sight.

            Tape tech has GOT to catch up somehow and get down to the cost/MB that HDDs are or we're going to be in an interesting quandary for backing stuff up for DR purposes.
            • Have you checked out r3mix.net yet? It's a good place to start before a big encoding project.

              Even if you use 320kbps MP3s, you can't get near CD quality unless you use certain encoders. (Use Lame, don't use AudioCatalyst.) And if you use a good encoder you can probably get all the quality you need at 192VBR.

              Whatever you do, use VBR. That way whatever you encode doesn't use the full bandwidth for silence, and doesn't feel limited to that for the complex bits.

              You may know all this already, but if you don't you'll be a lot happier to find out in the Cs than the Xs.
              • Thanks for the link & the suggestion. I'll check it out.
        • No, 64-bits is not enough. There were optical storage arrays built nearly 10 years ago that used the full 64-bit address space (yes, it was largely done to prove you could, but there are systems being built now to use the entire 64-bit address space).

          One thing to note is that when you have 64-bit addressing, you only get 2^63 worth of storage. Why? Because it's a signed int so you can express a negative offset from current location.

          Sure, you can munge it so that your physical storage space isn't represented by a single pointer (most 32-bit OS's do that now, since otherwise you'd be limited to a 2G partition and files), but it's a lot easier on everyone if you just handle it with a large enough pointer.

          I'll admit, I'm having a hard time coming up with a real use for a 128-bit integer operation (crypto maybe; perhaps neural networks). Engineering and FP ops are different - they use different registers and the FPU, so talking about more precision isn't relevant here. Of course, I suspect people had a difficult time thinking of a use for 64-bit operations back when we were using 8 or 16 bit general purpose CPUs.
          • by Chris Burke ( 6130 ) on Thursday February 28, 2002 @10:20AM (#3083804) Homepage
            One thing to note is that when you have 64-bit addressing, you only get 2^63 worth of storage. Why? Because it's a signed int so you can express a negative offset from current location.

            Wrong. Are you perhaps thinking of the offset that is used in address calculation? Or perhaps by your reference to "current location" you are thinking of branch offsets, which are relative to the current IP (or PC, but this is an x86 article)? Regardless, the resulting address is 64 bits, and unsigned. And the base register (as in the instruction "mov rbx, [rax + 40]") is an unsigned 64-bit integer.

        • I don't have the figures on hand for hard drive production, but I would guess as a first approximation that 16 billion GB is not quite equal to the total number of bits of digital storage of all kinds manufactured throughout computing history up until today. (I'd guess it's too small by a factor of 3 or so.)

          Given my own numbers and the rapid acceleration of drive capacities over the past 5 years, I think you're wrong.

          I support servers in a Fortune 500 financial services corporation. My rough, low-ball estimate of our current hard disk storage space is 1.4PB (petabytes, or about 1.4 million gigabytes) on desktops, servers, big iron, DASD and SAN. It's probably closer to 2PB... you can't imagine the amount of drive space a huge corporate enterprise requires. If I have 2PB of data storage in one 30,000-employee US company, that's already one eight-thousandth of the 16EB "worldwide ever" total you're working with.

          Think about it: take all the private desktop PCs bought in the past three years; they're probably averaging 15-20 GB per unit in drive space. If there were 70 million PCs sold worldwide in 1999 (found via Google), and we triple that (again, probably low for the last couple of years), 210 million PCs times 20 GB is 4.2EB, again a quarter of the 16EB you are working with.

          Between corporate and private purchases, I'd bet 16EB worth of digital storage has been manufactured and sold in the past 24 months.
        • (FWIW, today's 64-bit processors don't offer all 64 bits of data addressing yet, because no one has a need for more than 40-something, so that's what they offer.)

          That's not true. MIPS R1x000 processors, which I use exclusively at work, support either a 32-bit mode with a 32-bit pointer, or a 64-bit mode with a pointer that is a full 64 bits wide. You can malloc() all day long in 64-bit mode.
        • All true and relevant, However...

          Hammer has 40 bits of addressing space (as you mentioned) and 48 bits of virtual space (memory map tables) so it's not quite out of the memory limit woods yet.

          64 bits has other advantages. File and partition sizes can now be extended more easily. File limits of 2 and 4 GB were common before (Linux had this problem until recent patches). Large files are useful in databases.

          One could claim that scientific calculations are where the true advantages lie. But most complex calculations might not even fit well with 64 (or even 128) bit values (cryptography). Most programs have custom integer/floating point libraries that handle large values, and will likely continue to need these libraries even with 64- or 128-bit CPUs. A jump to a 128-bit CPU wouldn't help much here, and wouldn't mean much to the database vendors who are still happily trying to fill up a 64-bit address space.

          We're not really ready for 128 yet. The only major advantage of 128 would be in data bus size, to move more data into the cache per fetch. Some processors already do this for speed reasons.
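
          (As an aside on the 2/4 GB file limits mentioned above, this is roughly how large-file support looks from C on Linux; _FILE_OFFSET_BITS is the glibc large-file switch, and the filename is just a placeholder.)

          ```c
          /* On a 32-bit build this needs the LFS interfaces (glibc 2.2+); on a
           * 64-bit target off_t is simply 64 bits wide to begin with. */
          #define _FILE_OFFSET_BITS 64   /* ask glibc for 64-bit off_t */

          #include <stdio.h>
          #include <fcntl.h>
          #include <unistd.h>

          int main(void)
          {
              int fd = open("bigfile.dat", O_WRONLY | O_CREAT, 0644);
              if (fd < 0)
                  return 1;

              off_t five_gb = (off_t)5 * 1024 * 1024 * 1024;   /* past the 4 GB mark */
              if (lseek(fd, five_gb, SEEK_SET) == (off_t)-1)   /* fails with 32-bit off_t */
                  perror("lseek");
              else
                  (void)write(fd, "x", 1);                     /* sparse 5 GB file */

              close(fd);
              return 0;
          }
          ```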

          • Hammer has 40 bits of addressing space (as you mentioned) and 48 bits of virtual space (memory map tables) so it's not quite out of the memory limit woods yet.

            That sounds like an implementation detail; it makes sense to only support 40 address lines when that's a fair bit more than you're likely to see before a process shrink or revision of the architecture. Extra lines cost money, and there's no reason to spend that money if it won't be put to use.

            • The 40 bit address and 48 bit virtual limits are inherent in the page table mappings. (See the AMD Spec [amd.com])

              I found it disappointing that these limits were in the design up front without apparent room for expansion (free bits to be used in the future). This means that when they do decide to expand the addressing range, they will have to redesign the page table layout and force OS writers to change their code to use it. It would have been nicer to have expansion room built into the page table design and simply have the CPU implementation have limited pinouts.

              While I'm griping, I'll also mention that the x86-16 (virtual real mode) support has been dropped when in 64-bit mode. I know that no one uses it much anymore, but there are still old legacy games that I have that run in DOS mode, and it would be nice to be able to support them.

              • While I'm griping, I'll also mention that the x86-16 (virtual real mode) support has been dropped when in 64-bit mode. I know that no one uses it much anymore, but there are still old legacy games that I have that run in DOS mode, and it would be nice to be able to support them.

                Not a problem: boot in legacy mode and virtual real mode is still there. Your old DOS games wouldn't work under a 32-bit OS, let alone a 64-bit one.

              • The 40 bit address and 48 bit virtual limits are inherent in the page table mappings. (See the AMD Spec [amd.com])
                I found it disappointing that these limits were in the design up front without apparent room for expansion (free bits to be used in the future). This means that when they do decide to expand the addressing range, they will have to redesign the page table layout and force OS writers to change their code to use it. It would have been nicer to have expansion room built into the page table design and simply have the CPU implementation have limited pinouts.


                WTF are you talking about? The page tables clearly show 12 Reserved (MBZ) bits in all page tables. That allows for a full 64-bit paged address space (40+12 = 52 bits, with 12-bits for offset within page = 64).
                • Sorry, my bad. You're partially right and I'm partially wrong.

                  I took a closer look. The architecture goes up to 52-bit VAs. There are 12 "available" bits that OS'en can use. 12 bits are state bits. 12 bits are Reserved (MBZ), leaving 28 bits currently defined for 40-bit addressing (28-bit page base + 12-bit offset). However, when you add in those MBZ bits, you get a 52-bit address (40-bit page base + 12-bit offset).
                  • OK, my turn. It's been a while since I actually read that document (like, over a year), so I went back and refreshed my memory.

                    Figures 17 and 18 (pp. 58, 59).
                    The virtual address is 48 bits, marked as sign extended, not Reserved (MBZ). I guess they could add a Level 5 page map and go to a 57-bit virtual, and a Level 6 page table for 64-bit virtual. Then they could run into the same problem that *Motorola had going from the 68000 to the 68020.

                    I think this is what I was remembering when I said the OS would have to be rewritten to use more memory.

                    As for physical addressing, Figure 13 PTE (p. 51) shows 4KB pages using a 28-bit base pointer with a 12-bit offset, for a total of 40 bits physical. With 12 bits reserved, you could extend that to 52 bits physical.
                    However, Figure 14 (p. 53) PDE shows the 2MB page directory entry with a 19-bit base and 21-bit offset for 40 total, but with another 9 bits not used. This 40, plus the previously unused 12, gets you to 52; plus the new unused 9, you get to 61. Almost there, but only in 2MB page mode.

                    Once again the OS would have to be rewritten to use more memory.

                    * 68000 used 24 of 32 bits, ignoring the high bits. Some programmers tried to use them for other things. When 68020 came out and started using those bits, the old code wouldn't run and had to be rewritten.
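
                    (For reference, a small sketch of how a 48-bit long-mode virtual address splits across the four 9-bit page-map levels with 4 KB pages; level names loosely follow the AMD docs, and the example address is arbitrary.)

                    ```c
                    #include <stdio.h>
                    #include <stdint.h>
                    #include <inttypes.h>

                    int main(void)
                    {
                        /* 9 + 9 + 9 + 9 + 12 = 48 bits of virtual address. */
                        uint64_t va = 0x00007f1234567abcULL;   /* arbitrary example */

                        unsigned pml4 = (va >> 39) & 0x1ff;    /* page map level 4 index  */
                        unsigned pdp  = (va >> 30) & 0x1ff;    /* page dir pointer index  */
                        unsigned pd   = (va >> 21) & 0x1ff;    /* page directory index    */
                        unsigned pt   = (va >> 12) & 0x1ff;    /* page table index        */
                        unsigned off  = (unsigned)(va & 0xfff);/* offset within 4 KB page */

                        printf("VA %#" PRIx64 " -> %u/%u/%u/%u + %#x\n",
                               va, pml4, pdp, pd, pt, off);
                        return 0;
                    }
                    ```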

        • Follow Moore's Law and assume that memory chip density will double every 18 months.

          This means that the rate of growth of addressable bits is 1 bit per 18 months.

          Do the arithmetic and we may see the 64 bit address bit limit getting hit within our lifetimes...assuming that we currently need 40 bits of addressable memory.
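
          (That arithmetic, spelled out as a throwaway sketch; the 18-month doubling and the 40-bit starting point are the assumptions stated above.)

          ```c
          #include <stdio.h>

          int main(void)
          {
              /* Memory doubles every 18 months: one extra address bit per 1.5 years.
               * Assume ~40 bits of addressable memory are needed today (2002). */
              const double years_per_bit  = 1.5;
              const int    bits_needed    = 40;
              const int    bits_available = 64;

              double years_left = (bits_available - bits_needed) * years_per_bit;
              printf("64-bit addressing exhausted in ~%.0f years (around %d)\n",
                     years_left, 2002 + (int)years_left);
              return 0;
          }
          ```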
      • by Anonymous Coward

        64 bits should be enough for anyone.

        Why was this declared funny by the moderators? It's true. Most of the code out there (even code for existing 64Bit architectures) uses 64Bit (integer) data types almost exclusively to address memory. 32Bits are not enough anymore to address every byte on your hard disk and on high end machines it's not enough to address every byte of RAM.
        64Bits give you 4 billion times more address space. Development in the PC sector is fast but it's not that fast.
        Since the invention of the PC, RAM size in the average PC has increased by a factor <10000. Hard disk size has increased by a factor of <100000. That's over 20 years development. An increase in address space of factor 4 billion is going to last us a very long time. We might never hit this mark in consumer machines, because there is only so much information humans can deal with. Even if you start archiving HDTV video uncompressed, the storage addressable with 64Bits is enough for 1 million hours.

        I'd also like to give a little perspective regarding computation. If a 3D shooter used 64Bit integers, it could still model the complete earth with sub-nanometer precision.

        Even if you work with 1/1000 of a cent precision, 64Bits are going to be enough for all financial computations there will ever be.

        More than 64Bits does not make sense for a general purpose microprocessor. You are better off with a processor that has multiple 64Bit integer units than a processor that has fewer 128Bit units.
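
        (A rough numeric check of the 3D-shooter and financial claims above; order-of-magnitude figures only.)

        ```c
        #include <stdio.h>

        int main(void)
        {
            /* Earth's circumference is roughly 40,000 km = 4.0e16 nm;
             * a signed 64-bit coordinate spans about +/- 9.2e18 units. */
            double earth_nm  = 4.0e16;
            double int64_max = 9.22e18;

            printf("steps per nanometer, one int64 axis across the earth: ~%.0f\n",
                   int64_max * 2 / earth_nm);            /* ~460 -> sub-nanometer */

            /* Money: 2^63 units of 1/1000 cent is roughly $9.2e13,
             * i.e. tens of trillions of dollars of range. */
            printf("max dollars at 1/1000-cent precision: ~%.1e\n",
                   int64_max / 100000.0);
            return 0;
        }
        ```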

      • Or maybe it would be a good idea to start making good asynchronous systems based on collaborating heterogeneous processors?
    • by XBL ( 305578 ) on Thursday February 28, 2002 @06:22AM (#3083167)
      Umm, first of all it's hard enough to engineer a 64-bit CPU with related components. Then there are the manufacturing details, etc., etc. From that standpoint, it's not economical to try to do a 128-bit CPU now.

      Second, there is no point in 128 bits for software right now. We are going to have a hard time even writing software that requires a 64-bit processor. If we were stuck on 32-bit processors for another 5 years (yet with increasing speed), I really doubt that we would be much further behind.

      I am no expert, but I can't even begin to see the need for 128 bit processors right now. It's better to focus on making the current designs faster.
    • I don't understand why the development work wasn't put into 128-bit processors in the first place. Wouldn't this avoid the next transition period when most applications are written for 64-bit machines?

      Excellent point. It seems a shame that we had to have 16 bit processors while we migrated from 8 bits to 32. I mean, really, what was the big deal in quadrupling the data lines instead of doubling.

    • by Anonymous Coward
      As you double the width you increase memory consumption without necessarily also doubling the performance. Going 8 to 16 and 16 to 32 gave you better instruction set maps; going from 32 to 64 didn't offer much more; going to 128 bits is, for general purpose processors, more costly than beneficial.

      For instance: when you switch tasks you have to save old registers. Numerous and huge register spills (as this is called) cost a lot of bandwidth and time and cut into your latency.

      For graphics processors, 128-bit datapaths can make sense, yet 128-bit instructions are enormous, even for VLIW. For microcontrollers, 8 bits is still very much in use. For DSPs you also see funny bit lengths such as 24, 48, 56 and 96 bits.

      These are common topics in news:alt.arch which nominally is about computer architecture, though usually it does look like computer archaeology. Current topics include PDP10 (almost always), VAX and M68000.
    • Memory density doubles every 18-24 months which means we need an extra address bit every 18-24 months. If 32 bits is required today, then we won't need more than 64 bits for another 48 to 64 years.

      I'll be long retired by then. Your grandchildren can deal with it then.
    • In the late '50s/early 60s, when the first mainframes were built, they were all approx 60 bits. Thereafter, all "cost is no object" computers were 60/64 bits. There is not much evidence that anyone will ever want to go further than 64 bits. There are significant overheads to longer words (ever heard of "carry propagation"?).

      In fact, the proposed 64-bit processors will pretty much be doing all known processor design techniques on a chip. At that point, we have used all the ideas that were known when the VAX was designed (approx 1980). Since then, nothing much new has been invented. The only missing piece of technology is content addressable memories (i.e. execute a jump table in a single cycle instead of stepping through each option and comparing). These have also been known since about 1980, including how to make them. Used as cache tag RAM, they would make a HUGE performance improvement. There is no obvious reason for not using them apart from the fact that it's a European development (mostly UK and Germany), and America has a problem with NIH.

      I don't deny there are special cases where 128 bits (or even 1024) might pay, but to sell, you need a general purpose machine, and 64 bits is the top whack as far as we know. After that, massively parallel is more cost effective (ICL DAP, etc).

    • You're not oversimplifying, you're simply wrong :-) I believe the cause for your mistake is that you are listening to marketing guys without realizing it. So, some facts to set the record straight.

      The definition of a 64-bit processor is that it has 64-bit addressing - 64-bit pointers. Everything else - 64-bit registers etc. - is just the icing on the cake and the things you would expect.

      With this in mind, it's easy to see why you're wrong. At this point in time, there does not exist enough memory in the world to fill the 64 bit addressing space. So why on earth would anyone want a larger pointer, when we don't have anything to use it for?

      While I'm sure this will change at one point (since 640 kbyte really isn't enough for everyone), it doesn't make sense to build a processor for requirements that might be 20 years away.

      And in case you're wondering: the so-called 128-bit processors of today are really only 32- or 64-bit processors, but since we lack the terminology for describing a processor with 64- or 128-bit registers, memory bus width, internal processing capabilities etc., the marketing dudes get away with calling them 128-bit processors.

      End of dry definition.

      Bo Thorsen,
      SuSE Labs.
  • Did they have to add "option=nopentium" to the lilo boot parameter list? :-)

    (Seriously though, I hope they haven't left the extended paging bug in)
    • by tempmpi ( 233132 ) on Thursday February 28, 2002 @06:16AM (#3083156)
      The extended paging bug wasn't a simple CPU bug; it was a complex bug involving the CPU, chipset and video card. Because the Hammer has a very different I/O architecture compared to the current Athlon, the parts of the CPU & chipset that caused the bug should be new designs anyway.
      AGP seems to be a problem on the first samples, as all of the demonstration systems were running without AGP video cards.
    • (Seriously though, I hope they haven't left the extended paging bug in)

      Since that bug is already fixed on current Athlons, I seriously doubt it'll be a problem with Hammer.

      299,792,458 m/s... not just a good idea, it's the law!

  • There's a lot of difference between 32 bit optimized code compiled for 64 Bit, and code written and optimized for 64 bit and compiled for 64 bit.

    Applications need to be programmed and optimized to make use of the extra registers, extra info paths, extra instructions available on the new platform. Without that, the application speeds can't be compared, even though the base code and output is the same.

    Let's take the example of some of the 1st. generation playstation II code... which was actually code written for 32-bit machines, on a different platform like the PC, or the old PSX. Now... pure recompiling won't get you any major performance boost, so all the developers had to "re-do" the code to make use of the 128-bit Emotion Engine.

    Exactly the reason why all these gamedev guys kept screaming it is much harder to code for the PS2 than for other platforms... one part of that whole thing is this... the other part is changing graphics APIs.

    PCs are DirectX/OpenGL... and the PS2 can be either custom renderers, or OpenGL.

    Put it in perspective....why don't 16 bit games re-compiled for 32 bit give a "major" performance boost...unless optimised code is included...??
    • by Space cowboy ( 13680 ) on Thursday February 28, 2002 @06:42AM (#3083213) Journal
      Applications need to be programmed and optimized to make use of the extra registers, extra info paths, extra instructions available on the new platform


      This is the job of the compiler... If I recompile source code I expect the compiler to optimise the object code in the best way for the target!

      Let's take the example of some of the 1st. generation playstation II code...

      No, let's not. The PS2 was so radically different from the PS1 (I've coded both) that it amounted to an architecture change, not just a platform upgrade. The PS1 is a pretty much bog standard CPU+VRAM+DRAM device. The PS2 is a dataflow architecture, with the idea being to set up datastreams, (with the code to execute being part of the stream), and to target those streams with a firing-condition model. This is amazingly versatile (and the device has the bus bandwidth and DMA channels to handle it, the PC doesn't) but it is *very* *very* different from the standard way coding is done. This is why PS2 games are still getting better two years down the line...

      Exactly the reason why all these gamedev guys kept screaming it is much harder to code for the PS2 than for other platforms

      Actually I don't think it's much harder at all, it's just different. You have 3 independent CPUs, all of which are pretty damn fast considering they're only at 300MHz. The device can do (peak) 3 billion (3,000,000,000) general purpose floating point multiply/accumulates per second, and you can get pretty close to that figure, unlike most peak throughput estimates. Bandwidth again, and the use of an opportunistic programming methodology rather than a logical-progression methodology.


      Having said that, I'm from a parallel computing background, so using only 3 CPU's is child's play :-)


      Put it in perspective....why don't 16 bit games re-compiled for 32 bit give a "major" performance boost

      Because there's a much more quantifiable change in going from 16-bit to 32-bit. Developers had been hacking around the 16-bit limit using 'near' and 'far' pointers (!!), which meant all the cruft from those 16-bit days was still sticking around and causing problems if you just recompiled.


      Now they're (at long last!) in the 32-bit arena, there are no such problems. A char* ptr is still a char* ptr, it now just has a greater domain. No cruft. No problems.


      This isn't to say that compilers won't get better over time though - optimisation is an inexact science, and you'd hope to see improvements as compiler-writers see how to improve the optimising stage.


      Enough...


      Simon

    • by tempmpi ( 233132 ) on Thursday February 28, 2002 @06:59AM (#3083247)
      There's a lot of difference between 32 bit optimized code compiled for 64 Bit, and code written and optimized for 64 bit and compiled for 64 bit.
      That might be true if the only things that changed were the register, address space and ALU size, but AMD also removed many flaws of the x86 instruction set. x86 CPUs have only 7 registers (EAX, EBX, ECX, EDX, ESI, EDI, EBP) for general purpose use. Other CPUs have many more registers; the lack of registers makes it very hard for compilers or assembler programmers to write efficient code for multiscalar CPUs. AMD added more registers. AMD also made a more efficient FPU. You can really get a nice performance boost from these changes with just a rebuild of your software.
      Applications need to be programmed and optimized to make use of the extra registers, extra info paths, extra instructions available on the new platform. Without that, the application speeds can't be compared, even though the base code and output is the same.
      That isn't true: almost all programs, even games, are now programmed in C(++). (Or something like Java or Perl, but those don't matter here.) The compiler can really use the extra registers/better FPU without any aid from the programmer (OK, maybe a compiler switch). Things like using the "register" keyword in C aren't really needed, as good C compilers are better than most programmers at choosing which variables to keep in registers.

      You also compared the transition from x86 to x86-64 to the transition from PSX to PS2. That is also something very different. The PS2 is hard to code for because the design of the graphics subsystem and vector CPUs makes it very fast on the one hand but also very hard to use to its full potential. The PS2's CPUs are also hard to use because the caches are too small.
      Put it in perspective....why don't 16 bit games re-compiled for 32 bit give a "major" performance boost...unless optimised code is included...??
      When the 386 was introduced, things like games were coded in assembler, at least the performance critical parts. Something that is coded in assembler can't be recompiled. Now even games are coded in high level languages.
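
      (A hypothetical illustration of the extra-registers point: with only ~7 usable general-purpose registers on i386, a loop like the one below keeps enough values live that the compiler typically has to spill to the stack; recompiled for x86-64, the eight additional registers (r8-r15) let everything stay in registers with no source changes. Exact code generation depends on the compiler, of course.)

      ```c
      /* Toy example only; remainder elements when n isn't a multiple of 4
       * are ignored for brevity. */
      long dot4(const long *a, const long *b, long n)
      {
          long s0 = 0, s1 = 0, s2 = 0, s3 = 0;     /* four accumulators,   */
          for (long i = 0; i + 4 <= n; i += 4) {   /* plus i, n, a, b live */
              s0 += a[i]     * b[i];
              s1 += a[i + 1] * b[i + 1];
              s2 += a[i + 2] * b[i + 2];
              s3 += a[i + 3] * b[i + 3];
          }
          return s0 + s1 + s2 + s3;
      }
      ```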
    • You're wrong in this.

      I have been working for SuSE on porting gcc and binutils to x86-64 for over a year now, and it has been pretty painless. After we had the basic system running, I ported a full-blown but small Linux system to it (sysvinit, linux-utils, vim etc.) and the only thing I had to do was to make configure scripts grok the x86_64-unknown-linux architecture.

      If you take a look at the design papers on x86-64.org or amd.com, you will find that the architecture is very easy to port to. It's basically an Athlon with 64-bit addressing modes on top (a very simplified way of looking at it). What AMD has done is the exact same transition that Intel did from i286 to i386 - 16 to 32 bit.

      The new architecture is impressively easy to handle, and gcc can by now optimize almost as well for x86-64 as for i386. It's really just a matter of recompiling.

      And if you don't want to do that, run the 32 bit binary. The x86-64 architecture includes running i386 binaries at native speed. This is no marketing crap, it really is the same as you would expect from an athlon.

      Of course, if your application has assembler in it, you have to port this. But take a look at the docs again, and you'll feel very much at home there. Actually the extra registers will give you a warm fuzzy feeling inside :-) But my point here is that there is no change in the way you think - no change in the coding philosophy.

      I appreciate your point, because for a lot of platforms it would be true. But on this one it simply isn't.

      Bo Thorsen,
      SuSE Labs.
    • Applications need to be programmed and optimized to make use of the extra registers, extra info paths, extra instructions available on the new platform.

      Obviously you're not aware of how the Athlon works, among other things.

      Internally, it has many more registers than eight. x86 instructions only reference eight registers, but internally the Athlon uses its full set to speed up the code, as well as exploiting several types of parallelism.

      For higher level languages, it is even less of an issue. There may be some impact on my Java code as to whether "int" or "long" has faster operations, but I'll guarantee that all my code using "double" will fly. The best part is that I won't even have to recompile! =)

      The other thing I'll gain is that all of my dynamic allocations will have much larger memory limits. The virtual memory limit per process for the first Linux port to Hammer [x86-64.org] is 511 GB.

      299,792,458 m/s... not just a good idea, it's the law!

  • by Inthewire ( 521207 ) on Thursday February 28, 2002 @06:15AM (#3083152)
    The article suggests that AMD write / release native compilers that plug into Visual Studio...which would be a good thing for MS programmers.
    Simple enough to say.

    I just wanted a lead-in for the following question:
    Did anyone else see a banner ad for Visual Studio .NET on Slashdot yesterday? Or was I hallucinating?
    • The article suggests that AMD write / release native compilers that plug into Visual Studio...which would be a good thing for MS programmers.
      I took issue with the author writing that as well. Like it's some trivial task in software engineering to suddenly start writing a compiler which is comparable to what's out there already on the highly competitive x86 platform. (Codeplay does this already of course)

      Obviously this isn't what AMD need to do; they need to help the people making compilers that support their platform. That's something I'd like to say of my company, but it's hard enough work getting AMD to answer e-mail, let alone provide documentation and hardware samples of forthcoming CPUs. You'd think they'd care a bit more, since we're the only people in the world making a compiler which vectorizes for 3DNow!, wouldn't you?

      By contrast, Intel give us virtually everything we could want in the world short of hard cash. Even though they have a department working on their own (highly competent) compiler, they recognise that wide support of their CPUs is a good thing and they should do everything to encourage it. AMD don't quite appear to have the same attitude at present although we live in hope.

      Also, AMD have kind of backed off their proprietary SIMD implementation (3DNow!) with their latest Athlon XPs. The 3DNow! Professional (as far as we can tell) is actually just the old 3DNow! but with SSE as well. An admission of defeat with regards to SIMD software support? One wonders what they're going to do with regards to double float and 128-bit integer SIMD, if anything (Hammer?). Support SSE2 and call it 3DNow! Advanced Server?

    • Tom's Hardware was really off the mark for this one.

      So they make a 64-bit plug-in for Visual Studio... great, what do you run it on now?

      I doubt Windows XP will know to save the other registers on a task switch, or be able to address the extra memory without playing page table games. That leaves AMD to come up with a better optimised 32-bit compiler like Intel did? But why the hell would they do that?

      Like it or not AMD is stuck waiting for Microsoft for this one but I wouldn't hold my breath.

  • Slightly off-topic, but also, strangely, relevant. Taken from the article on Gamespot:
    Both ClawHammer and SledgeHammer will run either a standard 32-bit operating system or a 64-bit operating system. AMD's demonstration used the 32-bit version of Windows XP and a 64-bit version of Linux (the 64-bit version of Windows XP hasn't been released and its current preview releases are specifically for the Intel Itanium).
    This smacks a little of Microsoft again giving exclusive previews, access, privileges etc. to their more, shall we say, pliable manufacturers. Or am I barking up the wrong tree entirely here? I thought this would be in contravention of their settlement with the US Department of Justice...
    • You _are_ barking up the wrong tree. There are multiple things you can blame AMD for, but definitely not being anti-Microsoft. Microsoft received the spec and the presentation on the tentative architecture before the closed circle of "open source gurus" and in total has had more than a 6-month lead on the Linux community. It is the same as with Itanic. Working, stable Windows is showing up now. Working Linux has been around for a while.
      • Not what I mean!


        You misread me. I'm not saying that AMD are anti-MS, I'm saying why are MS anti AMD by not giving them a preview of the 64bit Win XP?

          • Itanium's 64-bit setup has an IA64 (or whatever it's called) instruction set (although it does have a marginal x86 IA32 compatibility layer), so it's not compatible with the x86-64 instruction set of AMD's Hammer series CPUs.

            Therefore IA64 Windows is not compatible with x86-64 Windows, even though both the Itanium & Hammer are compatible with x86 IA32 Windows (although the Itanium is 486-slow as far as x86 code is concerned).

            What was slowing down x86-64 recompiling/developing is that Hammer CPUs only came out now, & that software compiling was real slow. Hence AMD commissioned Transmeta to bring out an x86-64 code-morphing version of their CPU, to be available to software developers to speed up re-compiling, etc.
            • Itanium's 64-bit setup has an IA64 (or whatever it's called) instruction set (although it does have a marginal x86 IA32 compatibility layer), so it's not compatible with the x86-64 instruction set of AMD's Hammer series CPUs.

            What exactly does this have to do with anything? MS has a 64-bit Windows - they should be able to add a hammer target machine and just build binaries for it. Sure it takes 20 minutes to boot, but the ia64 port should have nailed all the nasty bugs.

              • Whether it takes 20 mins to boot is irrelevant.

                I think the fact is that it takes more than 20 mins to port an OS to a completely different hardware platform, with a different instruction set, etc.
                • Which is why the Hammer has a big advantage over the Itanium - it's also compatible with x86/IA32 (including standard versions of Windows) without being dog slow like the Itanium is in IA32 compatibility mode.
        • Because Microsoft is not the bunch of evil geniuses some people think and is instead a rather incompetent company that is not up to the task of doing a clean 64-bit design?

          See the pseudo 64-bit Windows/Alpha for example, or the statement by Microsoft that there will be 32-bit parts in IA64 Windows....

    • I've read that 64-bit WinXP is only being developed in cooperation with Intel, thus AMD's Hammer will not work with 64-bit XP. I guess this will be sorted out pretty fast, as the 64-bit Hammer will be sold to many users who think about future OSes.


      I know Intel do not believe in this 64-bit hype yet, as there exists absolutely no software in the 64-bit market.

    • NetBSD had preview access to the Hammer architecture - NetBSD is stable and runs fine on the Hammer emulator! It has done for months! Why doesn't it get a mention?

      For those who don't know, NetBSD is not a Microsoft product.

  • by cymru1 ( 300568 ) on Thursday February 28, 2002 @06:38AM (#3083205)
    If you look on the Solo2 motherboard, just below the barcodes, there is a short piece of musical score. This little tune is the famous Intel Pentium chime. Picture of motherboard [tomshardware.com]
  • by boaworm ( 180781 ) <boaworm@gmail.com> on Thursday February 28, 2002 @06:45AM (#3083220) Homepage Journal
    ... I would immediately spend hours porting Emacs to 64bit architecture. That would make my LaTeX typing sooo much more efficient ;-)
  • ok, i made fun of taco, so now its your turn...

    NEWS FOR NERDS, NOT COMEDIANS.

    please everyone reply with all the "HAMMER" headlines here so i don't have to watch it on the main page.

  • Tom's Hardware apparently dropped by as well, check it out here [tomshardware.com].
  • 486DX-still-going-strong dept

    That's what I was hoping when I bought an old IBM Thinkpad off eBay a while back. Unfortunately, the truth has been painful - to even install a relatively recent Linux distribution I need more memory (for one) than the max potential for the unit...I think I'll stay with good ol' Pentium Pro 200's and up...

    • That's what I was hoping when I bought an old IBM Thinkpad off eBay a while back. Unfortunately, the truth has been painful - to even install a relatively recent Linux distribution I need more memory (for one) than the max potential for the unit...I think I'll stay with good ol' Pentium Pro 200's and up...

      Memory, yes that might be a sticking point with an old laptop. If you have the memory, Slackware or one of the minimalist distributions will run fine on a 486. Don't ask it to do too much, and you will be happy. If you can't think of anything else to do with them, 486 laptops make fine routers.

  • by discogravy ( 455376 ) on Thursday February 28, 2002 @09:04AM (#3083504) Homepage

    "When the only tool you have is a hammer, all your problems start to look like nails."
  • I think it's fair to say it'll be a while before *that* many apps are recompiled for x86-64. However, it only takes *one* program for each of .Net and Java to really accelerate them: the IL->native compiler.

    It's conceivable that Microsoft won't make too much use of this, wanting to stay friendly with Intel - but what if the Mono project did? We could end up with .Net running significantly faster on Linux than on "normal" .Net Windows even *before* considering OS efficiency. This could really shake things up a bit. Let's just hope the Mono team have appropriate resources to help them generate the appropriate code.

    Personally I'm a Java fan myself rather than .Net, but there are probably fewer political implications there. I would imagine Sun will implement x86-64 stuff (despite what another poster has said) just to get every bit of performance it can out of Java. I'm looking forward to seeing it happen :)

    Jon
  • So, it can run 64 bit programs and removes some legacy x86 stuff that makes recompiled current programs more efficient? I'm sure AMD will discuss this quite a bit in their ad copy... ON OPPOSITE DAY! If AMD is smart, they'll leave this sort of talk on geek overclocker websites; Joe and Jill Average buy computers due to the advice of talking cows and that sketchy "Steve" character, not the accolades of L33t Boyz, a German guy, and Haji from Johnny Quest. (Sounds like a winning sit-com, though.)

    However, AMD did a great job with the processor's name. Now they can get that early 90's Oak-town rapper with the enormous pants for the promos. Add background dancers surfing the web with stylish laptops in between pelvic gyrations, and a catchy slogan like "Hey, Pentium IV, U Can't Touch This." Remember the name recognition generated by the dancing Intel Inside guy? This would easily beat it by a factor of n.

    • Or, remember the Dennis Miller beer commercials? AMD could do something similar...

      The word "Hammer" lights up with an AMD logo. MC Hammer walks on and says, "I knew they'd call me. All the time."

      Of course, they'd have to license that commercial from Miller Brewing Co. :-P
  • Take a look at this site [amdmb.com]. Look on the board. Right below the stickers. It's the musical score to Intel's little jingle heard at the end of their Pentium commercials. They are just SOO funny.
  • I'd hammer in the morning,
    I'd hammer in the evening,
    I'd hammer at... What?
    Damn, this thing just cracked in half!
  • Windows would look like nails.. and would take just as long to drive in.

    I wanna see Hammer-based computers in storerooms running Linux with Windows as an optional downgrade..
  • by Animats ( 122034 ) on Thursday February 28, 2002 @02:44PM (#3085687) Homepage
    What we know now:
    • The 64-bit chip has more pins than the 32-bit chips.
    • It's a heavy chip.
    • The thingey that holds the heat sink on works better than the one on the old chips.
    • It goes really fast.

    When the team that ported Linux has something to say, let us know. It's good that AMD is showing sample parts that work.

    I expect that at least one of the big search engines will convert over to these things soon. They need multi-gigabyte address space in the servers.

  • Quote:
    AMD (NYSE: AMD) today announced that SuSE Linux AG, one of the world's leading providers of the Linux operating system, has submitted enhancements to the official Linux kernel.

    Read the rest here: http://www.amdzone.com/releaseview.cfm?ReleaseID=810 [amdzone.com]
