AMD

If I Had a Hammer

adpowers writes: "Anandtech is running an article about their preview of AMD's Hammer. They had one machine running 32-bit Windows and the other running 64-bit Linux. The Linux machine had a 32-bit program and an identical program compiled for 64-bit processor support. Both processors were less than 30 days old and ran without any crashes, but they weren't at full speed." We did one Hammer story a day or two ago, but several more have been posted since then (wild guess: the NDA expired). Tom's Hardware has a story; so does Gamespot.
  • by Mattygfunk ( 517948 ) on Thursday February 28, 2002 @06:08AM (#3083139) Homepage
    So it will be a few years until we all have 64-bit PCs with applications written for them. I don't understand why the development work wasn't put into 128-bit processors in the first place. Wouldn't this avoid the next transition period, when most applications are written for 64-bit machines?

    Maybe I'm oversimplifying it.

  • by cymru1 ( 300568 ) on Thursday February 28, 2002 @06:38AM (#3083205)
    If you look on the Solo2 motherboard, just below the barcodes, there is a short piece of musical score. The little tune is the famous Intel Pentium chimes. Picture of motherboard [tomshardware.com]
  • by Space cowboy ( 13680 ) on Thursday February 28, 2002 @06:42AM (#3083213) Journal
    "Applications need to be programmed and optimized to make use of the extra registers, extra info paths, and extra instructions available on the new platform."


    This is the job of the compiler... If I recompile source code I expect the compiler to optimise the object code in the best way for the target!

    "Let's take the example of some of the first-generation PlayStation 2 code..."

    No, let's not. The PS2 was so radically different from the PS1 (I've coded both) that it amounted to an architecture change, not just a platform upgrade. The PS1 is a pretty much bog-standard CPU+VRAM+DRAM device. The PS2 is a dataflow architecture: the idea is to set up data streams (with the code to execute being part of the stream) and to target those streams with a firing-condition model. This is amazingly versatile (and the device has the bus bandwidth and DMA channels to handle it; the PC doesn't), but it is *very* *very* different from the standard way coding is done. This is why PS2 games are still getting better two years down the line...
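    For illustration, a generic sketch of the firing-condition idea in C. This is not PS2 code (the node/port names and the two-input rule are invented for the example); it just shows operands streaming toward a node that executes only when its firing condition is met:

    ```c
    #include <stdio.h>

    /* Generic dataflow sketch (not PS2-specific): a node "fires" only
       once every one of its input operands has arrived.  A bitmask of
       filled ports stands in for the hardware's tag matching. */
    typedef struct {
        double inputs[2];
        unsigned arrived;               /* bitmask of ports filled so far */
        double (*op)(double, double);   /* the code travelling with the stream */
    } Node;

    static double mul(double a, double b) { return a * b; }

    static void deliver(Node *n, int port, double value)
    {
        n->inputs[port] = value;
        n->arrived |= 1u << port;
        if (n->arrived == 0x3) {        /* firing condition: both ports full */
            printf("node fired: %g\n", n->op(n->inputs[0], n->inputs[1]));
            n->arrived = 0;             /* ready for the next pair */
        }
    }

    int main(void)
    {
        Node n = { {0, 0}, 0, mul };
        deliver(&n, 0, 3.0);   /* nothing yet: port 1 still empty */
        deliver(&n, 1, 4.0);   /* fires -> prints 12 */
        return 0;
    }
    ```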

    "Exactly the reason why all these gamedev guys kept screaming that it is much harder to code for the PS2 than for other platforms."

    Actually, I don't think it's much harder at all; it's just different. You have 3 independent CPUs, all of which are pretty damn fast considering they're only at 300 MHz. The device can do (peak) 3 billion (3,000,000,000) general-purpose floating-point multiply/accumulates per second, and you can get pretty close to that figure, unlike most peak throughput estimates. Bandwidth again, and the use of an opportunistic programming methodology rather than a logical-progression methodology.


    Having said that, I'm from a parallel computing background, so using only 3 CPUs is child's play :-)


    "Put it in perspective... why don't 16-bit games re-compiled for 32-bit give a 'major' performance boost?"

    Because there's a much more tangible change in going from 16-bit to 32-bit. Developers had been hacking around the 16-bit limit using 'near' and 'far' pointers (!!), which meant all the cruft from those 16-bit days was still sticking around and causing problems if you just recompiled.


    Now that they're (at long last!) in the 32-bit arena, there are no such problems. A char* ptr is still a char* ptr; it just has a greater domain now. No cruft. No problems.
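    As a minimal sketch of that cruft (the near/far keywords shown in the comment were Borland/Microsoft 16-bit extensions, not standard C; the program itself targets an ordinary flat-model compiler):

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* DOS-era 16-bit compilers needed non-standard keywords to get
         * past the 64 KB segment limit, e.g. (Borland/Microsoft dialect):
         *
         *     char near *p;                              16-bit offset only
         *     char far  *v = (char far *)0xB8000000L;    segment:offset pair
         *
         * On a flat 32- or 64-bit target the cruft is gone: one pointer
         * type covers the whole address space. */
        char *p = "still just a char*";
        printf("sizeof(char*) on this target: %zu bytes\n", sizeof p);
        return 0;
    }
    ```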


    This isn't to say that compilers won't get better over time though - optimisation is an inexact science, and you'd hope to see improvements as compiler-writers see how to improve the optimising stage.


    Enough...


    Simon

  • by DrSpin ( 524593 ) on Thursday February 28, 2002 @07:56AM (#3083336)
    In the late '50s/early '60s, when the first mainframes were built, they were all approximately 60 bits. Thereafter, all "cost is no object" computers were 60/64 bits. There is not much evidence that anyone will ever want to go further than 64 bits. There are significant overheads to longer words (ever heard of "carry propagation"?).

    In fact, the proposed 64-bit processors will pretty much put every known processor design technique on a chip. At that point, we will have used all the ideas that were known when the VAX was designed (approx. 1980). Since then, nothing much new has been invented. The only missing piece of technology is content-addressable memories (i.e., execute a jump table in a single cycle instead of stepping through each option and comparing). These have also been known since about 1980, including how to make them. Used as cache tag RAM, they would make a HUGE performance improvement. There is no obvious reason for not using them, apart from the fact that it's a European development (mostly UK and Germany), and America has a problem with NIH.
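    To make the jump-table point concrete, here is a minimal software model of a CAM lookup as used for cache tags. The names and the 8-entry size are illustrative only; crucially, real CAM hardware performs all the compares in the loop below simultaneously, in a single cycle:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define WAYS 8   /* illustrative: tag entries matched "at once" */

    /* In hardware every stored tag is compared against the probe in
       parallel and the matching entry's line fires; this loop merely
       emulates that single-cycle parallel compare in software. */
    static int cam_lookup(const uint32_t tags[WAYS], uint32_t probe)
    {
        for (int way = 0; way < WAYS; way++)
            if (tags[way] == probe)
                return way;   /* hit: this entry matched */
        return -1;            /* no entry matched: a miss */
    }

    int main(void)
    {
        uint32_t tag_ram[WAYS] = { 0x1000, 0x2040, 0x3FFF, 0x4242 };
        printf("probe 0x4242 -> way %d\n", cam_lookup(tag_ram, 0x4242));
        printf("probe 0xDEAD -> way %d\n", cam_lookup(tag_ram, 0xDEAD));
        return 0;
    }
    ```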

    I don't deny there are special cases where 128 bits (or even 1024) might pay, but to sell, you need a general-purpose machine, and 64 bits is the top whack as far as we know. After that, massively parallel is more cost-effective (ICL DAP, etc.).

  • by Anonymous Coward on Thursday February 28, 2002 @08:10AM (#3083366)
    >I am no expert, but
    Man, yes I can tell. So why you feel compelled to write this stuff and then get modded up to 4 is beyond me.

    First of all, you have the classic "Umm, first of all it's hard enough to engineer a 64-bit CPU with related components." Face it: the 4Stack [jwdt.com] was made by one grad student.

    Then you have the audacity to continue with the equally intellectual "Then there is the manufactoring details" [sic], etc., etc. Guess what: manufacturing does not care about bit width. They care about layers, metallisation, and more. Only insofar as a wider bit width requires more interconnect, which usually requires more metal layers, does it have any impact.

    Then, finishing off with a perceived lack of economic benefit, you truly complete the works of the terminally uninformed.

    Yes, for a GP CPU, 32 and 64 bits are OK; for graphics, DSP, and number crunching, 128 bits can be required. And guess what: Cray has made 128-bit computers. I have rarely had the displeasure of reading such a pile of uninformed garbage. Even 5 minutes on Google would have shown this clearly, even to someone who has not been involved in design AND fabrication AND register-level programming for 14 years.

    Oh yes, as for the difficulties of programming 64-bit processors: have you heard of Linux? You have even failed to notice that Linux has been ported to Itanium, SPARC, and Hammer. Well done.

  • by thorsen ( 9515 ) on Thursday February 28, 2002 @10:43AM (#3083889) Homepage
    You're not oversimplifying, you're simply wrong :-) I believe the cause of your mistake is that you are listening to marketing guys without realizing it. So, some facts to set the record straight.

    The definition of a 64-bit processor is that it has 64-bit addressing: 64-bit pointers. Everything else (64-bit registers, etc.) is just the icing on the cake and the things you would expect.

    With this in mind, it's easy to see why you're wrong. At this point in time, there does not exist enough memory in the world to fill the 64-bit address space. So why on earth would anyone want a larger pointer when we don't have anything to use it for?
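    A quick back-of-envelope check of the sizes involved (exact, since powers of two are representable in a double):

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* 2^64 addresses at one byte each is 16 exbibytes, against
           the mere 4 gibibytes a 32-bit pointer can reach. */
        double bytes64 = 18446744073709551616.0;  /* 2^64 */
        double bytes32 = 4294967296.0;            /* 2^32 */
        double eib     = 1152921504606846976.0;   /* 2^60 = 1 EiB */
        double gib     = 1073741824.0;            /* 2^30 = 1 GiB */
        printf("2^64 bytes = %.0f EiB\n", bytes64 / eib);  /* 16 */
        printf("2^32 bytes = %.0f GiB\n", bytes32 / gib);  /* 4  */
        return 0;
    }
    ```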

    While I'm sure this will change at some point (since 640 kbytes really isn't enough for everyone), it doesn't make sense to build a processor for requirements that might be 20 years away.

    And in case you're wondering: the so-called 128-bit processors of today are really only 32- or 64-bit processors, but since we lack the terminology for describing a processor by its 64- or 128-bit registers, memory bus width, internal processing capabilities, etc., the marketing dudes get away with calling them 128-bit processors.
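    One way to see the distinction on a PC (SSE2 intrinsics; any machine with 128-bit vector registers but 32- or 64-bit pointers makes the same point):

    ```c
    #include <emmintrin.h>   /* SSE2: 128-bit registers on a 32/64-bit CPU */
    #include <stdio.h>

    int main(void)
    {
        /* A 128-bit *register* does not make a 128-bit *processor*;
           the pointer (addressing) width is what names the machine. */
        __m128i wide = _mm_set1_epi32(42);                        /* one 128-bit value */
        printf("register width: %zu bits\n", sizeof wide * 8);    /* 128 */
        printf("pointer width:  %zu bits\n", sizeof(void *) * 8); /* 32 or 64 */
        return 0;
    }
    ```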

    End of dry definition.

    Bo Thorsen,
    SuSE Labs.
  • by bhurt ( 1081 ) on Thursday February 28, 2002 @12:17PM (#3084583) Homepage
    Actually, assuming Moore's Law (in its debased form) continues, we can calculate how long it will be before we need to go to 128 bits. The calculation is easy: 18 months of growth for every extra bit.

    So going from 8 bits to 16 bits took a mere 12 years, say 1966 to 1978. Going from 16 to 32 took a little longer: 24 years (1978-2002). Going from 32 to 64 bits will take 48 years: it'll be 2050 before we outgrow 64 bits.

    And by the same rule (each doubling from W to 2W bits takes W × 1.5 years), it'll be 2146 before we outgrow 128 bits, 2338 before we outgrow 256 bits, 2722 before we outgrow 512 bits, etc.
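    The whole series drops out of the stated rule; a small C program reproducing it (the 1966 starting point is the parent post's own estimate):

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* Model: one extra address bit every 18 months, with 8-bit
           addressing outgrown around 1966.  Doubling a width of W bits
           needs W extra bits, i.e. W * 1.5 years. */
        double year = 1966.0;
        for (int bits = 8; bits <= 512; bits *= 2) {
            printf("%3d-bit addressing outgrown around %.0f\n", bits, year);
            year += bits * 1.5;
        }
        return 0;
    }
    ```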

    Of course, there are slight physics problems with Moore's Law continuing for the next 720 years. But that's for a different post...
  • by Mike Greaves ( 1236 ) on Thursday February 28, 2002 @02:06PM (#3085362) Homepage
    "slight physics problems" is right; and how!

    I'm very doubtful that 128-bit machines will *ever* be built, though only a fool would say they definitely won't be, this early in the game.

    32-bit CPUs still take large chunks of silicon, and their features are approaching 1E-7 meters in size. 64-bit machines will not be severely limited until they are trying to manage about 10 orders of magnitude (1E10 times, well over 2^32 times) more circuit elements. If circuits are still basically planar in physical layout, this implies circuit features approaching 1E-12 meters (1E-7 / sqrt(1E10))...

    Since silicon atoms are roughly 2.5E-10 meters across, there might be a slight problem with building circuit features this small. ;-)

    Put another way, the realistic limit for further process shrink is about 2 more orders of magnitude (the circuits would be just a few atoms across), which gives only 4 more orders of magnitude in the total number of circuit elements, not 10.
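    Carrying those numbers through (all inputs are this post's own estimates, nothing measured):

    ```c
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* ~1E-7 m features today; 64-bit addressing only exhausted after
           ~1E10 times more circuit elements.  On a planar die, element
           count scales as 1/feature^2, so divide by sqrt of the growth. */
        double feature_now = 1e-7;    /* metres, ca. 2002 */
        double growth      = 1e10;    /* factor in element count */
        printf("implied feature size: %g m\n", feature_now / sqrt(growth)); /* 1e-12 */
        printf("a silicon atom:       %g m\n", 2.5e-10);
        return 0;
    }
    ```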

    So I really have a hard time seeing how a computer built with *chips* that is smaller than a skyscraper would ever need more than 64 address bits.
  • by Shadow99_1 ( 86250 ) <theshadow99@gma[ ]com ['il.' in gap]> on Thursday February 28, 2002 @04:20PM (#3086353)
    Actually it's a progression of Intel's theme... Sort of like saying "I'm a step above you"... check out the front page of AMDzone.com to see what I mean...
