AMD's New SledgeHammer: 64 bit chip

ChickenBomb wrote to us with word that the perennial battle between Intel and AMD is continuing, with AMD unveiling plans for its new 64-bit microprocessor, code-named SledgeHammer. Heck of a lot better name than Itanium, IMHO.
  • I was speaking of the PC market, esp. something I can find reasonably cheap.
    The G3/G4 are good chips. Unfortunately I STILL haven't been able to find one I can put together myself.
    As far as other hardware... I have either been unimpressed by it, or it is way too expensive for the megamips.
  • My code will work anywhere, once someone ports the VM to it.

    If you write decent C then your code will work "anywhere" once "anywhere" has a C compiler ported to it. And your code (again, if you write it decently) will take advantage of many of the innate features of the hardware. Plus, a C compiler will be available long before a Java VM...

    Don't get me wrong, Java was/is a neat idea. But portable, available, source code is so much better.

  • Now I hope that when 64 bits arrive in volume (and given that we'll still be alive) this #@$% 32-bit floating point rush that we have today will go away.


    (Oh, I meant games and 3d-stuff if you didn't get it)

  • The time to market for a flagship CPU is four to five years (from initial architecture design, to volume production). For a chip with a new ISA, it is longer (the article said Merced took six years, but it actually took over seven years). In other words, I don't see how they think they will get this out by 2001.

    Since they are simply making the IA-32 architecture 64 bit instead of developing a new architecture, they will be hampered by the IA-32 instruction set (e.g. all of the different funny modes, space for obsolete instructions, difficulty of instruction decode, etc.) They have an opportunity (in theory) to make a new (and good) architecture, but they are choosing to use the most backwards and difficult one available. TRUE, this is good for backwards compatibility, however, IA-64 is also backwards compatible AND has a different architecture.

    The reason this architecture will fail is that all the software makers have done ports to IA-64. I am especially talking about the non-x86 commercial Unixes. Porting things like HP-UX, Irix, AIX, and Tru64 makes as much sense as porting those to IA-32. In the more PC-type market it is viable, but only just: again, it depends on support from software makers, and I really don't see why they would all want to go out and support ANOTHER architecture which is totally unproven. It also depends on support from VARs: could you imagine IBM or HP using an AMD chip in their high profit margin workstations?

    Finally, I have the following to offer: If AMD were smart, they would license an existing 64-bit architecture (say, Alpha), and then engineer a die that would put that together with their IA-32 stuff. This would actually be a serious threat to Intel (but what AMD has now is a joke). It would have the advantage of having existing 64-bit operating systems, and it would also be quicker to market (it might be cheaper too, depending on the licensing costs).
  • For one thing, those 21264 instructions are actually just 32-bits long IIRC ('tho they manipulate 64-bit data).

    Question: What does instruction word size have to do with the quality of a processor? Address and data word size is the important part, AFAIK. "How much memory can you address?" and "How high can you count?" are the questions you are concerned with.

    In fact, wouldn't a smaller instruction word size keep program size smaller?
  • 256-bit?!

    If I store a flag in 'int' I'll have 0.39% efficiency. Cool!
  • The 68000 was a hybrid 16/32 bit chip. It had three 16-bit ALUs. The 68020 was the first true 32-bit chip from Motorola.

    The 16032 and Z8000 were buggy chips. They died of self-inflicted wounds.

    I don't see how anyone can seriously compete with Intel. Their chip designs may be mediocre, but who else has the process technology and fab capacity to produce millions of high speed chips?

  • I believe they stopped using x86 because Intel was forbidden to trademark 586 (as it is a number). I guess trademarks became an issue because of the upstart AMD and Cyrix chip corps.

    I would have guessed Hexium and Heptium etc. back then, but nope. Anyway - Itanium sucks, so does Sledgehammer. I'd prefer 'Brute' or something else I can relate better to ;)

    "You rarely reach the target first by walking in another man's path"

  • by Shirotae ( 44882 ) on Tuesday October 05, 1999 @05:31AM (#1637796)
    This reminds me of the time when Intel introduced the 8086. Back then, ZiLOG with the Z80 was a real force in the market competing with Intel's 8080, Motorola's 6800 and Rockwell's 6502.

    Then came the 16 bit revolution (when we really needed more - the 16-bit minicomputers running out of space should have been the clue.)

    The competitors were:
    Intel with the 8086
    ZiLOG with the Z8000
    Motorola with the 68000
    National Semiconductor with the 16032 (later called 32016)

    In technical terms, the order of merit was 16032, 68000, Z8000, 8086. In marketing the 8086 was way ahead, but I think the 68000 was next.

    Only two of these gained any substantial market share, and the 68000 had the advantage of being really a 32 bit processor. The 16032 was a better 32 bit processor, but it was just too late arriving.

    If AMD have some technical feature of the scale of 32 vs 16 bits back then, and they are also far enough along with the development that they can ship at most a few months behind Intel, they have a chance of competing in this space. The more likely outcome of developing an incompatible processor is that we will see them reinvent themselves in some niche market in a few years' time, as ZiLOG have now done.

    The Open Source community may well be able to use SledgeHammer when it arrives, but the software shipped as binary will ship for Itanium first (or only), and that will be what counts.
  • People should realise that this is the CPU industry's season to sell vapor - you'll see a whole host of announcements of future chips, previews of new silicon, etc. 'Microprocessor Forum' is the conference where this sort of stuff happens... and it starts today.

    This isn't so much a bad thing - Merced was announced in a similar manner MANY years ago. People should take anything you hear this week about the distant future (i.e. 2+ years) with a grain of salt. Chips take a long time to bring to market and always change a lot during the process. Remember, they are announcing their goal - not new silicon that's sampling to customers. These are VERY different things.

  • Did not Compaq stop the future of the Alpha chip?

    AMD might buy some 64-bit technology from Compaq?

    Or maybe Transmeta really did build some highly performant x86 emulator that AMD will use?
  • Keep in mind that AMD hasn't even made their presentation at the MPF yet. This article is most likely based on what little AMD has said/allowed to be said about the chip prior to their presentation. Hell, most of the information could very well come from whatever they have in the MPF press guide. For the record, I've not seen the press guide or whatever literature is handed out at the Microprocessor Forum; I'm just making some (somewhat) educated guesses as to why detail is so abhorrently missing in this article (ooh, don't forget NDAs, as well :)
  • Yes, please tell us the details. (You'd think this would go without saying...)
  • Will everyone jump on the Merced bandwagon and abandon the new AMD chip?

    well, I think yes. The only reason that people buy AMD is so that they can get relatively the same speed chip for a much lower price. They run the *same* programs, not *different* ones that their new 64bit will run. I really can't see developers *or* users wanting to buy a chip like this. The Macintosh may have been a better design, but who supported it, and where are they now?
  • by delmoi ( 26744 )
    Intel chips have no problem with more than 640K of RAM. It was a design flaw in Microsoft's OS.

    The 68k is another type of CPU; the 'k' is short for 000, i.e. 68000, 68001, etc.
    "Subtle mind control? Why do all these HTML buttons say 'Submit' ?"
  • perhaps they could even get Lemmy to be their spokesman.

    a little bit of 'Speedfreak,' and you could bet your sweet biffy I'd con my wife into letting me buy one. :)
  • The Sledg-O-Matic is the handiest, the dandiest little damn processor you ever did see. It'll replace a whole slew of previous chips and processors in your boxen. It fits in all sockets, with a nudge and a shoehorn sometimes, but it'll go. And it smashes its way through software installs and kernel compiles. Need more bandwidth? Let the Sledg-O-Matic clear the way. Have a problem with programs that don't work? Use the Sledg-O-Matic on the CEO of the software company that made it. It's a lot more permanent than a pie in the face.

    Next time you go to buy a CPU, remember the Sledg-O-Matic!

  • I think AMD really is moving too soon with this. Shouldn't they concentrate on delivering enough Athlons first?
  • And now for something completely serious...

    Flag - 1 bit used
    Char - 8/16 bits used
    Your average int - about 5-15 bits used....

    There must be some data on the average number (and dispersion, for that matter) of USEFUL bits per piece of computer data (word?) in an average computing task.
    Now it's obvious (to me at least) that if you get a 256-bit (or whatever) CPU, you'll actually be LOSING bandwidth compared to your average 32-bit one, as you'll be tossing absolutely useless zeroes all around your computer.

    Can someone calculate the OPTIMAL number of bits per word? Bandwidth-wise.
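The 0.39% figure quoted earlier in the thread is just 1/256. A quick sketch of the arithmetic (the function name is mine, purely illustrative):

```python
# What fraction of a machine word carries information, for a value
# that only needs `useful_bits` of it.
def bit_efficiency(useful_bits, word_bits):
    return useful_bits / word_bits

# A 1-bit flag stored in a 256-bit int uses 1 of 256 bits:
bit_efficiency(1, 256)   # 0.00390625, i.e. the ~0.39% quoted above
bit_efficiency(1, 32)    # 0.03125, i.e. ~3.1% on a 32-bit machine
```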
  • ...that they get enough money from Athlon sales to go ahead and smash Itanium with their Sledgehammer(wow! I sound like a marketing type).

    If they do make it to production, I know the SledgeHammer will be a superior chip; AMD has proven themselves many times in the past and won't let us down.

  • You obviously never tried coding anything for some 8-bit chip? In assembler, of course. Let me tell you that using more than one register for a number sucks badly.
    To say nothing of address spaces. If you want to address more than 4 gigs without kludges you'll need more than 32 bits! (Or you could address words, not bytes - then you'll have 16 gigs, but that sucks also)
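A rough Python model of the multi-register pain the comment describes (illustrative only; real 8-bit code would do this with ADD followed by ADC in assembler): adding two 16-bit numbers when registers hold only 8 bits means splitting each value into bytes and propagating the carry by hand.

```python
# Adding two 16-bit values on an 8-bit CPU: split each value into low
# and high bytes, add the low bytes, then add the high bytes plus the
# carry from the low-byte addition. Result wraps modulo 2**16.
def add16_with_8bit_regs(a, b):
    a_lo, a_hi = a & 0xFF, (a >> 8) & 0xFF
    b_lo, b_hi = b & 0xFF, (b >> 8) & 0xFF
    lo = a_lo + b_lo
    carry = lo >> 8                      # 1 if the low-byte add overflowed
    hi = (a_hi + b_hi + carry) & 0xFF
    return (hi << 8) | (lo & 0xFF)

add16_with_8bit_regs(0x12FF, 0x0001)     # 0x1300: carry ripples into the high byte
```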
  • Itanium really is a stupid name for a chip. I think Merced was fine.

    Now Sledgehammer is a great name. And instead of "Start me Up" by the Stones they could advertise with something by Motorhead! *wink*

  • If this Hammer really will be of x86 line it will be a 4-IN-1 chip:

    1. 16 bit - 'real' mode
    2. 16 bit - prot. mode
    3. 32 bit - prot. mode
    4. 64 bit

    Gee. I'm glad it won't have a UNIVAC instruction set at least.
  • by Anonymous Coward
    AMD needs to make a backend for GCC for their SledgeHammer so that they get all the Linux nerds backing them. This plus a (hopefully) better chip than Itanium would let them push forward into server space. Low-cost hardware + Free Software would really rock.
  • I for one love AMD processors. I'm running a K6-2 450 in my machine at home right now. And I'd LOVE to get my hands on the chip formerly known as K7. But I just have a problem with one key aspect of this processor.

    Backwards compatibility. From what I've been reading in the past about processors, this is the key "feature" that keeps system speeds down. It's one of the reasons RISC processors are faster than their x86 counterparts.

    Intel finally has the right idea by moving to a completely new 64 bit platform instead of just adding to the x86 chips. And now AMD is going to take a step backwards.

    Ahh.. screw em both. I'm going to save up for an Alpha, or a G4 to run Linux on.
  • by Guy Harris ( 3803 ) <> on Tuesday October 05, 1999 @09:14AM (#1637816)
    Also note that SledgeHammer might be able to run Win95/Win98/WNT4/Win2000 out of the box.

    And so will, presumably, Willamette (which, at least as I infer from what I read in Microprocessor Report, will be the next IA-32 core from Intel; they may call it "Pentium IV" or whatever, but it appears it won't be a P6 tweak).

    The questions then will be

    1. Which of them will do a better job at running those 32-bit OSes? (The 64-bitness of SledgeHammer will probably be irrelevant for that job.)
    2. To what extent will the ability to run 64-bit OSes be important to those buying SledgeHammer machines who intend to run 32-bit OSes now?
  • I believe that the design team and the fab/manufacturing team are separate. Now that Athlon is out of the design cycle, AMD can afford to put the design team to work on a new chip. Unless you want them to fire the design team and allocate more money to the fab/manufacturing/sales of Athlon?

  • 32-bit numbers are limited in (AFAIK) two ways today:

    You forgot the biggest limitation of 32-bit machines: address word size. 32-bit machines can address a maximum of four gigabytes of memory. A 64-bit machine can address four billion times that. It is not uncommon to want 8GB, 16GB, or even more memory in servers these days. And it will only grow larger as disks get bigger. A 2500 GB disk array wants a lot of cache. :-)
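The address-space arithmetic here is easy to check (the helper name is mine, purely illustrative):

```python
# Maximum byte-addressable memory for a given address width.
def max_bytes(address_bits):
    return 2 ** address_bits

GB = 2 ** 30
max_bytes(32) // GB   # 4: a 32-bit machine tops out at 4 GB
# A 64-bit machine can address 2**32 (about 4.3 billion) times as much:
max_bytes(64) // max_bytes(32)   # 4294967296
```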
  • I think that it'll be a bit hard for AMD to persuade developers to write code that'll run on their platform rather than an Intel's - isn't this a bit premature? With the two key people having resigned, and market share lost through Intel's price drops, they're relying too much on the success of the Athlon to give them enough pull. At least, that's how I see it...

  • Well, the great irony of all this "backwards compatibility" crap is that PCs are the most NON backwards compatible machines there are. When you look at running modern software on a 286, forget it. Older software runs fine on newer machines, true, but really, what microscopic fraction of the market has that need - and what is there that was written in 1986 that hasn't been done 10 times better since then? (in terms of Windows software)

    As far as Linux use goes - and I think that's pretty important to a lot of people on this list - I do think that effort should be made to make sure new software will run on 486s, but going back further would probably be a waste of time, since a 486 PC can be had virtually free these days anyway. Heck, you can get a decent low-end Pentium for next to nothing.

    "The number of suckers born each minute doubles every 18 months."
  • From the article:

    Krelle said AMD's SledgeHammer chip will be designed to run the older 32-bit software at high speeds, in contrast to the relatively slow performance that is expected for the 32-bit software on the Merced chip. And since AMD's new "x86 64" architecture will offer a less radical style of computing than Merced, Krelle said, it will be far easier for programmers to write 64-bit versions of the software.

    This seems somewhat surprising, as I would expect Intel to pay close attention to the needs of their good pal, Microsoft. So now you'll need the competition's chip to run 32-bit apps more efficiently... If what AMD claims is to be believed.

    And Microsoft is still years away from having a decent 64-bit OS.

    With the competition following on Intel's heels, will Intel be forced to whip their 64-bit chips into gear? If so, will they be forced to toss their alliance with Microsoft to the pigs, and move on into the realm of alternate 64-bit OS?

    If so, they'll get a lukewarm welcome, I'm sure. They're not nicknamed Wintel for nothing. I think that as the possibility of 64-bit platforms becomes more and more a reality, the relationship between Intel and Microsoft is becoming detrimental to Intel. And they're both likely to lose ground.

    I dunno; maybe I'm reading too much into it. Maybe Microsoft will come up with their Win64 platform, and people will consider crappy performance to be the norm, and nothing will change. That certainly wouldn't be anything new.

    "There is no surer way to ruin a good discussion than to contaminate it with the facts."

  • Wait, they're making a chip that is compatible with the archaic x86 instruction set - so what is so forward thinking about that?

    Not to dis Linus and get my karma all beaten up and stuff. . .

    "The number of suckers born each minute doubles every 18 months."
  • by Sloppy ( 14984 ) on Tuesday October 05, 1999 @05:42AM (#1637824) Homepage Journal

    This is definitely a compromise solution, but it could work well for AMD.

    It will work well for AMD because it is a compromise solution. The PC industry is completely built on compromises because the masses like to take small incremental steps. That's just how evolution works; large mutations are risky, and escaping a local optimum is expensive. It looks like Intel tried to introduce real technological progress, and now they're going to face a threat from someone who is going to use their very own stepwise refinement doctrine.

    I don't know whether to be happy or sad about this. I hate seeing low tech win again, but there's such satisfying justice in seeing Intel stabbed with their own weapon, wielded by someone who uses their old(?) philosophy. Yes, I hope AMD goes ahead with this, and makes a mockery of the PC industry for another 20 years. Maybe that's my hatred talking, but I just can't help it. Even if the new boss is the same as the old boss, it's going to feel soooo good to see the old boss suffer.

  • by bubbalou ( 98776 ) on Tuesday October 05, 1999 @05:42AM (#1637825) Homepage
    Do we really need another 64-bit CPU when there is already a really great one languishing on the sidelines? Alpha AXP runs Linux extremely well and is the fastest microprocessor out there. Don't get me wrong... I love AMD's offerings--I've got Linux boxen running on their 5x86, their K6 II and K6 III lines, and I'm hankering for an Athlon--but Alphas are sweet machines, and you can get 'em now. I guess I'd have a more welcoming attitude if I thought it would help drive down entry-level price points for the other offerings like Itanium and Alpha.
  • oops, better make that bippy.
  • by bhurt ( 1081 ) on Tuesday October 05, 1999 @05:45AM (#1637828) Homepage
    Intel realizes this, I think. But on the other hand, the clock is ticking on the life expectancy of _every_ 32-bit chip. If the average desktop system being sold today has 128M of memory, and that number is doubling every 18 months, then in 4 more doublings, or 6 years, the average desktop system will have 2 gigs of RAM. Already it's not unusual to see large servers with tens of gigs of RAM, and high-end workstations with multiple gigs of RAM. Intel is the _only_ desktop & server chip manufacturer still selling only 32-bit chips.

    The 386 was released in 1986 IIRC. It wasn't until 1995 that Microsoft managed to release a broadly-accepted 32-bit OS. And the situation doesn't look any better today. But Intel can't wait ten years for Microsoft to get its act together. This explains Intel's sudden support for Linux - it's one operating system that Intel can assure itself will be running on Merced (if you want something done right...). Intel already has had experience with the GCC compiler (remember pgcc), and once GCC is ported, even Linus agrees that porting Linux is easy.
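The doubling arithmetic in the comment above works out as follows (a sketch; the 18-month doubling period is the poster's assumption, not a law):

```python
# Project RAM growth assuming it doubles every `doubling_period` months.
def projected_ram_mb(start_mb, months, doubling_period=18):
    return start_mb * 2 ** (months // doubling_period)

projected_ram_mb(128, 72)   # 4 doublings in 6 years -> 2048 MB, i.e. 2 GB
```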
  • As I have seen till now, the processor industry seems to change because Linux (and other free Operating Systems) make it possible for customers to use other microprocessors.

    But which one should someone use? I, for example, really hate using these Intel or AMD chips at the moment because they are x86 compatible (the problems with x86 have been discussed often enough already).

    Yes, you're right. Alpha Microprocessors are a high-performance way to go, but they are really expensive.

    The only things which are _really_ interesting are the StrongARM (from intel/digital) and the PowerPC Open Platform developed by IBM.

    StrongARM seems to have been dropped by Intel, because you don't hear anything at the moment. On the other hand, NetWinders seem to sell well. I don't know what to think about that.

    IBM's PowerPC Open Platform hasn't launched yet and the website is rather small at the moment, but it looks interesting. Is it possible to escape from these old x86 times?

    If I had to decide which platform to buy at the moment, I wouldn't be able to buy anything, because I simply don't know. All these interesting and good platforms seem likely to die in the future if there is not enough support from customers. - Most users I know buy x86 chips because they simply "work". (They buy AMD if they don't like Intel; it's a step in the wrong direction, I think. The decision is not AMD or Intel; the decision is x86-ARM-PowerPC-Alpha-SPARC.)

    Maybe someone is interested in discussing that.
  • Ha, made you look. ;) Seriously, since 64bitishness is mostly vapor anyway, I've been daydreaming for years now about what a 256-bitter would be capable of. That would be some serious throughput!

  • There are reasons to prefer one company over another that aren't based just on their products. There are also moral issues here.

    Check out
    to read about how badly Intel treats its employees, and you'll see some of the moral issues I'm talking about. After I read this, I decided that not one more cent of my money was going to be put toward buying a processor made by Intel.
  • I happened to hear a conversation where an IT specialist tried to explain 64-bit software to a newbie.

    The joke is that the newbie said: "Processors available today are 32-bit, right? And the next CPUs will be 64-bit? But that means two times larger software!!!"

  • Then we'll have a CPU the size of a fridge, eating enough power to make a small African country happy, and a couple of rooms of peripherals.
    It will run Sun Linux, of course.
    And there will be a host of terminals connected to it.
    And the admin will slap your little hands on your every move.

    Welcome back to the future.

    (Arrgh. I miss personal computers)
  • All I can say is WTF. Ford didn't have to trademark "5.0 Liter". Why the hell does Intel need to?

    "The number of suckers born each minute doubles every 18 months."
  • I really hope they don't just tweak the old CISC x86 instruction set

    That's precisely what the AMD press release [] says they're going to do:

    "AMD plans to extend the x86 instruction set to include a 64-bit mode, delivering a simple yet powerful solution that enables all of the performance benefits associated with 64-bit computing, while maintaining compatibility and a leading-edge performance roadmap for the existing installed base of x86 32-bit software applications and operating systems," said Weber. "No other 64-bit solution has full native x86 32- and 64-bit compatibility."

    There's also this random quote in there, also indicating that they don't plan to introduce some Exciting New 64-bit RISC Architecture:

    "By extending the x86 instruction set to 64-bits, AMD's x86-64 technology should give us very fast compiler retargetting and the easiest kernel port so far," said Alan Cox, Linux Kernel Developer.

    (Yeah, I just about dropped my teeth when I saw a quote from Cox in there....)

  • 'coz you can put those additional bits to good use, that's why. Figure bits for the register index (e.g. 128 int/128 FP registers => 7 bits per argument => 21 bits already. Add bits for predication (64 1-bit registers, for instance). Then add bits for the opcodes themselves...)

    You might get away with fewer bits if you have a small (e.g. x86-style) register file and no fanciness. But otherwise... it's gonna hurt.
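The bit budget sketched above can be tallied directly (function and variable names are mine; the register counts are the comment's hypothetical ones):

```python
# Bits needed to index one of `n` registers.
def index_bits(n):
    return (n - 1).bit_length()

reg_field = index_bits(128)                # 7 bits per register operand
operands = 3 * reg_field                   # 21 bits for dst, src1, src2
predicate = index_bits(64)                 # 6 bits to name a predicate register
total_before_opcode = operands + predicate # 27 bits, before any opcode at all
```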
  • AMD is taking a big risk, here. Of course, the biggest risks pay off the best, but they can also fail spectacularly.

    Consider: AMD has, in the past, made its money by doing what Intel does, but cheaper and better. While it did mean that AMD was always in Intel's shadow to some extent, it was a good market to be in. Being number two in the PC industry is a good place to be if you have significant market share, and AMD was doing well on that. It was also good for consumers to have a choice in their PC purchases.

    Now AMD switches to an incompatible architecture. It may beat Intel's line in every way, but stuff written for Intel will not work on AMD. They may lock themselves out of a large market. DEC's Alpha CPU, for example, is a great design, but it sells a fraction of the units the K6 line does. We may also be back to having a single Intel-compatible OEM -- namely, Intel.

    It will be interesting to see how this turns out, that is for sure.

    Just my 1/4 of a byte. ;-)
  • You mean Horium.

    (Horrorium, Hornyum... I'm going to bed, it's way too late here)
  • So, why exactly would I want to code for one of these instead of:

    Alpha on Linux/FreeBSD/Tru64.
    Merced on Linux/Win64.
    Sparc3, just because a list is too short with two.

    Dunno. Especially when you consider the Alpha and Athlon are in a kinda symbiotic relationship wrt sharing EV6.

    Anyway, all this next generation stuff is a bit up in the air. Just use the T-word.

    Dave :)

  • The K7 is a RISC chip with a predecoder unit to break apart the Intel instruction set; they are already faking it, so what's so bad about faking it with a bigger hammer?
    All you need to do is keep the existing predecoder instructions, add the ones for 64 bit, increase the pipelines, and voilà! You have a 64-bit chip based on a proven (or soon to be proven) design. They will probably add a unit in/near the predecoder to combine 32-bit segments for the 64-bit core.

    Beyond that, it's a great idea anyway. If done right, they will have a single chip that will compete with both Intel's 64-bit and 32-bit (you don't think they are going to abandon desktop users, do you?) offerings for many years to come. While Intel works on two fronts, AMD can focus on one. You didn't think they built the K7 architecture to only last for the next 1-2 years. Much of it will probably still be around 4-5 years from now.

    (BTW, I have seen no proof that the G4 is faster than the K7. They claim that it is ~3 times faster than the PIII in 7 of Intel's own tests. Look at the tests. They seem to be testing very specific aspects of the chips' functionality. Wait for the real benchmarks to come out.)

  • From what I've read, the K8 (SledgeHammer) chip will be a dual 32-bit chip, which allows twice as many 32-bit operations or the normal number of 64-bit instructions, but is otherwise basically like an Athlon for most things.

    I actually like that idea, but then again I don't run a high-end server. So doubling the number of 32-bit instructions sounds really nice to me.
  • So they'll probably make its real name something stupid like Athlon or Wombatium or something. Slashdotium anyone?
  • I think that this is a good move for AMD. Their arch rival will be moving to an architecture that will need entirely new software to operate. Customers do not want that. Do you know how many places would still be using 486s today if the current architecture weren't backwards compatible?

    When the Icrapium comes out it will be a bit before software is available for all the customers' needs. When the SledgeHammer comes out it will be ready to work. No waiting for your favorite software to be ported over to the new architecture; you just go. Huge bonus!

    When the Icrapiums come out, a lot of would-be upgraders are going to stick to their Pentiums. Who wants a great machine that you can't do the shit you need to do on it?

    AMD's biggest problem right now is to get the chip released much closer to Intel's release. Or else by that time all the needed software may be available for the Icrapium.

  • Um, the Athlon isn't a success yet.

    The way the chip is set up, you will be able to write current 32-bit code to work on it. That will pull a lot of developers in. Plus you have the great aspect of a slow migration path: buy the 64-bit AMD chip, keep your existing code, and use 64-bit code as it becomes available. You don't have to start from scratch.

  • Alpha is great, but nobody at Digital or Compaq has ever understood that it is not enough to have a cheap CPU; we need cheap motherboards too! When Asus or Abit comes up with an Alpha motherboard, every Linux user will go Alpha. We have yet to see this happen...
  • I'm a bit confused. Twice the number of instructions (or the same number of 64-bit adds) doesn't matter if I'm still constrained by 32-bit addressing. New addressing modes => new instructions. It's the choice and coding of these instructions that I'm interested in.

    AMD should be able to get away with only adding 64-bit instructions for the common x86 operations. There's something like 15,000 different x86 instructions, of which maybe a couple hundred are used extensively (if that). Take this subset, make it orthogonal (and do the same for the 32-bit versions), and you'd be getting a decent chip. Do the rest in software.


  • Seriously, SledgeHammer was one of my favorite shows growing up as a kid. Wish someone would bring it back. HAMMER!!!

    -Huang Bao Lin (Trust Me!)
  • I bet their "x86 64" is a 64-bit version of the "RISC-86" internal op-set that Athlon uses, or something similar.

    And I bet it's a 64-bit version of the "CISC-86" external instruction set that Athlon and K6 and K5 and Pentium {, Pro, II, III} and 486 and 386 use, given what they said in their press release []:

    "AMD plans to extend the x86 instruction set to include a 64-bit mode, delivering a simple yet powerful solution that enables all of the performance benefits associated with 64-bit computing, while maintaining compatibility and a leading-edge performance roadmap for the existing installed base of x86 32-bit software applications and operating systems," said Weber. "No other 64-bit solution has full native x86 32- and 64-bit compatibility."

    Yes, I guess one could, if one really wanted to, read that as saying "extend" in the sense of "add a different instruction set that's only a little bit like x86", but I see no reason to believe that's the interpretation AMD had in mind.

  • I was just about to post something saying this exact thing! I hope they named it after the show and not after regular sledgehammers you buy at Home Depot.
  • Using your assumption that more complicated == faster, one could argue that CISC-based CPUs are inherently faster than RISC-based CPUs. We know this is definitely not true.
  • The PPC _architecture_ is specified for both 32-bit and 64-bit (like MIPS and SPARC). Motorola hasn't implemented the 64-bit version of the architecture, but I believe IBM has.
  • AMD, Intel - they both do this already to a degree. I believe, starting with the Pentium Pro, Intel moved to a RISC core. AMD has been RISC since at least the K6 (I'm pretty sure it started with the K5). The x86 instructions are translated into micro-ops or macro-ops, depending on whether you are talking to AMD or Intel, and then it is these sub-instructions that are executed. If they provided a way to execute these instructions without x86 translation, you would have a very powerful RISC/CISC platform. The ISA bus is old and needs to be dropped. PCI and VLB (I think that's the right name) should be the only buses considered. ISA slows everything down.

    Time flies like an arrow;
  • They trotted this good idea (at the time) out and pushed the heck out of it, but it died a slow, painful death in the PS/2 line. However, a little later the PCI bus was born, which looks a lot like the PCI slots we know and love.

  • For one thing, those 21264 instructions are actually just 32-bits long IIRC ('tho they manipulate 64-bit data).

    Quote from the "Alpha Microprocessor Hardware Reference Manual":

    Term ---------- Words - Bytes - Bits -- Other
    Byte ---------- 1/2 --- 1 ----- 8 -----------
    Word ---------- 1 ----- 2 ----- 16 ----------
    Dword --------- 2 ----- 4 ----- 32 ---- Longword
    Quadword ------ 4 ----- 8 ----- 64 ---- 2 Dwords

    To me this means choices. You don't have to use those 8 bytes for a Quadword if you don't need it, but if you do, it's there. Feel free to prove me wrong!

    rbf, who is typing on an Alpha running Linux 2.2.12.

  • the link is on the press release: 9.ppt

    According to them they are going to retrofit Athlon with 64-bit capability at the cost of 5% more silicon. I like their proposals, especially their 3-operand FP instructions (rather than the stack-operand FP).

    I think their approach is quite decent. They should be able to come out with one by next year, running on the same EV6 motherboard. Some suggestions I would give them include:
    1) explicit register renaming or more registers
    2) more condition codes a la PowerPC
    3) more predication support (conditional execution), especially on loads and stores
    4) support for speculative load and check

    How much more silicon will that cost ya? another 5%?

  • by hazydave ( 96747 ) on Tuesday October 05, 1999 @06:02AM (#1637864)
    Folks seem to have the idea that companies, chip or otherwise, are somehow single-tasking entities. This couldn't be further from the truth. Most chip companies work on several projects in parallel, and if it's a competitive line such as CPUs or 3D graphics chips, these projects overlap (this has been SOP at Intel & Motorola, for example, since the 80s).

    AMD previously mentioned that the design team derived from the NexGen team, the folks who did the K6, are not the people behind the K7. So, presumably, they aren't sitting around playing Quake III, they're working on something new. More than likely, it's a CPU, and since these folks have proven pretty hot on architecture in the past, doing so in the present wouldn't be a surprise.

    As for Athlon, it's in production. Unless they do any more true versions of the chip (eg, K7-2, etc) or have major production problems, there is no chip designer work left on the Athlon project anyway. Whatever they're doing now is more than likely process tweaks, die shrinks, etc. That's different people, unless there's some redesign necessary along with a shrink -- anyway, not enough work to occupy a whole uP design team. So these guys are likely on to bigger and better things now, too.
  • Heh...they're gonna need -something- to replace Alpha. The way things have been going, I'd expect to see Sledgehammer hit before Itanium(snicker) or Itanium Pro or Itanium II or Itanium: The Revenge (whatever they're gonna rename McKinley, widely acknowledged as being the first -useable- generation of IA-64. It's also still up in the air whether it will have x86 compatibility in addition to PA-RISC compatibility. HP ain't interested in pushing low-ball boxes to legacy windows drones.)

    SoupIsGood Food
  • by speek ( 53416 ) on Tuesday October 05, 1999 @06:07AM (#1637866)
    A lot of people complain about Java because it's Run Everywhere theory isn't overly useful to them. They get pretty good portability from C, and why would they want to give up the processing speed for a questionable advantage?

    But I see a lot of people here saying that AMD's "compromise" will succeed cause it won't force developers to port everything all at once. It'll save a lot of work, so it'll succeed over Merced. Some also bemoan that this means a lesser quality chip will win. A drastic change in architecture is too risky, they say.

    But, Java is also portable to anything new that comes along, so an advantage of the VM architecture is there isn't as much reason to fear drastic innovation in the underlying hardware. This is major, IMO. My code will work anywhere, once someone ports the VM to it. A single port, and everyone's code is brought to the new hardware. This is why many people argue that the greater flexibility of the VM architecture is worth the relatively minor performance hit and even the larger memory hit.
  • But at least you could tell the difference (math co-processor / no co-processor). Without reading all the specs and white papers, how do you instinctively know the difference between a PII and a K6-2?

    To me, it's a bit like having to handle the metric system and the American system of measurement. It's useless and only clutters everything up.

  • The iMac had no floppy, no serial, no ADB, no SCSI, and no ROM. People complained about that, but lately I haven't heard anyone who bought one complaining. Apple's been pretty aggressive about ditching archaic technologies of late, and for a while they caught a lot of flak for it. But if backwards-compatibility is holding computers back, we'll start reaping the benefits when all Macs are running a fully native OS X on multiple-processor G5's, with USB, firewire, and 100-BT ethernet as peripheral technologies.

    Whatever the merits of closed versus open systems, a closed system like the Mac does allow Apple to push new technologies more aggressively.
  • Not necessarily. PowerPC is very nice these days, with AltiVec and all. But it's still fundamentally a 32-bit processor. If 64-bit matters, IBM and Mot are going to have to take the 64-bit architecture of the Power3 down to the PowerPC level. And stop battling amongst themselves on who is and isn't going to support what feature. And get some real momentum behind an Open PPC platform with some real OS choices (the hell with Apple), maybe produce a G4 processor on the EV6 bus, to let it drop in to existing (or soon to exist) commodity motherboards. And so on...
  • "That's just how evolution works; large mutations are risky [clip!]"

    The iMac was a big risk, which just happened to be a real money-maker. The question is whether you could classify it as a big mutation since apple has been making all-in-one PC's since the beginning of time (ok, maybe the SECOND day..hehe).

  • Um, the Athlon isn't a success yet.

    Exactly - that's what I said "a bit premature".

    Granted, they have the slow migration path, but Intel is going to be in direct competition with them, which means that it'll be hard for them to convince developers to use their platform.

    Personally, I hope they do well; it's about time Intel had some serious competition.


  • No one has to license an instruction set, there's never been any special protection for this. Of course, Intel may have specific protection, in the form of patents, which would prevent anyone from cloning an IA-64 machine, or make it difficult. I would expect that AMD studied the IA-64 architecture, legally and technically, before making the rather bold decision to strike out on their own. At least that keeps things interesting. I suspect the rationale will fall out eventually. Could be they think the world will be slow to move to IA-64, that they'll be able to release faster chips before Intel does, or that there are just too many technical and legal stumbling blocks in the way of IA-64 clones.
  • Yes, you're right. Alpha Microprocessors are a high-performance way to go, but they are really expensive

    no, they're not

    IBM's PowerPC Open Platform hasn't launched yet and the website is rather small at the moment, but it looks interesting. Is it possible to escape from these old x86 times?

    yes, buy an alpha or a mac. the sawtooth mobo's are much better, anyway, and they'll be shipping very soon (much sooner than anyone will have an open PPC board shipping or even in production).

  • Well, they'll have to be careful about using sledgehammers (the tool) in their commercials when it's released. Apple might want to sue because they are infringing on 'their' creative pioneering. (Apple's '1984' commercial with the person running in and chucking the sledge into the screen.)
    How twisted would that be? The new Sledgehammer has that same ad except it says "AMD" instead of "Apple" and they debut the new processor in a box that is painted so it looks like one of the old Mac Classics...

    I will agree with one of the other posters about Peter Gabriel's song potentially being in one of the ads... even though I really don't care for the song.

    Ok, back to work...

  • It isn't so long ago that you'd still buy computers with 32 MB of ram - WAY too much for Win95, let alone Win98 or even NT.

    Huhh? Maybe you mean way too little RAM. I know Win95 has a memory footprint of 19 megs; I think NT has a memory footprint somewhere between 35-45 megs. I think this is what you mean, but it's hard to tell.
  • You left out:

    PowerPC, and

    To name four.

    There's more to life than the Wintel morass.

  • Now we can honestly say that the SledgeHammer crushes Merced... in more ways than one! :)
  • Please stop spreading FUD. The Alpha is alive and doing quite well. The only thing that has changed is that NT is now dead.

    rbf, who is happily using an Alpha running Linux 2.2.12.

  • by m3000 ( 46427 ) on Tuesday October 05, 1999 @10:57AM (#1637889)
    From what I understand, they made the name Pentium because of trademark issues. Apparently a bunch of small no-name chip competitors were trying to pass off their chips that had 486 on them as Intel chips. People would see the 486 and assume it was an Intel chip, when it might not be. So Intel named the 586 the Pentium so those companies couldn't trick consumers as easily.
  • I agree. While AMD chips do have the backwards compatibility, which is better than the Merced, its Achilles' heel will be that the two are not compatible. And if someone is designing software, they make it for the largest user base, which would most likely be Intel. It's like Windows and Linux. Windows gets much more software than Linux because it has a much larger user base. AMD is taking a very risky move that could kill the company if it doesn't work, or kill Intel if it does. Only time will tell.
  • This is hilarious idea. If anyone from AMD or associated ad agencies is listening / reading, please follow up on this!

    That show was vastly underrated; I haven't thought of it in many years. There could be a great funny ad series based on it ... make fun of pompous, 'we're so official' Intel, which Intel's bunnysuits are a lame attempt at ...

  • The x86 instructions are translated into micro-ops or macro-ops, depending on whether you are talking to AMD or Intel, and then it is these sub-instructions that are executed. If they would provide a way to execute these instructions without x86 translation, you would have a very powerful RISC/CISC platform.

    This was the original philosophy of RISC - expose what would have been microcode to the compiler and let it go to town.

    Really the only differences between x86 and current "RISC" machines are

    • Variable-length instruction encoding
    • Complex memory addressing
    • Prefixes like no tomorrow

    Really none of these is inherently bad.

    Variable-length instructions can improve I-cache performance. IBM has been using them for years in the 360 and beyond. x86 seems to have gone too far by allowing 1- to 15-byte instructions, but a few different sizes isn't so bad.

    offset+base+index*scale addressing can in fact be used by a compiler. Think of accessing a field from an array of structs. One instruction can do what takes four or five on a RISC machine. Now, that one instruction may have a (slightly) higher latency than any one of the four or five RISC instructions, but when you start considering fetch time, the win isn't clear.

    Block copy instructions can be very nice for a compiler. Take a look at what gcc does on a MIPS. It inserts a call to memcpy! None of that nonsense is necessary on the x86.


  • This is not a new idea. Emulating the lesser-used instructions has been done by lots of processors. The G5 (S/390) has the ability to trap on any opcode and emulate it in software. Makes for an easy patch system to fix processor defects.

    IIRC some of the later VAXen did this (micro-VAX, maybe?) and, as has been pointed out, Apple did this in the transition to PowerPC. Intel might do this in the future to allow flexibility in marketing.

    HP is doing this for Merced to execute PA-RISC binaries, and Dynamo may be what they're using to do it.


  • I seem to be on a shameless Apple-shilling streak here, but this is precisely what Apple did when it moved to the PowerPC. The first PPC Macs ran all of the 68k instruction set in software, and managed to do it so seamlessly that most users didn't even notice. This made the OS slower than it needed to be for a while, but they wrote the most critical components of the OS (particularly Quickdraw) natively in the first OS release, so that the OS didn't slow things down too much. They managed to ditch an inferior architecture completely, and the result has been that the G3's are tiny, fast, and low-power compared with PIII's and K7's. And as of 8.5, almost everything is running PPC-native, so they've left the old architecture behind completely.

    AMD's problem is they'll either have to convince Microsoft to support their new instruction set and implement backwards compatibility, or they'll have to write all of that themselves. Anyone know if this can be done in a way that's OS-independent, or will the backwards-compatibility features need to be OS-specific?
  • I completely agree with you. The computer industry has been dragging this backward-compatibility thing way too far... I mean, who the hell is happy he can still run 80286 software on his brand-new Pentium?

    Look at computers these days: their entire architecture is a series of hardware "patches" on an archaic architecture. I mean, just try and count how many different bus protocols run on your machine (PCI, AGP, ISA, IDE, ...). How much memory is on every device, and how efficiently is it used when the device isn't using it completely? It seems that each time we are looking for a way to enhance the speed of our machines, instead of redesigning, we take whatever we already have and add stuff (yes, another extra level of cache)... This may be the best way for some upgrades, but the PC industry is, what, 15 years old?

    When will some people finally sit down at a table and say: "What is a computer? What does it have to do? What is the most efficient way to achieve this?"

  • They have not released much info to say that it will be better than Itanium. Additionally, I am always cautious of things that promise to be easier and yet still faster. Engineers know that there is something called optimization; usually, ease of use and speed are competing variables. That is to say, if it is easier to program, then the chip is most likely doing more for you and will probably eat some cycles from your program. Just my 2 cents.
  • Hi there,

    Did you try reading my post at all?

    I was not talking about data word size -- I am well aware of the benefits of 64-bit native arithmetic.

    I was not talking about address word size -- I am well aware of the limitations of a 32-bit address space.

    I was talking about instruction word size. That is, the size of the word each individual operation is stored in.
  • If what they say about having faster x86 performance than Merced is true, then this is a good strategy. From what I hear, it's very unlikely Microsoft will have an un-kludged 64-bit version of Windows NT ready by the time Merced (or even McKinley) ships. That means we're likely to see people running 32-bit Windows code on 64-bit Intel processors, and seeing only trivial performance improvements relative to what is possible, for some time to come. Remind you of anything?

    If take-up on the IA-64 instruction set on Windows is slow, and I strongly suspect it will be because of lack of (a certain) OS support and lack of software usage, this definitely gives AMD an opening for a new (or recycled) instruction set on a processor that will run 32 bit software faster than Merced. Maybe they can even pull it off.
  • AMD takes SledgeHammer approach to beating Intel's Merced

    Interesting times up ahead for CPUs... Sun's UltraSparc-III should be selling by December, and looks pretty damn speedy. More and faster Alphas coming. Merced is just a test/development platform btw, and won't be that great anyway - the IA-64 design itself has some designed-in limitations, and the Merced design is already a bit of a hack. (anyone want details?) btw, I was reading up on some interesting info about Sun's MAJC chip, which is aimed at embedded designs with high-speed data processing; in a couple of major ways it's actually quite like the IA-64 design, except it has a bunch of other extra spiffy things to make it faster. (want info...?)

  • by joshv ( 13017 ) on Tuesday October 05, 1999 @05:11AM (#1637958)
    This is definitely a compromise solution, but it could work well for AMD. I think that Intel is underestimating the need for backward compatibility (and high performance backward compatibility). Intel is convinced that they now have the market presence required to force the move to an entirely new architecture.

    The only problem is if there is an alternative, and AMD appears to be poised to offer just such an alternative.

    If they can deliver on the performance end, and I think they can, they will offer a much more attractive solution to users and developers. Users won't have to upgrade apps and OS to get better performance, and it sounds like developers of high end apps might have to make only minor changes to adapt their software to use the 64 bit aspects of the chip.

    AMD has essentially decided to continue in the path that Intel has followed for the last 15 years. Intel has decided to veer off that path in favor of a new architecture. AMD has decided that there might still be a few years of profitability in it, and I think that they are right.

  • Somewhere, at some point in time, there must be *someone* who's been fired for buying Microsoft. Any company that ships millions of units can't possibly have a 100% customer satisfaction rating (and, God, this is Microsoft we're talking about), and more than a few of those customers must have to report to someone higher up on the ladder when something breaks. I bet quite a few people have been fired for buying Microsoft. (Or Intel, or IBM...)

    - A.P.

    "One World, one Web, one Program" - Microsoft promotional ad

  • Would it be feasible for them to reimplement one of the existing 64 architechtures (Alpha, MIPS, SPARC, PPC) while keeping support for IA32 in the same chip?
  • This seems somewhat surprising, as I would expect Intel to pay close attention to the needs of their good pal, Microsoft. Hmm.. Have you read about the anti-trust case against Microsoft? .. I could have sworn that Intel didn't like MS at all .. let alone be a 'pal' of theirs. And again Intel was also investing in RedHat a little while ago too ... So I don't think the welfare of MS is something Intel really gives a damn about, IMHO. Still, who can predict what will happen. Crystal ball gazing is always amusing..
  • by MindStalker ( 22827 ) <> on Tuesday October 05, 1999 @07:59AM (#1637968) Journal
    Intel?? AMD??
    To heck with both of them! I'm saving up for a Transmeta!
  • Why do you think the Merced/IA-64 would be worse to code for? Unless you're doing hand-rolled assembly, the burden is pushed onto the COMPILER, not the programmer.

    So yer C will work just like normal C, eh? You don't have to know about predication, VLIW, load speculation or so forth any more than you have to obsess about how many bits are used by a branch predictor's history today.

    On the other hand, if you, for some godawful reason, need to use 32-bit instructions on a Merced, then yes -- from what we know, you'll take a hit. But otherwise it's the compiler's problem.
  • Hell yes!

    For one thing, those 21264 instructions are actually just 32-bits long IIRC ('tho they manipulate 64-bit data).

    For another, it's got very limited predication support (conditional moves, again IIRC), in contrast to IA-64/EPIC.

    It's also more fun if you've got a (large) register file that can be treated as arbitrarily large 'coz overflow gets mapped to memory -- if you don't mind the cycles, 'natch.

    You cannot summarize the 'goodness' of an architecture or processor with just the # of bits it manipulates at a time, or the MHz of the processor.
  • Is AMD spreading themselves too thin? Will everyone jump on the Merced bandwagon and abandon the new AMD chip? Does AMD have the ability to keep up with Intel? I think the first question is probably moot. I would imagine AMD has their share of engineers working on the Athlon. Now they've got to continue future development and that's exactly what they're doing. I can't argue with that strategy. Everyone has to keep pre-planning.

    As to everyone skipping out on AMD to head for the Merced chip, I doubt it. Come on, we're all pulling for a new processor that brings us out of the bulky instruction set of 1978 (& probably earlier) 8086s and so forth. We'd love to see Merced be the "chip of the future" and everything else I'm sure Intel is boasting it as. However, we've got to face the music. If someone gives us an opportunity to avoid a drastic change in the x86 instruction set, we'll take it. It sounds like SledgeHammer should kick Merced's butt on running 32 bit code, and we're just gonna have that stuff running around. It doesn't sound like it will be too hard to port stuff to the new AMD chip while Intel's chip may take some work.

    I think what it comes down to is AMD opens a new market. People who don't want to spend tons on new ports, but want their code to execute at speeds not limited by 32 bits and 100MHz busses and so forth. (233MHz Athlons soon? -- that rocks!) This then gives AMD an opportunity to produce another chip (Bulldozer perhaps?) that may support Merced, or may not. Depending on how Merced catches on.

    I say kudos to AMD. They've got to make a move to pass Intel somehow and it can't come from following in their shadow. They've got to get this show on the road and make a preemptive move. I think they picked a great choice. Not getting stuck in the middle of the road, but not totally committing to something completely different.


    At least if it doesn't work no one can put it on the Periodic Table of Intel Chip Flops.
  • by David Greene ( 463 ) on Tuesday October 05, 1999 @05:14AM (#1637986)
    Backwards compatibility. From what I've been reading in the past about processors, this is the key "feature" that keeps system speeds down. It's one of the reasons RISC processors are faster than their x86 counterparts.

    It's not so much the backwards compatibility as the fact that the ISA was not designed properly in the first place. Actually, the x86 is pretty close to being a really good compiler target. The offset+base+index*scale addressing can be put to good use. The problem is the non-orthogonality of the instruction set (rep movs takes a byte count only in ECX, etc.).

    The 386 was somewhat unfortunate because it seems to have come along "too early." My hope is that AMD will do one of two things:

    • Drop the 16 bit segmented architecture and emulate it in software when needed.
    • Emulate the whole blasted x86 in software a la Compaq's FX!32 or HP's Dynamo.

    Dynamic compilation (or "JIT" or whatever) has come a long way in recent years. I hope AMD takes note of it. By moving the more ugly parts of x86 into software, AMD can hopefully design a more efficient core for whatever 64-bit ISA they dream up. If it's built on x86, then AMD should put the 32-bit and 64-bit parts in hardware (adding the appropriate opcodes and formats to get a truly orthogonal ISA) and do everything else in software.

    It will be interesting to see what happens.


  • by psichan ( 94135 )
    it would be interesting to see if someone took the name the wrong way and tried to break walls with it. I can see it now. "*blam blam blam* Damnit!" "What's wrong?" "This new sledgehammer only breaks itself! And I spent $5000 on it!" "Uh.."
