64-bit Processor Next Year, Says AMD

Kill Switch writes: "There's this ZDNet article about AMD's announcement that they plan to introduce a 64-bit 'Sledgehammer' chip for the desktop (that's right, DEKSTOP); they also announced that they will be releasing new chips based on the new Mustang core, and it looks like there will be way too many versions of this (various desktop and server versions); and they announced mobile versions of the Duron and Athlon, based on the Mustang core." This could just be crazy enough to work! Updated 11:20GMT by timothy: wwelch contributed a link to a pretty good overview of the current 64-bit field, which of course excludes this just-announced AMD chip, but which helps put it all in perspective.
This discussion has been archived. No new comments can be posted.

  • It's going to be very interesting to see how the Sledgehammer stacks up against the Merced. While it's true that it might be "the easiest kernel port ever," the IA64 assembler code for the kernel is coming along just fine. Why bother making a processor that can run 32-bit as fast when everything has been ported to 64-bit already?

    Now, if the 64-bit portion of Sledgehammer runs as fast (or faster, knowing some of the tricks AMD has learned with the K7) as the Merced's 64-bit... that will really be the deciding factor. Welcome to Slugfest 2001, started even before the year is over. hehe

    I have to admit, the prospect of being able to switch between 32 and 64-bit code on the same CPU without a penalty is somewhat attractive.

    Alakaboo

  • by Signail11 ( 123143 ) on Thursday June 29, 2000 @02:08PM (#967307)
The x86 ISA is so defective that any reasonable assembly programmer who has used any other ISA would rather gouge out his or her eyes with sharp pieces of bamboo than deal with the monstrosity that is x86. Any potential extension of the x86 ISA to 64 bits will take one of two forms, neither of which is necessarily pleasant:

*An extended set of instructions (akin to MMX, 3DNow!, or SSE) that operate on 64-bit words and corresponding memory operations that load/store 64-bit words using an increased address space. This option seems to be the one that AMD will most probably use, since they stress compatibility with the existing ISA to such a great extent in their PR. This is also the worse option: we will still be stuck with the essential garbage that underlies the x86 ISA: 2-operand instructions, a limited register set, nonorthogonality of the instruction set, and numerous other flaws.

*The processor boots into x86 compatibility mode, then requires an instruction to bring it into x86-64 mode. x86-64 mode is a sanely designed ISA that addresses and performs operations on 64-bit words. We lose the limitations of a tiny register set, horrible instruction encoding, and the other flaws of x86. This option would be far superior to the first one, but if one is willing to go to such lengths to distance oneself from the original x86 ISA (thereby losing all compatibility with native x86 applications in x86-64 mode), why not just migrate completely to a new ISA and use those wonderful fabs for new Alphas or Power4s and include a token K7 or P3 for x86 compatibility? BTW, this option almost resembles the rationale for the monstrosity that is IA-64...

    Most troubling of all is that x86-64 may bring back segments. I cannot stress how horrible this would be; application programmers from the days of mixed 16/32-bit programming will agree with me. In any case, we have been stuck with this sickening, illogical, inelegant, and inefficient instruction set for the past 3 decades; do we really want to put up with another 3 decades of this crap? I say, kill off x86 for good and move to a sane architecture.
  • I think that could be mistaken, and that Rob did not "bitchslap" you. I am looking at the initial posting score function in the Slashcode. Here it is:

    if ($I{U}{uid} > 0 && !$I{F}{postanon}) {
        $pts = $I{U}{defaultpoints};
        $pts-- if $I{U}{karma} < -10;
        $pts++ if $I{U}{karma} > 25 and !$I{F}{nobonus};
        # Enforce proper ranges on comment points.
        $pts = -1 if $pts < -1;
        $pts = 5 if $pts > 5;
    }

    I'm not that great of a Perl hacker, but it looks as if you get that initial -1 score if you have been moderated to a -11 karma or any more negative value. I think that the "bitchslap" could have come from a democratic vote of the readership rather than from Rob Malda.
    By the way, I don't see a "keep this guy down even if he's been moderated positive" function, which I think you are claiming exists in the Slashcode. But I'm not much of a Perl hacker, and I'll defer to someone else who shows me it's there. Perhaps you should look at the Slashcode yourself?

    Bruce
  • >It's a ZDNet article. It's not News for Nerds:
    >it's News for Dummies.

    Either that, or it's an editorial comment on the nature of Windows . . . :)
  • Last year SGI sponsored a Linux University road tour, and one of the sessions was given by Intel, where the IA-64 roadmap for the next few years was discussed at some length. The dek^Hsktop version of IA-64 is codenamed Deerfield and is scheduled for sometime in 2003.

    I asked the Intel guy "Will Sledgehammer speed up the development of Deerfield?", and he got that "Deer in the headlights" look for a few seconds, and then replied that "competition always results in better products sooner than later".

    I hope that Sledgehammer causes Intel to put Deerfield on the front burner.

  • Two quick things:

    Sparcs aren't slow. The whole point of the Sparc architecture is throughput, not latency, which is what you need if you're running a transaction server rather than a PC doing realtime 3D stuff. Sparcs sacrifice sum spec98{int,fp}, but the boards they get plugged into are designed with I/O in mind, which is why people buy them.

    I used to be one of those people in favor of chucking x86 right out the window, especially after spending a few days writing assembly for the godforsaken architecture. And then I realized something: investment counts. Think of all of the hundreds of millions of man-hours spent on developing software for x86. Think of all of the hours just for the Linux kernel alone. That's why people want an upgrade path that's a little easier than saying "rewrite/recompile your software for PPC/Alpha/Sparc/Itanium". And why I'm so happy that AMD is following this path.


  • Posted by 11223:

    Actually, it's the $I{U}{defaultpoints} field that stores the default score - you and I have it as one (and $pts++ if $I{U}{karma} > 25 and !$I{F}{nobonus}; increases it for karma), and brucedot has it at -1. If his karma were below -10, it would try to decrement it, and then $pts = -1 if $pts < -1 would clamp it back to -1. Also note that the /. crew has the option to give somebody a +5 starting score, too.
  • Uhhhh. You do realize that the Athlon is SMP capable, today. Right?

    SMP capability is not so much a feature of the CPU as it is a capability of the CPU's associated chipset (which Athlon doesn't have yet, but soon will).

  • If I remember correctly, Transmeta used a highly optimised (read: streamlined) 128-BIT CORE. That's double 64 bits last time I checked.

    You remember sorta correctly. A Crusoe instruction word is 128 bits. This does not make it a 128-bit processor. Each of those 128-bit instruction words contains four separate 32-bit instructions. Crusoe's integers and memory addresses are still 32 bits. Thus, Crusoe is really more like four 32-bit processors running in parallel.
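
    A minimal C sketch of that layout, for the curious (the struct and field names here are hypothetical; Transmeta's actual "molecule" encoding is more involved and not fully public):

    #include <stdint.h>

    /* Hypothetical 128-bit VLIW instruction word ("molecule"): four
       32-bit operation slots issued together. A 128-bit instruction
       word says nothing about the width of integers or addresses,
       which stay 32 bits. */
    typedef struct {
        uint32_t slot[4];  /* four independent 32-bit operations */
    } vliw_molecule;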

  • The win AMD have scored, though, is that unlike the Merced (or whatever they call it this week), Microsoft's apps don't need to be native to work well. Merced requires that, in order to get parity with systems deployed now, you reengineer your software (and it's more than just a recompile). So anything taking advantage of AMD64 will be a win, and anything that doesn't won't be a loss.

    More importantly, AMD are explicitly aiming at the desktop, while Intel are explicitly aiming at the server market. If I'm a desktop software vendor - Microsoft's Office division, or Adobe, or whoever - AMD64 looks a lot more attractive than ia64 right away, because it's easier to rebuild for, and because my target market are, according to the manufacturers, likely to be using the AMD chips. Ditto games, ditto everything except server apps. And if AMD64 builds enough *desktop* momentum, it will start moving inexorably into the server space based on that desktop success, just like Intel have.

  • Excuse me for being clueless on this, but how does a different instruction set make coding more pleasant? Unless you're speaking about assembly, I don't see how this could affect how pleasant coding is.
  • 64 bit? C'mon! AMD/Intel 2001 == Atari Jaguar 1993.

    Heh. Just kidding.

  • A) Java and Smalltalk aren't really mainstream. Sure, Java is catching on, but mostly in its interpreted form, and thus performance isn't the highest concern. Smalltalk is non-existent in consumer space.
    B) As for mapping large files, the biggest files that will be used in the near future are video projects, and consumer video projects get nowhere near 4 GB.
    Remember, I'm talking consumer space. 64-bit CPUs will be a tremendous help in server and much higher-end space. Take Java for example. It is catching on in back-end server processing, and thus a 64-bit CPU will help a lot. However, AMD is aiming this at the consumer/corporate market (kind of like Intel's higher-end chips), and for those tasks, 64-bit is kind of overkill.
  • True, but consider this. How demanding is the average corporate/consumer user of his file system? These systems (even higher-end stuff like the Wintel workstations) have much bigger I/O problems than the 32-bitedness of the file system. True, doing a 64-bit file system on only a 32-bit processor is sort of a hack, but for the level of system AMD is targeting, the actual speed of the I/O interface and the disk array (or lack thereof!) is a much bigger problem.
  • ALongLongWayFromHome pointers.
    This is like putting lipstick on a pig.

  • Is that 64-bit aware just like previous Windows releases' awareness of 32-bit?
  • I now look forward to that juicy massively parallel IBM computer that was mentioned yesterday on Slashdot... just got to build that garden shed to house it... the garden shed the size of two basketball courts :) ... hey, if someone can build a rocket in their back garden (Slashdot from a couple of days ago), I don't see why I can't have my massively parallel computer! :P

  • This isn't going to be out for a while, says the article. Companies sometimes reach the projected release a little early, but usually end up being late, with good excuses all the way. (Hey, shit happens -- and the engineering people may not have been consulted on the date that marketing announces ... that happens, too!)

    You can build a decent working computer for anywhere from $400 to $40,000, depending on what you want to do and what you're willing to settle for, but waiting for a just-announced chip before you build a system sounds like you're looking for a case of blue balls, metaphorically speaking.

    sw

  • The most important thing here is, of course, the targeting of the desktop. Merced is targeted at servers. When you target the desktop market, you can't have prices in the stratosphere.

    --
  • Actually, I would rather imagine the opposite being true. Since Pentium 4 is essentially Willamette (however you spell that), it is doomed to suck, and will not clock very well at all. They already pretty much ran out of headroom for increasing clock speeds with that design, while AMD still has quite a bit of room for improvement.
  • by Signail11 ( 123143 ) on Thursday June 29, 2000 @02:15PM (#967326)
    Evaluating processor performance is not just a matter of multiplying (function units*data size*clock speed). Many issues need to be considered for advanced, out-of-order, superscalar processors, including decoder width, cache structure, branch prediction techniques, reorder buffers, pipeline depth, mispredict penalties, and base instruction set. Sledgehammer will probably come close, if not slightly surpass, current 21264 integer performance, but will fall somewhat short of FP performance. However, by the time that Sledgehammer does come out, 21264b (first in a hybrid .18/.25 process, then completely migrated to .18 micron design rules) will almost certainly have clock speeds comparable to the Sledgehammer; the EV68 will blow just about every commodity chip (ie. excluding custom vector processors) out of the water in any performance metric.
  • the question everybody'll REALLY be asking is, WTH is a DEKSTOP? =]
    BRTB
  • History doesn't really bear that out. 99.9% of the people out there really like it when geeks say stuff like "64 bit". I mean, why else would something like the N64 be so damn popular? ("Hey Beavis, this one has 64 bits, and that one has only 32 bits, so it must suck...")
    I think AMD was very timely in announcing this so close to the Pentium 4 announcement. It gives the world something to think about...
    Which chip will win? Is it the same ol' same ol' in a new faster package, or is it something totally new and expanded but slower??? -- We will see soon.
    Jordo
  • by MSG ( 12810 )
    The Athlon core is designed to be SMP capable, yes. According to AMD, however, there will never be a motherboard/chipset that supports SMP with Slot A processors. Only after the Socket A Mustang family is available will SMP be available.

    Ever think about how much space it would take to do SMP with Slot A processors and 4" fans?
  • by Christopher Thomas ( 11717 ) on Thursday June 29, 2000 @06:09PM (#967330)
    Excuse me for being clueless on this, but how does a different instruction set make coding more pleasant? Unless you're speaking about assembly, I don't see how this could affect how pleasant coding is.

    You are correct - instruction set doesn't make a difference to most programmers' code. A few places where it _does_ make a difference are:

    • Letting the compiler optimize better.
      Some features of an instruction set and chip architecture make it easy to optimize code to run quickly. Some features make it harder. With 4 general-purpose registers, and only two or three more that you might be able to use for storage in a pinch (3 of ds/es/fs/gs), you have to keep fetching operands from memory if you're dealing with many variables at once. Even with a cache, this is slow. It also makes things like loop unrolling much harder (see the sketch at the end of this post).

    • Optimizing graphics routines.
      The core of any optimized graphics driver will have hand-tuned assembly. This isn't just for software rasterizers - this is for geometry engines and the like also, which are still around in abundance.

    • Writing and maintaining operating system kernels.
      As these do much deep mucking about with the processor state, many pieces of these have to be written in assembly (take a look in /usr/src/linux/arch for examples). A bad instruction set design, or more importantly, a bad register set design will make things much, much harder here.


    None of these will be issues for most programmers, but they still do come up, and all programmers working on code where speed is important will notice the effect on compiler optimizations.
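
    To make the register-pressure point concrete, here's a small illustrative C loop (a hypothetical example, not from any real codebase): roughly ten values are live at once, so a compiler with 16+ registers can keep them all enregistered, while one with x86's 4 general-purpose registers must spill to the stack on every iteration.

    /* Four accumulators, four stream pointers, an index and a bound:
       too many live values for x86's tiny register file. */
    void sum4(const int *a, const int *b, const int *c, const int *d,
              int *out, int n)
    {
        int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        for (int i = 0; i < n; i++) {
            s0 += a[i];
            s1 += b[i];
            s2 += c[i];
            s3 += d[i];
        }
        out[0] = s0; out[1] = s1; out[2] = s2; out[3] = s3;
    }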
  • I guess it's a DEK STOPper chip, designed to compete with those DEK Alpha chips...
  • It's really great to see that AMD is finally breaking into the 64-bit architecture. Quite a while after IBM and Apple broke into the 128-bit architecture when they introduced the Power Mac G4 processor. PPC G4 fact sheet. [apple.com] Oh well, I guess it was about time somebody caught up with IBM's 2-year-or-so lead on processor development. Just a little note from the non-x86 world.
  • A recompile may make it nicely functional on a 64 bit system, but it wouldn't be particularly optimized which is the main purpose of the conversion.
  • ...will the FPU be as powerful as that of the Alpha? DAMN, that would be one quick, all-purpose chip!
  • Any potential extension of the x86 ISA to 64 bits will take one of two forms, neither of which is necessarily pleasant:

    You overlook the third way - apply the same kind of extension as is used for 32-bit.

    Early x86s were 16-bit chips. When the 80386 came along, a kludge was implemented to allow 32-bit - add 0x66 in front of any 16-bit instruction to make it work on 32-bit operands.

    The obvious way to add support for 64-bit instructions without adding a new processing mode or bloating the instruction set is to find another unused byte code and declare it to be the 64-bit specifier.

    Code looks very similar to 32-bit code, all old code still works, and you have very few headaches porting a compiler to the new platform.

    The _best_ thing to do is abandon the architecture completely, but if that's not an option, this is probably the cleanest way of extending x86.
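
    For the curious, here's the 0x66 mechanism in raw bytes (the two encodings below are real; the 64-bit prefix is the hypothetical part):

    #include <stdint.h>

    /* In a 32-bit code segment, opcode 0xB8 means "mov eax, imm32".
       The 0x66 operand-size override flips the same opcode to its
       16-bit form, "mov ax, imm16". */
    static const uint8_t mov_eax_1[] = { 0xB8, 0x01, 0x00, 0x00, 0x00 }; /* mov eax, 1 */
    static const uint8_t mov_ax_1[]  = { 0x66, 0xB8, 0x01, 0x00 };       /* mov ax, 1  */
    /* A 64-bit extension could claim another unused byte the same way,
       so that <prefix> B8 <imm64> would encode a 64-bit move. */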
  • so... they just said that they were going to slow down now that they beat the 1GHz barrier. Need to put the brains up to something.. so go to 64bit?

    kicking some CAD is a good thing [cadfu.com]
  • *Sparc: Poor in performance, but software application support keeps these Sun processors alive. Even the not-yet-released UltraSparc 3 will have disappointing performance relative to modern processors
    *MIPS: Fading out.


    Sun's market isn't on the desktop or even in low-end servers - it's at the very high end. Their processors and motherboard architectures are optimized to work in machines with hundreds to thousands of processors. While they pay a penalty for this on the low end, you can always be sure that a Sun box can scale well to truly insane processor counts.

    I also like their sneaky register file trick that masks calling latency.

    MIPS, OTOH, is one of the cleanest, sanest processor designs that I've seen. It's not an FP powerhouse, but it's still respectable. Its primary market is as a licensed core, because of its easily extendible architecture. The Playstation and Playstation 2 are both based on MIPS cores, which counts for quite a bit of volume.

    SGI boxes are also mainly based on MIPS chips, and are still the reigning champions for heavy-duty rendering due to a very intelligent system bus design.

    In summary, I think that the conclusions you quote are premature.
  • I for one do not agree that a segmented memory model is a bad thing, unless of course the segments are too small, which is exactly what happened with the introduction of the 286.

    When the 386 came out, segments could be up to 4GB in size, IIRC, but by then it was too late... the concept had a bum rap.

    Regards, your friendly neighbourhood cranq
  • by JordoCrouse ( 178999 ) on Thursday June 29, 2000 @01:48PM (#967339) Homepage Journal
    "Fresh on the heals of the AMD announcment of a 64 bit processor, Microsoft announced that they would release a 64 bit version of Windows as soon as they could convince chief architect Bill Gates that "long long" wasn't his penis size."

    -- Happy 4th From Jordo --
  • Without a recompile, old apps should not gain any particular speed benefits, because they never call on 64-bit instructions. Unless the core was designed in some spectacular way - taking out-of-order processing to an all-new level by combining instructions and optimizing on the fly into 64-bit operations - this would have no benefit on old apps.
  • "64 bit" was a cool buzzword when the Alpha first came out. Today it's what the game consoles have.
  • Forget the new era of inherently different AMD chips for desktop and portable. I would much prefer to see Transmeta or other chips that are designed purely for portable use - the company's major focus. I firmly believe that to be the best, you have to dedicate your all. Given this, why should AMD try to make a portable Duron or Athlon? Make a better desktop chip with those resources. Lower the power consumption and heat so you can raise the MHz, not so you can fit in a laptop.
    Let someone manufacture a parallel SMP Crusoe architecture for laptop performance... that's what I'd like to see happen.

    nerdfarm.org [nerdfarm.org]
  • by sconeu ( 64226 ) on Thursday June 29, 2000 @02:20PM (#967343) Homepage Journal

    "Sledgehammer" is a completely cool name. Don't change it to some marketroids idea like "Athlon Pro". I want to be able to tell people that I've got a Sledgehammer!
  • by Anonymous Coward
    There's been a lot of talk on the rumor sites about the ease of porting Mac OS X to the 64-bit Power chips. Apparently Apple has worked very hard to keep everything above the Mach kernel processor independent. So with a different kernel and a recompile, OS X should be capable of sliding onto a 64-bit processor fairly easily.

    You can take this all with a grain of salt, but Darwin (and Rhapsody) for the x86 are out there. And there have been reports of Alphas and Sparcs running Mac OS X variants as well..
  • .... since we know that 64-bit requires more RAM and more diskspace for applications, it looks like they're in for one hell of a treat when MS (tries to) port their Windows OSen and applications to the new 64-bit platform :)

  • by Signail11 ( 123143 ) on Thursday June 29, 2000 @02:23PM (#967346)
    "Don't forget that you'll have to buy twice the RAM chips to get the same amount of memory (effectively.)"

    Bullshit. The size of the base word will not affect the effective amount of memory, just the amount of memory that can be addressed. If you declare a 16-bit variable (probably a short, but ANSI makes no guarantees), it will occupy 16 bits, subject to alignment constraints.

    "And programs will be about twice as large on your hard drive due to the 64 bit instruction words."

    (Alpha) stage3-decompose.o: 213920 bytes
    (x86) stage3-decompose.o: 264018 bytes
    Here's a hint: 64-bit ISAs don't necessarily use 64-bit instructions; the x86 ISA's variable-length instructions take up as much (if not more) space as the fixed 32-bit words in more RISC-like ISAs.

    "And there's no point in running a 64 bit system with IDE drives, so you'd best pick up some nice expensive SCSI drives, too."

    IDE drives can offer entirely acceptable performance, although for a serious performance system or server, one would of course use SCSI drives or a RAID array. But it's definitely possible to use IDE drives with 64 bit systems.

    "Some 16 and 32 bit code will recompile cleanly. I expect much more will not."

    If AMD specifies the standard LP-64 programming model (longs and pointers are 64 bits, ints are 32 bits), I would expect a significant majority of the non-Windows software to compile relatively cleanly on the Sledgehammer (or other 64 bit processor). Most Windows software unfortunately won't, because of some poor assumptions built into MFC and standard Windows programming techniques that I won't go into here (specifically the assumption that sizeof(int)==sizeof(int *)==sizeof(long)).
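
    A quick way to see the data models in question (standard C; prints 4/4/4 under today's ILP32 model, and would print 4/8/8 under the LP64 model described above - exactly the difference that breaks the sizeof assumptions mentioned):

    #include <stdio.h>

    int main(void)
    {
        /* Code that assumes these three sizes are equal breaks under LP64. */
        printf("sizeof(int)    = %u\n", (unsigned)sizeof(int));
        printf("sizeof(long)   = %u\n", (unsigned)sizeof(long));
        printf("sizeof(void *) = %u\n", (unsigned)sizeof(void *));
        return 0;
    }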
  • Yeah, speaking of outrageous computing; I almost read this article as "64-way" processor... 64-way SMP on desktop, woohoo! Imagine a beowulf cluster of those... ;-D
  • by rodgerd ( 402 ) on Thursday June 29, 2000 @02:24PM (#967348) Homepage

    This could be AMD's master stroke against Intel - if AMD can get application developers like Adobe supporting their 64 bit extensions, Intel will be in big, big trouble. Especially since AMD are promising 64 bit loving on the desktop, while Intel are still pushing the line that 64 bit is server technology.

    It's interesting that Intel are being outmanoeuvred at their own game; for years, manufacturers would throw up technologically superior chips (680x0 in its heyday, the original ARM 2 line, the Alpha, PPC, etc.) with better performance, but they would be unable to get much market penetration, since the market valued x86 compatibility in 90% of cases. Now Intel is offering (well, vapouring) a 64-bit architecture that offers second-rate ia32 compatibility, and has a competitor claiming all the goodness of a fast 64-bit system with little or no loss for ia32 apps.

    It will also be interesting to see how this affects the free software world. For example, free databases like Postgres could look more attractive with cheap, abundant 64-bit hardware to run them on. And, more than that, if there is a schism in the ia32 world, with some people going the ia64 route and some going the Sledgehammer route, the ability to recompile open source apps for the arch that best suits one's own needs, rather than have purchasing dictated by a split applications market, could be a win.

  • No, the language specification is not affected by the underlying instruction set of the processor on which the code is compiled or executed (especially so for Java (or so I would hope), with its much-vaunted platform independence). I do suppose that some vendors (if they have not yet done so already) would offer a compiler-specific additional type, akin to the pre-C99 "long long" or "__int64" offered by gcc and VC++ respectively.
  • Do you mean to say that applications that were compiled to run on i386 Linux will be able to run on IA64 Linux? Will they lose any performance/ speed because of this?
  • by Christopher Thomas ( 11717 ) on Thursday June 29, 2000 @06:31PM (#967351)
    Certainly for apps that do a lot of 64-bit arithmetic, though that's probably mostly scientific applications rather than the familiar desktop application. Beyond that I'm not sure, and would like to hear opinions too. Will it help with things like graphics, since you would be able to wade through the masses of data involved in various transformations faster?

    Memory copies on the x86 are already 64-bit due to a sneaky hack - MMX loads/stores have been 64-bit for a while, and thus take one clock. Of course, you still have the MMX/FP switching overhead.

    I don't really see much that could speed up existing code. The only 64-bit transfers that go on (MMX and FP loads) are already handled as 64-bit transfers.

    If you're writing a 64-bit application, then yes, many things will be faster (due to you now being able to hold double-precision floats in one register if nothing else), but that involves at least a recompile and possibly additional tweaking.
  • Most troubling of all is that x86-64 may bring back segments. I cannot stress how horrible this would be; application programmers from the days of mixed 16/32-bit programming will agree with me.

    The Pentium family has a 48-bit segmented mode, which, as far as I know, is used by no operating system anywhere. In fact, some Pentium-family chips bring out 36 address pins, so machines with up to 64GB of physical RAM are possible with current hardware. You'd be limited to 4GB per segment, which would probably mean 4GB per process in Linux. Do you really want single processes bigger than that?

    Segmented mode isn't bad if the segments are big enough. It's the hokey way the 8080 to 8086 transition was managed that caused segmented architectures to get such a bad reputation. Better segmented machines have been built, although mostly in the mainframe era.

    A flat address space leads to problems of its own, especially when shared code and data is involved. Look at the mess required to relocate DLLs, for example. With a segmented address space, the hardware does that for you.

    Still, everybody understands flat address spaces, and it's probably worth it to stay with them just to avoid the reeducation costs.
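
    For reference, the 8086 scheme that gave segments their bad rap computed a 20-bit physical address like this (a sketch of the real-mode rule only; protected-mode segments work differently):

    #include <stdint.h>

    /* 8086 real mode: physical = segment * 16 + offset. With 16-bit
       offsets, no single object can span more than 64KB, which is
       what made mixed 16/32-bit programming so painful. */
    static uint32_t real_mode_addr(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;  /* 20-bit result */
    }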

  • It doesn't need more RAM or disk space.
  • by FFFish ( 7567 ) on Thursday June 29, 2000 @07:16PM (#967354) Homepage
    It's a ZDNet article. It's not News for Nerds: it's News for Dummies. And your average dummy would probably not make the conceptual leap from "major OS" to "that must mean Microsoft, too."

    What is interesting is that someone thought it important to not panic the Windows users. Imagine if ZDNet's readers were to think that the AMD Sledgehammer wasn't going to be Windows compatible. The poor chip would never sell!


    --
  • pH34r /\/\Y @$C11 KuNz!!!



    (|)

  • I can't think of any reason to go to 64 bit.
    I don't need more than 4GB ram. And I should never need that much.

    Hmm, why else? I don't care about instruction sets. The Intel IA32 is good enough for me, because I program in QuickBASIC 4.5 for DOS. It is nice and fast. I don't care about asm anymore; I tried it once and it was too hard.

    And 32-bit apps won't go faster on a 64-bit chip, will they? 16-bit apps don't go much faster on a 16MHz 386 than on a 16MHz 286 (I have both, so don't say that's BS).

    And why do apps need more than 32 bits? What do the extra 32 bits allow an app to do that it can't do right now?

    So, it seems to me that the only people who really need this are DBMS ops or ASM programmers.

    I don't care for it. If it becomes standard, I'll eventually buy one. But right now I don't give a damn.
  • Please, please moderate this up to "+5 Insightful".

    This is exactly the same thought I had when I read this news before. "Oh, shit. Now there'll be a 64-bit architecture with 4 general purpose registers (eeax?), and an insane ISA."

    For the love of God, please do not do this, AMD!

    --Corey
  • by Sir_Winston ( 107378 ) on Thursday June 29, 2000 @02:26PM (#967358)
    The nice thing about Sledgehammer and any derived desktop versions will be that the processor core will be able to distinguish between 32- and 64-bit apps, and switch between them. I'm not sure if it's still true of the current design, but early on the AMD folks were saying that Sledgehammer would be able to work with 32-bit apps by effectively "splitting" the core as if it were two 32-bit x86 cores working in tandem, therefore doubling the number of 32-bit instructions that a normal core would be able to deal with in any given clock cycle. Likewise, for 64-bit apps the core would work in lockstep like a normal 64-bit core. Interesting, if the concept still holds true...

    At any rate, it *will* be able to run 32-bit apps natively, not through emulation as with Merced--err, Itanium (dumb name). As much as many /.ers bitch about x86 being such a horrible ISA (it is, but who cares unless you're unlucky enough to have to code in assembler...), and about the desire for backwards-compatibility holding tech back (sure it does, but it also saves time, effort, and apps, for the user), there are advantages. First of all, I'll still be able to multi-boot Windows 98 on my future Sledgehammer to play all those wonderful old DOS and early Windows games I've collected over the years and come to love dearly--it'll be ages and ages before Linux or anything else is able to effectively emulate Windows well enough to get top-notch performance from even fairly old games, and even then most of that will be thanks to increased processor power (like emulating an Apple ][c on my 400MHz processor--easy because the whole damned machine can be executed virtually thanks to processing muscle many, many times more than the original). Not just that, but businesses with legacy, custom x86 software will be able to upgrade with virtually no software costs. Backwards-compatibility may induce cruft, but is often desirable for both personal and business reasons. I look forward, two or so years from now, to being able to run 64-bit Linux, Windows 98, BeOS, and maybe Windows 2002, all on the same AMD box. Now, if only someone would create a VMS environment for x86... ;-)
  • by Black Parrot ( 19622 ) on Thursday June 29, 2000 @02:27PM (#967359)
    > Will 16- or 32-bit apps notice a speed performance on a 64-bit architechture?

    Certainly for apps that do a lot of 64-bit arithmetic, though that's probably mostly scientific applications rather than the familiar desktop application. Beyond that I'm not sure, and would like to hear opinions too. Will it help with things like graphics, since you would be able to wade through the masses of data involved in various transformations faster?

    > Will 16- or 32-bit apps need to be ported or just recompiled to gain a speed boost?

    It is supposed to be backward compatible with current x86 systems. Probably without even a recompile. It would almost be suicide for a company to push a 64-bit x86 architecture otherwise, since (so far!) the overwhelming majority of such machines would be bought to run Windows and Windows apps, and very many people would be very reluctant to buy a processor that made them throw out their fine collection of apps.

    Similarly, software houses will be reluctant to ship 64-bit versions of their apps until 64-bit processors are common. (Witness that even Linux binaries are often still distributed for the lowest common denominator, the i386, though surely most of us run 486s, 586s, or 686s by now.)

    The preponderance of existing 32-bit apps probably means that most users will not get the full benefits of the 64-bitness of the new processors. This is another area where users of OSS will probably reap the early benefits, since they will be able to recompile their apps as true 64-bit apps right away (probably after having a few issues tweaked), whereas commercial apps will likely continue to ship as 32-bit binaries for several years after the first 64-bit x86s hit the market.

    As a final observation, these chips will surely price above even the high end Thunderbirds, which are already going to be too pricy for most people. I suspect that the early adopters of 64-bit x86 will mostly be people who need the number-crunching abilities. For others, it will initially be a status symbol (very important in the corporate environment, ya know!).

    --
  • There may be naysayers who discount how many options AMD will be producing, but think about it: it sure beats the well-named Pentium 4. Options never hurt; there will be multiple levels for different needs. I am currently VERY happy with the Athlon, and I don't know anyone who isn't.

    As long as their architecture beats a lot of the old x86 stuff into the dirt while keeping compatibility, who gives a rat's ass? Your old compilers will work; however poorly they perform next to compilers optimized for the new instruction set won't matter much, because even without major rewrites you can get old apps working with minimum fuss. Nothing wrong with that in the slightest.

    Intel, you can kiss my lily white ass.. I know where my dollars go. And with the EV bus they should compete relatively well with Alpha CPUs, which isn't necessarily a bad thing.

    I just wonder what is going to happen to Transmeta... I want to see them do so well; the idea behind their gear is amazing, to me at least.


    #include caffiene.c
  • Addendum

    After reading up more on Project Monterey, I've learnt that Intel is playing a major role in its development. Obviously, this is going to have a huge impact on how much support AMD can possibly provide for Monterey, unless AMD decides to participate in the project directly. I hope AMD and Intel can find common ground in the development of the new 64-bit Linuxes/Unices and *BSDs; otherwise it looks like another job for anti-trust laws.

    BTW: Where do *BSD distributions fit in this picture?

    Self Corrections

    I'd just like to state that Project Monterey, an alliance of several Unix vendors, consists of IBM, Compaq, Sequent and SCO. I was mistaken about Sun & HP being part of it. For information on Monterey, visit IBM's site here [ibm.com] or read the ZDNet article [zdnet.com]

    And my last line refers to the problems of AMD's Athlon in its early days, and does not refer to the Sledgehammer.

  • by Signail11 ( 123143 ) on Thursday June 29, 2000 @02:32PM (#967362)
    For existing x86 32-bit code, I expect the Sledgehammer to perform significantly better than the Merced in all respects (for equivalent clock speeds). For heavily numerical or scientific programming, I expect that the Merced will do better than the Sledgehammer, even with a significant clock speed differential (the Sledgehammer will probably release with a > 300 MHz lead over the Merced). This is especially so for tasks such as encryption, large integer arithmetic, computational linear algebra, and certain other applications of this type. One must keep in mind that the Merced can potentially execute 2 bundles of 3 instructions each every clock cycle (and that some instructions can perform more than one operation on multiple operands); optimistically, Merced can perform something like 12 discrete operations every cycle (on full-width words, no less). For general-purpose applications (non-server, non-computational codes), before the IA64 compilers mature, I think that the Sledgehammer would do better.
  • I believe Transmeta aims to be more dynamic and flexible than that. I am certain they are smart enough to know that 64-bit processors are soon going to be ubiquitous, and they should fit right into play with processors for that market. I mean, look at what they developed: they made an extremely flexible CPU that is not bound to a particular ISA - something that has plagued the x86 market for many years, and they knew that.

    -P
  • I'm thinking that this may have an adverse effect on Transmeta's plans, and by extension the open source movement in general.

    And how, exactly, would that harm the open source movement?
  • Intel and AMD's 64-bit strategies are remarkably divergent

    No kidding. IA-64 is not even using the x86 ISA, while the Sledgehammer will extend the x86 ISA to 64-bits.

    Intel will probably come out with an x86-compatible 64-bit processor around the same time AMD introduces theirs.

    Actually, so far Intel has not announced any plans to extend the x86 ISA to 64 bits; their IA-64 uses EPIC "technology" and cannot natively run x86 code. It will be "compatible", but not nearly as compatible as the Sledgehammer, which will natively execute older programs compiled for the 32-bit x86 ISA.

    If not, AMD will be well-poised to take over the desktop market

    This sort of depends upon exactly how good the Itanium (Intel/HP's IA-64 processor) turns out to be. Remember, Intel has already taken big steps to try and turn programmers toward their new 64-bit ISA, and Intel is planning the Itanium for desktops as well. If it turns out that Intel can manage to get the programs desktop users are accustomed to made explicitly for their IA-64 and not AMD's 64-bit x86, then many desktop users might feel pressured to switch completely away from x86. 'Fortunately' for AMD, Intel has totally blown off Microsoft, and not included them in the development of this at all, so maybe AMD will be able to garner some support from MS.

    BTW, there is a lot of info on this, starting about 10 months ago, in the Silicon Insider, located at real world tech [realworldtech.com].
  • > The "spliting the 64-bit core into two x86 32-bit cores" idea could not possibly work efficiently.

    That's a very definite declarative you just made, and a wise man once said "The less apt a man is to make declarative statements, the less apt he is to look like a fool in retrospect." Nothing personal, but it's always a bad idea to bandy about phrases like "could not possibly." Not too long ago, people thought that light "could not possibly" travel faster than it does in a vacuum, and well...

    Point being, as much as you may know about processor architecture, you don't know as much as the AMD design team. If at one point they thought it possible to design a processor which could perform as I mentioned above, then it is doubtless possible, even if they have since abandoned the idea in favor of something easier to design.

    You know what else "could not possibly work efficiently"? Utilizing a VLIW core to process an ISA overlay which exists in software. I mean, that's just such a terribly inefficient concept that it couldn't possibly be worth doing, right? The VLIW core of such a badly designed processor would have to be so powerful and clocked so high that it would consume far more power than is necessary to run a normal x86 processor, right? As we all know, such conventional thinking turned out to be very, very mistaken. Transmeta's Crusoe has proven that such a thing can be done, though few would have ever thought it would work and work so well.

    I think that should prove my point, but let me continue de-FUDifying your post.

    > x86 *is* a terrible ISA and backwards compatibility *does* hold back tech, both
    > in terms of performance and price/performance.

    I already admitted that x86 is a poor ISA--of course it is, it's ancient; pre-Cambrian by the standards of microprocessor tech. However, thanks to good compilers the ISA is as easy to write for as any other--few people do handwritten assembler any more, for any ISA. And yes, it is inefficient--but most current x86 processors actually use a RISC-like core to process data after it has been decoded in hardware from the CISC x86 ISA into smaller RISCy instructions; being done in efficient hardware, little overhead occurs and performance is impressive from something like an Athlon. The net effect of that is that you get RISC-like performance with backward compatibility with very little overhead. And, let us not forget that contemporary RISC processors are, as noted at http://arstechnica.com/cpu/4q99/risc-cisc/rvc-1.html , every bit as complex as CISC ISAs like x86.

    The main reason people such as yourself complain on /. about x86 being so horrible and holding back progress is that you think x86 is "unsexy." And, it is. It's old and ugly. But it does the job more than well enough, thanks to modern processor designs which break down the x86 instructions and execute them in RISCy fashion. Do you prefer Alpha? Great, Alphas kick ass...except...well, I can get a nice un-sexy Athlon system together which will whomp all but the highest-end single processor Alpha system, thanks to increased clock speeds. The Athlon's FPU is even enough to make it outperform the lower-clocked Alpha, for less than half the price. And I can use more cards and peripherals and operating systems with the Athlon. Alpha ONLY makes sense on servers and high-end multiprocessor workstations, nowhere else.

    So, what should we replace x86 with on the desktop? Gee, UltraSparcs run around 10 grand for entry-level boxes, so that's not realistic. How about StrongARM? Very poor FPU performance and very low clockspeeds; don't make me laugh. Itanium? Intel will price those out of reach of God for the next few years. Oh, wait, I know: PowerPC. And yes, PPC is a great architecture, very powerful and extensible. I would love for x86 to be supplanted by PPC, but that'll never happen, because Motorola and Big Blue have a stranglehold on production and have no financial need to push up clockspeeds and push to high production levels--IBM uses them in some of their own boxes, but doesn't have reason to push out lots of them since Apple is the only other game in town--other PPC boards have remained very fringe despite the release of the CHRP specs. Non-geeks aren't interested in non-Apple PPC-based systems. Learn to live with that for the next several years at least. Aside from which, thanks to the ever-increasing x86 clockspeeds, top-tier Athlons and Willamettes will be outperforming top-tier PPCs for a while.

    > backwards compatibility *does* hold back tech, both in terms of performance and price/performance

    I think I just disproved that, too. x86 processors consistently outperform all others on price/performance ratio. Come up with a better solution or shut up. There are many other ISAs out there, and new ones coming, and yet not a single one of them can unseat x86 on price/performance, where it counts. The x86 ISA is old and ugly--but processor designers have come up with very sexy ways to push its performance up, by melding RISC core technologies with the older CISC instruction decoders. And then, they use brute force of higher clockspeeds to outperform most of the competition, and to outprice all the competition. It's not holding us back at all, it's forcing us to innovate cores and to brute force clockspeeds well above all other processors.

    And that isn't even counting the importance on price/performance of maintaining backward compatibility. The same software can be re-used through many upgrades, which is even more important for businesses who've developed custom software solutions than it is for individuals.

    Not to mention the lack of competition and subsequent higher prices which would be inherent in any new ISAs. Why the fuck aren't Alphas and UltraSparcs running at higher clockspeeds and costing less, eh? Because there's no competition. The ISAs are owned and licensed by single companies, who don't feel the pressure to do more, faster, better, like x86 companies do. Look at Intel's snail-pace development in the desktop range before AMD started turning up the heat. x86 is, effectively, an open-source ISA, *the* open-source ISA. That's why they're unmatched on price/performance. If Itanium or any other proprietary ISA becomes the new standard, we're all fucked.

    So, think before you hand out that party-line BS about x86 being so terrible. x86 is responsible for the home computer revolution, and without it the Internet would have remained a toy for universities. Think about it.
  • Personally, I'd just be happy if there were a "lots" or "sufficientlylarge" data type.
  • I expect that'd be the 64-bit chip designed to replace the Alpha :-)
  • Are you racist or is this just your lame-ass attempt at extremely bad-taste humor? This should be -3,NOT FUNNY
  • Yes, these are binaries that were copied from an i386 machine onto the IA64 machine and they just worked, including the shared libraries (which were also just copied over from the i386 machine).

    Performance is kind of a dirty word right now; the IA64 CPUs and chipsets are just too new to give real performance numbers yet. Having said that, I believe that even Intel will tell you that IA32 programs will not run as fast on Itanium as they will on the fastest IA32 processor available at that time. Let's face it, this is a 64-bit machine. If you want 64-bit performance, use Itanium; if you want 32-bit performance, use Pentium.
    --

  • Thing is, AMD is in a position where they have to either define the new 64-bit standard (which Intel has already done?) or make something that is compatible with the existing platform.

    I think the only way around this dilemma is if Intel and AMD got together to define a new standard that was compatible with the other.

  • by Signail11 ( 123143 ) on Thursday June 29, 2000 @03:11PM (#967374)
    The "spliting the 64-bit core into two x86 32-bit cores" idea could not possibly work efficiently. It is quite safe to say that the AMD folks must have come to their senses by now and abandoned any thought of doing Sledgehammer that way. A modern chip trace (say an adder) is highly optimized for the word size and rely critically on signal propagation delay for the most efficiency. Spliting (using the previous example of an adder) into 2 32-bit adders would result in totally incorrect results or a nonoptimal implementation, at which point one is better off just designing the functional units seperately.

    x86 *is* a terrible ISA and backwards compatibility *does* hold back tech, both in terms of performance and price/performance. If we were not shackled with the x86 ISA so pervasively, the design and fabrication talent that Intel and AMD so obviously possess could have been far better used to design chips whose performance would have been incredible. My reasoning is that if one is willing to go to such lengths and create a new ISA (albeit one that is supposedly compatible with x86 and extends it), thereby requiring new compilers and OS support, one should go all the way and just start from scratch with a decent, sane, reasonable ISA.
  • thus stomping out Transmeta's value proposition as the inexpensive, mid-performance x86 chip, no?

    The Crusoe is not targeted as a mid-performance CPU, nor is it an x86 CPU. It is targeted as a versatile low-power-consumption CPU. It seems no one understands this.

    NightHawk

    Tyranny =Gov. choosing how much power to give the People.

  • by slickwillie ( 34689 ) on Thursday June 29, 2000 @02:46PM (#967379)
    I thought the AMD Moron chip was somehow a combination of "More" and "Athlon". Looks like it is really Mobile + Duron.
  • To speed up the FPU, they removed the on-chip spell checker. Slashdot, as you can probably tell, is being powered by them already.
  • by JacKDoff ( 206270 ) on Thursday June 29, 2000 @03:36PM (#967382)
    The idea of the CPU having two 'personalities', distinguishing by a bit in the code segment descriptor, was suggested in the days of the PPro by one of Intel's own engineers. I don't recall their name, but I do recall they were listed as one of the original developers of the 8051.

    He claimed the 'x86'ness of the PPro took 7% of the die space. For an additional 5%, they could have added a second 'personality', and begun the migration to a 'cleaner' ISA some years ago.

    Oh well...

    Three cheers for AMD! (o8
  • by Sloppy ( 14984 ) on Thursday June 29, 2000 @03:38PM (#967383) Homepage Journal

    So in other words, you're saying it's an Alpha killer?


    ---
  • I don't know about "realistically", but SGI [sgi.com] have a few boxes meant for the desktop, that run Irix, a 64-bit operating system. The O2 [sgi.com], for example, or the (newly announced, I think) Octane2 [sgi.com]... Of course, these machines are not within most people's realistic budgets, but they do exist... ;^)
  • Unicode = 2 bytes.

    That is 2^16 (65536) characters.

    That's NOT 20million.

    And it wouldn't fit all the languages if you tried to have them all represented at once.
  • Most troubling of all is that x86-64 may bring back segments. I cannot stress how horrible this would be; application programmers from the days of mixed 16/32-bit programming will agree with me.

    Yeah, but kernel programmers and other Unix heads will cry with joy. Finally the phrase:
    segmentation fault: core dumped
    will make sense with regard to the underlying hardware.

  • Neither FreeBSD nor Linux are good SMP OSs. For that try Solaris x86, BeOS (obligatory plug :), or (gasp!) WinNT.
  • How long before we see 256-bit desktop machines? And what will we be able to do on them?

  • Is it possible that this could mark the start of an age in which our desktop chips and our portables are inherently different?

    I would suggest that this era has already begun. Take a look at the current (or at least the upcoming) crop of PDAs. Though they're still not as powerful as laptops, and for many still not functional enough to replace laptops, they do have more than enough power for many people, especially now that wireless internet access is becoming more and more a reality. I personally abandoned my Compaq notebook a year or so ago for the joy of my PDA. And, as far as I know at least, none of the mainstream PDAs use x86 processors.

    Even if you don't consider the PDAs to have started this trend, the fact that we'll have both 64-bit and 32-bit would hopefully lead to more and more portable programs (as in cross-platform), and less 32-bit x86-specific code, which, imho, can only be a good thing.

  • by Signail11 ( 123143 ) on Thursday June 29, 2000 @02:48PM (#967391)
    I read the referenced RealWorldTech article well before seeing this information on Slashdot, so I'll give a summary of Paul DeMone's conclusions, as well as my impression of why Sledgehammer was excluded.

    *Alpha: The reigning champion of the 64-bit processors battlefield. The 21264a (EV67) shipping now still has the best integer and FP performance of any processor. The 21264b (EV68) will be released in two phases: first as a hybrid .18/.25 process, then fully migrated to .18 design rules. The clock speed (especially after it's contracting out to IBM) will become competitive again. 21364 will add improved SMP capabilities (and an improved cache) to the basic EV6 core. The 21464 will be an impressive 8-issue processor with simultaneous multithreading.
    *PA-RISC: HP has not necessarily given up on its own processors, despite its nominal strategic alliance with Intel's IA64. An enormous cache allows the current members of this family to keep pace, even without significant architectural modifications. Later members could
    *Itanium: A bloated instruction set that is incredibly ornate. Heavily dependent on compiler technology. Lots of marketing hype that exaggerates the true technical merit of the Merced processor. The Merced will debut at a relatively slow clock speed, but McKinley (HP's 2nd-generation IA64 CPU) will definitely be a key competitor in the 64-bit HPC market.
    *Sparc: Poor in performance, but software application support keeps these Sun processors alive. Even the not-yet-released UltraSparc 3 will have disappointing performance relative to modern processors.
    *MIPS: Fading out.
    *Power: The Power4 looks very impressive, but not much information has been released about it to this point.

    The Sledgehammer is simply not a very interesting chip; it is generally agreed that x86 had the misfortune to become the most popular desktop ISA, without regard to its actual merits. An extension of x86 to 64 bits does not interest people much, essentially because of how ugly, inefficient, and inelegant the original x86 ISA is/was. Speaking for myself, I certainly do not want to have to put up with 30 more years of this defectiveness.
  • AMD is doing a custom rewrite of the core layout which should decrease power significantly. AMD's Ruiz said at PC Expo this week that AMD is including PowerNow technology on Mustang's die, rather than via BIOS as currently on K6-2+ mobile processors. I believe current K6-2+ mobile manufacturers are using only 4 of 32 available power levels. PowerNow is capable of adjusting speed according to application and peripheral requirements, not just Fast/Slow like Intel's SpeedStep. This way they can include larger speed ranges (K6-2: 200-550 vs. P3: 500-750). The question I have is whether PowerNow will be included in ALL Mustangs. AMD's Eco-Warrior?
  • Nope. To quote from "Linux/IA64: Preparing for the Next Millenium" from "Proceedings of the 5th Annual Linux Expo":

    "The user programming model is the standard LP64 moden meaning that the C data type "long" as well as pointers are 64-bit in size. This is the same model that has been adopted by all other Linux and UNIX 64-bit platforms in existence."

    Moreover, "long long" is not in C99 (the latest ANSI C Standard); it is a gcc specific extention. The correct type should be "long long int"; this is guaranteed to be at least 64-bits. Most compilers will probably choose for this to be double the native word size (or 128-bit for 64-bit CPUs).
  • - AMD announces their 64 bit offering
    - Intel announces the NAME of their next chip

    Hmm, which company would you bet on?

  • IMO, they won't. The work on an ABI for Sledgehammer has already started, and (as I could see in the project funded by AMD) it's much more similar to Alpha's than to Itanium's.

    Fortunately, a lot of work has been put into Linux development to make it work smoothly on 64 bits.

    On the other hand, both ia64 and Sledgehammer will be able to run in "compatible mode"


    I have an alter-ego at Red Dwarf. Don't remind me that coward.

  • As an aside, I feel quite certain that Transmeta will not be left behind in the 64-bit world... (well, actually, that's mainly what I'm posting about..). What I feel is that we're assuming, suddenly, that the 64-bit chip is going to come thundering in and take over, and everything will either be 64-bit or nothing from now on. I'm not trying to be a slowpoke and say that we're not going to move to 64-bit, but I am trying to be realistic and point out that we've not gotten there yet, and I figure we're a long way off, because even though we may get those 64-bit chips next year, everyone will not be getting them. There are still millions of processors out there that are not 64-bit, and those users who have them are not going to up and switch right then and there. Many will never switch. I'm sure that it's not that crucial to computing. We've sure done well without them. (Though I will admit I wouldn't mind having one at all. ;-)

    Just.. don't.. jump the gun with this thing.. (err... whatever.)
  • by blakestah ( 91866 ) <blakestah@gmail.com> on Thursday June 29, 2000 @01:49PM (#967399) Homepage
    From Here [amd.com]

    AMD has disclosed specifications to the major OS vendors and Microsoft so that they may ensure that their operating systems and tools will be AMD x86 64-bit aware

    AND

    "By extending the x86 instruction set to 64-bits, AMD's x86-64 technology should give us very fast compiler retargetting and the easiest kernel port so far," said Alan Cox, Linux Kernel Developer.
    -----------------

    It looks like a real battle ahead for Intel.

  • Right now Intel, IBM and SGI are all working on optimizing compilers for Linux. There's talk that at least some of that work will be merged back into gcc. I think work on a properly optimizing compiler is important, since the speed gains from those optimizations may be the deciding factor in a close fight between Itanium and AMD's chip. I expect Windows performance on all these 64-bit machines to be underwhelming at best.
  • Alternatively, you can do essentially the same thing, but rather than simply extending actions of the old instructions the new specifier could activate a whole new set of 64-bit instructions, preferably with a saner design.

    As an aside, the x86 is not especially hard to write assembly for compared with heavily pipelined RISC and VLIW (or EPIC, if you insist) chips. The instruction set may be crufty, but it doesn't require you to think like a compiler. Writing compilers, of course, is another matter - compilers for RISC devices are generally much easier to write.
  • by ca1v1n ( 135902 ) <snook.guanotronic@com> on Thursday June 29, 2000 @01:51PM (#967407)
    Is it possible that this could mark the start of an age in which our desktop chips and our portables are inherently different? I have serious doubts about Intel's ability to scale Itanium down to laptop size and power consumption. Now it looks like AMD is going the same way, too. Many of the high-performance architectures, PPC joyfully excluded, don't even attempt to be portable. I'd like to think this is just a temporary condition while manufacturing processes are improved, but with the physical limits we're looking at, I think even PPC may not be too far away on this trend. Hopefully we can see Crusoe-like innovation on the portable end to offset the pain, but don't be surprised if portability means a different instruction set in the near future, or at least a different native one, regardless of what may be emulated.
  • Alternatively, you can do essentially the same thing, but rather than simply extending actions of the old instructions the new specifier could activate a whole new set of 64-bit instructions, preferably with a saner design.

    There are two problems with this.

    The first is instruction set bloat. This is generally agreed to be a Bad Thing, and also increases the area of the decoding circuitry by some marginal amount.

    The second is shadowing of opcodes that other chips may use. If Intel decides to extend x86 yet again, and AMD has already allocated those opcodes to its own 64-bit instructions, binary compatibility will be broken. This is much less likely when you snag only a single opcode for the mode tag.

    In summary, I don't really see the point of adding new instructions when the 64-bit-tag system works just as well (no slower than 32-bit code).
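    To make the tag idea concrete, here is a toy decoder fragment. The prefix value is invented (AMD has published no encoding), but it shows why a single mode prefix costs almost nothing in decode logic and steals only one opcode:

        #include <stdint.h>
        #include <stddef.h>

        #define PREFIX_64 0x4F   /* hypothetical one-byte 64-bit tag */

        struct decoded {
            int     is_64bit;    /* operand size widened by the tag */
            uint8_t opcode;
        };

        /* Returns the number of bytes consumed from the stream. */
        size_t decode_one(const uint8_t *stream, struct decoded *out)
        {
            size_t i = 0;
            out->is_64bit = 0;
            if (stream[i] == PREFIX_64) {  /* same opcodes, wider operands */
                out->is_64bit = 1;
                i++;
            }
            out->opcode = stream[i++];
            return i;
        }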
  • by n0ano ( 148272 ) <n0ano@arrl.net> on Thursday June 29, 2000 @03:01PM (#967413) Homepage
    I can't comment on Sledgehammer but I can talk about the IA32 support available on the Itanium. I've been working on adding IA32 support to the IA64 Linux kernel for the last 6 months so I have some knowledge of the subject.

    Individual processes can select which instruction set they wish to run in, IA32 or IA64, even though the kernel is executing entirely in the IA64 instruction set. We've added IA32 kernel interfaces to match the system calls available currently on the i386 Linux kernel. This is not vaporware, this is running and has been publicly demonstrated at conferences this year.

    Currently I've run IA32 versions of bash, gdb, gcc and netscape. All of these programs are running now with no known problems. I'm sure there are IA32 programs out there that don't work yet but my goal is to make sure that eventually all IA32 programs will run on the Itanium.

    I admit to having a bias on this subject as I work for VA Linux and my job is to help create Linux for the IA64 processor.

  • I already do use Alphas. What really bothers me is that the fab techniques and capacity (not to mention the design teams) could be far more productively used to create chips based on a reasonable ISA. An Alpha, with the design funding of an Intel or AMD, combined with their wonderful fabs would be awe-inspiring and would bring true supercomputing level performance (at a much more reasonable price, comparable to current x86 chip prices) to the desktop, in addition to being much more pleasant to code for.
  • by John Whitley ( 6067 ) on Thursday June 29, 2000 @04:22PM (#967419) Homepage
    I think work on a properly optimizing compiler is important, since the speed gains attained through those optimizations may be the deciding factor in a close fight between Itanium and AMD's chip.

    You understate the case. Every modern general purpose CPU implementation is "design symbiotic" with a targeted modern compiler(s). The primary distinction between RISC/CISC/VLIW/etc. architectures is the tradeoff of work between the CPU and the compiler. (Go dig around in the technical documentation at the TI 'C6x DSP [ti.com] web site for a fascinating view of how a modern VLIW architecture impacts processor and compiler design.)

    The architectural decisions in hardware must be borne out by a compiler that leverages these features to the fullest. Likewise, the implementation of a CPU must actively enable the compiler to take maximum advantage of hardware bandwidth. Once the chips tape out, both Intel and AMD MUST ensure that the compilers measure up -- or else they've run half the race and given up.
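    As a concrete example of the work that shifts onto the compiler: on a wide VLIW/EPIC machine a naive reduction serializes on a single accumulator, and it is the compiler's job to break that dependence chain, roughly as in this hand-transformed C sketch (reassociating floating-point adds like this is only legal when the compiler is told it may):

        /* Naive: each add depends on the previous one, so wide issue
         * slots sit idle. */
        double sum_naive(const double *a, int n)
        {
            double s = 0.0;
            for (int i = 0; i < n; i++)
                s += a[i];
            return s;
        }

        /* What a VLIW-oriented compiler must produce: independent
         * accumulators expose parallelism to the issue slots. */
        double sum_split(const double *a, int n)
        {
            double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
            int i = 0;
            for (; i + 3 < n; i += 4) {
                s0 += a[i];
                s1 += a[i + 1];
                s2 += a[i + 2];
                s3 += a[i + 3];
            }
            for (; i < n; i++)
                s0 += a[i];
            return (s0 + s1) + (s2 + s3);
        }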

  • Besides Linux, what operating systems support a 64 bit CPU that could realistically be used on a desktop?
  • #-- disclaimer: I'm in over my head, I'm just asking! :P

    If this is an extension of x86, I assume existing binaries will still function -- but I have two questions:
    -Will 16- or 32-bit apps see a performance improvement on a 64-bit architecture?
    -Will 16- or 32-bit apps need to be ported, or just recompiled, to gain a speed boost?

    I'd also be curious to see if the gain in performance is going to be worth the doubtless hefty price...


  • Since the chip comes out in the middle of next year, I would be interested in seeing just what type of support is provided by the various compiler and OS groups for the native 64-bit mode of this chip. Sure, it isn't needed to run, but I'm interested in knowing just what the performance differences are going to be between the 32-bit instructions and the 64-bit instructions this chip is supposed to support.

    I'm currently hearing about all this support for the Intel Itanium, or whatever it shall be called this week, from Linux and some compiler groups, and yes, MS too. I haven't looked that hard yet, but I don't see any mention of who will be supporting the 64-bit extensions.

    Now, assuming the 64-bit extensions are not supported immediately, it may mean that Intel can market its chips more effectively and provide a bit of FUD that will hurt AMD. Then again, since this AMD chip will support all the old 32-bit applications and appears not to require as much of a hardware change as the Intel chip, AMD might be able to take a larger chunk of the market away from Intel.

    Now just when can I expect to see one of these chips with a motherboard and decent chipset?

  • By now, a lot of us have already heard of plans by many vendors, both Linux and traditional Unix (HP, SCO & Sun under the auspices of Monterey), to release versions of their operating systems for Intel's Itanium 64 (IA-64) [zdnet.com]. The link refers to a zdNET article on Intel releasing a developer's kit for Linux. Intel has claimed that the strength of the open source movement influenced their decision to make the unprecedented move of releasing full details of Itanium's architecture to the public without a non-disclosure agreement.

    The question I'd like to ask is whether 64-bit Linux/*BSD distributions designed for the IA-64 will be readily compatible with, or available for, the AMD SledgeHammer, and whether AMD will follow in Intel's footsteps in supporting open source development on this architecture.

    Hopefully AMD and the Linux kernel developers will be able to avoid the initial MTRR problems that plagued the processor in the first few weeks it was out. Keep up the great work, AMD.

  • Unicode can be extended by 16 additional 64K segments by pairing two of the reserved surrogate characters. That's about 1 million extra. RTFFAQ [unicode.org]
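    For reference, the pairing works like this (a minimal sketch of the surrogate scheme the FAQ describes):

        #include <stdint.h>
        #include <assert.h>

        /* Encode a code point above 0xFFFF as a UTF-16 surrogate pair:
         * 16 extra planes of 64K = 1,048,576 additional characters. */
        void to_surrogates(uint32_t cp, uint16_t *hi, uint16_t *lo)
        {
            assert(cp >= 0x10000 && cp <= 0x10FFFF);
            cp -= 0x10000;                            /* 20-bit value */
            *hi = (uint16_t)(0xD800 + (cp >> 10));    /* high 10 bits */
            *lo = (uint16_t)(0xDC00 + (cp & 0x3FF));  /* low 10 bits  */
        }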

    We don't know how bad things are in north korea, but here are some pictures of hungry children. -- CNN
  • As much as I like AMD, I have to say that this processor is utterly useless for a desktop at this time, because it serves no purpose for the desktop user. The only foreseeable advantage is the increased memory address space, but at the current rate 4GB will be enough for another 10 years or so (assuming memory usage doubles every 2 years and is 128 MB right now).

    The general notion is that 64-bit processors are faster. This is entirely untrue: a 64-bit processor is no faster than a 32-bit one for 32-bit operations, and desktop software is still essentially a 32-bit regime. Integer code rarely has speed-critical paths that need 64 bits, and the vast majority of FPU-intensive apps (video, 3D, etc.) still use single-precision (32-bit) floating point, even though the Pentium has an 80-bit FPU. Thus, I really don't see much of a benefit.

    What is a much cooler technology, in my opinion, is the double-pumping that Intel is doing. I can guarantee you that a 32-bit 1.5 GHz proc running its ALUs at 3 GHz will beat the hell out of a 64-bit CPU.
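    For the skeptical, the back-of-the-envelope math works out; a throwaway check under the same assumptions:

        #include <stdio.h>

        int main(void)
        {
            /* Assumption from the post: usage doubles every 2 years,
             * starting from 128 MB today. */
            double mb = 128.0;
            int years = 0;
            while (mb < 4096.0) {   /* 4 GB address-space ceiling */
                mb *= 2.0;
                years += 2;
            }
            printf("4GB reached in about %d years\n", years); /* prints 10 */
            return 0;
        }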
  • by MSG ( 12810 ) on Thursday June 29, 2000 @02:03PM (#967439)
    What ZDNet didn't mention is that the Mustang based processors will be SMP capable. The north bridge used for these processors will provide two separate host processor busses giving each processor full bandwidth to the bridge (as opposed to shared bandwidth, as with Intel) but limiting SMP to two processor configurations.

    See my earlier post at http://slashdot.org/comments.pl?sid=00/05/19/1822234&cid=194 [slashdot.org]
  • by crypto_creek ( 149032 ) on Thursday June 29, 2000 @02:04PM (#967442)
    Does this mean that JAVA gets an Extra Long? And maybe a Quadruple? And now UNICODE can expand to 8 bytes per character. We get to include extraterrestrial character sets like Klingon and Vulcan.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...