AMD

AMD Sledgehammer (64-bit CPU) Preview

ruiner writes, "There's a sweet AMD Sledgehammer preview up at AMD Zone. They discuss the need for a 64-bit CPU, and what operating systems would support it."
  • by Anonymous Coward
    Is anybody else getting really tired of the bit/hertz marketing hype? Great, my Dreamcast can do a few 128-bit operations, so it's 128-bit, huh? And wow, I must get an excellent number of frames/sec in Quake on my 900MHz cell phone. The worst part about the hype is all the asinine consumers who continuously spew this offal verbatim into posts and articles like these.
  • by Anonymous Coward
    The x86 is very limited in registers. This causes problems in optimising for a branched pipeline.

    The instructions are variable length, which means that the fetch unit has to be much cleverer to deal with them.

    Only the AX (plus DX) registers can be used for multiplies and divides (sketched below), which makes a second multiplier or divider virtually useless.

    The integer unit uses 4 general-purpose registers, plus some specialised registers; the FPU uses a logical stack which is stored in physical registers; and the MMX unit uses the floating-point registers as numbered registers.

    Intel should have realised they were heading towards a superscalar architecture when they designed the 386. They could have modified the instruction set and added some more registers at that stage. Nobody can run 16-bit applications in protected mode, except in virtual 8086 mode, so all the programs had to be rebuilt anyway. This would have been the perfect time to do the modifications that they're doing now.
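    A minimal sketch of that AX/DX restriction, using GCC inline assembly (the function name is mine, not from the post): the widening MUL takes one explicit operand, EAX is the implied other factor, and the 64-bit product is forced into the EDX:EAX pair.

        /* Sketch: the x86 widening multiply is tied to EAX/EDX.
           "a" pins the input to EAX; "=A" names the EDX:EAX pair,
           so no other registers can receive the product. */
        #include <stdint.h>

        uint64_t widening_mul(uint32_t a, uint32_t b)
        {
            uint64_t product;
            __asm__("mull %2"            /* EDX:EAX = EAX * b */
                    : "=A"(product)
                    : "a"(a), "r"(b));
            return product;
        }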
  • Spitfire is the name for UltraSPARC-Is, and for the MMU architecture in the I and II.

    [wesolows@pimp:~]$ cat /proc/cpuinfo
    cpu : TI UltraSparc II (BlackBird)
    fpu : UltraSparc II integrated FPU
    promlib : Version 3 Revision 19
    prom : 3.19.0
    type : sun4u
    ncpus probed : 2
    ncpus active : 2
    Cpu0Bogo : 398.95
    Cpu1Bogo : 399.76
    >>>> MMU Type : Spitfire
    State:
    CPU0: online
    CPU1: online
  • This post naturally assumes that this description is accurate, which is by no means certain.

    As the writer touches on here, AMD has an excellent opportunity here. That opportunity is wasted if they simply make a 64-bit x86 CPU. This was a great idea when moving to the MIPS R10k. It was a great idea when moving from SPARC v8 to v9. These architectures were great to begin with, and have managed to maintain backward source and binary compatibility without adding cruft or useless compatibility logic. The x86 architecture, OTOH, is filled with cruft already. It has dozens of useless instructions, hundreds of artificial restrictions, and about as much non-orthogonality as anyone could want in a cautionary example.

    If AMD simply warms over and extends this already much-maligned architecture, they will lose their big chance to truly break away from Intel. Why not put out something genuinely creative? A new design? I know AMD can innovate, but I'm puzzled as to why they won't. If everyone is resigned to an architecture change anyway, why not give them something good to change to? AMD seems to be so accustomed to playing follow-the-leader that they don't know how to lead even when given the opportunity.

    Get with it, AMD. Nobody is going to buy this processor. The improved FPU performance will be wasted on your market. These are peecee people; they don't care about vagaries like "computation." They want to play gamez and talk in chat rooms. Your puny FPU isn't going to touch the real workstation CPUs; the FPU performances of Alphas, SPARCs, and MIPS are dramatically above anything you can get with only 16-32 registers. So whom are you targeting? Hardcore Quake addicts?

    It's time for something new, not the same old thing. If AMD does in fact follow this path, they will flop, and start all over again by reimplementing Itanium three or four years after Intel, then spend ten more years playing catch-up. I can only hope that there's sufficient confusion to drive people into the arms of the workstation vendors, who have been producing consistently superior products for 10 or 20 years, maintaining or breaking compatibility at the right times and in the right ways. With any luck this will eventually be the death of the peecee.

  • They already did. The speed difference in FPU operations between the K6 and the Athlon is impressive, and I often see a 500MHz Athlon run somewhat (feels like 5-10%) faster than a 550MHz PIII Xeon in some FPU-intensive operations (MP3 compression with a nontrivial psychoacoustic model).

    OG.
  • This article has an old feel to it, especially where it talks about OSes for IA64. That was the state of affairs in December of 1999. Things have shifted somewhat with Turbo Linux releasing a full distribution for IA64.

    Sure, it's only alpha quality, but it exists and includes all the major parts of a Linux system. The web server runs fast enough and the file server flies. The compiler exists and works but has not been optimized nearly enough. GCC's strength has never been speed anyway, just cross-platform consistency.

    Apart from that, this was a decent and well-researched article. Frankly I think it was either just published late or Slashdot waited a few months to pick it up.
  • I'm sure that by the time the 386 was introduced, they'd been doing 32 bit for ages and were starting to move to 64 bit processors.
    Check out DEC's timeline [digital.com].
    #define X(x,y) x##y
  • Why use Intel PIII CPUs as the reference? You'd need to figure out how to scale other archs, so you'd need a benchmark. Probably the most widely accepted benchmarks are the SPECint and SPECfp benchmarks. I think it would make the most sense to use straight SPECint numbers for computer speed.

    I'm wondering how to include SPECfp numbers, though. Maybe we should choose a reference speed, as 47Ronin did, and assign it speed 1. Speed of other CPUs would be calculated by averaging the ratios SPECint(cpu)/SPECint(ref) and SPECfp(cpu)/SPECfp(ref). So the final formula would be:
    speed(cpu) = speed(ref) * ( Sint(cpu)/Sint(ref) + Sfp(cpu)/Sfp(ref) )/2

    This way, a CPU with a SPECfp rating twice as high as the reference, but only 3/4 the SPECint, would have speed=1.375, which seems reasonable (see the sketch at the end of this comment). Obviously, you will still need to look at real benchmarks, not just a speed number, to see if a given CPU will do what you want, but at least it would give _some_ meaning to the numbers shown in ads :)

    I think it would be necessary to pick a reference with a good balance between integer and fp, so things wouldn't be skewed. UltraSPARCs might be good for this, since x86 CPUs have FP that's too crappy and Alphas have FP that's too good (relative to integer). I'm sure there is a better choice though; anyone want to tell me what it is?

    That actually raises an interesting question: Will integer performance rise faster than, slower than, or at the same rate as floating-point performance in future CPUs? Which is easier to accomplish from a hardware design standpoint?
    #define X(x,y) x##y
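    The formula upthread is simple enough to state as code; here's a minimal C sketch (function and parameter names are mine), with the worked example as a sanity check:

        /* Combined speed rating from SPECint/SPECfp ratios
           against a reference CPU, per the formula above. */
        double combined_speed(double speed_ref,
                              double sint_cpu, double sint_ref,
                              double sfp_cpu,  double sfp_ref)
        {
            return speed_ref *
                   (sint_cpu / sint_ref + sfp_cpu / sfp_ref) / 2.0;
        }

        /* Worked example: twice the reference SPECfp, 3/4 the SPECint:
           combined_speed(1.0, 0.75, 1.0, 2.0, 1.0) == 1.375 */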
  • Actually, I thought the bit about memory in this article was way off. Sure, 4GB may be a heck of a lot for the _desktop_ right now, but the article willfully included the server market in the "no 16GB for 6-8 years" schtick.

    While I was at IBM working in system testing, I saw many AS/400 servers destined for customers with 20GB of RAM. That's right -- 20GB of RAM (you should have seen how much disk space these puppies had). And this was 3 years ago!

    Note that these don't run NT :) Oh, and they weren't web servers. These would typically be used as business transaction servers, corporate database servers, etc.

    Maybe the article's author was using some definition of 'high-end' that excludes anything non-Wintel. Well, hate to break it to them, but Wintel is barely entering into what might be laughingly called high-end (except by Intel's marketing department).

    Anyway, the point here is that you definitely need more than 32 bits of address space in the high-end server market here and now.
  • From what I heard about Microsoft, they don't like to rewrite certain things; that is why Windows on Alphas sucks. This might give them a chance to have a processor that lets them avoid rewriting most of their code. I could be wrong, but you never know.


    http://theotherside.com/dvd/ [theotherside.com]
  • Literally apples and oranges. Different OSes all around, different amounts of memory, no mention of ratings. There is much room for improvement on all the systems tested, and ways of bringing them more in line with the same specs. ATI, for instance, has Rage boards for both the PCs and the Macs. The same amount of memory would also be a good improvement, and the same OS for the PCs. Not the best test environment to draw any real conclusions.
    Time flies like an arrow;
  • Microsoft has the most to gain from IA64 because that's the only horse that they can ride into the datacenter on. MS desperately wants to be in the big leagues with Sun and IBM, but they need the hardware to get there. Since they dropped Alpha, IA64 is the only shot they've got.

    First, Itanium won't offer the performance of its "traditional server vendor" rivals. So it's not going to grab much share from them, if any.

    Are you kidding? Here's a clue -- "Cost of Ownership". People will dump expensive Sun/IBM boxes if they can get away with cheaper Intel models. The "conventional wisdom" has been that NT has lower administration costs than Unix. True or not, some people will be able to run NT in situations where it wasn't feasible before. And they'll do it. (The same could be said for Linux.)

    People running Linux on peecees aren't going to run out and buy M$ for IA64. The most M$-optimistic outcome is that current enntee users migrate en masse to IA64/enntee. In which case M$ holds its current market share, which, in this environment, isn't very high (37% or so last I saw).

    Huh? This reads like you believe that Linux has a much higher marketshare among servers than Windows NT. It doesn't -- not by a long shot. The idea is that Microsoft holds their small server market share, and eats some of the midrange pie too.

    Of course, the midrange market is very Unix friendly, so Linux has a great opportunity on IA64. But, I still can't see how the IA32 to IA64 transition puts Microsoft behind the eightball in any way whatsoever other than wishful thinking.
    --
  • If Microsoft follows previous policies, AMD will need to pay them for a Sledgehammer port of Windows NT, or sign NDAs and do the work themselves. Compaq/DEC Alpha and Motorola/IBM PowerPC both went down this road, and found it to be economically infeasible to keep NT running on their CPUs.

    Alpha/NT was a very established architecture, very fast, and had an entire MS BackOffice port running on it, and enough third party vendor support. Still, it didn't sell at all! I think it would be a horrible mistake for AMD to pour money down that hole -- the low-end server market seems more than happy with the performance range of x86.

    So then the question is -- who is going to buy this thing? I'd think the best bet would be to price Sledgehammer similarly to 32-bit chips like the Pentium IV. Perhaps they can get some game/video driver support and market it similarly to the MMX/3DNow! processor extensions. As long as it's seen as an extension of the traditional x86 market (price- and performance-wise), it will probably do OK. If it's marketed as a brand new architecture, it's DOA.
    --
  • But it could be a helluvalot faster if the same amount of time and money spent to squeeze some more performance out of a geriatric architecture had been spent on developing a new one.

    True, but in the big picture, the world has collectively invested a quadrillion dollars into a bazillion x86 closed-source applications. It's hard to fight the economics of that, no matter how cheap+fast another CPU might be.

    (I think that the only time anyone recently went up against Intel for the desktop market was the PowerPC in the mid-1990s. It's been faster and cheaper, but never quite enough to convert the commodity market.)
    --
  • I can see the "Extended 64-bit features that make Quake faster" approach when they are trying to market Sledgehammer to the Windows 98 crowd.

    But, with Linux, why not put out a 100% 64-bit port? The source is there, and if they can't/won't make a native 64-bit compiler (gcc is the obvious choice) for Sledgehammer, why even bother designing the architecture?

    Linux on Sledgehammer sounds nice -- but let's not kid ourselves -- it's not a big enough market to support an entire processor line. Either they price this thing so that it's competitive with the desktop market of Pentiums and Athlons, or they take a big risk, go all out, and try to get a broad range of server OS support (MS, Sun, IBM), the way that Intel is doing with IA64. An expensive chip that only runs Linux is doomed to failure.

    --
  • I think you're right -- if they try to position this thing against the Sparcs and Alphas and Itaniums of this world, it's doomed. For one, there won't be any software support. On the other hand, if this is priced and marketed like a "PeeCee" chip, AMD has a real window of opportunity here.

    Intel is going to be backing away from the x86 market to push IA-64 - an unknown, expensive quantity with very little software running on it and poor x86 performance. They might have to slow down Pentium performance advances just to keep IA-64 from looking too embarrassing.

    Meanwhile, the desktop market is going to stay x86 for a long time. Add a little "64-bit" gloss to Windows 98, and AMD might gain some serious market share here.

    So whom are you targeting? Hardcore Quake addicts?

    Why not? What else are people running with their 1GHz PC machines? (Not MS Excel!) How else do all of these $250 video cards get sold into a very price-sensitive market? Why does every "Biff's Hardware" site on the 'net hold Quake benchmarks to be the ultimate test of any piece of hardware? If it's priced the same as Intel's 32-bit chips, and runs certain hybrid 64-bit applications-errr-games faster, it could sell tons. (Just think of MMX and SSE -- Intel pushed those into every desktop PC, when in fact they really only appeal to gamers.)

    On the other hand, if it's more expensive than a 32-bit chip and doesn't offer any real advantage to the average user, what's the point?
    --
  • I do not see the poor x86 performance as a serious issue. ... Linux is ready to go.

    Great, if you are running only open-source Linux software. For the rest of the market, software availability and slow x86 emulation is a problem. Microsoft, for example, is only porting SQL Server to IA64 -- not Exchange, not Excel, not anything else. That's considerably less software than MS produced for even the Alpha chip (where people continually bitched about software availability) -- and this is from the vendor that has the most to gain from IA64.

    If it's priced the same as Intel's 32-bit chips
    It won't be. You know it, I know it, AMD knows it.


    Yeah, that's kinda my point. If it's not an affordable Quake upgrade, there's no market for this thing. I sorta wonder if the whole "Sledgehammer" hype is just FUD smoke that AMD is blowing at IA-64.

    --
  • The 8086 was a hack. Intel knew it was a hack. The chip they really wanted to define their future was the 432 [brouhaha.com].
    Unfortunately, the 432 was ridiculously big, complex, and slow. It failed miserably in the marketplace.
  • Agreed. There was no preview of the processor and no new "news" in that article. No specs, no simulations. It looked like AMD marketing trying to get people to talk about the product and raise awareness.
  • by Lizz ( 17532 )
    I must say that Raleel has it correct. I work for a DOE lab, and the most memory we have in one of our machines is 2 gig, and it's not enough. We do finite element mesh generation, and the 100,000,000-element range requires more than 2 gig. I work on a DEC DS20 with 1 gig, and we're actively looking into upgrading it to its maximum of 4 gig just so we can do at least part of these huge problems.
  • I'm just wondering, now that AMD is working on a 64-bit chip without having an Intel counterpart to base itself on, how compatible will the two be when they hit the market?


    I believe they will be basing it on the Alpha chip -- also a 64-bit chip, IIRC. I can't wait to see one of these in my Linux box ;-)

  • The article talks about how FP on the x86 is stack based. However, I seem to remember Intel switching to a stack/register model back in the Pentium. I believe all the recent x86 chips have allowed you to address stack locations using ST(1), ST(0), etc., but you can still treat the registers as a stack if you want (a sketch follows at the end of this comment). I may be wrong though.

    The most important detail that the article leaves out is how the new 64-bit extensions will work. Personally I'm hoping that AMD keeps the current instructions but creates 64-bit extensions that work in a more RISC-like fashion: something like limiting addressing modes to register and register+constant, doing the more complex instructions in microcode, and, most importantly, adding another 10 or 20 general-purpose registers on the integer side.
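    For what it's worth, the x87 has long allowed both styles at once -- loads and stores push and pop, while arithmetic can address registers relative to the top of the stack. A minimal GCC inline-asm sketch (the function name is mine):

        /* Sketch: x87 registers addressed relative to the stack top.
           fldl pushes; faddp addresses st(1) directly, then pops. */
        double x87_add(double a, double b)
        {
            double r;
            __asm__("fldl %1\n\t"              /* push a -> st(0)     */
                    "fldl %2\n\t"              /* push b; a -> st(1)  */
                    "faddp %%st, %%st(1)\n\t"  /* st(1) += st(0); pop */
                    "fstpl %0"                 /* pop result into r   */
                    : "=m"(r)
                    : "m"(a), "m"(b)
                    : "st", "st(1)");
            return r;
        }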

  • The widely-used M680x0 was a representative chip design for its time (1979 and on) and featured a full 32-bit internal architecture, although its data bus was only 16 bits wide. The 68020 (1984) had a full 32-bit data bus.

    There are zillions of 32-bit operating systems for IA32. Some of the more well-known include the various BSDs, Linux, Windows NT, Solaris, and NeXTStep.

    MJP
  • Yeah, now let's just all hope that Apple doesn't sue them for infringing on the "sledgehammer" metaphor. Hehehhehehehe =:-)
  • I work in an environment where real computing starts at 16 GB of RAM. Scientific computing. The funny thing is that it's not the programs. Often the programs take up a couple of hundred KB in memory themselves, but the data takes up huge quantities. I'll use an example I saw on some Beowulf site: Los Alamos uses a Beowulf cluster to compute a nuclear explosion through 1,000,000 atoms, and each atom takes up who knows how much. Even at a single K per atom, that's a gig. As I remember though, a single K would be kind of small for an atom. I didn't understand either until I saw it. It's just insane.
  • Well, the MHz vs. performance issue has finally been tested. Notice the speed differences among a 400MHz G4, an 800MHz Athlon, and a dual-600MHz Pentium III system. Guess who comes out on top most of the time in real-world tests?
    Bare Feats speed comparison of three high speed desktop systems [barefeats.com]

    -----
    Linux user: if (nt == unstable) { switchTo.linux() }
  • Considering that smaller, faster CPUs are running circles around chips with more MHz tacked on to their name, we need a new standard in selling computers to consumers -- something similar to how Rambus is listed as PC600/PC800 and DDR RAM is PC2100 (because it's inherently faster even though its clock speed is slower). If, say, a PIII runs at 800MHz it would be sold as a "CPU800" -- and a 500MHz G4 would then be listed as a "CPU850" or so, taking into consideration the architecture's performance. Such a naming convention may help level the field between all competitors, as long as they are all tested against the same benchmark. Notice the speed differences between these desktops with wildly different clock speeds (400MHz G4, 800MHz Athlon, dual-600MHz PIII):
    Real world speed test web page [barefeats.com]

    -----
    Linux user: if (nt == unstable) { switchTo.linux() }
  • Before you all go bashing on AMD, realize that this article is *NOT* AMD PR. This article was posted by one of the brothers who run amdzone.com. Look at the whois information on amdzone.com and you will see that it has *nothing* to do with AMD corporate. True, amdzone does post very substantial stuff, but this particular article isn't up to their standards imho.

    djx.
  • Whereas Intel has the IA64-ready compiler from the RedHat folks and so on (and puts less emphasis on the x86 compatibility of the Itanium), will AMD's x64 architecture get its own compiler, or will the x64 end up as a marginal improvement to a dying breed?
    --
  • The G4 isn't 128-bit; the vector unit is. The FPU is 64-bit and the integer unit is 32-bit. What the G4 has is plenty of registers (AltiVec has its own) and a shorter pipeline that makes it more efficient (although it makes it harder to clock up). The G4+, out this summer-fall, will have a 256-bit, 256KB on-die cache and 2 more int units; the AltiVec unit will be able to execute 2 vector instructions in a clock cycle, and another FPU will be added. Plus, 36-bit memory addressing will enable it to access 64GB of RAM instead.
    --
  • Oh, and I forgot to add: they are increasing the pipeline to 7 stages, allowing it to clock up more easily.
    --
  • More MHz == better performance? Would a 1GHz 386 be faster than a 500MHz PIII? No. Design of the processor, cache architecture, instructions issued per clock cycle, ISA, etc. all have bearing on the processor's performance. Also, why is 'IMAC' all in caps? Could you tell me what it stands for?

    If x86 is sooooo bad, don't wait and buy a fucking pink IMAC and stop BSing about something you don't know.

    Why is the Macintosh the only other alternative to x86? How about Alpha or Sparc? Or was this just another opportunity for you to bash 'pink' iMacs?

    (please no ALTIVEC RuLeZ pathetic replies)

    You should go to arstechnica [arstechnica.com] and check out the article comparing SIMD units. You'd be surprised. I'm not ragging on the x86 architecture, but you have some weird ways of determining performance (MHz only).
    --

  • Alpha doesn't play in the same ball park? Pull-ease... I've seen cheap Alphas going for the price of slightly high end P3 and G4 systems (~$2000) with impressive specs. I know people with Alphas at home.

    And yes, bashing pink "iMac" is really funny and YOU KNOW WHY !! ahahahaha

    That says it all.
    --

  • Did you really want me to start YET ANOTHER flamewar ??

    No, that wasn't my intention. Do you?
    --

  • Why am I so suspicious of the statement, "eliminating the 'memory limit' barrier"?

    Because it's not really true? Do a little math: let's take the memory size of "today" as 128 meg (=2^27 bytes). That leaves 64-27 = 37 memory-size doublings before the 64-bit memory space is full.

    The article says memory size has increased by a factor of 16 in the last 8 years -- if we extrapolate that out, solving

    2^(27 + 4n) = 2^64 => n = 37/4
    it will take 37/4 = 9.25 8-year periods (or about 74 years, give or take) for memory sizes to reach 2^64bytes.

    Now, I am by no means convinced that memory sizes will continue to be driven to increase at the same speed they have in the last 8 years. OTOH, any kind of sustained exponential growth will fill up the 64-bit address space within a few centuries... (to be sure, not something we need to worry about any time soon; a quick back-of-the-envelope sketch follows below)

    -y
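    A throwaway C program to check the arithmetic above (the growth assumptions are the parent's, not established fact):

        /* Sketch: years until 2^64 bytes, assuming RAM grows 16x
           (i.e. 2^4) every 8 years, starting from 128 meg (2^27 bytes). */
        #include <stdio.h>

        int main(void)
        {
            double doublings_left = 64 - 27;        /* 37 doublings */
            double periods = doublings_left / 4.0;  /* 4 per period */
            printf("%.2f eight-year periods = %.0f years\n",
                   periods, periods * 8.0);         /* 9.25 -> 74   */
            return 0;
        }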

  • It wouldn't be unlikely to see one of the companies take the Microsoft "Embrace and Extend" approach to the CPU market also, where a program compiled for an AMD would work on an Intel chip, but programs written for Intel would not work on an AMD.

    I am not implying that Intel would do this; AMD could very easily do this also. This was merely an example showing that once binary compatibility is broken, it opens the way for one competitor to take very aggressive measures to ensure their own success.
    --------------------------------------------
  • Agreed, writing x86 asm is horrible. The machine has too few registers. I could use more registers instead of moving temporary results in and out of memory most of the time.

    I have to side with Intel on this one. As much as I like the AMD x86's, I'd like to see the architecture dead already and replaced by something better suited for modern applications. The x86 is carrying too much bloat. Perhaps a G4 or an Alpha would be a good alternative.

    One problem is that software vendors haven't written their programs with portability in mind. IMO fair practice in the software business would be that any user upgrading to a new architecture would get the application for that architecture for free as soon as available. This is one service that open source can provide and commercial applications can't. Backward compatibility sucks; closed source software has turned it into a ball and chain for both the CPU manufacturers and the users. Honestly, I think that AMD has a shot at winning this one, but I sincerely hope they do not.

  • Heh, I attended a short lecture on the x86-64 architecture last week at the GDC [gdconf.com], and the guy who gave it specifically noted that the lack of registers on x86 was a pain in the butt... He also hinted pretty strongly about Sledgehammer supporting SIMD-type processing on double-precision floating-point numbers. Cool!
  • Oh come on, Linux, tabarnak!
  • Design a new, but not Itanium-compatible, 64-bit CPU. Technically, a far better choice. But they knew they don't have Intel's marketing strength to impose this new arch on the world.


    It's not just that. The development cycle on a hacked x86 architecture is bound to be shorter than that of a new architecture.

    Also (and I may be wrong on this one, so don't holler murder just yet), I was under the distinct impression that the EPIC instruction set was public and already published, just like the x86 instruction set. So AMD could build a chip too; they just chose not to.

    Rami James
    Altec Lansing R&D, IL
    (All comments are my own, and not necessarily those of my employers.)
    --
  • Although I'm no pro, this is further proof that AMD is a GREAT investment. I bought as much of it as I could around when the K7 was being talked about ($17/share), and now it's gone up to around 50! At the rate they're moving, I don't see this company going anywhere but up. Two thumbs up for investor relations and profitability.


    Mike Roberto
    - roberto@soul.apk.net
    -- AOL IM: MicroBerto

  • [...] you won't believe how people judge a CPU only by its rated MHz. For them, K6/2 500 == Athlon 500 == Alpha 500.

    Oh, I believe it. Anyone using a PowerPC is well aware of this. :) Comparing a 500MHz G4 to a 500MHz PIII is apples and oranges, but you can't force-feed clues to anyone.

  • Geez, you must have a crappy viewer. Load the JPG using your 128-bit MMX/SIMD/3DNow!/etc. instructions (or your floating-point unit, by typecasting to a double float to block-load a bigger chunk of data -- useful on machines such as Apple's G3) and optimize the JPG decoder to use those units (on the G3, use the float unit) as well. Since everything except MMX can do parallel operations, you can devote the integer unit(s) to decoding instructions (i.e., use them [nearly] exclusively for 32-bit memory addressing) and your other units to decoding the JPG, and therefore you save your precious integer cycles.

    If your pr0n - er, sexy pics - are taking up more than 4GB of memory (64GB on 36-bit) THEN you have problems... but since most JPGs aren't this big (I believe I remember the upper limit being 4GB, actually), I think you're still safe.

    My point is (aside from being snotty ;) that 64-bit int units aren't gonna boost performance much, if at all, which is why most PCs are still 32-bit.

  • by Kinthelt ( 96845 )
    Sounds like AMD's going to speed their FPU up somewhat. It's about time. I've been cranking along on my K6/2 without any FPU pipelining, feeling like 1/3 the speed of an Intel.
  • Really, the current CPU market quite needs a CPU that can handle loads and levels such as these. With the line between high-end server and high-end gaming becoming unclear (a friend and his pair of CuMine 700s for gaming), it is nice to see -- for now -- a pure business server CPU.
  • Okay, as a software engineer, in my limited experience here: if a server needs 16GB of RAM as this guy predicts, either things are going to be moving to a VERY server-centric environment or.. there is some SERIOUS bloatware being written.

    In 2010, when the film industry wakes up and I make my millions off a video streaming server, I will be wanting my most popular films in RAM, say 10 films. Soon after that, when I begin streaming CD-quality music too (WAV files, not MP3s), I will need to keep the top 40 in RAM also. I can safely predict that I will need well over 16GB of RAM to do that.

    If that's not the future of music and video then it damn well should be.

  • Assuming that 'gates law' of memory expansion holds, this means in another 6 to 8 years, consumer-end computers will ship with 2 to 4GB of RAM, and high-end servers will need at least 16GB of RAM. A 64-bit processor can map thousands of terabytes, (1.84e7) effectively eliminating the 'memory limit' barrier.

    Gates' law? I thought that was a sarcastic blow at MS for making software that requires twice the RAM every year. I think what the author meant was "Moore's Law".

    In any case, I think 64 bits will bring on a wonderful new world of memory-mapped file systems, 4 hours of DVD-quality porn cached to RAM, and yet another excuse to buy a bigger, badder, "faster" computer.

    -Jon

  • Actually, that would be better worded as "The Microsoft world didn't see a true 32 bit OS until NT came out". There were a lot of true 32 bit OSes out before NT, even on the x86 (SCO UNIX, Microport UNIX, Interactive UNIX, etc). On non-x86...

    this discussion is so lame.. *sigh* ...

    SCO Unix was XENIX, which was a Microsoft Product.

    -Jon

  • As Gates once said: "640k should be enough for anyone".

    As I once said: "WOW! A 1GB hard drive! Imagine the games you can store on that!"

    Everything will be obsoleted in time.

    Although I have to agree, 18.4 million TB should take a long time ;)
  • >Personally, I can't imagine how AMD can success with this.

    What matters to the market is speed and compatibility, not elegant design. (Otherwise, the x86 would be long dead.) Therefore, AMD can succeed, at least in the short and medium term, by selling a CPU that runs existing code faster than Intel's new CPUs. Even if the x86 market shrinks once IA-64 appears (and that's not a given), AMD can win big simply by claiming all of a smaller market, rather than part of a larger market.

  • As I understood it, Intel was merely talking about the 64 bits, not the instruction set. After all, if they thought it would be standard in 2005 to still have the same stone-age command set, why would they move away from it? And how could the old architecture become standard?
  • I never had difficulties writing x86 asm, did YOU have ?


    It gives all the power of RISC and the nice things of CISC (variable length instructions).

    The point is this: writing assembler code is nowadays not something many people do. A different command set would produce MUCH faster code on the same architecture, especially with good compilers. The thing the x86 is lacking most is general-purpose registers that could be used for compiler optimization.

    The fact that x86 processors run at higher MHz than any other should be a hint for your pedantic biased mind.

    And the fact that other architectures have higher performance despite running lower clock rates should be a hint to you.

    Of course this is slashdot, so maybe I'll get moderated to -1 AGAIN for saying x86 is good.

    Go run to your mommy. I'd like to know how many /. readers are using alternative platforms ?

    The good thing about x86 is that it's cheap. It's also fast, sure. But it could be a helluvalot faster if the same amount of time and money spent to squeeze some more performance out of a geriatric architecture had been spent on developing a new one.

  • This "preview" is pure vapor. There's practically no concrete information on the actual architecture of the chip, and without such information there can be no real estimation of its performance. If I understand it right, they are in fact keeping the x86 instruction set, and that is a serious damper on performance.
  • Apparently it is you who does not have a clue. The x86 architecture has by now become an ugly, horrendous hack job of patches upon patches and desperate attempts to somehow keep something that was designed 20 years ago up to date.

    Practically every good feature of it is an afterthought that would work much better in a completely new architecture.

    The fact alone that all current x86 CPUs are actually internally RISC processors that have to translate a command set totally unfit for RISC methods should be telling enough.

  • This is the first real offering that doesn't mimic the direction set by Intel.

    ...and instead keeps mimicking an architecture that even Intel is about to abandon. How again is this showing AMD's independence?

  • But the true irony of all this is that it is in Microsoft's favor too! Almost a catch-22 if you dislike the Wintel duopoly.

    I thought this for a while, and then realised that there are 2 things that could redress the balance.

    1. AMD are out of the server market for the most part. I don't think they have their sights set on it in the way MS do. AMD's main market is, and always has been, the home user. It would be interesting to see an AMD server solution, simply for the fact that it would be power at a better price. On top of that, the clock speed that the Sledgehammer runs at is likely to be a lot higher than Itanium's, so 64-bit apps on Linux (as long as the compilers are there) shouldn't be a problem.

    2. I doubt AMD are in a mood to do MS any favours at the moment, after MS went with Intel for the X-Box. It must be hard to forgive a corporation that strings you along like that (although Intel's offer was too good to refuse, it was a nasty move that could only be made thanks to the monopoly it holds).

  • Also, there is speculation about AMD releasing the Sledgehammer with multiple chips on a single die.

    If this is real, I wonder how difficult it will be for Linux to support such an SMP system. Think how long it has taken to get the true SMP support that is promised in 2.4. Of course, this could be presented by the motherboard as a normal SMP system, but I think that kind of mobo would not be cheap!

  • The silly thing I found in that article was the claim that Intel would be running x86 code through software emulation. My understanding is that the IA-64 uses hardware emulation, and that the big speed barrier they were having for running x86 code was clocking the chip up to speeds that didn't melt a hole in the motherboard...
  • I know the article is written from a very pro-AMD point of view. But in the second paragraph of the details section [amdzone.com], it speaks to the paraphrased statement from Intel that they don't expect this technology to become standard until at least 2005. I would take this to mean that Intel is not marketing a chip of this flavor for now, but is looking at it by 2005. I didn't get the impression that Intel has abandoned or is about to abandon the architecture if they feel it will be standard in 2005.
  • by 348 ( 124012 )
    Good point, maybe I should have said an x86 on Viagra. 8)
  • Intel should have realised they were heading towards a superscalar architecture when they designed the 386.

    IIRC, Intel did try to replace the 386 soon after it was released with a much better design. However, many people were already using the 386 and were not interested in a chip that wasn't backwards compatible.
  • ...is that Sledgehammer will continue to tie us to the (seriously stretched to its limits) x86 architecture, warts and all. I for one had hopes that IA-64 would give us a clean break in processor architectures, but Sledgehammer, if successful, will likely lead us into another decade chained to the x86 legacy. Hooray.

    -- WhiskeyJack

  • Even with NFS, which can be implemented using very little memory for the application itself, the computer doing the serving can really use lots of RAM. Why? Cache. Disks are much slower than modern networking. If you have four gigabit ethernet connections coming out of a host, that's a maximum throughput of 500M/sec. MAXIMUM SCSI rates are in the 160M/sec range nowadays, and that's not taking into account latency and such. On the server side, cache really, really helps out, so many times it's beneficial to stick several gigs of RAM into the machine. Even if you're only using, say, 64M for program space, having the additional 15.9G for cache can really help out. (A quick bandwidth comparison follows below.)
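    The arithmetic, as a throwaway C sketch (the link and SCSI figures are the parent's round numbers):

        /* Sketch: why cache matters -- the network can outrun the disk.
           4 x 1 Gbit/s links = 4 Gbit/s = 500 MB/s, vs. 160 MB/s SCSI. */
        #include <stdio.h>

        int main(void)
        {
            double net_MBps  = 4 * 1000.0 / 8.0;  /* 500 MB/s */
            double scsi_MBps = 160.0;             /* Ultra160 */
            printf("network %.0f MB/s vs. disk %.0f MB/s\n",
                   net_MBps, scsi_MBps);
            return 0;
        }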
  • Intel is going to be backing away from the x86 market to push IA-64 - an unknown, expensive quantity with very little software running on it and poor x86 performance.

    I do not see the poor x86 performance as a serious issue. GCC has already been ported to produce native code, and Linux is ready to go. There is presumably a bit of glibc work left but I fully expect that a full native distribution will ship the same day as the CPUs. Full native == no x86 performance problems, and full native == full software availability. The real problem with Itanium is that its performance is going to disappoint, even in native mode. By the time Intel works out the problems inherent in any new processor, not to mention the problems with their supply chain, their competitors will be well ahead of them, shipping stable, mature products with superior performance and, thanks to not having to supply two chips in one, at lower cost as well.

    Add a little "64-bit" gloss to Windows 98, and AMD might gain some serious market share here.

    Who's going to do this? Microsoft doesn't have any incentive to do it; they're already selling all the winblows 98 they want. They're going to be far too busy trying not to lose their shirts to Linux in the IA-64 sphere to give two shits about a warmed-over 64-bit AMD with minuscule market share. Anyone expecting to see winblows on native Sledgehammer is in for a nasty surprise. Linux probably will be ported at some point, but by that time I doubt the chip will still be in production.

    If it's priced the same as Intel's 32-bit chips

    It won't be. You know it, I know it, AMD knows it.

    On the other hand, if it's more expensive than a 32-bit chip and doesn't offer any real advantage to the average user, what's the point?

    Bingo. This is a product in search of a market. The engineers (and probably the lawyers too, but that's just rampant speculation) told the droids that it would be easier to produce a 64-bit x86 processor than an IA64-compatible one. So that's what they're going to do. I don't think AMD ever even considered something non-Intel-compatible; it was more a question of which Intel to be compatible with. They just wanted to say "we have a 64-bit processor too." Well, big deal, AMD; everyone has a 64-bit processor these days. Having the weakest one of all isn't going to turn any heads, unless it's at the next shareholders' meeting.

  • Microsoft ... the vendor that has the most to gain from IA64.

    Really? I don't think so. I think they can gain little but lose a great deal. The ultra-low-end market is their strong point, which will be totally unaffected by IA64 for at least 5 years. The high end, which is supposedly targeted by IA64, already has a strong competitor, Linux. If M$ gets their act together, they can possibly hold their current market share. If not, their days of producing anything but low-end software for home users will be over. They aren't going to gain much, no matter what happens. First, Itanium won't offer the performance of its "traditional server vendor" rivals. So it's not going to grab much share from them, if any. Most of the IA64 share will come out of the existing peecee market. People running Linux on peecees aren't going to run out and buy M$ for IA64. The most M$-optimistic outcome is that current enntee users migrate en masse to IA64/enntee. In which case M$ holds its current market share, which, in this environment, isn't very high (37% or so last I saw). If any current enntee/peecee user needs more performance but is unwilling to leave Intel, he may find that his only choice is Linux - that is, if M$ doesn't get their IA64 OS done in time. In that case, M$ loses more market share to Linux and misses the opportunity to gain a foothold in the IA64 world.

    So what can they gain? Not much. They might convince a few of the dumber IT manglers that Itanium + enntee can compete with traditional high-end solutions, or maybe that it's better than Linux (unlikely given current trends), but this certainly isn't going to help them very much. At best, they stand to gain a very small chunk of the market. At worst, they stand to lose their entire presence in the non-home market. Given this fact, I join you in surprise at their decision making. Not because they have so much to gain but because they have so much to lose.

    If anyone comes out of IA64 a big winner it will be Linux, which will be the only option that runs on current peecees, future peecees, and the majority of the workstations/servers using non-Intel CPUs as well. It certainly won't be M$, and it probably won't be Intel. During this time I fully expect Intel to lose a good chunk of their customers to the traditional RISC vendors (Sun, SGI, etc) and people like IBM. Itanium will flop initially at least, and Intel isn't going to come out of it looking good. Nor is AMD, especially if this Sledgehammer really represents their primary plan.

    I sorta wonder if the whole "Sledgehammer" hype is just FUD smoke that AMD is blowing at IA-64.

    Seems like a pretty good theory. But I'd FUD better than that if I were AMD. Of course, there are plenty of good reasons to have F, U, and D about Itanium that have nothing to do with AMD.

  • The article did mention AMD adding support for 16 or 32 directly addressable floating-point registers, which would help in large part to relieve the register pressure of IA-32.
  • I forget what they were called, now, but Intel's version of the future was *not* the 8086, but the 8600 (?). The 8086 was just supposed to be a transition chip for which 8080 code could be easily cross-compiled. But silly IBM made a PC with the thing, and those three letters took over . . .

    The 186 was meant for controllers, but Radio Shack and a couple of others used it in desktops.

    The 286 would buy a couple of more years, and iirc, the 386 was supposed to be the flat-out end of the line (and a lot further than had been planned at the time of the 8086).

    Then a team managed to come up with the 486, and kept going a while . .
  • AMD doesn't have the market presence to make a new standard architecture. They don't have the legal rights to make a clone of the IA-64 chips. Thus, they have to stick to the x86.

    They have extended x86 to include a 64-bit mode. No, the instruction set isn't really "RISC", but that doesn't mean much anyway. The internals of x86 chips for some time have been RISC-like processors, with complicated decode/translation units on the front-end.

    The instruction set that you use to program a chip says little about the instructions that are actually getting executed by the core. In the decode, the x86 instruction is cracked into smaller instructions to do a simple load, store, or compute. I believe that AMD has done this since the K5, and Intel has done this since the PPro.

    Even the old VAX (whose instruction set was as CISC as CISC got) translated its instruction set down into microcode for what was essentially a load/store back-end. That is when people realized that that decode complexity could be moved into the compiler instead, and everyone started talking about RISC.

    AMD has already worked out the problems of dealing with an x86 instruction set. Decode on the Athlon is nastier than it should be as a result, but it works. The technological problem has been solved. To AMD, a more complex decode unit is a small price to pay for compatibility with the massive x86 code base.

    Given Intel's slip-ups with Merced, it's just possible that AMD might gain some market share with this architecture. From an aesthetic point of view, it is unfortunate that it is still a child of the x86, but that is the fate of the PC. IBM made a standard with DOS/x86, and that standard holds to this day in Windows/P6.

    --Lenny
  • We're seeing a true fork in the development road, and I wouldn't expect an application compiled for a 64-bit AMD processor to execute at all on a 64-bit Intel processor. And an app compiled for a 64-bit Intel processor certainly will not execute on a 64-bit AMD processor.

    Intel (& HP) started with a fresh instruction set architecture for IA64 -- which means they don't need to worry about dedicating transistors or limiting design considerations to supporting decisions made 25 years ago (though I understand Itanium will have an IA32 emulation mode). Further, IA64 is using an advanced form of VLIW; AMD is creating a 64-bit extension to IA32 -- superscalar, not VLIW. Which is why they will be mutually incompatible. Sledgehammer will be to today's AMD & Intel processors what the 386 was to the 286. Itanium will be to today's AMD & Intel processors what the 68K was to the x86.


    Christopher A. Bohn
  • In the sense that Sledgehammer is supposed to be a 64-bit extension to IA32 (similar to the 386's extension to the 16-bit instruction set in the 286), the existing gcc will compile code that would execute on Sledgehammer (just as code compiled for the 286 would execute on the 386 -- heck, I've got code compiled for my old 8088 executing on a Pentium). It won't be the most efficient use of processor or memory, but it'll execute. I'd also expect that 1) a 64-bit extension to the IA32 will quickly find its way into gcc even without AMD's help, and 2) AMD will help -- it'd be the smart business decision.
    Christopher A. Bohn
  • The upcoming 2.4 kernel will handle 64 GB of RAM.

    But to do it right you do indeed need a 64-bit processor.

    As for there being no memory limits with 64-bit processors, oh really? The people who put together data warehouses can put the lie to that one!

    Cheers,
    Ben
  • It definitely was a good survey piece, but did anyone notice that there were wayyyyyy too many WAGs (wild-ass guesses) in the article?

    (Offtopic): Does anyone know where I can find a good technical description of AMD's roadmap? I'm trying to figure out what kind of SMP Athlon I can expect to possibly buy in Q4, what the new chip designs will incorporate, L2/bus combinations, etc.

    -Erik

  • The 68000/68010 had 16-bit internal data paths and three 16-bit ALUs. Motorola advertised it as a 16/32-bit processor. The 32-bit features were done with microcode. The 68020 was the first fully 32-bit member of the 680x0 family.
  • Check AMD's old PR. One of the first presentations on Sledgehammer was actually given to Alan Cox and Co. If not the first one.

  • Correct. Besides, the most important point: no more of this ugly emulation of an HP calculator for the FPU. It is at best inappropriate for the modern world.

    On the less important points -- 32- to 64-bit integers and 32- to 64-bit addresses:
    have a look at the 386-Pentium tech reference. You can see that almost anything besides a few control registers and some stuff related to weird 48-bit addressing modes can be happily extended to 64 bits.

  • The market will have two mainstream 64-bit processors: the AMD Sledgehammer using an extended x86 instruction code, and the Intel Itanium using the IA-64 instruction code. Consumers are screwed. Right now the P3 and Athlon are rad because they can run all the same binaries. In the 64-bit universe you'll have two options: chip-optimized code, or huge binaries with 32-bit x86 instructions and a few optimized 64-bit instructions (like writing a binary with support for SSE and 3DNow!). My guess is that the software guys will tell the chip guys what they can do with their silicon. Current software and OSes will fork into at least two camps. The only recourse for people will be open-sourced software or NeXT-ish packages containing binaries and libraries optimized for said processor. This is going to create so much damned market confusion. In a couple of years there are going to be half a dozen different hardware architectures in the mainstream market, again. I wonder if this is a good or bad thing; I suspect for most people it might be a bad thing, being as some software companies will only support x number of OSes and chip architectures. I figure what will probably happen is that smaller software companies that can only support one or two different systems will get eaten up by the Microsofts, because they won't be able to afford to stay competitive in enough markets. Hmmm.
  • Although I found the non-x86 FPU and flat FPU details interesting, there is a lot more to all this.

    I think the site is unaware that GCC has been capable of 64-bit compilation for a long time, and that 64-bit Linux is already here with the Alpha (several years mature, thanks to an industry that is using it). The basic IA-64 port and compiler are already completed (with optimizations slowly but surely being added). 64-bit Linux has a solid foothold in IA-64's release.

    But does that count AMD out? No. I do not see it too difficult for AMD to _extend_ the 32-bit GCC compiler to support its 64-bit extensions. It won't be a full-up project like the IA-64 port and compiler which has radical differences. Remember, Sledgehammer is all about compatibility, and that is in AMD's favor. It will be easy to extend Linux to support some of the 64-bit functions of Sledgehammer, first starting with the kernel, then the apps later on (as 32-bit x86 dies).

    But the true irony of all this is that it is in Microsoft's favor too! Almost a catch-22 if you dislike the Wintel duopoly. If IA-64 succeeds, Linux has that much more of a chance of wiping NT off the planet in the server and workstation realm. If Sledgehammer succeeds, Windows has a much better chance of going 64-bit with the little effort traditionally found in their Windows products.

    It's going to be interesting to see how this all unfolds. But one thing is for sure: just like with the Alpha, Linux will allow IA-64 to sell regardless of any Windows support.

    -- Bryan "TheBS" Smith

  • this discussion is so lame.. *sigh* ...

    Then why participate at all?

    SCO Unix was XENIX, which was a Microsoft Product.

    Partially true; however, XENIX was a 16-bit product at the time that Microsoft sold it off to SCO. Also, by the time the product was renamed 'SCO UNIX' and became 32-bit, the XENIX kernel had been largely replaced by SVR2 code. The closest XENIX came to 32-bit while it was still a Microsoft product was the 68K versions, which ran on a processor with 32-bit internals and a 16-bit external bus (the first true 32-bit 68K, the 68020, wasn't out at that time). XENIX was also itself largely based on the Bell Labs Version 7 of UNIX, so Microsoft can't really take much more credit than having done a port.

    The truly sad thing is that it took so long from the time Microsoft abandoned XENIX to the time that NT came out -- around 10 years. Microsoft certainly could have had a 32-bit OS in 1986 or 1987, when the 386 started showing up, had they been on the ball. Heck, they bailed out on OS/2 before a workable 32-bit version of it came out, which is part of the reason they were so late to the 32-bit game.

  • We didn't see a true 32 bit OS until NT came out.

    Actually, that would be better worded as "The Microsoft world didn't see a true 32 bit OS until NT came out".

    There were a lot of true 32 bit OSes out before NT, even on the x86 (SCO UNIX, Microport UNIX, Interactive UNIX, etc). On non-x86 processors there were dozens, dating back to at least the mid 70's. Since you mention DEC specifically, an example would be VMS. For that matter, they had Ultrix (which started as a thinly disguised 4.3BSD port) out in the mid 80's.

    I'm actually sure you already knew that, but some of the people out there who know nothing but Microsoft might not.

  • As others have mentioned, the Itanium and the Sledgehammer will probably not play well together at all. What I have read says that the Sledgehammer will be leaps and bounds better than the Itanium at running existing 32-bit apps. This is due to the fact that AMD is building on top of x86 while Intel is starting something entirely new. There was a great article on Tom's Hardware about how Intel is very afraid because the 32-bit performance of the Itanium is just piss poor. The theory is that Itanium is a server CPU and should only be running a handful of apps. But sooner or later those server CPUs trickle down to consumer CPUs. And consumers tend to frown on replacing every app they own.

    -B
  • I can just imagine the amazing confusion that will happen when they get around to trying to explain to the average consumer about "this type of 64 bit computer" as opposed to "that type of 64 bit computer". And how much heartache it will cause consumers to try to figure out *which* applications they can run.

    It seems to me that AMD, by choosing a non-IA64 architecture to compete with Intel's 64 bit processors, is basically splitting the market for themselves. Average consumers looking for a next-gen processor will most probably not understand the difference, and (maybe) pick one randomly or (maybe) go with the one with the better known name (Intel, right now). Much confusion abounds.

    Also, by using an extension of x86, AMD puts software vendors in a bind as well. Instead of allowing all the vendors to just recompile their software to be IA-64 compatible, they'll have to keep Sledgehammer and IA64 versions. Much confusion abounds again.

    I think that if they could, AMD should have joined the IA-64 alliance and released an IA-64-compatible processor.
  • If I were AMD, I would be working fast on the SMP motherboards. I'm positively sure that everyone is interested in them.
  • Good question. I think it's fairly safe to assume that they will only be compatible in the 32-bit modes. For 64-bit purposes, add one more architecture that will have to be supported, if successful. In reality, of course, it is usually not that bad, as this may increase the chance that another 64-bit architecture will fall by the wayside.
  • The author mentions the need for a single process to map over 4GB of RAM. This is really only a very small piece of the puzzle, though. The OS can really benefit from seeing more than 4GB. Win2k supports Intel's PAE (36-bit addressing on IA32) for up to 64 GB of memory. Win64 on Itanium/IA64 supports up to 16 TB of memory (see the sketch below). This greatly reduces swap and allows massive disk caches. Programs can be left in memory all the time. Swap is the biggest killer of time, CPU, and overall performance. File system caches can grow so that the second time a program or library is loaded, it comes right out of cache.
    Consider a web server/database system. The web server can keep all of its static files in memory. The database can keep all its structures in memory. In Win2k, applications can allocate physical memory that will never be moved or swapped out. This gives applications that need it total control. Memory is the only really fast component in the system. The more memory a system has, the less often it has to access its disks.
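    To put those ceilings side by side, a minimal C sketch (the 2^36 and 2^44 exponents are inferred from the 64 GB and 16 TB figures above, so treat them as assumptions):

        /* Sketch: address-space ceilings mentioned above. */
        #include <stdio.h>

        int main(void)
        {
            unsigned long long ia32  = 1ULL << 32;  /* plain IA32: 4 GB  */
            unsigned long long pae   = 1ULL << 36;  /* Intel PAE: 64 GB  */
            unsigned long long win64 = 1ULL << 44;  /* Win64/IA64: 16 TB */
            printf("IA32:  %llu GB\n", ia32  >> 30);
            printf("PAE:   %llu GB\n", pae   >> 30);
            printf("Win64: %llu TB\n", win64 >> 40);
            return 0;
        }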
  • Yeah, so this article wasn't too techie.. big deal.. there's no way an article this far ahead of the release could be very techie.. I don't think Intel and AMD are sharing all their secrets like good little children.

    The article does make some good points, and shows a way that AMD could really get some market share. I know all of you aren't going to like it, but it's a good idea. I'm talking about this part: Compaq has said that it will not support a 64-bit Windows for its Alpha servers, which leaves Microsoft looking for some way to get into the server marketplace. Enter the AMD Sledgehammer. Microsoft could develop a 32-bit extended version of Windows that it could, over time, turn into a native 64-bit OS. If AMD splits Microsoft and Intel over the 64-bit OS issue, it would do some damage to Intel and Windows' collective solidarity. I know, our little Cinderella AMD would be getting in bed with big bad Microsoft, but we've got to take Intel and Microsoft out separately, not together. Competition, enter stage right..

    props to AMD.. for making the next few years of CPUs a little more interesting than the last few years of Intel MHz leapfrogging with no real breakthroughs.

    //Phizzy

    Afterthought: AMD.. I'm still pissed I can't get a dual Athlon, if you're listening.
  • I think it will be VERY similar to an Alpha chip. It will probably use an EV-7 Alpha bus, and may even be pin compatible. I saw a quote from AMD once that said something like this:

    "Someday, when designing a system, the very last decision you will have to make will be whether to use an AMD or Alpha CPU."

    I believe it was Jerry Sanders who said this. The Sledgehammer and the new Alphas will be cousins. I'm pretty sure Compaq/Alpha and AMD are sharing lots of ideas.
  • 1. Actually, the Athlon is selling better on the high-end than the low end. Last I heard, the Athlon had over 40% of certain high-end markets. The exception to this may be corporate IT departments, which are a little too set in their ways - i.e. Run windoze on an Intel processor or we'll have "compatibility" problems. Compatible with what? Windoze on an Intel processor, that's what.

    2. The X-Box thing was mostly AMD's decision. AMD said that they could have gotten the X-Box, but they would have been "giving away their processors to do it." Which they weren't willing to do. I don't think they hold bad feelings toward Microsoft because of it. Actually, other recent interviews have indicated that Microsoft and AMD are still the best of buds.
  • From the article:

    "This means in another 6 to 8 years, consumer-end computers will ship with 2 to 4GB of RAM, and high-end servers will need at least 16GB of RAM. A 64-bit processor can map thousands of terabytes, (1.84e7) effectively eliminating the 'memory limit' barrier."

    a) 1.84e7 terabytes is millions of terabytes (18.4 million of them), not thousands

    b) Why am I so suspicious of the statement, "eliminating the 'memory limit' barrier." ?

    I guess 4 billion * 4 billion is an awful lot of bytes, but I clearly remember when the 386 came out and I was many years younger and more naive. I eagerly read every trade rag I could get my hands on, all of which happily gushed that nobody would ever need 4 gigs of RAM.

    Now that we've found applications for which we do need 4 gigs of RAM, what kind of apps would it take to use 18.4 million terabytes? (The arithmetic is checked below.)
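    For what it's worth, the figure does check out. A throwaway C snippet (the only assumption is whether a "terabyte" means 10^12 or 2^40 bytes):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double bytes = pow(2.0, 64);                      /* full 64-bit address space */
        printf("%.3e bytes\n", bytes);                    /* 1.845e+19 */
        printf("%.3e decimal TB\n", bytes / 1e12);        /* 1.845e+07: the article's 1.84e7 */
        printf("%.0f binary TB\n", bytes / pow(2.0, 40)); /* 16777216, i.e. 2^24 */
        return 0;
    }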
  • This topic has generated some comments on AMD's odd choice to stay with an x86 architecture, whereas Intel has finally decided to abandon it (except through emulation).

    As far as I can see, AMD had only two possible choices:

    1. Continue with the x86 arch and hack it once again. That's the way they have chosen.

    2. Design a new, non-Itanium-compatible 64-bit CPU. Technically, a far better choice, but they knew they don't have Intel's marketing strength to impose a new arch on the world.

    Lots of comments have said that this choice is the result of AMD's desire to dissociate themselves from Intel. I wouldn't say that: in my opinion, it comes from the fact that it would be impossible for AMD to impose a new CPU architecture.

    What do you think?

    Stéphane

  • Seems like more of an x86 on steroids

    Hm. And just how much can a 90-year-old on steroids achieve?

  • So AMD is going their own way. That's good. Intel's decision to build a Very Long Instruction Word machine isn't well thought of in Silicon Valley. There's even opposition within Intel. VLIW machines have been tried before, and never with much success. The compiler group from HP sent some people to speak at Stanford about compiling for the beast, and it's a very tough problem they don't know how to solve well. Early compilers for the IA-64 machines will probably suck. Optimization for those machines requires incredible amounts of instruction rearrangement (a toy illustration follows at the end of this post).

    Whether AMD can do better remains to be seen, of course.
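    To make the scheduling problem concrete, here is a toy illustration in C (made-up example functions, nothing to do with real IA-64 code) of the kind of rearrangement such a compiler has to discover on its own:

    /* A wide-issue machine wants several independent operations per
       cycle, but ordinary code is full of serial dependence chains. */

    /* Serial: every multiply needs the previous result, so most issue
       slots in each bundle would sit idle. */
    double chained(const double *a, int n)
    {
        double p = 1.0;
        int i;
        for (i = 0; i < n; i++)
            p = p * a[i];              /* depends on last iteration's p */
        return p;
    }

    /* The rearrangement the compiler must find (and prove safe, since
       it reassociates floating-point math): independent accumulators
       that can share a bundle, combined at the end. */
    double split(const double *a, int n)
    {
        double p0 = 1.0, p1 = 1.0;
        int i;
        for (i = 0; i + 1 < n; i += 2) {
            p0 *= a[i];                /* these two chains do not */
            p1 *= a[i + 1];            /* depend on each other    */
        }
        if (i < n)
            p0 *= a[i];                /* odd leftover element */
        return p0 * p1;
    }

    Scaling that discovery up to whole programs, across branches, with the hardware offering no out-of-order safety net, is the hard part.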

  • by 348 ( 124012 )
    At least with this product they are not strictly following Intel's direction. But I don't think they're heading toward RISC specifically. The "Details" section on the site has a blurb about Sledgehammer bridging between 32 and 64 bits, but not with the standard RISCesque architecture. Seems like more of an x86 on steroids.
  • The 64-bit chip will also allow for much larger memory addresses. The upper limit of the current 32-bit processors is 4 gigabytes. Of course, who the heck has 4GB of RAM? If you do, please call me, we need to talk. High-end servers are right now shipping out with about 1GB of RAM. The ability to map only 4GB of memory will start to become an issue sooner than some people would like to admit. Eight years ago most computers contained between 4 and 16MB of RAM, but today it is commonplace to see computers with 64 to 256MB of RAM, a sixteen-fold increase. This means in another 6 to 8 years, consumer-end computers will ship with 2 to 4GB of RAM, and high-end servers will need at least 16GB of RAM. A 64-bit processor can map thousands of terabytes, (1.84e7) effectively eliminating the 'memory limit' barrier.

    This is a snip from the article that was sort of bothersome and/or frightening to me.

    Okay, as a software engineer, in my limited experience: if a server needs 16GB of RAM as this guy predicts, either things are moving to a VERY server-centric environment or.. there is some SERIOUS bloatware being written.

    It is sad to actually say we need more and more and more RAM just to accommodate bloatware. When is Gates' law gonna start falling off??

    I can imagine it, but how many servers are gonna need much more than 16 gigs of RAM...

    Then I look over at my webserver, which is using 800MB of RAM, and I sigh.. How soon before the NT box eats all its physical RAM, starts swapping, and dies?

    The whole reason we added more RAM to the machine was so that it would stay up another 10 hours and we did not have to reboot so often.

    Enter the hell that is 'modern' software that slowly chews up memory and never returns it (a caricature of the pattern is sketched at the end of this post).

    Note the two Unix webservers I have use 1/4 of the RAM this NT box does, go down FAR less often, and serve equal or greater loads some days. :-)

    *sigh*

    Thanks AMD you will allow me to not have to reboot so often, G.
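    For anyone who hasn't had the pleasure, the leak pattern described above looks roughly like this in C (a caricature with made-up names, not anybody's actual server code):

    #include <stdlib.h>
    #include <string.h>

    /* Each "request" allocates a buffer that nobody ever frees, so
       resident memory only grows until the box swaps itself to death. */
    static char *handle_request(const char *body)
    {
        char *buf = malloc(strlen(body) + 1);   /* allocated... */
        if (buf != NULL)
            strcpy(buf, body);
        return buf;                             /* ...never free()d by the caller */
    }

    int main(void)
    {
        for (;;) {
            /* pretend each iteration is one incoming hit */
            handle_request("GET / HTTP/1.0");
            /* return value dropped on the floor: the classic slow leak */
        }
    }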

  • by stripes ( 3681 ) on Monday March 20, 2000 @04:47AM (#1191269) Homepage Journal

    First the article says

    It is the first product that AMD has developed that doesn't totally follow Intel's lead. The K5, K6 and Athlon were created to compete with equivalent Intel products: The 486, Pentium and Pentium Pro/II/III.

    If we ignore AMD's many non-CPU products, there is still the AMD29k, a fine RISC CPU that had some great success in the printer market and a few other embedded markets before it was discontinued.

    Shortly after that the article says:

    Intel has gone the "RISC-y" path while

    The IA-64 is definitely not a RISC. It has a few similar features, like being a load-store architecture, but it has a lot of unRISCy features. The instruction decode looks very, very complex (for no good reason). The modulo-scheduled register file, while having some resemblance to SPARC's register windows, is really a whole different beast (ironically having more resemblance to the AMD29k's "local" registers!). It is chock full of out-and-out scheduling restrictions (not as in "do this and it is slow", but "you can't do that", "if you do this who knows what happens").

    There are lots of interesting ideas in IA-64, many of which may actually pan out. But calling it "EPIC" rather than "RISC" isn't marketing speak; it really does have a lot of non-RISC attributes.

  • by Psiren ( 6145 ) on Monday March 20, 2000 @03:53AM (#1191270)
    I understand that Intel have been very helpful in porting Linux to the Itanium. Obviously, it's in their best interests. Will AMD be as helpful? I'd like to hope they will. A positive commitment from AMD to Linux would bring a welcome boost to their sales, methinks.

    Now weary traveller, rest your head. For just like me, you're utterly dead.
  • by FuriousJester ( 7941 ) on Monday March 20, 2000 @04:31AM (#1191271) Homepage

    I've read /. and AMDZone for a while now. I use and advocate the use of AMD products. When reading previews of new tech, I like to know who wrote the piece, and what the connection between the tech and the author is. AMDZone [amdzone.com] is run by Chris Tom, aka ruiner. Highlight the name attached to most posts on the site and check the email address. The site ruiner.net [ruiner.net] makes reference to his work on AMDZone [amdzone.com].

    The intentional use of "they talk about" in the post here, which suggests to the reader a separation between the poster and the site referenced, is definitely misleading. There is only one thing worse than faking impartiality: getting caught doing it. No, this is not a major sin, but it is a common marketroid sin, and it is one I prefer not to see either of my regular reads getting into. Ya gotta teach 'em while they're still youngins, else they never learn.

    This is just an FYI for those of us who know preview is another word for marketroid. There is probably some meaty goodness in the article, but remember the source.

  • by Greyfox ( 87712 ) on Monday March 20, 2000 @04:55AM (#1191272) Homepage Journal
    The last major change was to the 386, the first real 32-bit processor. We have been riding the 32-bit wave for at least ten years now, and we are beginning to see this wave crest.

    The first real 32-bit processor from Intel, maybe. DEC released their first 16-bit processor in 1970. I'm sure that by the time the 386 was introduced, they'd been doing 32-bit for ages and were starting to move to 64-bit processors. Never mind that a machine with one of those processors would have a six-digit price tag.

    But the second bit of the quote is actually more interesting. We didn't see a true 32-bit OS until NT came out. The 95/98 architecture still requires you to thunk back to 16 bits today, three decades after DEC introduced their first 16-bit chip. OS/2 also had a 16-bit device driver model to start out with. We didn't start using the full IA32 capabilities for years after it was introduced. AFAIK Linux and NT are the only two 32-bit OSes for IA32. (Well, maybe SCO, but I'm still amazed that anyone actually buys that stuff.)

  • by Maïdjeurtam ( 101190 ) on Monday March 20, 2000 @03:58AM (#1191273) Homepage Journal
    This article (from AMDZone, I know) seems to forget that this new AMD CPU is one more hack on top of the x86 architecture.

    Intel, with the Itanium, did the right thing and designed their new processor from scratch. Do we really need a new x86 chip, with its horrible design, when the open-source model allows you to recompile virtually anything in seconds, provided a compiler exists?

    Personally, I can't imagine how AMD can succeed with this.

    Stéphane
  • by 348 ( 124012 ) on Monday March 20, 2000 @04:01AM (#1191274) Homepage
    Intel has said that it does not think 64-bit will become a standard until at least 2005; this is a 4-year window for AMD to move to the forefront of consumer computing.

    AMD has finally decided not to be the bridesmaid. This is the first real offering that doesn't mimic the direction set by Intel, with Sledgehammer being the only targeted 64-bit architecture from the big three that doesn't move to RISC. Speeds close to 2GHz without a RISC architecture will open up a part of the market and allow AMD to be the leader for once. Opening up portability between 32-bit and 64-bit computing is going to give AMD a huge advantage, at least in the short term. Now let's see how they deliver; hopefully they learned from the Coppermine-like failures with logistics and will get the product to the shelves when they say they will.

  • by toofast ( 20646 ) on Monday March 20, 2000 @03:44AM (#1191275)
    I really appreciated the chart that compares the different CPU architectures. I teach computer science classes, and you wouldn't believe how many people judge a CPU only by its rated MHz. For them, K6-2 500 == Athlon 500 == Alpha 500.

    I'm just wondering: now that AMD is working on a 64-bit chip without having an Intel counterpart to base itself on, how compatible will the two be when they hit the market?
