
64-bit Processor Next Year, Says AMD 138
Kill Switch writes: "There's this ZDNet article about AMD's announcement that they plan to introduce a 64-bit 'Sledgehammer' chip for the desktop (that's right, DEKSTOP). They also announced new chips based on the new Mustang core (and it looks like there will be way too many versions of those: various desktop and server versions), as well as mobile versions of the Duron and Athlon, also based on the Mustang core." This could just be crazy enough to work! Updated 11:20GMT by timothy: wwelch contributed a link to a pretty good overview of the current 64-bit field, which of course excludes this just-announced AMD chip, but which helps put it all in perspective.
Slugfest '01 (Score:2)
Now, if the 64-bit portion of Sledgehammer runs as fast (or faster, knowing some of the tricks AMD has learned with the K7) as the Merced's 64-bit... that will really be the deciding factor. Welcome to Slugfest 2001, started even before the year is over. hehe
I have to admit, the prospect of being able to switch between 32 and 64-bit code on the same CPU without a penalty is somewhat attractive.
Alakaboo
I'm not especially impressed (Score:4)
*An extended set of instructions (akin to MMX, 3DNow!, or SSE) that operate on 64-bit words, and corresponding memory operations that load/store 64-bit words using an increased address space. This option seems to be the one AMD will most probably use, since they stress compatibility with the existing ISA to such a great extent in their PR. This is also the worst option: we will still be stuck with the essential garbage that underlies the x86 ISA: 2-operand instructions, a limited register set, nonorthogonality of the instruction set, and numerous other flaws.
*The processor boots to x86 compatibility mode, then requires an instruction to bring it into x86-64 mode. x86-64 mode is a sanely designed ISA that addresses and performs operations on 64-bit words. We lose the limitations of a tiny register set, horrible instruction encoding, and the other flaws of x86. This option would be far superior to the first one, but if one is willing to go to such lengths to distance oneself from the original x86 ISA (thereby losing all compatibility with native x86 applications while in x86-64 mode), why not just migrate completely to a new ISA and use those wonderful fabs for new Alphas or Power4s, and include a token K7 or P3 for x86 compatibility? BTW, this option almost resembles the rationale for the monstrosity that is IA-64...
Most troubling of all is that x86-64 may bring back segments. I cannot stress how horrible this would be; application programmers from the days of mixed 16/32-bit programming will agree with me. In any case, we have been stuck with this sickening, illogical, inelegant, and inefficient instruction set for the past 3 decades; do we really want to put up with another 3 decades of this crap? I say, kill off x86 for good and move to a sane architecture.
What if you're mistaken? (Score:1)
if($I{U}{uid} > 0 && !$I{F}{postanon} ) {
$pts = $I{U}{defaultpoints};
$pts-- if $I{U}{karma} < -10;
$pts++ if $I{U}{karma} > 25 and !$I{F}{nobonus};
# Enforce proper ranges on comment points.
$pts = -1 if $pts < -1;
$pts = 5 if $pts > 5;
}
I'm not that great of a Perl hacker, but it looks as if you get that initial -1 score if you have been moderated to a -11 karma or any more negative value. I think that the "bitchslap" could have come from a democratic vote of the readership rather than from Rob Malda.
By the way, I don't see a "keep this guy down even if he's been moderated positive" function, which I think you are claiming exists in the Slashcode. But I'm not much of a Perl hacker, and I'll defer to someone else who shows me it's there. Perhaps you should look at the Slashcode yourself?
Bruce
Or it could be . . . (Score:1)
>it's News for Dummies.
Either that, or it's an editorial comment on the nature of Windows . . .
Sledgehammer vs Deerfield! (Score:1)
I asked the Intel guy "Will Sledgehammer speed up the development of Deerfield?", and he got that "Deer in the headlights" look for a few seconds, and then replied that "competition always results in better products sooner than later".
I hope that Sledgehammer causes Intel to put Deerfield on the front burner.
Suns aren't slow, they're just special; yeah x86! (Score:1)
Sparcs aren't slow. The whole point of the Sparc architecture is throughput, not latency, which is what you need if you're running a transaction server rather than a PC or doing realtime 3D stuff. Sparcs sacrifice some SPEC {int,fp} numbers, but the boards they get plugged into are designed with I/O in mind, which is why people buy them.
I used to be one of those people in favor of chucking x86 right out the window, especially after spending a few days writing assembly for the godforsaken architecture. And then I realized something: investment counts. Think of all of the hundreds of millions of man hours spent on developing software for x86. Think of all of the hours just for the Linux kernel alone. That's why people want an upgrade path that's a little easier than saying rewrite/recompile your software for PPC/Alpha/Sparc/Itanium. And why I'm so happy that AMD is following this path.
Re:What if you're mistaken? (Score:2)
Actually, it's the $I{U}{defaultpoints} field that stores the default score - you and I have it as 1 (and $pts++ if $I{U}{karma} > 25 and !$I{F}{nobonus}; increases it for karma), and brucedot has it at -1. If his karma were below -10, it would be decremented once, and then $pts = -1 if $pts < -1; would clamp it back to -1.
Re:SMP (Score:1)
Uhhhh. You do realize that the Athlon is SMP capable, today. Right?
SMP capability is not so much a feature of the CPU, as it is a capability of the CPU's associated chipset
(which Athlon doesn't have yet, but soon will).
Re:How will this affect Transmeta? (Score:1)
You remember sorta correctly. A Crusoe instruction word is 128 bits. This does not make it a 128-bit processor. Each of those 128-bit instruction words contains four separate 32-bit instructions. Crusoe's integers and memory addresses are still 32 bits. Thus, Crusoe is really more like four 32-bit processors running in parallel.
Re:Ripping Intel a new one? (Score:2)
The win AMD have scored, though, is that unlike the Merced (or whatever they call it this week), Microsoft's apps don't need to be native to work well. Merced requires that, in order to get parity with systems deployed now, you reengineer your software (and it's more than just a recompile). So anything taking advantage of AMD64 will win, and anything that doesn't won't be at a loss.
More importantly, AMD are explicitly aiming at the desktop, while Intel are explicitly aiming at the server market. If I'm a desktop software vendor - Microsoft's Office division, or Adobe, or whoever - AMD64 looks a lot more attractive than ia64 right away, because it's easier to rebuild for, and because my target market are, according to the manufacturers, likely to be using the AMD chips. Ditto games, ditto everything except server apps. And if AMD64 builds enough *desktop* momentum, it will start moving inexorably into the server space based on that desktop success, just like Intel have.
Re:Moderators! (Score:1)
They finally caught up to Atari (Score:1)
Heh. Just kidding.
Re:Useless. (Score:2)
B) As for mapping large files, the biggest files that will be used in the near future are video projects, and consumer video projects get nowhere near 4 GB.
Remember, I'm talking consumer space. 64-bit CPUs will be a tremendous help in the server and much higher-end space. Take Java, for example: it is catching on in back-end server processing, and thus a 64-bit CPU will help a lot. However, AMD is aiming this at the consumer/corporate market (kind of like Intel's higher-end chips), and for those tasks, 64-bit is kind of overkill.
Re:Useless. (Score:2)
Near pointers, far pointers, and ... (Score:1)
ALongLongWayFromHome pointers.
This is like putting lipstick on a pig.
Re:Old news links (Score:1)
My *needs* far outstrip 64 bit already ... (Score:2)
build one now, build more later :) (Score:1)
You can build a decent working computer for anywhere from $400 to $40,000, depending on what you want to do and what you're willing to settle for, but waiting for a just-announced chip before you build a system sounds like you're looking for a case of blue balls, metaphorically speaking.
sw
versus Merced (Score:1)
--
Re:I don't think we'll ever see that chip (Score:1)
-----------------------------------
Re:In theory (Score:4)
Of course...... (Score:1)
BRTB
Re:I don't think we'll ever see that chip (Score:2)
I think AMD was very timely in announcing this so close to the Pentium 4 announcement. It gives the world something to think about...
Which chip will win? Is it the same ol' same ol' in a new, faster package, or is it something totally new and expanded but slower??? -- We will see soon.
Jordo
Re:SMP (Score:1)
Ever think about how much space it would take to do SMP with Slot A processors and 4" fans?
What the instruction set affects. (Score:3)
You are correct - instruction set doesn't make a difference to most programmers' code. A few places where it _does_ make a difference are:
Some features of an instruction set and chip architecture make it easy to optimize code to run quickly. Some features make it harder. With 4 general-purpose registers, and only two or three more that you might be able to use for storage in a pinch (3 of ds/es/fs/gs), you have to keep fetching operands from memory if you're dealing with many variables at once. Even with a cache, this is slow. It also makes things like loop unrolling much harder.
The core of any optimized graphics driver will have hand-tuned assembly. This isn't just for software rasterizers - this is for geometry engines and the like also, which are still around in abundance.
As these do much deep mucking about with the processor state, many pieces of these have to be written in assembly (take a look in
None of these will be issues for most programmers, but they still do come up, and all programmers working on code where speed is important will notice the effect on compiler optimizations.
DEKSTOP? (Score:1)
64 bit Barely? (Score:1)
Re:Honest Question (Score:1)
but the big question is... (Score:1)
You forget the obvious, and probably sanest way. (Score:3)
You overlook the third way - apply the same kind of extension as is used for 32-bit.
Early x86s were 16-bit chips. When the 80386 came along, a kludge was implemented to allow 32-bit - add 0x66 in front of any 16-bit instruction to make it work on 32-bit operands.
The obvious way to add support for 64-bit instructions without adding a new processing mode or bloating the instruction set is to find another unused byte code and declare it to be the 64-bit specifier.
Code looks very similar to 32-bit code, all old code still works, and you have very few headaches porting a compiler to the new platform.
The _best_ thing to do is abandon the architecture completely, but if that's not an option, this is probably the cleanest way of extending x86.
another war... (Score:1)
kicking some CAD is a good thing [cadfu.com]
Sparc and MIPS. (Score:2)
*MIPS: Fading out.
Sun's market isn't on the desktop or even in low-end servers - it's at the very high end. Their processors and motherboard architectures are optimized to work in machines with hundreds to thousands of processors. While they pay a penalty for this on the low end, you can always be sure that a Sun box can scale well to truly insane processor counts.
I also like their sneaky register file trick that masks calling latency.
MIPS, OTOH, is one of the cleanest, sanest processor designs that I've seen. It's not an FP powerhouse, but it's still respectable. Its primary market is as a licensed core, because of its easily extendible architecture. The Playstation and Playstation 2 are both based on MIPS cores, which counts for quite a bit of volume.
SGI boxes are also mainly based on MIPS chips, and are still the reigning champions for heavy-duty rendering due to a very intelligent system bus design.
In summary, I think that the conclusions you quote are premature.
Re:I'm not especially impressed (Score:1)
When the 386 came out, segments could be up to 4GB in size, IIRC, but by then it was too late... the concept had a bum rap.
Regards, your friendly neighbourhood cranq
And in other news.... (Score:3)
-- Happy 4th From Jordo --
Re:What about applications? (Score:1)
Some 7 years late (Score:1)
Re:Could this be the start of a new era? (Score:1)
Let someone manufacture a parallel SMP Crusoe architecture for laptop performance... that's what I'd like to see happen.
nerdfarm.org [nerdfarm.org]
Keep the name (Score:3)
"Sledgehammer" is a completely cool name. Don't change it to some marketroids idea like "Athlon Pro". I want to be able to tell people that I've got a Sledgehammer!
Re:Honest Question (Score:1)
You can take this all with a grain of salt, but Darwin (and Rhapsody) for the x86 are out there. And there have been reports of Alphas and Sparcs running Mac OS X variants as well..
That means happy days ahead for HD and RAM co's (Score:1)
Re:Don't forget (Score:5)
Bullshit. The size of the base word will not affect the effective amount of memory, just the amount of memory that can be addressed. If you declare a 16-bit variable (probably a short, but ANSI makes no guarantees), it will occupy 16 bits, subject to alignment constraints.
"And programs will be about twice as large on your hard drive due to the 64 bit instruction words."
(Alpha) stage3-decompose.o: 213920 bytes
(x86) stage3-decompose.o: 264018 bytes
Here's a hint: 64-bit ISAs don't necessarily use 64-bit instructions; the x86 ISA's variable-length instructions take up as much (if not more) space as the fixed 32-bit words in more RISC-like ISAs.
"And there's no point in running a 64 bit system with IDE drives, so you'd best pick up some nice expensive SCSI drives, too."
IDE drives can offer entirely acceptable performance, although for a serious performance system or server, one would of course use SCSI drives or a RAID array. But it's definitely possible to use IDE drives with 64 bit systems.
"Some 16 and 32 bit code will recompile cleanly. I expect much more will not."
If AMD specifies the standard LP-64 programming model (longs and pointers are 64 bits, ints are 32 bits), I would expect a significant majority of the non-Windows software to compile relatively cleanly on the Sledgehammer (or other 64 bit processor). Most Windows software unfortunately won't, because of some poor assumptions built into MFC and standard Windows programming techniques that I won't go into here (specifically the assumption that sizeof(int)==sizeof(int *)==sizeof(long)).
Re:My *needs* far outstrip 64 bit already ... (Score:1)
Ripping Intel a new one? (Score:5)
This could be AMD's master stroke against Intel - if AMD can get application developers like Adobe supporting their 64 bit extensions, Intel will be in big, big trouble. Especially since AMD are promising 64 bit loving on the desktop, while Intel are still pushing the line that 64 bit is server technology.
It's interesting that Intel are being outmanoeuvred at their own game; for years, manufacturers would throw up technologically superior chips (680x0 in its heyday, the original ARM 2 line, the Alpha, PPC, etc) with better performance, but they would be unable to get much market penetration, since the market valued x86 compatibility in 90% of cases. Now Intel is offering (well, vapouring) a 64-bit architecture that offers second-rate ia32 compatibility, and has a competitor claiming all the goodness of a fast 64-bit system with little or no loss for ia32 apps.
It will also be interesting to see how this affects the free software world. For example, free databases like Postgres could look more attractive with cheap, abundant 64-bit hardware to run them on. And, more than that, if there is a schism in the ia32 world, with some people going the ia64 route and some going the Sledgehammer route, the ability to recompile open source apps for the arch that best suits one's own needs, rather than have purchasing dictated by a split applications market, could be a win.
Re:JAVA gets Extra Long? (Score:2)
Without recompiling? (Score:1)
Speedup of existing code. (Score:3)
Memory copies on the x86 are already 64-bit due to a sneaky hack - MMX loads/stores have been 64-bit for a while, and thus take one clock. Of course, you still have the MMX/FP switching overhead.
I don't really see much that could speed up existing code. The only 64-bit transfers that go on (MMX and FP loads) are already handled as 64-bit transfers.
If you're writing a 64-bit application, then yes, many things will be faster (due to you now being able to hold double-precision floats in one register if nothing else), but that involves at least a recompile and possibly additional tweaking.
Re: segments (Score:2)
The Pentium family has a 48-bit segmented mode, which, as far as I know, is used by no operating system anywhere. In fact, some Pentium-family chips bring out 36 address pins, so machines with up to 64GB of physical RAM are possible with current hardware. You'd be limited to 4GB per segment, which would probably mean 4GB per process in Linux. Do you really want single processes bigger than that?
Segmented mode isn't bad if the segments are big enough. It's the hokey way the 8080 to 8086 transition was managed that caused segmented architectures to get such a bad reputation. Better segmented machines have been built, although mostly in the mainframe era.
A flat address space leads to problems of its own, especially when shared code and data is involved. Look at the mess required to relocate DLLs, for example. With a segmented address space, the hardware does that for you.
Still, everybody understands flat address spaces, and it's probably worth it to stay with them just to avoid the reeducation costs.
Re:That means happy days ahead for HD and RAM co's (Score:1)
Re:Old news links (Score:3)
What is interesting is that someone thought it important to not panic the Windows users. Imagine if ZDNet's readers were to think that the AMD Sledgehammer wasn't going to be Windows compatible. The poor chip would never sell!
--
Re:I don't see the point... (Score:1)
pH34r /\/\Y @$C11 KuNz!!!
(|)
WHY? (Score:1)
I don't need more than 4GB ram. And I should never need that much.
Hmm, why else? I don't care about instruction sets. The Intel IA32 is good enough for me, because I program in QuickBASIC 4.5 for DOS. It is nice and fast. I don't care about asm anymore; I tried it once and it was too hard.
And 32-bit apps won't go faster on a 64-bit chip, will they? 16-bit apps don't go much faster on a 16MHz 386 than on a 16MHz 286 (I have both, so don't say that's BS).
And why do apps need more than 32 bits? What do the extra 32 bits allow an app to do that it can't do right now?
So, it seems to me that the only people who really need this are DBMS ops or ASM programmers.
I don't care for it. If it becomes standard, I'll eventually buy one. But right now I don't give a damn.
Moderators! (Score:1)
This is exactly the same thought I had when I read this news before. "Oh, shit. Now there'll be a 64-bit architecture with 4 general purpose registers (eeax?), and an insane ISA."
For the love of God, please do not do this, AMD!
--Corey
All of them that run on x86 will run on Hammer... (Score:3)
At any rate, it *will* be able to run 32-bit apps natively, not through emulation as with Merced--err, Itanium (dumb name). As much as many
Re:What about applications? (Score:5)
Certainly for apps that do a lot of 64-bit arithmetic, though that's probably mostly scientific applications rather than the familiar desktop application. Beyond that I'm not sure, and would like to hear opinions too. Will it help with things like graphics, since you would be able to wade through the masses of data involved in various transformations faster?
> Will 16- or 32-bit apps need to be ported or just recompiled to gain a speed boost?
It is supposed to be backward compatible with current x86 systems. Probably without even a recompile. It would almost be suicide for a company to push a 64-bit x86 architecture otherwise, since (so far!) the overwhelming majority of such machines would be bought to run Windows and Windows apps, and very many people would be very reluctant to buy a processor that made them throw out their fine collection of apps.
Similarly, software houses will be reluctant to ship 64-bit versions of their apps until 64-bit processors are common. (Witness that even Linux binaries are often still distributed for the lowest common denominator, the i386, though surely most of us run 486s, 586s, or 686s by now.)
The preponderance of existing 32-bit apps probably means that most users will not get the full benefits of the 64-bitness of the new processors. This is another area where users of OSS will probably reap the early benefits, since they will be able to recompile their apps as true 64-bit apps right away (probably after having a few issues tweaked), whereas commercial apps will likely continue to ship as 32-bit binaries for several years after the first 64-bit x86s hit the market.
As a final observation, these chips will surely price above even the high end Thunderbirds, which are already going to be too pricy for most people. I suspect that the early adopters of 64-bit x86 will mostly be people who need the number-crunching abilities. For others, it will initially be a status symbol (very important in the corporate environment, ya know!).
--
Oh happy day! (Score:2)
As long as their architecture beats a lot of the old x86 stuff into the dirt while keeping compatibility, who gives a rat's ass? Your old compilers will work; that they perform poorly next to compilers optimized for the new instruction set won't matter, because even without major rewrites you can get old apps working with minimum fuss. Nothing wrong with that in the slightest.
Intel, you can kiss my lily white ass.. I know where my dollars go. And with the EV bus they should compete relatively well with Alpha CPUs, which isn't necessarily a bad thing.
I just wonder what is going to happen to Transmeta... I want to see them do so well; the idea behind their gear is amazing, to me at least.
#include caffiene.c
64-bit Linux Support? Addendum & Self corrections (Score:1)
After reading up more on Project Monterey, I've learnt that Intel is playing a major role in its development. Obviously, this is going to have a huge impact on how much support AMD can possibly provide for Monterey, unless AMD decides to participate in the project directly. I hope AMD and Intel can find common ground in the development of the new 64-bit Linuxes/Unices and *BSDs; otherwise it looks like another job for anti-trust laws.
BTW: Where do *BSD distributions fit in this picture?
Self Corrections
I'd just like to state that Project Monterey, an alliance of several Unix vendors, consists of IBM, Compaq, Sequent and SCO. I was mistaken about Sun & HP being part of it. For information on Monterey, visit IBM's site here [ibm.com] or read the ZDNet article [zdnet.com]
And my last line refers to the problems of AMD's Athlon in its early days, and does not refer to the SledgeHammer.
Re:Slugfest '01 (Score:3)
Re:How will this affect Transmeta? (Score:1)
-P
Re:How will this affect Transmeta? (Score:1)
And how, exactly, would that harm the open source movement?
Re:Still too early to say... (Score:1)
No kidding. IA-64 is not even using the x86 ISA, while the Sledgehammer will extend the x86 ISA to 64-bits.
Intel will probably come out with an x86-compatible 64-bit processor around the same time AMD introduces theirs.
Actually, so far Intel has not announced any plans to extend the x86 ISA to 64 bits; their IA-64 uses EPIC "technology" and cannot natively run x86 code. It will be "compatible", but not nearly as compatible as the Sledgehammer, which will natively execute older programs compiled for the 32-bit x86 ISA.
If not, AMD will be well-poised to take over the desktop market
This sort of depends upon exactly how good the Itanium (Intel/HP's IA-64 processor) turns out to be. Remember, Intel has already taken big steps to try and turn programmers toward their new 64-bit ISA, and Intel is planning the Itanium for desktops as well. If it turns out that Intel can get the programs desktop users are accustomed to made explicitly for IA-64 and not AMD's 64-bit x86, then many desktop users might feel pressured to switch completely away from x86. 'Fortunately' for AMD, Intel has totally blown off Microsoft, and not included them in the development of this at all, so maybe AMD will be able to garner some support from MS.
BTW, there is a lot of info on this, starting about 10 months ago, in the Silicon Insider, located at real world tech [realworldtech.com].
Small Minds, Small Minds... (Score:2)
That's a very definite declarative you just made, and a wise man once said "The less apt a man is to make declarative statements, the less apt he is to look like a fool in retrospect." Nothing personal, but it's always a bad idea to bandy about phrases like "could not possibly." Not too long ago, people thought that light "could not possibly" travel faster than it does in a vacuum, and well...
Point being, as much as you may know about processor architecture, you don't know as much as the AMD design team. If at one point they thought it possible to design a processor which could perform as I mentioned above, then it is doubtless possible, even if they have since abandoned the idea in favor of something easier to design.
You know what else "could not possibly work efficiently"? Utilizing a VLIW core to process an ISA overlay which exists in software. I mean, that's just such a terribly inefficient concept that it couldn't possibly be worth doing, right? The VLIW core of such a badly designed processor would have to be so powerful and clocked so high that it would consume far more power than is necessary to run a normal x86 processor, right? As we all know, such conventional thinking turned out to be very, very mistaken. Transmeta's Crusoe has proven that such a thing can be done, though few would have ever thought it would work and work so well.
I think that should prove my point, but let me continue de-FUDifying your post.
> x86 *is* a terrible ISA and backwards compatibility *does* hold back tech, both
> in terms of performance and price/performance.
I already admitted that x86 is a poor ISA--of course it is, it's ancient; pre-Cambrian by the standards of microprocessor tech. However, thanks to good compilers the ISA is as easy to write for as any other--few people do handwritten assembler any more, for any ISA. And yes, it is inefficient--but most current x86 processors actually use a RISC-like core to process data after it has been decoded in hardware from the CISC x86 ISA into smaller RISCy instructions; being done in efficient hardware, little overhead occurs and performance is impressive from something like an Athlon. The net effect of that is that you get RISC-like performance with backward compatibility with very little overhead. And, let us not forget that contemporary RISC processors are, as noted at http://arstechnica.com/cpu/4q99/risc-cisc/rvc-1.h
The main reason people such as yourself complain on
So, what should we replace x86 with on the desktop? Gee, UltraSparcs run around 10 grand for entry-level boxes, so that's not realistic. How about StrongARM? Very poor FPU performance and very low clockspeeds, don't make me laugh. Itanium? Intel will price those out of reach of God for the next few years. Oh, wait, I know: PowerPC. And yes, PPC is a great architecture, very powerful and extensible. I would love for x86 to be supplanted by PPC, but that'll never happen because Motorola and Big Blue have a stranglehold on production and have no financial need to push up clockspeeds and push to high production levels--IBM uses them in some of their own boxes, but doesn't have reason to push out lots of them since Apple is the only other game in town--other PPC boards have remained very fringe despite the release of the CHRP specs. Non-geeks aren't interested in non-Apple PPC based systems. Learn to live with that for the next several years at least. Aside from which, thanks to the ever-increasing x86 clockspeeds, top-tier Athlons and Willamettes will be outperforming top-tier PPCs for a while.
> backwards compatibility *does* hold back tech, both in terms of performance and price/performance
I think I just disproved that, too. x86 processors consistently outperform all others on price/performance ratio. Come up with a better solution or shut up. There are many other ISAs out there, and new ones coming, and yet not a single one of them can unseat x86 on price/performance, where it counts. The x86 ISA is old and ugly--but processor designers have come up with very sexy ways to push its performance up, by melding RISC core technologies with the older CISC instruction decoders. And then, they use brute force of higher clockspeeds to outperform most of the competition, and to outprice all the competition. It's not holding us back at all, it's forcing us to innovate cores and to brute force clockspeeds well above all other processors.
And that isn't even counting the importance on price/performance of maintaining backward compatibility. The same software can be re-used through many upgrades, which is even more important for businesses who've developed custom software solutions than it is for individuals.
Not to mention the lack of competition and subsequent higher prices which would be inherent in any new ISAs. Why the fuck aren't Alphas and UltraSparcs running at higher clockspeeds and costing less, eh? Because there's no competition. The ISAs are owned and licensed by single companies, who don't feel the pressure to do more, faster, better, like x86 companies do. Look at Intel's snail-pace development in the desktop range before AMD started turning up the heat. x86 is, effectively, an open-source ISA, *the* open-source ISA. That's why they're unmatched on price/performance. If Itanium or any other proprietary ISA becomes the new standard, we're all fucked.
So, think before you hand out that party-line BS about x86 being so terrible. x86 is responsible for the home computer revolution, and without it the Internet would have remained a toy for universities. Think about it.
Re:JAVA gets Extra Long? (Score:1)
Re:Of course...... (Score:1)
Re:a cooler name (Score:1)
Re:Without recompiling? (Score:1)
Performance is kind of a dirty word right now; the IA64 CPUs and chipsets are just too new to give real performance numbers yet. Having said that, I believe even Intel will tell you that IA32 programs will not run as fast on Itanium as they will on the fastest IA32 processor available at that time. Let's face it, this is a 64-bit machine. If you want 64-bit performance use Itanium; if you want 32-bit performance use Pentium.
--
--
Re:I'm not especially impressed (Score:2)
I think the only way around this dilemma is if Intel and AMD got together to define a new standard that was compatible with the other.
Re:All of them that run on x86 will run on Hammer. (Score:3)
x86 *is* a terrible ISA and backwards compatibility *does* hold back tech, both in terms of performance and price/performance. If we were not shackled with the x86 ISA so pervasively, the design and fabrication talent that Intel and AMD so obviously possess could have been far better used to design chips whose performance would have been incredible. My reasoning is that if one is willing to go to such lengths and create a new ISA (albeit one that is supposedly compatible with x86 and extends it), thereby requiring new compilers and OS support, one should go all the way, start from scratch, and use a decent, sane, reasonable ISA.
It won't (Score:2)
The Crusoe is not targeted as a mid-performance CPU, nor is it an x86 CPU. It is targeted as a versatile, low-power-consumption CPU. It seems no one understands this.
NightHawk
Tyranny =Gov. choosing how much power to give the People.
I stand corrected (Score:3)
DEKSTOP POWER!!! (Score:2)
AMD listens where Intel is deaf. (Score:5)
He claimed the 'x86'ness of the PPro took 7% of the die space. For an additional 5%, they could have added a second 'personality', and begun the migration to a 'cleaner' ISA some years ago.
Oh well...
Three cheers for AMD! (o8
that's right, DEKSTOP (Score:5)
So in other words, you're saying it's an Alpha killer?
---
Re:Honest Question (Score:2)
Re:Unicode (Score:2)
That is 2^16 (65536) characters.
That's NOT 20 million.
And it wouldn't fit all the languages if you went and tried to have them all represented at once.
Re:I'm not especially impressed (Score:2)
Most troubling of all is that x86-64 may bring back segments. I cannot stress how horrible this would be; application programmers from the days of mixed 16/32-bit programming will agree with me.
Yeah, but kernel programmers and other Unix heads will cry with joy. Finally the phrase:
segmentation fault: core dumped
will make sense with regard to the underlying hardware.
Re:Useless. (Score:2)
ignorant, almost sci-fi question (Score:2)
How long before we see 256-bit desktop machines? And what will we be able to do on them?
Re:Could this be the start of a new era? (Score:2)
I would suggest that this era has already begun. Take a look at the current (or at least the upcoming) crop of PDAs. Though they're still not as powerful as laptops, and for many still not functional enough to replace them, they do have more than enough power for many people, especially now that wireless Internet access is becoming more and more a reality. I personally abandoned my Compaq notebook a year or so ago for the joy of my PDA. And, as far as I know at least, none of the mainstream PDAs use x86 processors.
Even if you don't consider the PDAs to have started this trend, the fact that we'll have both 64-bit and 32-bit should hopefully lead to more portable (as in cross-platform) programs and less 32-bit x86-specific code, which, imho, can only be a good thing.
A quick summary of the DeMone's article (Score:5)
*Alpha: The reigning champion of the 64-bit processor battlefield. The 21264a (EV67) shipping now still has the best integer and FP performance of any processor. The 21264b (EV68) will be released in two phases: first as a hybrid
*PA-RISC: HP has not necessarily given up on its own processors, despite its nominal strategic alliance with Intel's IA64. An enormous cache allows the current members of this family to keep pace, even without significant architectural modifications. Later members could
*Itanium: A bloated, incredibly ornate instruction set. Heavily dependent on compiler technology. Lots of marketing hype that exaggerates the true technical merit of the Merced processor. The Merced will debut at a relatively slow clock speed, but McKinley (HP's 2nd-generation IA64 CPU) will definitely be a key competitor in the 64-bit HPC market.
*Sparc: Poor in performance, but software application support keeps these Sun processors alive. Even the not-yet-released UltraSparc 3 will have disappointing performance relative to modern processors.
*MIPS: Fading out.
*Power: The Power4 looks very impressive, but not much information has been released about it to this point.
The Sledgehammer is simply not a very interesting chip; it is generally agreed that x86 had the misfortune to become the most popular desktop ISA without regard to its actual merits. An extension of x86 to 64 bits does not interest people much, essentially because of how ugly, inefficient, and inelegant the original x86 ISA is. Speaking for myself, I certainly do not want to put up with 30 more years of this defectiveness.
Re:Power hog (Score:2)
Re:Don't forget (Score:2)
"The user programming model is the standard LP64 model, meaning that the C data type "long" as well as pointers are 64-bit in size. This is the same model that has been adopted by all other Linux and UNIX 64-bit platforms in existence."
Moreover, "long long" (which is the same type as "long long int") actually *is* in C99, the latest ANSI C standard; before C99 it was a gcc-specific extension. It is guaranteed to be at least 64 bits wide; under LP64 most compilers make it exactly 64 bits, the same size as a plain "long".
priorities (Score:2)
- Intel announces the NAME of their next chip
Hmm, which company would you bet on?
Re:64-bit Linux Support? (Score:2)
IMO, they won't. The work on an ABI for Sledgehammer has already started, and (as far as I could see in the project funded by AMD) it's much more similar to Alpha than to Itanium.
Fortunately, a lot of work has already gone into making Linux run smoothly on 64 bits.
On the other hand, both IA-64 and Sledgehammer will be able to run in a "compatibility mode"
I have an alter-ego at Red Dwarf. Don't remind me that coward.
Let's not get too excited, though.. (Score:2)
Just.. don't.. jump the gun with this thing.. (err... whatever.)
Old news links (Score:5)
AMD has disclosed specifications to the major OS vendors and Microsoft so that they may ensure that their operating systems and tools will be AMD x86 64-bit aware
AND
"By extending the x86 instruction set to 64-bits, AMD's x86-64 technology should give us very fast compiler retargetting and the easiest kernel port so far," said Alan Cox, Linux Kernel Developer.
-----------------
It looks like a real battle ahead for Intel.
The deciding factor... (Score:2)
Re:You forget the obvious, and probably sanest way (Score:2)
As an aside, the x86 is not especially hard to write assembly for compared with heavily pipelined RISC and VLIW (or EPIC, if you insist) chips. The instruction set may be crufty, but it doesn't require you to think like a compiler. Writing compilers, of course, is another matter - compilers for RISC devices are generally much easier to write.
Could this be the start of a new era? (Score:4)
Re:You forget the obvious, and probably sanest way (Score:2)
There are two problems with this.
The first is instruction set bloat. This is generally agreed to be a Bad Thing, and also increases the area of the decoding circuitry by some marginal amount.
The second is shadowing of opcodes that other chips may use. If Intel decides to extend x86 yet again, and AMD has already allocated those opcodes to its own 64-bit instructions, binary compatibility will be broken. Snagging only one opcode makes such a collision much less likely.
In summary, I don't really see the point of adding new instructions when the 64-bit-tag system works just as well (no slower than 32-bit code).
Itanium IA64 support (Score:5)
Individual processes can select which instruction set they wish to run in, IA32 or IA64, even though the kernel is executing entirely in the IA64 instruction set. We've added IA32 kernel interfaces to match the system calls available currently on the i386 Linux kernel. This is not vaporware, this is running and has been publicly demonstrated at conferences this year.
Currently I've run IA32 versions of bash, gdb, gcc and netscape. All of these programs are running now with no known problems. I'm sure there are IA32 programs out there that don't work yet but my goal is to make sure that eventually all IA32 programs will run on the Itanium.
I admit to having a bias on this subject as I work for VA Linux and my job is to help create Linux for the IA64 processor.
--
--
Re:Moderators! (Score:2)
Re:The deciding factor... (Score:3)
You understate the case. Every modern general purpose CPU implementation is "design symbiotic" with a targeted modern compiler(s). The primary distinction between RISC/CISC/VLIW/etc. architectures is the tradeoff of work between the CPU and the compiler. (Go dig around in the technical documentation at the TI 'C6x DSP [ti.com] web site for a fascinating view of how a modern VLIW architecture impacts processor and compiler design.)
The architectural decisions in hardware must be borne out by a compiler that leverages these features to the fullest. Likewise, the implementation of a CPU must actively enable the compiler to take maximum advantage of hardware bandwidth. Once the chips tape out, both Intel and AMD MUST ensure that the compilers measure up -- or else they've run half the race and given up.
Honest Question (Score:2)
What about applications? (Score:2)
If this is an extension of x86, I assume existing binaries will still function -- but I have two questions:
-Will 16- or 32-bit apps see a performance boost on a 64-bit architecture?
-Will 16- or 32-bit apps need to be ported, or just recompiled, to gain a speed boost?
I'd also be curious to see if the gain in performance is going to be worth the doubtless hefty price...
--
How long till we get native 64bit? (Score:2)
Since the chip comes out in the middle of next year, I would be interested in seeing just what kind of support the various compiler groups and OS groups provide for this chip's native 64-bit mode. Sure, it isn't needed to run, but I'm interested in knowing just what the performance differences are going to be between the 32-bit instructions and the 64-bit instructions this chip is supposed to support.
I'm currently hearing about all this support for the Intel Itanium, or whatever it shall be called this week, from Linux and some compiler groups, and yes, MS too. I haven't looked that hard yet, but I don't see any mention of who will be supporting the 64-bit extensions.
Now, making the assumption here that the 64-bit extensions are not supported immediately, Intel may be able to market its chips more effectively and spread a bit of FUD that will hurt AMD. Then again, since this AMD chip will support all the old 32-bit applications and, unlike the Intel chip, appears not to require much of a hardware change, AMD might be able to take a larger chunk of the market away from Intel.
Now just when can I expect to see one of these chips with a motherboard and decent chipset?
64-bit Linux Support? (Score:2)
The question I'd like to ask is whether 64-bit Linux/*BSD distributions designed for the IA-64 will be readily compatible with, or available for, the AMD SledgeHammer, and whether AMD will follow in the footsteps of Intel in supporting open source development on this architecture.
Hopefully AMD and the Linux kernel developers will be able to avoid the initial MTRR problems that plagued the processor in its first few weeks out. Keep up the great work AMD.
Read the spec again, sparky (Score:2)
We don't know how bad things are in north korea, but here are some pictures of hungry children. -- CNN
Useless. (Score:2)
SMP (Score:5)
See my earlier post at http://slashdot.org/comments.pl?sid=00/05/19/1822
JAVA gets Extra Long? (Score:3)