Intel: No Rush to 64-bit Desktop 616
An anonymous reader writes "Advanced Micro Devices and Apple Computer will likely tout that they can deliver 64-bit computing to desktops this year, but Intel is in no hurry. Two of the company's top researchers said that a lack of applications, existing circumstances in the memory market, and the inherent challenges in getting the industry and consumers to migrate to new chips will likely keep Intel from coming out with a 64-bit chip--similar to those found in high-end servers and workstations--for PCs for years."
Re:Of course... (Score:3, Interesting)
As a semi-future-proofing power user, I built a PC in 1998 and put in 256MB of RAM to keep it running as long as possible. That's price-equivalent to 2GB at today's prices.
It's really not going to be long before the geeks feel they need to do so.
Re:amd get leap on intel? (Score:2, Interesting)
Another technique for expanding the memory capacity of current 32-bit chips is through physical memory addressing, said Dean McCarron, principal analyst of Mercury Research. This involves altering the chipset so that 32-bit chips could handle longer memory addresses. Intel has in fact already done preliminary work that would let its PC chips handle 40-bit addressing, which would let PCs hold more than 512GB of memory, according to papers published by the company.
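For scale, the arithmetic behind those address widths is easy to check. A quick sketch (plain math, no Intel-specific details assumed):

```python
# Addressable memory for various physical address widths: 2**n bytes,
# expressed in GB (2**30 bytes).

def addressable_gb(address_bits: int) -> int:
    """Bytes reachable with the given address width, in GB."""
    return 2 ** address_bits // 2 ** 30

print(addressable_gb(32))  # 4 GB: the classic 32-bit limit
print(addressable_gb(36))  # 64 GB: Intel's 36-bit PAE ceiling
print(addressable_gb(40))  # 1024 GB: 40-bit addressing, i.e. "more than 512GB"
```

So the quoted "more than 512GB" for 40-bit addressing is, strictly, a full terabyte.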
Reasons for 64 bit desktops (Score:5, Interesting)
Re:amd get leap on intel? (Score:3, Interesting)
For corporate desktops... (Score:5, Interesting)
AMD investor. (Score:3, Interesting)
Intel is committing hara-kiri here, in my opinion (that's Japanese for suicide to preserve honor). Similar events come to mind, and history has proved all of them utterly wrong... (It's sad to acknowledge that I REMEMBER when some of these things happened!)
- Intel 286 vs 386 (IBM: A 286 is enough for most people...)
- IBM Microchannel vs ISA (The same thing)
- 'A good programmer should be able to do anything with 1K of memory'. I don't remember the author, but probably someone from IBM in the 60s or 70s.
Time flies...
It's been done before (Score:5, Interesting)
Didn't Apple manage to get their (admittedly smaller) user base to switch to a better processor?
Intel's argument against 64-bit computing seems to be an advertisement for the x86-64 concept. The article didn't mention gaming, but surely the gamer market will be a major early-adopter base. It sounds like preemptive marketing to me.
As for memory, the article, and presumably Intel, don't seem to account for the ever-increasing memory footprint of Microsoft's operating system (or for the GNOME stuff on our favorite OS), and so are perhaps too dismissive of the need for a >4GB desktop. As we all know all too well, one can never have too much memory or disk space, and applications and data will always grow to fill the limits of both.
Personally, I'm holding off on any new hardware for my endeavors until I see what AMD releases, though I would settle for a Power5-based desktop...
Re:pc overhaul (Score:5, Interesting)
Of course, if you want real hardware agnosticism, there is always Linux isn't there? That runs on 64 bit CPUs, in 64 bit mode right now, and should be ready to work on AMD's Hammer right from launch. The big gamble for Intel is, can it afford to be late to the party? Intel certainly seems to think so, but I think that the Hammer is going to end up on more desktops than they expect, unless AMD sets the price of entry too high.
Margins (Score:4, Interesting)
Separation of consumer and "server" processors is just marketing, which is Intel's strongest talent (like Microsoft).
New operating sytems will change Intel's tune? (Score:5, Interesting)
But I think that will change almost overnight once operating software that supports the Athlon 64/Opteron becomes widely available. Linux is being ported to run in native Athlon 64/Opteron mode as I type this, and I believe Microsoft is working on an Athlon 64/Opteron-compatible version of Windows XP that will be available by the time the Athlon 64 ships, circa September 2003. (We won't see the production version of Windows Longhorn until at least the late spring of 2004, IMHO, well after the new AMD CPUs become widely available.)
Re:Does 64 bits slow memory down? (Score:5, Interesting)
In case you're wondering about constants: the PPC only supports loads of 16-bit immediate values (into either the lower or upper 16 bits of the low 32 bits of a register), so to load a 64-bit value you may have to perform up to five operations (two loads, a shift, and two more loads). So a PPC needs up to 64 bits of instruction stream for a 32-bit immediate load and up to 160 bits to load a 64-bit value (unless you store the value in a memory location that can be addressed in a faster way). These are worst cases, however; in a lot of cases one or two instructions are enough.
The main downside of 64bit code is that all pointers become 64bit, so all pointer loads and stores indeed require twice as much storage and bandwidth.
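The five-step constant-building sequence described above (two 16-bit loads, a 32-bit shift, two more 16-bit loads) can be mirrored in a quick sketch. The mnemonics in the comments are the usual PPC ones (lis/ori/sldi/oris/ori); the Python is just illustrating the dataflow:

```python
# Building a 64-bit constant from 16-bit immediates, the way a PPC
# has to: each step corresponds to one instruction in the worst case.
MASK64 = (1 << 64) - 1

def load_imm64(value: int) -> int:
    v = value & MASK64
    r = (v >> 48) << 16               # lis:  top 16 bits into the upper half
    r |= (v >> 32) & 0xFFFF           # ori:  next 16 bits
    r = (r << 32) & MASK64            # sldi: shift the pair into the high word
    r |= ((v >> 16) & 0xFFFF) << 16   # oris: third 16-bit chunk
    r |= v & 0xFFFF                   # ori:  final 16-bit chunk
    return r

assert load_imm64(0x123456789ABCDEF0) == 0x123456789ABCDEF0
```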
I agree with Intel (Score:2, Interesting)
Before you reply with a bunch of other reasons why my PCs are becoming more obsolete with each passing day anyway, think back to the transition between the 286 and 386. The 386 could run everything a 286 could run, and it performed much better. Due to the performance benefit, most applications that couldn't be run on a 286 wouldn't have run well on a 286 anyway.
The transition to 64-bit on the desktop isn't going to be the same. While 640k may not be enough for everybody, 4GB is certainly enough for web browsing, word processing and basic photo manipulation. I'd hate to see the horribly inefficient code that requires more than 4GB of RAM for such simple tasks.
Realistically, the force that will cause 64-bit to be a requirement on the desktop will be the version of Windows that no longer runs on 32-bit hardware. Windows XP's minimum requirements are a 233MHz processor, 64MB of RAM, and about 1.5GB of free disk space.
If you look at the current system requirements compared to the current top end PC hardware, it's easy to see why Intel wants to hold off on production of 64-bit processors targeted for the desktop market.
Re:I wonder what would happen if...... (Score:5, Interesting)
I've heard that Microsoft is developing an Athlon 64/Opteron native version of Windows XP; if that is true then gaming companies involved with PC-based games may be already creating games that run in native Athlon 64/Opteron 64-bit mode under Windows XP as I type this.
Intel is wrong, just like they were last time (Score:5, Interesting)
Intel didn't want to make the jump to 32 bit, so they introduced "segment registers". They tried to convince people that this was actually a good thing, that it would make software better. Of course, we know better: segment registers were a mess. Software is complex enough without having to deal with that. That's why we ended up with 32 bit flat address spaces.
64 bit address spaces are as radical a change from 32 bit as 32 bit was from 16 bit. Right now, we can't reliably memory map files anymore because many files are bigger than 2 or 4 Gbytes. Kernel developers are furiously moving around chunks of address space in order to squeeze out another hundred megabytes here or there.
With flat 64 bit address spaces, we can finally address all disk space on a machine uniformly. We can memory map files. We don't have to worry about our stack running into our heap anymore. Yes, many of those 64 bit words will only be filled "up to" 32 bits. But that's a small price to pay for a greatly simplified software architecture; it simply isn't worth it repeating the same mistake Intel made with the x86 series by trying to actually use segment registers. And code that actually works with a lot of data can do what we already do with 16 bit data on 32 bit processors: pack it.
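The memory-mapping point is easy to demonstrate in miniature. A minimal sketch using a throwaway temp file; on a flat 64-bit address space the same call works for files far larger than 4GB:

```python
# Minimal memory-mapping sketch: the whole file becomes ordinary
# addressable memory you can index and search, instead of a stream
# you have to seek through. A 32-bit address space cannot map files
# bigger than its own 4GB limit; a 64-bit one can.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello, mapped world")
    with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as m:
        snippet = m[:5]             # read the mapping like a byte array
        offset = m.find(b"mapped")  # search it like memory, not a stream
finally:
    os.close(fd)
    os.unlink(path)

assert snippet == b"hello"
assert offset == 7
```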
Even if having 4G of memory standard is a few years off yet, we need 64 bit address spaces. If AMD manages to release the Athlon 64 at prices comparable to 32 bit chips, they will sell like hotcakes because they are fast; but even more worrisome for Intel, an entirely new generation of software may be built on the Athlon 64, and Intel will have no chips to run it on. If AMD wins this gamble, the payoff is potentially huge.
big mistake IMHO (Score:5, Interesting)
Intel currently owns the market for low end workstations and servers. If you need a web server or a CAD station, you get a nice P4 with some memory. This is also the market where the need for 64 bit will first come. At some point in time some people will want to put 8 GB of memory in their machine. AMD will be able to deliver that in a few months, Intel won't.
My guess is that Intel is really not that stupid (if they are, sell your Intel shares) and has a product anyway, but wants to recover the investment in its 32 bit architecture before introducing the 64-bit-enhanced version of the P4. The current P4 compares quite favorably to AMD's products, and AMD has had quite a bit of trouble keeping pace with Intel. AMD needs to expand its market, whereas Intel needs to focus on making as much money as it can while AMD is struggling. This lets Intel do R&D, optimize its products, and ensure good enough yields by the time the market for 64 bit processors has some volume. Then suddenly you'll need 64 bit to read your email and surf the web, and Intel will just happen to have this P5 with some 64 bit support. In the end, Intel will, as usual, be considered the safe choice.
The entire industry (Score:3, Interesting)
Re:4 GB is not a lot of memory (Score:5, Interesting)
That is true, but the memory bus can be made wider, and that won't affect the addressing scheme. Take nVidia's nForce: it uses two DIMM slots in parallel to double the memory bandwidth (although the processor bus must be fast enough to use the bandwidth).
The bandwidth issue scales much more easily than the address limit: 32 bits is 4 GB of addressable memory, no matter what. (OK, you can do an extended-memory kludge, but that's beside the point.)
Re:No surprise (Score:1, Interesting)
Can someone explain to me why we don't already have 64-bit Pentiums? I may be a little ignorant, but I don't understand how the Pentium isn't a 64-bit processor already. Since MMX (then 3DNow! and SSE and SSE2) there have been a bunch of special-purpose 64-bit registers that can be accessed and utilized fairly simply. I can't imagine it'd be a huge leap to allow those 64-bit registers to address memory on the bus. What exactly is a "true" 64-bit processor going to give us that we didn't already have in our MMX registers?
Who cares about 4GB? (Score:3, Interesting)
I wrote a little library that strings together a bunch of unsigned longs. It in effect creates an X-bit system in software for doing precise addition, subtraction, etc. This library would be considerably faster if I could string 64 bit chunks together instead of 32 bit chunks.
What about bitwise operations like XOR, NOR, and NOT? You can now perform these on twice as many bits in one clock cycle. I'm not really into encryption, but I think this can speed things up there.
Many OS's (file systems) limit the size of a file to 4GB. This is WAY crazy too small! This again stems from the use of 32 bit numbers. When the adoption of 64 bit machines is complete, this limit will be removed as well. Again, 32 bits isn't just about ram.
I could really go on all day. The point is this: twice the bits means twice the math getting done in the same amount of time (in some situations). So if you write your code smartly to take advantage of it, you get all-around faster code and a larger memory size. Sounds like a nice package to me.
Really, give the 4GB limit a rest. Let's talk about some of the exciting optimizations we can do to our code to get a speed boost!
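The chained-unsigned-longs idea above can be sketched directly. This toy version (the names are mine, not from any real bignum library) shows why wider limbs mean fewer carry-propagation steps:

```python
# Multi-precision addition built by chaining fixed-width "limbs".
# Doubling the limb width halves the number of loop iterations, which
# is the speedup the parent is after with 64-bit chunks.

def add_limbs(a, b, limb_bits):
    """Add two equal-length little-endian limb lists, propagating carries."""
    mask = (1 << limb_bits) - 1
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & mask)      # keep the low limb_bits
        carry = s >> limb_bits    # carry into the next limb
    if carry:
        out.append(carry)
    return out

def to_limbs(n, limb_bits, count):
    return [(n >> (i * limb_bits)) & ((1 << limb_bits) - 1) for i in range(count)]

def from_limbs(limbs, limb_bits):
    return sum(l << (i * limb_bits) for i, l in enumerate(limbs))

a, b = 2**100 + 12345, 2**99 + 67890
# Same sum either way; the 64-bit version just loops half as many times.
s32 = from_limbs(add_limbs(to_limbs(a, 32, 4), to_limbs(b, 32, 4), 32), 32)
s64 = from_limbs(add_limbs(to_limbs(a, 64, 2), to_limbs(b, 64, 2), 64), 64)
assert s32 == s64 == a + b
```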
Re:The ceiling is 2/3GB not 4GB... (Score:1, Interesting)
However, you only need to do this if you're making a distinction between kernel space and user space. If you're running a speciality or ad-hoc application then you might not care too much about protecting the kernel memory, so you can just map all of the available virtual address space into one big 4GB chunk and let the kernel and the user space process(es) have full access to each other's memory.
When 64bit Desktop PCs Hit the Market... (Score:3, Interesting)
As fast as the hardware engineers struggle to keep up with Moore's law, shoddy programmers backed by cheapskate management labor to set the performance gains back.
Kids these days...
Whither VMware? (Score:3, Interesting)
With investment from Intel and Microsoft, they could release a cheap VM workstation optimized to run Windows only. They could even detect a 32-bit app starting up and shove it off to the VM, where it sounds like it might run faster. Well, easy for me to say, I guess. Make it so!
Also, MS is buying Connectix, but their VMs are below VMware's quality, and it seems they bought it mainly for the server product. But this strategy could still work for them; build the 64-bit Windows workstation with a built in 32-bit VM.
Re:Apple is already RISC... (Score:2, Interesting)
And, as others have stated, whether a CPU is 32-bit or 64-bit has nothing to do with whether it is classified as a "RISC" or a "CISC" processor. Also, make sure you know what the real differences are between what people commonly call "RISC" and "CISC". It has extremely little to do with anything being "reduced" in terms of count. Don't believe me? Go count the number of instruction opcodes for the G4 and the current x86 ISA and compare.
Bill Gates claims he did not say 640K is enough (Score:5, Interesting)
One quote from Gates became infamous as a symbol of the company's arrogant attitude about such limits. It concerned how much memory, measured in kilobytes or "K," should be built into a personal computer. Gates is supposed to have said, "640K should be enough for anyone." The remark became the industry's equivalent of "Let them eat cake" because it seemed to combine lordly condescension with a lack of interest in operational details. After all, today's ordinary home computers have one hundred times as much memory as the industry's leader was calling "enough."
It appears that it was Marie Thérèse, not Marie Antoinette, who greeted news that the people lacked bread with qu'ils mangent de la brioche. (The phrase was cited in Rousseau's Confessions, published when Marie Antoinette was thirteen years old and still living in Austria.) And it now appears that Bill Gates never said anything about getting along with 640K. One Sunday afternoon I asked a friend in Seattle who knows Gates whether the quote was accurate or apocryphal. Late that night, to my amazement, I found a long e-mail from Gates in my inbox, laying out painstakingly the reasons why he had always believed the opposite of what the notorious quote implied. His main point was that the 640K limit in early PCs was imposed by the design of processing chips, not Gates's software, and he'd been pushing to raise the limit as hard and as often as he could. Yet despite Gates's convincing denial, the quote is unlikely to die. It's too convenient an expression of the computer industry's sense that no one can be sure what will happen next.
Click here [nybooks.com] to read the full article.
Intel finally learned from past errors? (Score:2, Interesting)
Furthermore, Intel Itanium has very poor compatibility with 32-bit applications, whereas AMD Athlon64 supports them natively. So releasing Itanium too early would once again mean poor performance compared to AMD, and potentially reproduce the P4 problem.
Truth or Denial? (Score:2, Interesting)
Another technique for expanding the memory capacity of current 32-bit chips is through physical memory addressing, said Dean McCarron, principal analyst of Mercury Research. This involves altering the chipset so that 32-bit chips could handle longer memory addresses. Intel has in fact already done preliminary work that would let its PC chips handle 40-bit addressing, which would let PCs hold more than 512GB of memory, according to papers published by the company.
I dunno about them, but my 32 bit system already has 768MB. 40 bit addressing would have the interesting effect of needing memory manufacturers to buy into a different addressing standard, which, as you can well imagine, they'll be slow to do, even with Intel pitching it. Also keep in mind that AMD could follow suit with their 32 bit line. This doesn't strike me as a very realistic direction to go.
Intel still has some mileage in the P4, throwing more cache at it, etc., but 64 bits is something computer techies understand, and once 64 bit PC's start rolling out, everything else will seem second best, particularly if AMD plays their advertising cards right.
Oh, and the 'no need' argument never has flown. I've been hearing it for decades. If anyone actually listened to it we'd still be on PC-ATs with VGA.
Re:definition of 64-bit (Score:2, Interesting)
Re:It's been done before (Score:4, Interesting)
Yes. They did it gradually. The first PPC Macs ran a 68k emulator which provided backwards compatibility for old Mac software. Intel is trying to do the same thing; you can run IA-32 software on IA-64.
The problem that Intel has, and that Apple didn't, is that IA-32 mode on an Itanium is generally slower than a real IA-32 chip. Many Mac users found that their old 68k code ran just the same, or in some cases faster, on the new PPCs. Intel, then, is at a disadvantage with the IA-64, speed-wise. Why invest all that money in a new platform just to run your code slower?
Sorry, you're wrong on two points there.
- The PPC Macs did not run an m68k 'emulator' - an opcode translator converted m68k code to PPC code. There wasn't a clearly-defined emulator (which implies an application) - certain parts of the MacOS itself at the time consisted of m68k code, which was run through the translator.
- The first PPCs ran m68k code *slower* than the fastest m68k Macs. In particular, the 6100/60 was badly crippled by its lack of cache, and could be quite handily beaten by the faster 68040 Macs when running m68k apps.
Re:"The first" PPCs? (Score:3, Interesting)
Those Mac emulators still work, and still run the ancient software, on a modern OS X Mac. My father has a word processor from maybe 1987 (WriteNow) that's just fine, and continues to use it for day-to-day writing. Hey, whatever makes you comfy.
Maybe it isn't supported in some subtle ways, and I'm sure there's stuff that's broken -- even recent OS 9 games sometimes won't run in "Classic Mode" and require booting in OS 9 instead. But Apple's taken this seriously during every OS or chip migration they've ever had, and they're still keeping their eye on pre-PPC chip software.
Re:Who cares about 4GB? (Score:3, Interesting)
And last I checked, most major x86 operating systems supported 64bit addressing for files.
And if you are thinking about RAM, x86 isn't limited to 4GB. It can support up to 64GB of physical RAM; Windows and Linux have both supported this for a while now... except for a few AMD chips (a number of recent AMD chips have microcode bugs which prevent you from addressing more than 4GB of RAM).
There actually are some cool things you can do in 64bit which you can't in 32bit. You listed none of them. However, they tend to be closely tied to OS architecture, and even then few OSes take advantage of them (they aren't the kind of things you can retrofit on).
Re:Object spaces (Score:4, Interesting)
Re:bah! (Score:3, Interesting)
Re:4 GB is not a lot of memory (Score:3, Interesting)
And this is great...if you're doing mainframe style computing and price is no object. Back in the day, given infinite funds, you could have purchased an Apple II or a VAX 11/780. The former, even with its 64K of memory, let you do about 80% of what you'd want to use the VAX for, and it's a lot easier to maintain, lower power, and fits on your desk.
Now we have a similar situation. 64-bit is "better," but in a loose "for maybe 5% of all computing tasks" kind of way. That's not a compelling reason to switch all desktop PCs over to 64-bit processors. If Intel--or any other company--tries to do that, then I'll just wait until the lower end mobile processor makers improve enough that I can avoid the bloated desktop market all together.
Re:Intel is wrong, just like they were last time (Score:3, Interesting)
Let's not forget the excellent Motorola 68K chips either. The 32bit addressing 68020 was introduced in 1984. It was used in many *nix workstations.
In 1985 Intel said the same thing they are saying now: This new CPU is for servers, you don't need it in workstations. They were wrong then. They are wrong now.
Everybody else must be seriously jumping for joy. (Score:5, Interesting)
Apple:
- Well, now that they're most recently Going out of business [slashdot.org], in steps IBM to save the day for them... a new line of iMacs is going to do insanely well, considering it's going to be the only fully-functional line of 64-bit personal computers - I can pretty much guarantee Apple's going to have full-fledged 64-bit standardization before anybody else. Apple's going to see an insane surge in users, a lot of the multimedia software that's been migrating to PCs is going to be happy with the better, faster and more powerful 64-bit hardware support and go back to developing for Macs... basically, Macs regain a lot of the status they've been losing.
AMD:
- Hammer sales go up! If they're really lucky, Intel will either do a harsh (and hopefully inferior) yet still more expensive knock-off of Hammer, or it will release Itanium in a hurry because it realizes businesses like the idea of progress and are starting to hop over to 64-bit architectures. So AMD will reclaim the status it lost a year and a bit ago, when the P4 took the title of "Best x86 on the market". Good on them.
Linux:
- Business as usual. Increased PPC support. Cool new Hammer patches, as well as the usual suspects (i386 still harshly dominating)
Microsoft:
- Well, maybe not everybody's jumping for joy... A lot of migration to PPC. But otherwise, they're still busy saying that "The Next New Windows Will Be Secure, And This Time We Mean It!" (tm).
That about it?
Re:No hurry? (Score:3, Interesting)
Re:RISC vs. CISC (Score:2, Interesting)
Re:bah! (Score:3, Interesting)
Was a specialized enterprise. Not anymore; witness iMovie or Final Cut Express.
I am still stunned by this. I remember building and demo'ing Media 100 systems in 1997; you needed at least $20k for something reasonable (i.e. Big Mac w/gobs of RAM, SCSI arrays, specialized PCI board and breakout box, industrial VTR, preview monitor, time-base corrector...) and that didn't get you fancy realtime effects.
A $1500 iMac just spanks the crap out of this system I used to sell, requires no extra hardware (firewire is beautiful), and the quality is superior.
So, past tense.
Now, back on topic: accessing 4GB of memory is very desirable in this situation; 4GB of DV footage is measured in minutes. It would be nice to manipulate more than minutes of footage in RAM, no? (Also, RAM Preview in After Effects would be really sweet.)
Re:Of course... (Score:3, Interesting)
spin (Score:3, Interesting)
I'd like to see one of two systems: either provide backward compatibility - like AMD with its 64 bit extensions - or start with a clean slate and produce a performer - like Digital's Alpha.
The advantage of a 64 bit AMD is that the most used architecture can migrate without dropping everything. My PII can still run DOS binaries that ran on my 8088. This is a GOOD thing. Even running Linux, I don't want to recompile all my apps, if I don't have to. If this were the case, I might have gotten a Power PC already.
The advantage that the Alpha has is speed, and there is only one kernel system-call interface - 64 bits. For example, there's no lseek() and lseek64() on the Alpha. (For the history buff: first there was seek() for 16 bits, then lseek() for 32 bits. We've been here before. Now we have the off_t typedef, so it should be easier to simply change it to be 64 bits... yet some have added off64_t, in the name of backwards compatibility.)
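A minimal sketch of the single-interface point: with 64-bit offsets, one seek call handles positions past 4GB with no lseek64() variant. The 5GB file below is sparse, so it costs almost no actual disk space:

```python
# One 64-bit seek interface, no 32/64 split: os.lseek takes a plain
# integer offset, so seeking 5GB into a file needs nothing special.
# Writing one byte there creates a sparse file, not 5GB of real data.
import os
import tempfile

FIVE_GB = 5 * 2**30
fd, path = tempfile.mkstemp()
try:
    pos = os.lseek(fd, FIVE_GB, os.SEEK_SET)  # well past the 32-bit limit
    os.write(fd, b"x")                        # sparse file: size 5GB + 1
    size = os.fstat(fd).st_size
finally:
    os.close(fd)
    os.unlink(path)

assert pos == FIVE_GB
assert size == FIVE_GB + 1
```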
Itanium may have the clean break (or it may not), but where's the speed? I'm not switching without something.
Digital's Alpha is at least the third attempt that Digital made before getting a RISC system to perform. The Power architecture is IBM's 2nd attempt. Sometimes you design it, and it just doesn't deliver. Move on!
When one looks at Digital's switch from 16 bits (PDP-11) to 32 bits (Vax 11/780), one notes that the new machines were more expensive, and about the same performance. I'd still rather have a Vax, because there are things that you can do in 32 bits that are painful in 16 (but not many).
It should be noted that throwing address space at a problem often slows it down. For example, Gosling's Emacs was ported from the Vax to the PDP-11. On the Vax, the file being edited was read into RAM completely. On the PDP, just a few blocks of your file were in RAM, in a paged manner. On the PDP, an insert (or delete) caused only the current page to be modified. If the current page filled up, it was split, and a new page was created. On the Vax, inserts tended to touch every page of the file - which could make the whole machine page. It was quite obviously faster on the PDP-11. No one cares about this example anymore, since machines have so much more RAM and speed. But throwing address space at video editing will show how bad this idea really is. Programmed I/O is smarter than having the OS do it: the program knows what it's doing, and the OS doesn't. Eventually, machines may have enough RAM and speed that no one will care, but it won't happen here at the beginning of the curve.
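The PDP-11 Emacs behavior described above - an insert touches one page, splitting it when full - can be sketched with a toy paged buffer. The PAGE_SIZE and structure here are made up for illustration:

```python
# Toy paged text buffer: the buffer is a list of small pages, and an
# insert modifies (at most splits) a single page instead of shifting
# the entire file's contents.
PAGE_SIZE = 4  # tiny, so a split is easy to see

def insert(pages, pos, ch):
    """Insert ch at logical position pos; split the affected page if full."""
    for i, page in enumerate(pages):
        if pos <= len(page):
            page.insert(pos, ch)
            if len(page) > PAGE_SIZE:              # page overflowed: split it
                mid = len(page) // 2
                pages[i:i + 1] = [page[:mid], page[mid:]]
            return
        pos -= len(page)

pages = [list("abcd"), list("efgh")]
insert(pages, 2, "X")                # lands in, and splits, the first page
assert "".join(c for p in pages for c in p) == "abXcdefgh"
assert list("efgh") in pages         # second page never touched
```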
One problem that has not been solved is the memory management unit's TLB. This is the table on the chip that translates between virtual and physical memory. With 16 bits of address, 256-byte pages require only 256 entries to cover the whole address space. For 32 bit processors, the page table just doesn't fit on the chip. So the TLB is a translation cache, and on a cache miss the missing entry must be filled in (by the OS on some architectures, by hardware on others).
An alternative is to use extent lists. On my Linux system, the OS manages to keep my disk files completely contiguous 99.8% of the time. If this were done for RAM, then the number of segments that would be needed for a typical process would be small - possibly as few as four. One for text (instructions), one for initialized read only data, one for read/write data, BSS and the heap, and one for the stack. You'd need one for each DLL (shared library), but IMO, shared libraries are more trouble than they're worth, and ought to be abandoned. Removing any possibility of TLB misses would improve performance, and take much of the current mystery out of designing high performance software.
For this to work, you need the hardware vendor to produce appropriate hardware, and have at least one OS support it. The risk factor seems to have prevented this from happening so far...
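A rough sketch of the extent-list translation being proposed; the four extents below are a hypothetical process layout (text, read-only data, heap, stack), not real addresses from any OS:

```python
# Extent-list address translation: instead of a per-page TLB, describe
# a process as a handful of (virtual base, length, physical base)
# extents and translate with a short linear scan over them. With only
# a few extents, the whole table fits on-chip and can never miss.
EXTENTS = [
    (0x00400000, 0x10000, 0x1200000),  # text (instructions)
    (0x00600000, 0x08000, 0x1300000),  # initialized read-only data
    (0x00800000, 0x40000, 0x1400000),  # read/write data, BSS, heap
    (0x7FF00000, 0x20000, 0x1800000),  # stack
]

def translate(vaddr):
    """Map a virtual address to a physical one via the extent list."""
    for vbase, length, pbase in EXTENTS:
        if vbase <= vaddr < vbase + length:
            return pbase + (vaddr - vbase)
    raise MemoryError(f"fault at {vaddr:#x}")  # no extent covers this address

assert translate(0x00400010) == 0x1200010
assert translate(0x7FF00004) == 0x1800004
```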
Re:4 GB is not a lot of memory (Score:2, Interesting)
It took most desktop users a decade, and the 486, to actually push up against this barrier. By that time, two generations of 32-bit-capable chips had been introduced to the marketplace.
If one takes this into perspective, then Intel may be quite correct that 64-bit will not make an impression on the desktop until nearly 2010, and that even waiting a few years to introduce 64-bit desktop solutions will not be too late. It may not be IA-64 that ends up on the desktop, but that doesn't change the timeline.
Your average 286 buyer in the mid-80s had 1MB of RAM, or 1/16 the maximum. Even though desktop 32-bit chips weren't available (the 386 was server-targeted at the time) when it was purchased, it was probably replaced with a 386 or 486 machine well before upgrading the RAM to the maximum.
Your average user now has around 256MB of RAM, or 1/16 the maximum. Most likely, even with 64-bit desktop chips not released for a few more years, we will still have a couple of product generations before everyone needs 64-bit capability.
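The 1/16 parallel checks out arithmetically, taking the 286's 24-bit physical address space as its 16MB maximum:

```python
# Verifying the "1/16 of the maximum" parallel between eras.
MB, GB = 2**20, 2**30

assert 2**24 == 16 * MB            # 286: 24-bit physical addresses, 16MB max
assert (16 * MB) // 16 == 1 * MB   # mid-80s buyer: 1MB is 1/16 of that
assert 2**32 == 4 * GB             # 32-bit chip: 4GB max
assert (4 * GB) // 16 == 256 * MB  # today's buyer: 256MB is 1/16 of that
```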
Re:Article Back Story (Score:1, Interesting)
Re:4 GB is not a lot of memory (Score:2, Interesting)
You should be careful when saying stuff like that. I dug up an '80s electronics magazine selling computers with "16k of RAM - All the RAM you'll ever need!"
meep
Re:We need 64-bit TODAY (Score:2, Interesting)
As for the whole Itanium vs. Opteron/Athlon64 thing: well, it does kind of look like AMD just made some modifications to the x86 Athlon and turned it into an Athlon64. That is, it's an evolution and not a revolution. Itanium, on the other hand, is a completely different architecture.
I guess you can't blame Intel for not implementing the Itanium in the consumer market, since that's not what it was designed for and it would probably produce very little profit for all the money they put into R&D for the thing.
It looks like Intel just looked at their market and said, "Ok, we're entering the high-end server space of the whole market." AMD on the other hand seemed to look at their market and said, "Ok, Intel is pouring resources into this one concentrated market, and we can take advantage of it. We're going to take a smaller step in technology, and spread it out among a much larger market: Desktop, Workstation & Server"
AMD's logic makes more sense in my opinion. It might not be revolutionary and it might "enhance" an already disliked instruction set, the x86. However, as markets overlap and merge more and more (i.e. workstations and desktops), this would be the optimal solution.
Itanium could quite possibly win in the server sector, but it's very expensive, and one of the biggest limiting factors is that software needs to be recompiled for it with an EPIC-optimized compiler. x86-64, if it comes out on time and is what it's supposed to be, should be a very tough competitor to the Pentium 4 in the desktop market, assuming developers start recompiling their apps for x86-64. Kudos to Tim Sweeney & Epic Games for developing a major product with a branch geared towards this new technology. They're basically watering the x86-64 plant.
I'm not very informed when it comes to the server space, but my guess would be that it would come down to the form of software used on servers and what percentage of the market could use plain old x86/x86-64 based software for their solution. I mean the question going through my head is: Would I rather use one box with two Itanium processors, or would I rather use two boxes with four Opteron processors in each of them, and have the ability to run x86 code optimally?
I hate to be cliche, but it basically comes down to the form of software used. It also comes down to the market segments and their changing cost-effective applications.
Re:We need 64-bit TODAY (Score:2, Interesting)