Intel: No Rush to 64-bit Desktop

An anonymous reader writes "Advanced Micro Devices and Apple Computer will likely tout that they can deliver 64-bit computing to desktops this year, but Intel is in no hurry. Two of the company's top researchers said that a lack of applications, existing circumstances in the memory market, and the inherent challenges in getting the industry and consumers to migrate to new chips will likely keep Intel from coming out with a 64-bit chip--similar to those found in high-end servers and workstations--for PCs for years."
This discussion has been archived. No new comments can be posted.

  • by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Monday February 24, 2003 @07:10AM (#5369496) Homepage
    Right now 4 GB of memory might be enough. But switching to 64 bit when we are already hitting the wall is not an option. The point with going to 64 bits now is that we can add memory past 4 GB without the headaches of moving to a new platform, since the transition is already done.

    If Intel keeps on braking, a lot of people will get really disappointed when they realize they need more memory than their platform supports.
    • by JWhitlock ( 201845 ) <John-WhitlockNO@SPAMieee.org> on Monday February 24, 2003 @08:00AM (#5369649)
      Right now 4 GB of memory might be enough. But switching to 64 bit when we are already hitting the wall is not an option. The point with going to 64 bits now is that we can add memory past 4 GB without the headaches of moving to a new platform, since the transition is already done.

      Oh, come on! Don't you want the fun of playing with the 64-bit equivalent of extended and expanded memory? Endless tinkering with autoexec.bat and config.sys! Endless reboots! Doom 3 runs in its own operating system (the way God intended)!

      Bring on the half-ass memory solutions! We should be deep in flavor-country by 2005.

      • by carpe_noctem ( 457178 ) on Monday February 24, 2003 @10:29AM (#5370355) Homepage Journal
        Yeah, seriously. By the time Intel's 64-bit chip is out, Duke Nukem Forever just might be released.
    • Right now 4 GB of memory might be enough. But switching to 64 bit when we are already hitting the wall is not an option. The point with going to 64 bits now is that we can add memory past 4 GB without the headaches of moving to a new platform, since the transition is already done.

      And this is great...if you're doing mainframe-style computing and price is no object. Back in the day, given infinite funds, you could have purchased an Apple II or a VAX 11/780. The former, even with its 64K of memory, let you do about 80% of what you'd want to use the VAX for, and it was a lot easier to maintain, used less power, and fit on your desk.

      Now we have a similar situation. 64-bit is "better," but in a loose "for maybe 5% of all computing tasks" kind of way. That's not a compelling reason to switch all desktop PCs over to 64-bit processors. If Intel--or any other company--tries to do that, then I'll just wait until the lower-end mobile processor makers improve enough that I can avoid the bloated desktop market altogether.
    • by Monkelectric ( 546685 ) <slashdot.monkelectric@com> on Monday February 24, 2003 @10:17AM (#5370295)
      This isn't really about memory ... allow me to [specu/trans]late what the article really said:

      Um, hi... this is Intel. We know you *WANT* 64-bit but, um, you don't NEED it. Really, you don't. You believe that? Great! Basically guys, this is the problem: we *screwed the pooch* on this processor. We've spent tens of billions of dollars on development, it's years behind schedule, it ain't that fast, and the whole thing just sucks right now. So here's what we're gonna do. We're gonna hold back this technology for like, ehh, 6, 7, maybe 8 years SO WE HAVE TIME TO RECOUP THE MONEY WE WASTED by selling the chip as an expensive "workstation" CPU. So, expensive high-profit workstations for now, then you can have it later once it sucks (well, it already does, but once it sucks more). Other platforms have had 64-bit processors for a decade now, you say? You want mid-90's processor technology in 2003? FUCK YOU, you can't have it, end of discussion!

      OH, and expect some dirty tricks, we know AMD is gonna be ready to sell you 64 bit way before us, so, well ... you'll just see ;)

      Thanks, Intel

    • by briancnorton ( 586947 ) on Monday February 24, 2003 @10:31AM (#5370372) Homepage
      4 GB IS a lot of memory. It's enough memory that a server can handle millions of hits a day or run big databases or search for extra-terrestrial life. Intel knows that servers need more, so they go Itanium, but they also know that your average desktop isn't making good use of the 256 MB that it already has.

      As it is right now, there isn't really a desktop application that could use 4 GB if you asked it to. Sure, some developers could use it, some CG people, and DV people, but those people can justify buying more expensive (64-bit) workstations. Joe twelvepack's $600 Dell will run any consumer application faster than it needs to.

      Once developers start making good use of the power they have, then it's time to make the big financial investments required to go 64-bit for consumers. I personally have a hard time even thinking up a consumer application (besides games) that could really stretch existing computing resources.

      • Err...video encoding? After all, aren't iMovie and Windows Movie Maker aimed at the consumer market?
      • Many companies in the entertainment as well as computer chip design industries use rooms full of cheap x86 machines to perform the bulk of their batch processing. _That's_ where they're hitting the 4GB-per-process problem. We're running Linux on hundreds of Pentium III/4s, and with kernel tweaking are getting around 3.2GB per process. But even that's not enough for many job types...
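
        A rough way to see that per-process ceiling (just an illustrative C sketch, not the poster's actual tooling; the exact numbers depend on the kernel's user/kernel split) is to keep reserving address space until malloc() gives up:

            #include <stdio.h>
            #include <stdlib.h>

            int main(void)
            {
                const size_t chunk = 64UL * 1024 * 1024;   /* 64 MB per request */
                unsigned long long total = 0;              /* bytes reserved so far */

                /* Leak on purpose; the OS reclaims everything at exit.  A 64-bit
                 * build would sail far past 4 GB, so the loop is capped at 16 GB. */
                while (total < (16ULL << 30) && malloc(chunk) != NULL)
                    total += chunk;

                printf("reserved about %llu MB of address space\n", total >> 20);
                return 0;
            }

        On a stock 32-bit kernel this stops somewhere around 3 GB; with a tweaked user/kernel split it gets closer to the 3.2 GB figure above.
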
      • Joe twelvepack's $600 Dell will run any consumer application faster than it needs to.

        Excuse me? Intel is saying that our cheap desktops are already fast enough, so they're putting off 64-bit CPUs?

        Why should I even buy a new 32-bit CPU from Intel, then?

        (You are of course right. I'm just wondering aloud why Intel is admitting it, and how they plan to dig themselves out once they convince the public of it.)

        • The biggest advantage of a 64 bit processor is the increased memory space. Intel makes processors, not memory. The last thing that they want is a computer where Dell spends more on memory than processor.

  • Of course... (Score:5, Insightful)

    by Lynn Benfield ( 649615 ) on Monday February 24, 2003 @07:10AM (#5369498)
    They're hardly likely to talk up the benefits of 64-bits on the desktop when their current 64-bit chip is so unsuitable. As and when they have an equivalent to AMD/Apple on the desktop, you can be sure they'll be more than happy to sing its praises.

    What's interesting is the "nobody really needs 4GB this decade" line. Just about every Mac in this room has 1GB in it, and even the crappy test PC has 768MB. 4GB will be here sooner rather than later...
    • Re:Of course... (Score:3, Interesting)

      by Duds ( 100634 )
      I did some maths.

      As a semi-future-proofing power user, I built a PC in 1998. I put in 256MB of RAM to try to keep it running as long as possible. That's price-equivalent to 2GB at today's prices.

      It's really not going to be long before the geeks feel they need to do so.
      • Re:Of course... (Score:3, Interesting)

        Interestingly, 300-400 MHz is still relatively OK as long as you have enough RAM (what is that - 4-year-old technology?), a fast enough disk system, and you stay away from gaming. I bet 3 GHz will last you even longer, given enough RAM.
    • Re:Of course... (Score:2, Informative)

      by solidox ( 650158 )
      they should just cut the crap and bring out 1024-bit CPUs; that way they won't have to worry about upping to 128-bit CPUs however many years down the line.
    • Re:Of course... (Score:2, Insightful)

      by Kanasta ( 70274 )
      Aah, but the question is, when will mainstream PCs need more than 4GB?

      I'm seeing 256MB standard now, so I think we're still 3-5 years away...
    • by MtViewGuy ( 197597 ) on Monday February 24, 2003 @07:55AM (#5369636)
      I think Intel is currently dismissing 64-bit computing except for specialized needs because the vast majority of current mainstream software doesn't support 64-bit operations.

      But I think that will change almost overnight once operating software that supports the Athlon 64/Opteron becomes widely available. We know that Linux is being ported to run in native Athlon 64/Opteron mode as I type this; I also believe that Microsoft is working on an Athlon 64/Opteron-compatible version of Windows XP that will be available by the time the Athlon 64 is released, circa September 2003. (We won't see the production version of Windows Longhorn until at least the late spring of 2004, IMHO, well after the new AMD CPUs become widely available.)
      • Intel's problem... (Score:5, Insightful)

        by ATMAvatar ( 648864 ) on Monday February 24, 2003 @09:46AM (#5370121) Journal
        ...is that their 64-bit solution requires a completely different instruction set. It's painful to switch to an Itanium from an x86 platform. On the other hand, AMD's 64-bit solution(x86-64) should be about as painless a transition as the move from 16-bit to 32-bit processors.

        Of *course* Intel is going to argue that 64-bit isn't required for desktop computers. If users make the leap to AMD's x86-64, Intel will have to scramble to build a chip of their own to support it. Also, if you start getting $100, $200, $300 64-bit chips out there, I'm sure the server market's gonna stop and ask "why the hell are we spending $10k per Itanium?"

        Intel stands to lose if we move to 64-bit on desktops.
  • Well... (Score:5, Funny)

    by James_Duncan8181 ( 588316 ) on Monday February 24, 2003 @07:10AM (#5369499) Homepage
    ...I'm glad Intel just kept AMD afloat...
  • by funkman ( 13736 ) on Monday February 24, 2003 @07:13AM (#5369510)
    Well if there is no hardware, how can there be 64 bit apps?

    But the gaming market is going to drive this and the hardcore gamers already build their systems (with AMD?). Intel will lose nothing at first.
  • pc overhaul (Score:5, Insightful)

    by solidox ( 650158 ) on Monday February 24, 2003 @07:13AM (#5369511) Homepage
    the whole pc architecture should ideally be replaced. we're still using something designed in the 80's, with lil hacks here and there to make it work in this current day. unfortunately, it would be incredibly difficult to do, as all software and hardware would have to be remade. backward compatibility slows us down from moving forward. even if everything was replaced, how long till it would be obsolete and need a further replacement?
    • Surely a decent 64-bit CPU would kick along an x86 emulator at an acceptable rate, the same way we can emulate anything you want from the SNES or N64 just fine.

      All you need to solve is the quite abysmal video rates of things like Virtual PC.

      Basically you need a WinUAE for PCs.

      And the reason Intel are holding back is contained in the first line here. Their 64-bit chip is crap.
    • Re:pc overhaul (Score:5, Interesting)

      by Zocalo ( 252965 ) on Monday February 24, 2003 @07:33AM (#5369569) Homepage
      Replacing the PC architecture was one of the early selling points of Windows NT, wasn't it? Look at our shiny new OS - it runs on your existing Intel PCs, but when you need more power you can upgrade to more powerful systems running on DEC's Alpha CPU. Only you can't, because no one really bothered to port their applications, even when all that was required was a recompile, and so the Alpha foundered and the inferior x86 architecture marched on.

      Of course, if you want real hardware agnosticism, there is always Linux isn't there? That runs on 64 bit CPUs, in 64 bit mode right now, and should be ready to work on AMD's Hammer right from launch. The big gamble for Intel is, can it afford to be late to the party? Intel certainly seems to think so, but I think that the Hammer is going to end up on more desktops than they expect, unless AMD sets the price of entry too high.

    • Re:pc overhaul (Score:5, Insightful)

      by be-fan ( 61476 ) on Monday February 24, 2003 @07:49AM (#5369623)
      Actually, the modern PC architecture is just that: thoroughly modern.

      1) The CPU: x86? Who cares? Even the Power4 does instruction-level translation, and advances like the trace cache take decode out of the hot path. In the end, x86 is just a nice, compact, widely supported bytecode. Outside of the instruction set, PC processors are very modern: highly superscalar, highly pipelined, *very* high performance.

      2) The chipset: This isn't your ISA system anymore. CPU -> chipset and chipset -> memory interconnects will be hitting 6.4 GB/sec by the end of the year. The Athlon 64 will have an integrated memory controller, just like the UltraSPARC. I/O hangs off the PCI bus, which is not a bottleneck given current systems. And when it does become a bottleneck, solutions like HyperTransport are already ready and working. Peripherals now hang off advanced buses like USB and FireWire, while traditional I/O methods are relegated to a tiny (cheap!) Super I/O chip. ISA is finally dead (the new Dells don't ship with ISA slots). The only thing we can't seem to get rid of is the infernal 8259 interrupt controller. The I/O APIC has been around for ages now. VIA has integrated them for years. Intel is finally getting around to putting them in, but is doing a half-assed job of it. My Inspiron has an 845 chipset, which theoretically has an I/O APIC, but it seems disabled for some reason.

      3) The firmware: OSs today ignore the BIOS anyway; it's only used for booting and SMM. ACPI has replaced most of what the BIOS used to be used for. Just this month, Intel said that EFI (used in the Itanium) will finally replace the PC BIOS, and bring with it a host of new features like support for high-resolution boot modes, network drivers, advanced debugging, etc.
      • Re:pc overhaul (Score:3, Insightful)

        by Ed Avis ( 5917 )
        'x86 is a nice, compact, widely supported bytecode.'

        What are you smoking? It's widely supported, yes, and it might or might not be compact (myself, I would guess not; even RISC chips like the ARM/XScale have more compact code), but 'nice'?
    • by kahei ( 466208 ) on Monday February 24, 2003 @08:29AM (#5369750) Homepage
      the whole pc architecture should ideally be replaced. we're still using something designed in the 80's, with lil hacks here and there to make it work in this current day. unfortunately, it would be incredibly difficult to do, as all software and hardware would have to be remade. backward compatibility slows us down from moving forward. even if everything was replaced, how long till it would be obsolete and need a further replacement?


      The whole Linux architecture should ideally be replaced. We're still using something designed in the 70s, with lil hacks here and there to make it halfway usable in the current day. Unfortunately, it would be incredibly difficult to do, as the macrokernel system and crusty old ASCII-pipe-based GNU tools would have to be remade. Unix compatibility slows us down from moving forward. Even if everything was replaced, how long till RMS decided it was the work of Satan and began on a further replacement?

  • No hurry? (Score:5, Insightful)

    by turgid ( 580780 ) on Monday February 24, 2003 @07:16AM (#5369526) Journal
    They would say that there's no hurry to the 64-bit desktop because they are not in a position to provide one. They have the expensive, specialised Itanic for the high end, and HP have told them to be quiet about Yamhill, their Hammer equivalent. Apple and AMD are on to a winner. Personally, I can't wait to get a 64-bit home machine. That's why I haven't upgraded for over 3 years. Intel is advocating hacks to get around the 4GB limit just like the old LIM (Lotus Intel Microsoft) Expanded Memory boards for the old IBM PCs of yore: basically segmentation and paging. Anyone who can remember those days will concur. I'm afraid Intel will need to pull a rabbit out of its hat very soon. Expect to see Yamhill processors announced later this year (Pentiums, Xeons?, with "64-bit extensions").
  • by Anonymous Coward

    So after this, AMD is contemplating the release of Hammer and Moto/IBM/Apple are teaming up on the next-gen Macintosh. Both teams are celebrating and letting schedules slip to ensure a good product.

    15 minutes later, Intel pulls the rug out and releases a consumer-level 64-bit CPU, calling the former press release a premarketing bellwether.

  • by secondsun ( 195377 ) <secondsun@gmail.com> on Monday February 24, 2003 @07:22AM (#5369539) Journal
    Yes, but some of us would actually stand to benefit from a commodity 64-bit proc. Those of us (like my physics teacher with a PhD in biomolecular physics) do active research and number crunching on molecular designs. People such as me need the boost to video/3D modelling apps, where hitting 4GB memory limits is common. True, 64-bit solutions exist, but the problem is making them affordable. (And at $5k each, Sun workstations and SGI boxen are not affordable to the average college student.)

  • by Daengbo ( 523424 ) <daengbo@gmaLAPLACEil.com minus math_god> on Monday February 24, 2003 @07:26AM (#5369552) Homepage Journal
    Wouldn't it make more sense to put that 64-bit chip on the server, with XXGB of RAM, and push the display to the clients? X-terms, Terminal Services, whatever? Then you've got 64-bit apps on the server, 32-bit clients, and no worry about memory usage.
    • by will_die ( 586523 ) on Monday February 24, 2003 @07:37AM (#5369583) Homepage
      Except that the price for a client with HD, processor, and memory is cheap. By the time you factor in the cost of a network-capable computer vs. a dumb terminal (X-term, Terminal Services, etc.), the costs are about the same.
      So now that you have a cheap smart terminal with the capability of running its own applications, why spend large amounts of money on a huge network and backend servers?
      From a management standpoint, X-term-type machines would be great: everything stored on the servers for backup; easy management (just replace a broken one with a working one and the user is back up); and users could move around and keep all their settings. It keeps being tried every few years and keeps being rejected by corporations.
      • It keeps being tried every few years and keeps being rejected by corporations.
        These guys [k12ltsp.org] seem to be having no problem with being rejected. I put together my school's lab for about the cost of two serious desktops, networking included. In fact, Jim McQuillan [ltsp.org] seems to be making a reasonable living out of selling such systems. It all depends on where you sit, and what you need, I guess.
    • Bandwidth (Score:2, Insightful)

      by yerricde ( 125198 )

      Wouldn't it make more sense to put that 64 on the server, with XXGB of RAM, and push the display to the clients?

      Not if there's a dial-up link between the server and client.

      Not if the application is movie editing. 640x480 pixels x 24 fps x 24-bit color works out to roughly 21 MB/s, or about 170 Mbps of raw video: too big for even 100Mbps Ethernet.

  • Intel speak (Score:5, Funny)

    by Anonymous Coward on Monday February 24, 2003 @07:30AM (#5369558)
    Translation: We aren't done yet.
  • AMD investor. (Score:3, Interesting)

    by mjuarez ( 12463 ) on Monday February 24, 2003 @07:31AM (#5369563)
    Being an investor in AMD, I'm really happy about the path Intel has chosen to take. My almost 1000 shares of AMD stock will finally be above water again!!! :)

    Intel is committing hara-kiri here, in my opinion (that's suicide for honor in Japanese). Similar events come to mind, and history has proved all of them utterly wrong... (It's sad to acknowledge that I REMEMBER when some of these things happened! :(

    - Intel 286 vs 386 (IBM: A 286 is enough for most people...)
    - IBM Microchannel vs ISA (The same thing)
    - 'A good programmer should be able to do anything with 1K of memory'. I don't remember the author, but probably someone from IBM in the 60s or 70s.

    Time flies...
  • by philipsblows ( 180703 ) on Monday February 24, 2003 @07:32AM (#5369564) Homepage

    Didn't Apple manage to get their (admittedly smaller) user base to switch to a better processor?

    Intel's argument against 64-bit computing seems to be an advertisement for the x86-64 concept. The article didn't mention gaming, but surely the gamer market will be a major early-adopter base. It sounds like preemptive marketing to me.

    As for memory, the article, and presumably Intel, don't seem to account for the ever-increasing memory footprint of Microsoft's operating system (or for the GNOME stuff on our favorite OS), and so are perhaps too dismissive of the need for a >4GB desktop. As we all know all too well, one can never have too much memory or disk space, and applications and data will always expand to the limits of both.

    Personally, I'm holding off on any new hardware for my endeavors until I see what AMD releases, though I would settle for a Power5-based desktop...

    • by Anonymous Coward on Monday February 24, 2003 @07:40AM (#5369600)
      Didn't Apple manage to get their (admittedly smaller) user base to switch to a better processor?

      Yes. They did it gradually. The first PPC Macs ran a 68k emulator which provided backwards compatibility for old Mac software. Intel are trying to do the same thing; you can run IA-32 software on IA-64.

      The problem that Intel has, and that Apple didn't, is that the IA-32 mode on an Itanium is generally slower than a real IA-32 chip. Many Mac users found that their old 68k code ran just the same, or in some cases faster, on the new PPCs. Intel, then, is at a disadvantage with the IA-64, speed-wise. Why invest all that money in a new platform just to run your code slower?

      Now, this might not be such a problem if people were busy porting their stuff and tuning it for the IA-64, but Intel have two problems there. The first is the chicken and egg: no one is buying IA-64, so no one is porting their applications, so no one is buying IA-64. The other problem is technical: the EPIC (VLIW) instruction set is a nightmare to understand and code for. Only a handful of people truly understand the full IA-64 ISA, so compilers and operating systems are slow to support it. If you don't have adequate tools, how can you do the job?

      At the moment, it looks like Intel could be onto a loser with IA-64. Only time will tell.
      • by BJH ( 11355 ) on Monday February 24, 2003 @09:22AM (#5370001)

        Yes. They did it gradually. The first PPC Macs ran a 68k emulator which provided backwards compatibility for old Mac software. Intel are trying to do the same thing; you can run IA-32 software on IA-64.

        The problem that Intel has, and that Apple didn't, is that the IA-32 mode on an Itanium is generally slower than a real IA-32 chip. Many Mac users found that their old 68k code ran just the same, or in some cases faster, on the new PPCs. Intel, then, is at a disadvantage with the IA-64, speed-wise. Why invest all that money in a new platform just to run your code slower?


        Sorry, you're wrong on two points there.

        - The PPC Macs did not run an m68k 'emulator' - an opcode translator converted m68k code to PPC code. There wasn't a clearly-defined emulator (which implies an application) - certain parts of the MacOS itself at the time consisted of m68k code, which was run through the translator.

        - The first PPCs ran m68k code *slower* than the fastest m68k Macs. In particular, the 6100/60 was badly crippled by its lack of cache, and could be quite handily beaten by the faster 68040 Macs when running m68k apps.
      • Re:"The first" PPCs? (Score:3, Interesting)

        by ianscot ( 591483 )
        The first PPC Macs ran a 68k emulator which provided backwards compatability for old Mac software. Intel are trying to do the same thing...

        Those Mac emulators still work, and still run the ancient software, on a modern OS X Mac. My father has a word processor from maybe 1987 (WriteNow) that's just fine, and continues to use it for day-to-day writing. Hey, whatever makes you comfy.

        Maybe it isn't supported in some subtle ways, and I'm sure there's stuff that's broken -- even recent OS 9 games sometimes won't run in "Classic Mode" and require booting in OS 9 instead. But Apple's taken this seriously during every OS or chip migration they've ever had, and they're still keeping their eye on pre-PPC chip software.

  • Margins (Score:4, Interesting)

    by Ledskof ( 169553 ) on Monday February 24, 2003 @07:36AM (#5369581)
    Intel still wants to keep ridiculous margins for their products. AMD's approach brings everything closer together. The fastest computers are being built out of cheap consumer-level processors, so why have incredibly expensive "server" processors?

    Separation of consumer and "server" processors is just marketing, which is Intel's strongest talent (like Microsoft).
  • Would someone like to break out the sock puppets and explain what other advantages (besides the 4GB RAM ceiling) 64-bit processors will give a desktop user?
  • I agree with Intel (Score:2, Interesting)

    by Powercntrl ( 458442 )
    I am in no hurry to be the proud owner of a whole bunch of PCs that can no longer run new apps because those apps require 64-bit code.

    Before you reply with a bunch of other reasons why my PCs are becoming more obsolete with each passing day anyway, think back to the transition between the 286 and 386. The 386 could run everything a 286 could run, and it performed much better. Because of that performance gap, most applications that couldn't be run on a 286 wouldn't have run well on one anyway.

    The transition to 64-bit on the desktop isn't going to be the same. While 640K may not be enough for everybody, 4GB is certainly enough for web browsing, word processing and basic photo manipulation. I'd hate to see the horribly inefficient code that requires more than 4GB of RAM for such simple tasks.

    Realistically, the force that will cause 64-bit to be a requirement on the desktop will be the version of Windows that no longer runs on 32-bit hardware. Windows XP's minimum requirements are:


    PC with 300 megahertz (MHz) or higher processor clock speed recommended; 233-MHz minimum required;* Intel Pentium/Celeron family, AMD K6/Athlon/Duron family, or compatible processor recommended
    128 megabytes (MB) of RAM or higher recommended (64 MB minimum supported; may limit performance and some features)
    1.5 gigabyte (GB) of available hard disk space.*


    If you look at the current system requirements compared to the current top end PC hardware, it's easy to see why Intel wants to hold off on production of 64-bit processors targeted for the desktop market.
  • Object spaces (Score:5, Insightful)

    by be-fan ( 61476 ) on Monday February 24, 2003 @08:03AM (#5369660)
    64-bit CPUs are really an OS designer's wet dream. There are lots of things (bounce buffers, dynamic RAM maps, prelinking headaches) that just go away with a 64-bit address space. You can just map all RAM permanently, prelink all binaries to a unique address, and move on with your life (or lack thereof). I was thinking the other day that, with the move to database-oriented filesystems like Reiser4 and LonghornFS (for lack of a better name), the time is ripe for some of that OO research from the 80's and 90's to kick in. The gist is that instead of the basic abstraction being files with a strict naming hierarchy, the basic abstraction is a set of objects with a very flexible database index. Throw in object persistence, and you've got yourself a very elegant setup, with basically an OODBMS at the core of the system. However, straightforward (fast) implementations of the scheme blow away a 4GB address space. For something like this, you really want to be able to mmap() a 120GB hard drive and remove a whole lot of intervening hacks.
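
    To make that last point concrete, here's a minimal sketch (the path and the single-file store are purely illustrative) of mapping a huge on-disk store as one flat array, which is only comfortable with 64-bit pointers:

        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/mman.h>
        #include <sys/stat.h>

        int main(void)
        {
            int fd = open("/data/objects.db", O_RDONLY);   /* hypothetical object store */
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

            /* With 64-bit pointers this works even if the store is 120GB;
             * a 32-bit process would fail here with ENOMEM. */
            const unsigned char *base = mmap(NULL, st.st_size, PROT_READ,
                                             MAP_SHARED, fd, 0);
            if (base == MAP_FAILED) { perror("mmap"); return 1; }

            printf("first byte of the store: %u\n", base[0]);
            munmap((void *)base, st.st_size);
            close(fd);
            return 0;
        }
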
  • by g4dget ( 579145 ) on Monday February 24, 2003 @08:09AM (#5369682)
    Going from 16-bit to 32-bit address spaces changed the nature of software radically. With 16-bit address spaces, a lot of text processing had to be stream oriented. Text editors were written in a way that they would page text in and out from disk. Compilers consisted of many passes, and performing global optimization was nearly impossible. Going to 32-bit address spaces changed all that and much more.

    Intel didn't want to make the jump to 32 bits, so they introduced "segment registers". They tried to convince people that this was actually a good thing, that it would make software better. Of course, we know better: segment registers were a mess. Software is complex enough without having to deal with that. That's why we ended up with 32-bit flat address spaces.

    64 bit address spaces are as radical a change from 32 bit as 32 bit was from 16 bit. Right now, we can't reliably memory map files anymore because many files are bigger than 2 or 4 Gbytes. Kernel developers are furiously moving around chunks of address space in order to squeeze out another hundred megabytes here or there.

    With flat 64-bit address spaces, we can finally address all disk space on a machine uniformly. We can memory map files. We don't have to worry about our stack running into our heap anymore. Yes, many of those 64-bit words will only be filled "up to" 32 bits. But that's a small price to pay for a greatly simplified software architecture; it simply isn't worth repeating the mistake Intel made with the x86 series by trying to actually use segment registers. And code that actually works with a lot of data can do what we already do with 16-bit data on 32-bit processors: pack it.

    Even if having 4G of memory standard is a few years off yet, we need 64 bit address spaces. If AMD manages to release the Athlon 64 at prices comparable to 32 bit chips, they will sell like hotcakes because they are fast; but even more worrisome for Intel, an entirely new generation of software may be built on the Athlon 64, and Intel will have no chips to run it on. If AMD wins this gamble, the payoff is potentially huge.

    • by TheShadow ( 76709 ) on Monday February 24, 2003 @09:27AM (#5370024)
      Intel didn't want to make the jump to 32 bit, so they introduced "segment registers".

      Um.... no. Segment registers have been in Intel's products from the beginning (at least since the 8088). It wasn't a band-aid to stall adoption of 32-bit processors as you imply with the above comment.

      The current 32-bit processors also have segment registers and you can use them with the "flat" address space. Some OSes (like Linux) just set all the registers to the same segment and never change them. But you could have separate segments for the stack, data, code, etc.
  • big mistake IMHO (Score:5, Interesting)

    by jilles ( 20976 ) on Monday February 24, 2003 @08:11AM (#5369686) Homepage
    Intel is behaving a bit like IBM when the PC was invented. IBM had all the pieces and managed to lose their position as a market leader in no time, mostly because they didn't understand the market they were in.

    Intel currently owns the market for low-end workstations and servers. If you need a web server or a CAD station, you get a nice P4 with some memory. This is also the market where the need for 64-bit will first appear. At some point in time some people will want to put 8 GB of memory in their machine. AMD will be able to deliver that in a few months; Intel won't.

    My guess is that Intel is really not that stupid (if they are, sell your Intel shares) and has a product anyway, but wants to recover their investment in their 32-bit architecture before they introduce the 64-bit-enhanced version of their P4. The current P4 compares quite favorably to AMD's products, and AMD has had quite a bit of trouble keeping pace with Intel. AMD needs to expand their market, whereas Intel needs to focus on making as much money as they can while AMD is struggling. This allows them to do R&D, optimize their products and ensure that they have good enough yields when the market for 64-bit processors has some volume. Then suddenly you need 64-bit to read your email and surf the web, and Intel just happens to have this P5 with some 64-bit support. In the end, Intel will as usual be considered a safe choice.
  • The entire industry (Score:3, Interesting)

    by bob670 ( 645306 ) on Monday February 24, 2003 @08:22AM (#5369728)
    can't give away 32-bit processors right now, so what makes them think 64-bit CPUs will impress the marketplace? Maybe it's time to find a new way to generate sales in what is clearly a maturing market sector. Stop looking at it from the scientific standpoint that 64-bit CPUs open new doors and start looking from the consumer/purchaser perspective that money doesn't grow on trees.
  • Who cares about 4GB? (Score:3, Interesting)

    by Visaris ( 553352 ) on Monday February 24, 2003 @08:41AM (#5369791) Journal
    I keep hearing all this bs about the 4GB limit. I keep hearing how this is what 64 bits will fix. Sure you could have a larger memory with 64 address bits, but that's not all you get! In fact, that's not even half of it.

    I wrote a little library that strings together a bunch of unsigned longs. It in effect creates an X-bit system in software for doing precise addition, subtraction, etc. This library would be considerably faster if I could string 64-bit chunks together instead of 32-bit chunks. Does no one on /. ever want to do anything with large numbers? Does no one want to be accurate to more than 32 bits?
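
    For illustration only (the names are made up; this isn't the poster's actual library), a multi-word add in that style looks roughly like the sketch below. Each 32-bit "limb" carries into the next; with 64-bit limbs the loop would run half as many times for the same number of bits.

        #include <stdint.h>
        #include <stddef.h>

        /* dst = a + b, where all three are arrays of n little-endian 32-bit
         * limbs; returns the final carry out of the top limb. */
        static uint32_t bignum_add32(uint32_t *dst, const uint32_t *a,
                                     const uint32_t *b, size_t n)
        {
            uint64_t carry = 0;
            for (size_t i = 0; i < n; i++) {
                uint64_t sum = (uint64_t)a[i] + b[i] + carry;
                dst[i] = (uint32_t)sum;   /* keep the low 32 bits */
                carry  = sum >> 32;       /* the overflow feeds the next limb */
            }
            return (uint32_t)carry;
        }
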

    What about bitwise operations like XOR, NOR, and NOT? You can now perform these operations on twice as many bits in one clock cycle. I'm not really into encryption, but I think this can speed things up there.

    Many OS's (file systems) limit the size of a file to 4GB. This is WAY too small! This again stems from the use of 32-bit numbers. When the adoption of 64-bit machines is complete, this limit will be removed as well. Again, 32 bits isn't just about RAM.

    I could really go on all day. The point is this: twice the bits means twice the math getting done in the same amount of time (in some situations). So if a person were to write their code smartly to take advantage of it, you would have all-around faster code and a larger memory size. Sounds like a nice package to me.

    Really, give the 4GB limit a rest. Let's talk about some of the exciting optimizations we can do to our code to get a speed boost!
    • 64 bit doesn't give you significant performance improvements except in a few specialised areas (like crypto). The point is this: Twice the bits means twice the math getting done in the same amount of time - This is one of the stupidest comments I've heard in a while... think about it for a minute.

      And last I checked, most major x86 operating systems supported 64-bit addressing for files.

      And if you are thinking about RAM, x86 isn't limited to 4GB. It can support up to 64GB of physical RAM; Windows and Linux have both supported this for a while now... except for a few AMD chips (a number of recent AMD chips have microcode bugs which prevent you from addressing more than 4GB of RAM).

      There actually are some cool things you can do in 64bit which you can't in 32bit. You listed none of them. However, they tend to be closely tied to OS architecture, and even then few OSes take advantage of them (they aren't the kind of things you can retrofit on).

  • Ha ha ha! (Score:3, Informative)

    by Greyfox ( 87712 ) on Monday February 24, 2003 @08:47AM (#5369810) Homepage Journal
    You internet generation with your puny desktop machines and server clusters! Kids these days have no concept of the power a mainframe can bring to bear. Time and time again the desktop processor vendors promised that their next generation of chips would deliver "Mainframe power on the desktop!" And time and time again everyone bought into the hype, just to discover that those promises were sadly mistaken. What they were really promising, by the way, was that your PC would run as fast as one mainframe login session. That much at least has been delivered.

    If you need big processing, you still buy the big iron. Next time you're at the airport and the ticket agent is checking you in, sneak a peek at the logos on the terminals they're using. Oh sure they'd love to upgrade to a spiffy new-fangled GUI based dingus, just no one's figured out quite how to do that.

    When I signed on with IBM back in 1994 they were trying to replace their big iron with PCs. "By end of year 1995," they promised us, "all the mainframes will be gone and all our applications will run on Lotus Notes." Well here it is nearly a decade later and they still haven't replaced that big iron, and they'll never get rid of their RETAIN technical support database. No one can figure out how to deliver RETAIN's performance on any other platform.

    Sure, today a mainframe might consist of over a thousand high-end desktop processors working in unison, but look how many processors they had to slap in there to deliver the performance the customers expect from that big iron. And those are all wired together and working closely, unlike that (much smaller) network cluster your latest clueless technical manager just suggested.

    So what Intel is really saying here is their marketing department just realized that they will never deliver that kind of performance in a desktop or even in a 4 to 8 way "server" machine. The customers they're targeting will continue to purchase the big iron when they need that kind of processing power, and the "toy" shops are happy with the 32 bit processing power. By the way, Google essentially just built themselves a mainframe. I wonder how the cost of their solution would stack up against the biggest iron IBM currently provides...

    • by TFloore ( 27278 ) on Monday February 24, 2003 @12:48PM (#5371292)
      You're mixing up 3 classes of computing machines.

      Supercomputers are almost purely cpu number-crunching beasts. This is what you seem to think of as mainframes with "over a thousand ... processors". This is not a mainframe, this is a different category. They also generally have very high inter-cpu memory transfer rates, for handling dependent parallel computations.

      Most mainframes, like IBM's Z Series, have 24 to 36 CPUs. A mainframe is not about CPU performance, a mainframe is about data. A mainframe has system data throughput that puts almost any other system to shame. Historically, mainframes are good at supporting many simultaneously-connected users doing data queries and updates. (Yes, they run huge databases very well.)

      And then you get Beowulf clusters (your Google remark, effectively), which are really chasing the supercomputer market, and not the mainframe market. Beowulf clusters care about a limited class of supercomputer applications, they are good where you need a lot of parallel number crunching, and have very little data dependency between parallel calculations, so you don't need a lot of inter-cpu communications.

      Pick the type that's right for your job, and you'll be happy. Pick the wrong one, and you'll have nothing but problems.

      And it helps if you're stuck-up intelligently, that way people will still hate you, but won't think you're stupid any more. :)
  • by MichaelCrawford ( 610140 ) on Monday February 24, 2003 @08:49AM (#5369824) Homepage Journal
    ... then end users will soon need 5 GB of installed RAM to read their email, surf the web and edit their letters.

    As fast as the hardware engineers struggle to keep up with Moore's law, shoddy programmers backed by cheapskate management labor to set the performance gains back.

    Kids these days...

  • Whither VMware? (Score:3, Interesting)

    by 47PHA60 ( 444748 ) on Monday February 24, 2003 @08:51AM (#5369832) Journal
    Since one thing holding us up is backwards compatibility, why bother building it into the CPU at all? Partner with VMware; pay them to build a 64-bit version of the VM that will act like a 32-bit PIII or IV so people can run their apps until they're rewritten properly (or forever, if they're never rewritten). I guess first you need the 64-bit Windows to make it attractive to the corporate customer.

    With investment from Intel and Microsoft, they could release a cheap VM workstation optimized to run Windows only. They could even detect a 32-bit app starting up and shove it off to the VM, where it sounds like it might run faster. Well, easy for me to say, I guess. Make it so!

    Also, MS is buying Connectix, but their VMs are below VMware's quality, and it seems they bought it mainly for the server product. But this strategy could still work for them; build the 64-bit Windows workstation with a built-in 32-bit VM.
  • by arvindn ( 542080 ) on Monday February 24, 2003 @09:10AM (#5369931) Homepage Journal
    I've got a slightly different take on the whole thing. I agree that the 4Gig address space will start to become a bottleneck if we don't start migrating now, but I think it may have some positive effects over the long run.

    Kind of like how a speed bump on the road can sometimes have a positive effect on traffic as a whole. Consider the current state of (desktop) software: it's rarely written with efficiency as an important consideration. Often, there is not much incentive to do so: as long as it runs comfortably on decently new hardware, it's fine. As a result, people who are forced to use bottom-of-the-line hardware are screwed. (Like me. I'm running my webserver [cjb.net] on stone-age hardware, simply because I can't afford anything more.) In fact, Microsoft even goes to the extent of deliberately making its new releases require the latest hardware to force users into an upgrade cycle. This is a Bad Thing.

    Now consider the effect that the 32-bit speed bump will have. Applications like games will be affected first. Since they have to add more features without simply throwing more memory at the problem, there will be an incentive to do more efficient coding. In turn there will be pressure on underlying libraries to be more efficient. Other apps using these libs will start benefiting. There will also be more programmers catching those memory leaks which eat tons of memory, rather than postponing them to a future release. More emphasis on software engineering in general.

    The bottom line: more headaches for programmers for a couple of years, but smaller, faster, better software for a long time.

  • ZDNet (Score:5, Insightful)

    by PrimeNumber ( 136578 ) <PrimeNumber@@@excite...com> on Monday February 24, 2003 @09:41AM (#5370096) Homepage
    If Intel isn't spreading FUD about its 64 bit strategy, then this will be a turning point for AMD we will look back on in the future and say: "Wow Intel really screwed the pooch on that one".

    Fairly typical for ZDNet: Linux is either downplayed or, as is the case in this article, ignored totally:
    Currently, Itanium chips do not run regular Windows code well.
    Windows software is designed to run on 32-bit systems.
    'There hasn't been much OS support'.


    Forget the fact that Linux has been running on a variety of 64-bit chips [google.com] for years.

    Articles like these are way too biased towards the Intel/Microsoft duopoly. I say go for it Intel, AMD can produce stable quality CPUs and you and Microsoft can say to each other: "No one will ever need more than 4GB of memory." ;)
  • by ebbomega ( 410207 ) on Monday February 24, 2003 @10:27AM (#5370345) Journal
    Apple:
    - Well, now that they're most recently Going out of business [slashdot.org], in steps IBM to save the day for them... a new line of iMacs is going to do insanely well, considering it's going to be the only fully-functional line of 64-bit personal computing, because I can pretty much guarantee Apple's going to have full-fledged 64-bit standardizing before anybody else. Apple's going to have an insane surge in users, a lot of the multimedia software that's been migrating to PCs is going to be happy with the better, faster and more powerful 64-bit hardware support and go back to developing for Macs... basically, Macs regain a lot of the status they've been falling behind in quickly.

    AMD:
    - Hammer sales go up! If they're really lucky, Intel will either do a harsh (and hopefully inferior) yet still more expensive knock-off of Hammer, or they're going to release Itanium in a hurry because they realize businesses like the idea of progress so they're starting to hop over to 64-bit architectures. So AMD will reclaim its status it lost about a year and a bit ago when the P4 got the title of "Best x86 on the market". Good on them.

    Linux:
    - Business as usual. Increased PPC support. Cool new Hammer patches, as well as the usual suspects (i386 still harshly dominating)

    Microsoft:
    - Well, maybe not everybody's jumping for joy... A lot of migration to PPC. But otherwise, they're still busy saying that "The Next New Windows Will Be Secure, And This Time We Mean It!" (tm).

    That about it?
    • Well, now that they're most recently Going out of business [slashdot.org], in steps IBM to save the day for them... a new line of iMacs is going to do insanely well, considering it's going to be the only fully-functional line of 64-bit personal computing, because I can pretty much guarantee Apple's going to have full-fledged 64-bit standardizing before anybody else. Apple's going to have an insane surge in users, a lot of the multimedia software that's been migrating to PCs is going to be happy with the better, faster and more powerful 64-bit hardware support and go back to developing for Macs... basically, Macs regain a lot of the status they've been falling behind in quickly.

      I wouldn't bet the farm on this. The iMac was and is marketed at the average non-geek who couldn't care less about CPU bit width, or memory addressing, or upgradability. And it probably will still be marketed at the non-geek when they go 64-bit.

      Now the full-on tower machines, those will be the machines to get for hot 64-bit CPU sex. Not as cheap as the iMacs are, but they're a whole lot cheaper than, say, a Sun SPARC machine or other 64-bit box.
  • x86-64 (Score:3, Informative)

    by ShonFerg ( 652824 ) on Monday February 24, 2003 @10:47AM (#5370457)
    It surprises me that no one (at least at the top level) has mentioned this, but for the short term, what excites me the most about AMD's 64-bit implementation is the addition of new registers that comes with AMD finally designing the ISA themselves.

    Here are some general specs on x86-64:

    64-bit addressing
    8 Additional GPRs (for a total of 16)
    GPR width increased to 64-bits
    8 128-bit SSE registers (for a total of 16)
    64-bit instruction pointer and relative addressing
    Flat address space (code, data, stack)
    --Ace's Hardware (http://www.aceshardware.com/read_news.jsp?id=10000218)

    The fact that x86 has only had 8 General Purpose Registers has been the bane of its existence for quite a while... I think that this will be the main source of speed improvement over existing 32-bit apps when compiled for the x86-64 architecture, not the fact that the system can handle more precise numbers.

    As far as selling these things, having worked in video game retail, the consumer is already very conscious of the idea of an n-bit processor from all the old console hype, where the precision of the CPU was marketed as the primary "performance number" the way MHz is on desktop PCs.

    --Shon
  • spin (Score:3, Interesting)

    by suitti ( 447395 ) on Monday February 24, 2003 @12:51PM (#5371324) Homepage
    Intel says they're in no hurry, but they've been working on 64-bit processors for a while. The Itanium sounds like it ought to be a performer, but when they produced silicon, the benchmarks didn't show it. Sounds like spin to me.

    I'd like to see one of two systems: either provide backward compatibility, like AMD with its 64-bit extensions, or start with a clean slate and produce a performer, like Digital's Alpha.

    The advantage of a 64-bit AMD is that the most-used architecture can migrate without dropping everything. My PII can still run DOS binaries that ran on my 8088. This is a GOOD thing. Even running Linux, I don't want to recompile all my apps if I don't have to. If I didn't mind that, I might have gotten a PowerPC already.

    The advantage the Alpha has is speed, and there is only one kernel system-call interface: 64 bits. For example, there's no lseek() and lseek64() on the Alpha. (For the history buff: first there was seek() for 16 bits, then lseek() for 32 bits. We've been here before. Now we have the off_t typedef, so it should be easier to simply change it to be 64 bits... Yet some have added off64_t, in the name of backwards compatibility.)
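
    As a small illustration of that off_t point (the file name is made up, and this is just a sketch of the glibc mechanism), defining _FILE_OFFSET_BITS=64 before the headers makes off_t 64 bits even in a 32-bit build, so the ordinary open()/lseek() names handle offsets past 4GB without a separate lseek64():

        #define _FILE_OFFSET_BITS 64      /* ask glibc for a 64-bit off_t */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/tmp/bigfile", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            /* Seek to the 5GB mark; with a 32-bit off_t this offset
             * wouldn't even be representable. */
            off_t pos = lseek(fd, (off_t)5 * 1024 * 1024 * 1024, SEEK_SET);
            printf("seeked to %lld\n", (long long)pos);

            close(fd);
            return 0;
        }
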

    Itanium may have the clean break (or it may not), but where's the speed? I'm not switching without something.

    Digital's Alpha is at least the third attempt that Digital made before getting a RISC system to perform. The Power architecture is IBM's 2nd attempt. Sometimes you design it, and it just doesn't deliver. Move on!

    When one looks at Digital's switch from 16 bits (PDP-11) to 32 bits (Vax 11/780), one notes that the new machines were more expensive, and about the same performance. I'd still rather have a Vax, because there are things that you can do in 32 bits that are painful in 16 (but not many).

    It should be noted that throwing address space at a problem often slows it down. For example, Gosling's Emacs was ported from the Vax to the PDP-11. On the Vax, the file being edited was pulled into RAM completely. On the PDP, just a few blocks of your file were in RAM, in a paged manner. On the PDP, an insert (or delete) caused only the current page to be modified. If the current page filled up, it was split, and a new page was created. On the Vax, inserts tended to touch every page of the file - which could make the whole machine page. It was quite obviously faster on the PDP-11. No one cares about this example anymore - since machines have so much more RAM and speed. But throwing the address space at video editing will show how bad this idea really is. Programmed I/O is smarter than having the OS do it. The program knows what it's doing, and the OS doesn't. Eventually, machines may have enough RAM and speed that no one will care, but it won't happen here at the beginning of the curve.

    One problem that has not been solved is the memory management unit's TLB. This is the table on the chip that translates between virtual and physical memory. With 16 bits of address and 256-byte pages, only 256 entries are needed to cover the whole address space. With 32-bit addresses (and, say, 4K pages) you would need on the order of a million entries, so the page table just doesn't fit on the chip. So the TLB is a translation cache, and on a miss the translation has to be fetched from the page tables in memory (on some architectures the OS itself is called to do the fill).

    An alternative is to use extent lists. On my Linux system, the OS manages to keep my disk files completely contiguous 99.8% of the time. If this were done for RAM, then the number of segments that would be needed for a typical process would be small - possibly as few as four. One for text (instructions), one for initialized read only data, one for read/write data, BSS and the heap, and one for the stack. You'd need one for each DLL (shared library), but IMO, shared libraries are more trouble than they're worth, and ought to be abandoned. Removing any possibility of TLB misses would improve performance, and take much of the current mystery out of designing high performance software.

    For this to work, you need the hardware vendor to produce appropriate hardware, and have at least one OS support it. The risk factor seems to have prevented this from happening so far...

  • by Tim Sweeney ( 59452 ) on Monday February 24, 2003 @01:56PM (#5371889)
    Intel's claims are wholly out of touch with reality.

    On a daily basis we're running into the Windows 2GB barrier with our next-generation content development and preprocessing tools.

    If cost-effective, backwards-compatible 64-bit CPUs were available today, we'd buy them today. We need them today. It looks like we'll get them in April.

    Any claim that "4GB is enough" or that address windowing extensions are a viable solution is just plain nuts. Do people really think programmers will re-adopt early 1990's bank-swapping technology?

    Many of these upcoming Opteron motherboards have 16 DIMM slots; you can fill them with 8GB of RAM for $800 at today's pricewatch.com prices. This platform is going to be a godsend for anybody running serious workstation apps. It will beat other 64-bit workstation platforms (SPARC/PA-RISC/Itanium) in price/performance by a factor of 4X or more. The days of $4000 workstation and server CPUs are over, and those of $1000 CPUs are numbered.

    Regarding this "far off" application compatibility, we've been running the 64-bit SuSE Linux distribution on Hammer for over 3 months. We're going to ship the 64-bit version of UT2003 at or before the consumer Athlon64 launch. And our next-generation engine won't just support 64-bit, but will basically REQUIRE it on the content-authoring side.

    We tell Intel this all the time, begging and pleading for a cost-effective 64-bit desktop solution. Intel should be listening to customers and taking the leadership role on the 64-bit desktop transition, not making these ridiculous "end of the decade" statements to the press.

    If the aim of this PR strategy is to protect the non-existent market for $4000 Itaniums from the soon-to-be massive market for cost-effective desktop 64-bit, it will fail very quickly.

    -Tim Sweeney, Epic Games
  • Even without having 4GB of memory installed, it is still very useful to have a 64-bit address space. Imagine being able to mmap() your entire hard drive at once! The filesystem would simply treat the entire disk as a big data structure in virtual memory, copying when needed, instead of having to issue read and write calls to the disk. This would provide a huge performance increase.

    AGP and PCI cards, especially newer video cards, are also getting big. These need to have address space allocated to them. Even with a 64-bit PCI card, Linux still surprisingly allocates address space in 32-bit memory (the lower 4GB). If 4GB of RAM is installed, Linux must create a "hole" for PCI cards and such, as there isn't enough address space for all the RAM plus the PCI cards. This reminds me of the bad old days of ISA, where the expansion cards had to sit between 640K and 1M, creating a hole between the first 1M and all later memory. This hole still exists!

    And finally, there's lots of good reasons to have a huge address space that provides room enough for everything on the system at once. No need to decode multiple memory maps and translate between them. It would be a boon to things involving virtual memory, multiple programs, data transfer between programs, and so on.

    BTW, I use a machine at work with 4GB of memory installed. It's running Linux 2.4. Even with HIGHMEM enabled, it is still a mess, because we need that memory to be available to the kernel and PCI devices, and not just in user space. Linux is very good at doing page table tricks with PAE (Physical Address Extensions) for user programs, but this isn't true in kernel space. I'm looking forward to real 64-bit machines!
