Intel

Intel: No Rush to 64-bit Desktop 616

An anonymous reader writes "Advanced Micro Devices and Apple Computer will likely tout that they can deliver 64-bit computing to desktops this year, but Intel is in no hurry. Two of the company's top researchers said that a lack of applications, existing circumstances in the memory market, and the inherent challenges in getting the industry and consumers to migrate to new chips will likely keep Intel from coming out with a 64-bit chip--similar to those found in high-end servers and workstations--for PCs for years."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Re:Of course... (Score:3, Interesting)

    by Duds ( 100634 ) <dudley@ent e r s p a c e.org> on Monday February 24, 2003 @08:13AM (#5369516) Homepage Journal
    I did some maths.

    As a semi-future-proofing power user, I built a PC in 1998 and put in 256MB of RAM to try to keep it running as long as possible. That's price-equivalent to 2GB at today's prices.

    It's really not going to be long before the geeks feel they need to do so.
  • by BigBir3d ( 454486 ) on Monday February 24, 2003 @08:17AM (#5369530) Journal
    No need for the move from 32 to 64 yet:

    Another technique for expanding the memory capacity of current 32-bit chips is through physical memory addressing, said Dean McCarron, principal analyst of Mercury Research. This involves altering the chipset so that 32-bit chips could handle longer memory addresses. Intel has in fact already done preliminary work that would let its PC chips handle 40-bit addressing, which would let PCs hold more than 512GB of memory, according to papers published by the company.
  • by secondsun ( 195377 ) <secondsun@gmail.com> on Monday February 24, 2003 @08:22AM (#5369539) Journal
    Yes, but some of us would actually stand to benefit from a commodity 64-bit proc. Some of us (like my physics teacher, who has a PhD in biomolecular physics) do active research and number crunching on molecular designs. People such as me need the boost for video/3D modelling apps, where hitting the 4GB memory limit is common. True, 64-bit solutions exist, but the problem is making them affordable. (And at $5k each, Sun workstations and SGI boxen are not within reach of the average college student.)

  • by xyote ( 598794 ) on Monday February 24, 2003 @08:25AM (#5369551)
    That would be the MMU or virtual memory stuff. The address translation tables would be able to address more than 32 bits' worth of memory, but any program, or the kernel, would still only be able to see or address 32 bits' worth. It's like sticking two PCs next to each other: between them they can access 33 bits' worth of memory, but any one program would only see at most 32 bits.
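    For anyone keeping score, the arithmetic behind these addressing figures is easy to check. A minimal C sketch (illustrative only, not from the article; the 36-bit case corresponds to the PAE-style extension P6-class chips already ship, the 40-bit case to the preliminary work quoted above):

        #include <stdio.h>

        /* Back-of-the-envelope address-space limits (illustrative only). */
        int main(void)
        {
            unsigned long long gib = 1ULL << 30;

            printf("32-bit addressing: %llu GiB\n", (1ULL << 32) / gib); /* 4 GiB, what one program sees    */
            printf("36-bit addressing: %llu GiB\n", (1ULL << 36) / gib); /* 64 GiB of physical RAM (PAE)    */
            printf("40-bit addressing: %llu GiB\n", (1ULL << 40) / gib); /* 1024 GiB, the 40-bit case above */
            return 0;
        }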
  • by Daengbo ( 523424 ) <daengbo@gmai[ ]om ['l.c' in gap]> on Monday February 24, 2003 @08:26AM (#5369552) Homepage Journal
    Wouldn't it make more sense to put that 64-bit chip on the server, with XXGB of RAM, and push the display to the clients? X terminals, Terminal Services, whatever? Then you've got 64-bit apps on the server and 32-bit clients, and no worry about memory usage.
  • AMD investor. (Score:3, Interesting)

    by mjuarez ( 12463 ) on Monday February 24, 2003 @08:31AM (#5369563)
    Being an investor in AMD, I'm really happy about the path Intel has chosen to take. My almost 1000 shares of AMD stock will finally be above water again!!! :)

    Intel is committing hara-kiri here, in my opinion (that's Japanese for suicide for the sake of honor). Similar events come to mind, and history has proved all of them utterly wrong... (It's sad to acknowledge that I REMEMBER when some of these things happened! :(

    - Intel 286 vs 386 (IBM: A 286 is enough for most people...)
    - IBM Microchannel vs ISA (The same thing)
    - 'A good programmer should be able to do anything with 1K of memory'. I don't remember the author, but probably someone from IBM in the 60s or 70s.

    Time flies...
  • by philipsblows ( 180703 ) on Monday February 24, 2003 @08:32AM (#5369564) Homepage

    Didn't Apple manage to get their (admittedly smaller) user base to switch to a better processor?

    Intel's argument against 64-bit computing seems to be an advertisement for the x86-64 concept. The article didn't mention gaming, but surely the gamer market will be a major early-adopter base. It sounds like preemptive marketing to me.

    As for memory, the article, and presumably Intel, don't seem to account for the ever-increasing memory footprint of Microsoft's operating system (or for the GNOME stuff on our favorite OS), and so are perhaps too dismissive of the need for a >4GB desktop. As we all know all too well, one can never have too much memory or disk space, and applications and data will always grow to fill the limits of both.

    Personally, I'm holding off on any new hardware for my endeavors until I see what AMD releases, though I would settle for a Power5-based desktop...

  • Re:pc overhaul (Score:5, Interesting)

    by Zocalo ( 252965 ) on Monday February 24, 2003 @08:33AM (#5369569) Homepage
    Replacing the PC architecture was one of the early selling points of Windows NT, wasn't it? Look at our shiny new OS - it runs on your existing Intel PCs, but when you need more power you can upgrade to more powerful systems running on DEC's Alpha CPU. Only you can't, because no one really bothered to port their applications, even when all that was required was a recompile, and so the Alpha foundered and the inferior x86 architecture marched on.

    Of course, if you want real hardware agnosticism, there is always Linux isn't there? That runs on 64 bit CPUs, in 64 bit mode right now, and should be ready to work on AMD's Hammer right from launch. The big gamble for Intel is, can it afford to be late to the party? Intel certainly seems to think so, but I think that the Hammer is going to end up on more desktops than they expect, unless AMD sets the price of entry too high.

  • Margins (Score:4, Interesting)

    by Ledskof ( 169553 ) on Monday February 24, 2003 @08:36AM (#5369581)
    Intel still wants to keep ridiculous margins for their products. AMD's approach brings everything closer together. The fastest computers are being built out of cheap consumer-level processors, so why have incredibly expensive "server" processors?

    Separation of consumer and "server" processors is just marketing, which is Intel's strongest talent (like Microsoft).
  • by MtViewGuy ( 197597 ) on Monday February 24, 2003 @08:55AM (#5369636)
    I think Intel is currently dismissing 64-bit computing except for specialized needs because the vast majority of current mainstream software doesn't support 64-bit operations.

    But I think that will change almost overnight once operating software that supports the Athlon 64/Opteron becomes widely available. We know that Linux is being ported to run in native Athlon 64/Opteron mode as I type this; I also believe that Microsoft is working on an Athlon 64/Opteron-compatible version of Windows XP that will be available by the time the Athlon 64 is released, circa September 2003 (we won't see the production version of Windows Longhorn until at least the late spring of 2004, IMHO, well after the new AMD CPUs become widely available).
  • by Halo1 ( 136547 ) on Monday February 24, 2003 @08:58AM (#5369642)
    The fact that you have a 64-bit processor doesn't mean that all instructions become twice as big. For example, the 64-bit PowerPC's instructions are all 32 bits, just like those of the 32-bit PowerPCs. That's also the reason why 64-bit PPCs don't take a hit when executing 32-bit code: their (user-level) instruction set is exactly the same as that of the 32-bit PPCs; they just have some extra instructions for 64-bit-specific operations (mainly load/store and shift operations).

    In case you're wondering about constants: the PPC only supports loads of 16-bit immediate values (into either the lower or upper 16 bits of the lower 32 bits of a register), so to load a 64-bit value you may have to perform up to 5 operations (two loads, a shift and two more loads). So a PPC requires up to 64 bits of instruction stream for a 32-bit immediate load and up to 160 bits to load a 64-bit value (unless you store such a value in a memory location that can be addressed in a faster way). These are worst cases, however, and in a lot of cases one or two instructions are enough.

    The main downside of 64bit code is that all pointers become 64bit, so all pointer loads and stores indeed require twice as much storage and bandwidth.
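    To make the 5-operation sequence above concrete, here is a rough C analogue of the data flow (illustrative only; on a real PPC64 the sequence would be something like lis/ori, a 64-bit shift, then oris/ori, each a 32-bit instruction):

        #include <stdint.h>
        #include <stdio.h>

        /* Build a 64-bit constant from four 16-bit immediates, mirroring the
         * five-step load described above. Names and values are illustrative. */
        static uint64_t load64(uint16_t hh, uint16_t hl, uint16_t lh, uint16_t ll)
        {
            uint64_t r;
            r  = (uint64_t)hh << 16;  /* upper 16 bits of the high word  (lis)   */
            r |= hl;                  /* lower 16 bits of the high word  (ori)   */
            r <<= 32;                 /* move the high word into place   (shift) */
            r |= (uint64_t)lh << 16;  /* upper 16 bits of the low word   (oris)  */
            r |= ll;                  /* lower 16 bits of the low word   (ori)   */
            return r;
        }

        int main(void)
        {
            printf("%#llx\n", (unsigned long long)load64(0x0123, 0x4567, 0x89AB, 0xCDEF));
            return 0;
        }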

  • I agree with Intel (Score:2, Interesting)

    by Powercntrl ( 458442 ) on Monday February 24, 2003 @08:59AM (#5369643) Homepage
    I am in no hurry to be the proud owner of a whole bunch of PCs that can no longer run new apps because those apps require 64-bit code.

    Before you reply with a bunch of other reasons why my PCs are becoming more obsolete with each passing day anyway, think back to the transition between the 286 and 386. The 386 could run everything a 286 could run, and it performed much better. Due to the performance benefit, most applications that couldn't be run on a 286 wouldn't have run well on one anyway.

    The transition to 64-bit on the desktop isn't going to be the same. While 640k may not be enough for everybody, 4GB is certainly enough for web browsing, word processing and basic photo manipulation. I'd hate to see the horribly inefficient code that requires more than 4GB of RAM for such simple tasks.

    Realistically, the force that will cause 64-bit to be a requirement on the desktop will be the version of Windows that no longer runs on 32-bit hardware. Windows XP's minimum requirements are:


    PC with 300 megahertz (MHz) or higher processor clock speed recommended; 233-MHz minimum required;* Intel Pentium/Celeron family, AMD K6/Athlon/Duron family, or compatible processor recommended
    128 megabytes (MB) of RAM or higher recommended (64 MB minimum supported; may limit performance and some features)
    1.5 gigabyte (GB) of available hard disk space.*


    If you look at the current system requirements compared to the current top end PC hardware, it's easy to see why Intel wants to hold off on production of 64-bit processors targeted for the desktop market.
  • by MtViewGuy ( 197597 ) on Monday February 24, 2003 @09:03AM (#5369659)
    Little late asking that question.

    I've heard that Microsoft is developing an Athlon 64/Opteron-native version of Windows XP; if that is true, then gaming companies involved with PC-based games may already be creating games that run in native Athlon 64/Opteron 64-bit mode under Windows XP as I type this.
  • by g4dget ( 579145 ) on Monday February 24, 2003 @09:09AM (#5369682)
    Going from 16-bit to 32-bit address spaces changed the nature of software radically. With 16-bit address spaces, a lot of text processing had to be stream oriented. Text editors were written in a way that they would page text in and out from disk. Compilers consisted of many passes, and performing global optimization was nearly impossible. Going to 32-bit address spaces changed all that and much more.

    Intel didn't want to make the jump to 32 bits, so they introduced "segment registers". They tried to convince people that this was actually a good thing, that it would make software better. Of course, we know better: segment registers were a mess. Software is complex enough without having to deal with that. That's why we ended up with 32-bit flat address spaces.

    64 bit address spaces are as radical a change from 32 bit as 32 bit was from 16 bit. Right now, we can't reliably memory map files anymore because many files are bigger than 2 or 4 Gbytes. Kernel developers are furiously moving around chunks of address space in order to squeeze out another hundred megabytes here or there.

    With flat 64-bit address spaces, we can finally address all disk space on a machine uniformly. We can memory map files. We don't have to worry about our stack running into our heap anymore. Yes, many of those 64-bit words will only be filled "up to" 32 bits. But that's a small price to pay for a greatly simplified software architecture; it simply isn't worth repeating the same mistake Intel made with the x86 series by trying to actually use segment registers. And code that actually works with a lot of data can do what we already do with 16-bit data on 32-bit processors: pack it.

    Even if having 4G of memory standard is a few years off yet, we need 64 bit address spaces. If AMD manages to release the Athlon 64 at prices comparable to 32 bit chips, they will sell like hotcakes because they are fast; but even more worrisome for Intel, an entirely new generation of software may be built on the Athlon 64, and Intel will have no chips to run it on. If AMD wins this gamble, the payoff is potentially huge.
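    The memory-mapping pattern this comment points at looks roughly like the sketch below (POSIX mmap, error handling abbreviated). The point is exactly the one made above: in a 32-bit process, a file bigger than the free part of the 4GB address space simply can't be mapped whole, while a 64-bit address space makes whole-file mapping trivial.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Map a whole file and walk it through ordinary pointers. */
        int main(int argc, char **argv)
        {
            if (argc < 2) return 1;

            int fd = open(argv[1], O_RDONLY);
            if (fd < 0) return 1;

            struct stat st;
            if (fstat(fd, &st) < 0) return 1;

            char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED) return 1;

            long lines = 0;
            for (off_t i = 0; i < st.st_size; i++)   /* no read() loop, no buffers */
                if (p[i] == '\n') lines++;
            printf("%ld lines\n", lines);

            munmap(p, st.st_size);
            close(fd);
            return 0;
        }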

  • big mistake IMHO (Score:5, Interesting)

    by jilles ( 20976 ) on Monday February 24, 2003 @09:11AM (#5369686) Homepage
    Intel is behaving a bit like IBM when the PC was invented. IBM had all the pieces and managed to lose their position as a market leader in no time, mostly because they didn't understand the market they were in.

    Intel currently owns the market for low-end workstations and servers. If you need a web server or a CAD station, you get a nice P4 with some memory. This is also the market where the need for 64-bit will first appear. At some point some people will want to put 8GB of memory in their machine. AMD will be able to deliver that in a few months; Intel won't.

    My guess is that Intel is really not that stupid (if they are, sell your Intel shares) and has a product anyway, but wants to recover their investment in their 32-bit architecture before they introduce the 64-bit-enhanced version of their P4. The current P4 compares quite favorably to AMD's products, and AMD has had quite a bit of trouble keeping pace with Intel. AMD needs to expand their market, whereas Intel needs to focus on making as much money as they can while AMD is struggling. This allows them to do R&D, optimize their products and ensure that they have good enough yields when the market for 64-bit processors has some volume. Then suddenly you need 64-bit to read your email and surf the web, and Intel just happens to have this P5 with some 64-bit support. In the end, Intel will, as usual, be considered a safe choice.
  • The entire industry (Score:3, Interesting)

    by bob670 ( 645306 ) on Monday February 24, 2003 @09:22AM (#5369728)
    can't give away 32-bit processors right now, so what makes them think 64-bit CPUs will impress the marketplace? Maybe it's time to find a new way to generate sales in what is clearly a maturing market sector. Stop looking at it from the scientific standpoint that 64-bit CPUs open new doors, and start looking from a consumer/purchaser perspective: money doesn't grow on trees.
  • by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Monday February 24, 2003 @09:27AM (#5369740) Homepage
    I know the amount of addressable memory is quite high, but isn't all the memory currently accessed via a bus, thus sharing memory bandwidth?

    That is true, but the memory bus can be made wider, and that won't affect the addressing scheme. Take nVidia's nForce: it uses 2 DIMM slots in parallel to double the memory bandwidth (although the processor bus must be fast enough to use the bandwidth).

    The bandwidth issue scales much more easily than the fact that 32 bits is 4GB of addressable memory, no matter what. (OK, you can do an extended-memory kludge, but that's beside the point ;)
  • Re:No surprise (Score:1, Interesting)

    by Anonymous Coward on Monday February 24, 2003 @09:29AM (#5369748)

    Can someone explain to me why we don't already have 64-bit Pentiums? I may be a little ignorant, but I don't understand how the Pentium isn't a 64-bit processor already. Since MMX (then 3DNow! and SSE and SSE2) there have been a bunch of special-purpose 64-bit registers that can be accessed and utilized fairly simply. I can't imagine it'd be a huge leap to allow those 64-bit registers to address memory on the bus. What exactly is a "true" 64-bit processor going to give us that we didn't already have in our MMX registers?

  • Who cares about 4GB? (Score:3, Interesting)

    by Visaris ( 553352 ) on Monday February 24, 2003 @09:41AM (#5369791) Journal
    I keep hearing all this bs about the 4GB limit. I keep hearing how this is what 64 bits will fix. Sure you could have a larger memory with 64 address bits, but that's not all you get! In fact, that's not even half of it.

    I wrote a little library that strings together a bunch of unsigned longs. It in effect creates an X-bit system in software for doing precise addition, subtraction, etc. This library would be considerably faster if I could string 64 bit chunks together instead of 32 bit chunks. Does no one on /. ever want to do anything with large numbers? Does no one want to be accurate to more than 32 bits?

    What about bitwise operations like XOR, NOR, and NOT? You can now perform these operations on twice as many bits in one clock cycle. I'm not really into encryption, but I think this can speed things up there.

    Many OSes (file systems) limit the size of a file to 4GB. This is WAY too small! This again stems from the use of 32-bit numbers. When the adoption of 64-bit machines is complete, this limit will be removed as well. Again, 32 bits isn't just about RAM.

    I could really go on all day. The point is this: Twice the bits means twice the math getting done in the same amount of time (in some situations). So if a person writes their code smartly to take advantage of it, you get all-around faster code and a larger memory size. Sounds like a nice package to me.

    Really, give the 4GB limit a rest. Let's talk about some of the exciting optimizations we can do to our code to get a speed boost!
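    As a sketch of the "string of unsigned longs" idea described above (illustrative only, not the poster's actual library), multi-word addition with carry propagation looks like this in C; with 64-bit limbs, each loop iteration would move twice as many bits:

        #include <stdint.h>
        #include <stdio.h>

        /* Add two n-limb numbers stored least-significant-limb first.
         * 32-bit limbs here; a 64-bit processor could use 64-bit limbs
         * and halve the number of iterations. Illustrative only. */
        static void add_n(uint32_t *r, const uint32_t *a, const uint32_t *b, int n)
        {
            uint32_t carry = 0;
            for (int i = 0; i < n; i++) {
                uint64_t s = (uint64_t)a[i] + b[i] + carry;
                r[i]  = (uint32_t)s;
                carry = (uint32_t)(s >> 32);
            }
        }

        int main(void)
        {
            uint32_t a[2] = { 0xFFFFFFFFu, 1 };   /* 0x1FFFFFFFF */
            uint32_t b[2] = { 1, 0 };             /* 1           */
            uint32_t r[2];
            add_n(r, a, b, 2);
            printf("%08x%08x\n", r[1], r[0]);     /* prints 0000000200000000 */
            return 0;
        }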
  • by Anonymous Coward on Monday February 24, 2003 @09:48AM (#5369817)
    Yes, basically. In order to stop a badly behaved user process from crashing the computer, the OS divides the available virtual address space into two sections. One is used by the kernel, the other is used by user-space processes. The entire kernel address space can be marked as not readable and not writable for a user-space process. If a user-space process attempts to read or write the kernel memory space, the kernel can trap the access and kill the process.

    However, you only need to do this if you're making a distinction between kernel space and user space. If you're running a specialty or ad-hoc application, you might not care too much about protecting kernel memory, so you can just map all of the available virtual address space into one big 4GB chunk and let the kernel and the user-space process(es) have full access to each other's memory.
  • by MichaelCrawford ( 610140 ) on Monday February 24, 2003 @09:49AM (#5369824) Homepage Journal
    ... then end users will soon need 5 GB of installed RAM to read their email, surf the web and edit their letters.

    As fast as the hardware engineers struggle to keep up with Moore's law, shoddy programmers backed by cheapskate management labor to set the performance gains back.

    Kids these days...

  • Whither VMware? (Score:3, Interesting)

    by 47PHA60 ( 444748 ) on Monday February 24, 2003 @09:51AM (#5369832) Journal
    Since one thing holding us up is backwards compatibility, why bother building it into the CPU at all? Partner with VMware; pay them to build a 64-bit version of the VM that will act like a 32-bit PIII or IV so people can run their apps until they're rewritten properly (or forever, if they're never rewritten). I guess first you need the 64-bit Windows to make it attractive to the corporate customer.

    With investment from Intel and Microsoft, they could release a cheap VM workstation optimized to run Windows only. They could even detect a 32-bit app starting up and shove it off to the VM, where it sounds like it might run faster. Well, easy for me to say, I guess. Make it so!

    Also, MS is buying Connectix, but their VMs are below VMware's quality, and it seems they bought it mainly for the server product. But this strategy could still work for them; build the 64-bit Windows workstation with a built in 32-bit VM.
  • by fitten ( 521191 ) on Monday February 24, 2003 @09:54AM (#5369845)
    The G3 and G4 are 32-bit processors, as are the 603 and the 604. The 620 was supposed to be 64-bit, but that never left the ground. IBM has been using a 64-bit Power chip for quite some time, and IBM is getting ready to release the first 64-bit Power CPU for consumer use this year.

    And, as others have stated, whether a CPU is 32-bit or 64-bit has nothing to do with whether it is classified as a "RISC" or a "CISC" processor. Also, make sure you know what the real differences are between what people commonly call "RISC" and "CISC"; it has extremely little to do with anything being "reduced" in terms of count. Don't believe me? Go count the number of instruction opcodes for the G4 and the current x86 ISA and compare.
  • by Futurian ( 152084 ) on Monday February 24, 2003 @09:58AM (#5369867)
    Bill Gates claims that he never said 640K was enough memory. His denial appeared in an interview in the New York Review of Books. In fact, he says that he believed the opposite. (The slashdot audience can decide on his veracity.) Below is a quote from the article "He's Got Mail" by James Fallows:

    One quote from Gates became infamous as a symbol of the company's arrogant attitude about such limits. It concerned how much memory, measured in kilobytes or "K," should be built into a personal computer. Gates is supposed to have said, "640K should be enough for anyone." The remark became the industry's equivalent of "Let them eat cake" because it seemed to combine lordly condescension with a lack of interest in operational details. After all, today's ordinary home computers have one hundred times as much memory as the industry's leader was calling "enough."

    It appears that it was Marie Thérèse, not Marie Antoinette, who greeted news that the people lacked bread with qu'ils mangent de la brioche. (The phrase was cited in Rousseau's Confessions, published when Marie Antoinette was thirteen years old and still living in Austria.) And it now appears that Bill Gates never said anything about getting along with 640K. One Sunday afternoon I asked a friend in Seattle who knows Gates whether the quote was accurate or apocryphal. Late that night, to my amazement, I found a long e-mail from Gates in my inbox, laying out painstakingly the reasons why he had always believed the opposite of what the notorious quote implied. His main point was that the 640K limit in early PCs was imposed by the design of processing chips, not Gates's software, and he'd been pushing to raise the limit as hard and as often as he could. Yet despite Gates's convincing denial, the quote is unlikely to die. It's too convenient an expression of the computer industry's sense that no one can be sure what will happen next.

    Click here [nybooks.com] to read the full article.
  • by Moutane ( 651836 ) <moutane AT rstack DOT org> on Monday February 24, 2003 @10:03AM (#5369897)
    IMHO, Intel just doesn't want to make the same error they did with the Pentium 4, i.e. release a processor with an extended instruction set when no application has been built to use it. That's what allowed AMD to grow in the market.
    Furthermore, Intel's Itanium has very poor compatibility with 32-bit applications, whereas AMD's Athlon 64 supports them natively. So releasing the Itanium too early would once again mean poor performance compared to AMD, and potentially reproduce the P4 problem.
  • Truth or Denial? (Score:2, Interesting)

    by ackthpt ( 218170 ) on Monday February 24, 2003 @10:11AM (#5369941) Homepage Journal
    No need for the move from 32 to 64 yet:

    Another technique for expanding the memory capacity of current 32-bit chips is through physical memory addressing, said Dean McCarron, principal analyst of Mercury Research. This involves altering the chipset so that 32-bit chips could handle longer memory addresses. Intel has in fact already done preliminary work that would let its PC chips handle 40-bit addressing, which would let PCs hold more than 512GB of memory, according to papers published by the company.

    I dunno about them, but my 32-bit system already has 768MB. 40-bit addressing would have the interesting effect of needing memory manufacturers to buy into a different addressing standard, which, as you can well imagine, they'll be slow to do, even with Intel pitching it. Also keep in mind that AMD could follow suit with their 32-bit line. This doesn't strike me as a very realistic direction to go.

    Intel still has some mileage in the P4, throwing more cache at it, etc., but 64 bits is something computer techies understand, and once 64-bit PCs start rolling out, everything else will seem second best, particularly if AMD plays their advertising cards right.

    Oh, and the 'no need' argument has never flown. I've been hearing it for decades. If anyone had actually listened to it, we'd still be on PC-ATs with VGA.

  • by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Monday February 24, 2003 @10:18AM (#5369974) Homepage
    All modern processors - heck, all processors from 25 years ago - can handle 64-bit integers. But only a 64-bit processor can perform arithmetic on them in a single instruction. Otherwise, you have to use the add-with-carry instruction (and its friends) quite a few times.
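    A concrete way to see what the parent means (a sketch, nothing more): compile the one-line function below for a 32-bit x86 target and the addition turns into an add/adc (add-with-carry) pair; on a 64-bit processor it is a single add.

        #include <stdint.h>
        #include <stdio.h>

        /* One 64-bit addition: two instructions on 32-bit x86, one on a 64-bit chip. */
        uint64_t add64(uint64_t a, uint64_t b)
        {
            return a + b;
        }

        int main(void)
        {
            printf("%llu\n", (unsigned long long)add64(4000000000ULL, 4000000000ULL));
            return 0;
        }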
  • by BJH ( 11355 ) on Monday February 24, 2003 @10:22AM (#5370001)

    Yes. They did it gradually. The first PPC Macs ran a 68k emulator which provided backwards compatibility for old Mac software. Intel are trying to do the same thing; you can run IA-32 software on IA-64.

    The problem that Intel has, and that Apple didn't, is that the IA-32 mode on an Itanium is generally slower than a real IA-32 chip. Many Mac users found that their old 68k code ran just the same, or in some cases faster, on the new PPCs. Intel, then, is at a disadvantage with the IA-64, speed-wise. Why invest all that money in a new platform just to run your code slower?


    Sorry, you're wrong on two points there.

    - The PPC Macs did not run an m68k 'emulator' - an opcode translator converted m68k code to PPC code. There wasn't a clearly-defined emulator (which implies an application); certain parts of the MacOS itself at the time consisted of m68k code, which was run through the translator.

    - The first PPCs ran m68k code *slower* than the fastest m68k Macs. In particular, the 6100/60 was badly crippled by its lack of cache, and could be quite handily beaten by the faster 68040 Macs when running m68k apps.
  • Re:"The first" PPCs? (Score:3, Interesting)

    by ianscot ( 591483 ) on Monday February 24, 2003 @10:23AM (#5370006)
    The first PPC Macs ran a 68k emulator which provided backwards compatibility for old Mac software. Intel are trying to do the same thing...

    Those Mac emulators still work, and still run the ancient software, on a modern OS X Mac. My father has a word processor from maybe 1987 (WriteNow) that's just fine, and continues to use it for day-to-day writing. Hey, whatever makes you comfy.

    Maybe it isn't supported in some subtle ways, and I'm sure there's stuff that's broken -- even recent OS 9 games sometimes won't run in "Classic Mode" and require booting in OS 9 instead. But Apple's taken this seriously during every OS or chip migration they've ever had, and they're still keeping their eye on pre-PPC chip software.

  • by jpmorgan ( 517966 ) on Monday February 24, 2003 @10:38AM (#5370080) Homepage
    64-bit doesn't give you significant performance improvements except in a few specialised areas (like crypto). "The point is this: Twice the bits means twice the math getting done in the same amount of time" - this is one of the stupidest comments I've heard in a while... think about it for a minute.

    And last I checked, most major x86 operating systems support 64-bit addressing for files.

    And if you are thinking about RAM, x86 isn't limited to 4GB. It can support up to 64GB of physical RAM; Windows and Linux have both supported this for a while now... except for a few AMD chips (a number of recent AMD chips have microcode bugs which prevent you from addressing more than 4GB of RAM).

    There actually are some cool things you can do in 64-bit which you can't in 32-bit. You listed none of them. However, they tend to be closely tied to OS architecture, and even then few OSes take advantage of them (they aren't the kind of things you can retrofit).

  • Re:Object spaces (Score:4, Interesting)

    by be-fan ( 61476 ) on Monday February 24, 2003 @10:40AM (#5370095)
    Memory mapping a hard drive won't make it faster to access, I agree. But simplifying parts of the code is a very big win. By memory mapping the HD, you can just let the page cache handle the I/O.
  • Re:bah! (Score:3, Interesting)

    by DNS-and-BIND ( 461968 ) on Monday February 24, 2003 @11:05AM (#5370222) Homepage
    Video editing is a specialized enterprise. Not anything close to Joe User. Don't get me wrong, I think that 64-bit applications are great. But I remember a few years back when my company ported all its apps from 32-bit Solaris to 64-bit Solaris. There wasn't much performance benefit, if any. And of course, only the PC platform is susceptible to the ridiculous 4GB memory limitation.
  • by Junks Jerzey ( 54586 ) on Monday February 24, 2003 @11:06AM (#5370225)
    Right now 4 GB of memory might be enough. But switching to 64 bit when we are already hitting the wall is not an option. The point with going to 64 bits now is that we can add memory past 4 GB without the headaches of moving to a new platform, since the transition is already done.

    And this is great... if you're doing mainframe-style computing and price is no object. Back in the day, given infinite funds, you could have purchased an Apple II or a VAX 11/780. The former, even with its 64K of memory, let you do about 80% of what you'd want to use the VAX for, and it was a lot easier to maintain, lower-powered, and fit on your desk.

    Now we have a similar situation. 64-bit is "better," but in a loose "for maybe 5% of all computing tasks" kind of way. That's not a compelling reason to switch all desktop PCs over to 64-bit processors. If Intel--or any other company--tries to do that, then I'll just wait until the lower-end mobile processor makers improve enough that I can avoid the bloated desktop market altogether.
  • by Bishop ( 4500 ) on Monday February 24, 2003 @11:06AM (#5370226)
    The 32-bit addressing of the 386 was put to serious use long before Windows 95. Early Sun workstations were 386s. OS/2 and WinNT 3.51 both benefited from a 32-bit address space. Quarterdeck's DESQview and QEMM386 required 386s. Even under MS-DOS there was that ugly task switcher that required 386s. And don't forget the games that loaded the DOS extenders. The 32-bit addressing of the 386 was required for both office and home applications long before Windows 95.

    Let's not forget the excellent Motorola 68K chips either. The 32-bit-addressing 68020 was introduced in 1984. It was used in many *nix workstations.

    In 1985 Intel said the same thing they are saying now: This new CPU is for servers, you don't need it in workstations. They were wrong then. They are wrong now.
  • by ebbomega ( 410207 ) on Monday February 24, 2003 @11:27AM (#5370345) Journal
    Apple:
    - Well, now that they're most recently Going out of business [slashdot.org], in steps IBM to save the day for them... a new line of iMacs is going to do insanely well, considering it's going to be the only fully functional line of 64-bit personal computing, because I can pretty much guarantee Apple's going to have full-fledged 64-bit standardization before anybody else. Apple's going to have an insane surge in users, a lot of the multimedia software that's been migrating to PCs is going to be happy with the better, faster and more powerful 64-bit hardware support and go back to developing for Macs... basically, Macs regain a lot of the status they've been losing so quickly.

    AMD:
    - Hammer sales go up! If they're really lucky, Intel will either do a harsh (and hopefully inferior) yet still more expensive knock-off of Hammer, or they're going to release Itanium in a hurry because they realize businesses like the idea of progress, so they're starting to hop over to 64-bit architectures. So AMD will reclaim the status it lost about a year and a bit ago when the P4 got the title of "Best x86 on the market". Good on them.

    Linux:
    - Business as usual. Increased PPC support. Cool new Hammer patches, as well as the usual suspects (i386 still harshly dominating)

    Microsoft:
    - Well, maybe not everybody's jumping for joy... A lot of migration to PPC. But otherwise, they're still busy saying that "The Next New Windows Will Be Secure, And This Time We Mean It!" (tm).

    That about it?
  • Re:No hurry? (Score:3, Interesting)

    by turgid ( 580780 ) on Monday February 24, 2003 @11:39AM (#5370415) Journal
    From what a little bird has told me, rumours of Yamhill's demise have been greatly exaggerated to keep HP happy, since its strategy is Itanium. But that's just what a little bird told me, not gospel.
  • Re:RISC vs. CISC (Score:2, Interesting)

    by fitten ( 521191 ) on Monday February 24, 2003 @11:53AM (#5370507)
    However.... these days, there is little that is "reduced", certainly not the count of legal operands, between processors touted as RISC vs. those touted as CISC (go count the G4 ISA opcodes, then count the P4 ISA opcodes).
  • Re:bah! (Score:3, Interesting)

    by thatguywhoiam ( 524290 ) on Monday February 24, 2003 @11:55AM (#5370515)
    Video editing is a specialized enterprise.

    Was a specialized enterprise. Not anymore; witness iMovie or Final Cut Express.

    I am still stunned by this. I remember building and demo'ing Media 100 systems in 1997; you needed at least $20k for something reasonable (i.e. Big Mac w/gobs of RAM, SCSI arrays, specialized PCI board and breakout box, industrial VTR, preview monitor, time-base corrector...) and that didn't get you fancy realtime effects.

    A $1500 iMac just spanks the crap out of this system I used to sell, requires no extra hardware (firewire is beautiful), and the quality is superior.

    So, past tense.

    Now, back on topic: accessing 4GB of memory is very desirable in this situation; 4GB of DV footage is measured in minutes. It would be nice to manipulate more than minutes in RAM, no? (Also, RAM Preview in After Effects would be really sweet.)

  • Re:Of course... (Score:3, Interesting)

    by Jeppe Salvesen ( 101622 ) on Monday February 24, 2003 @01:24PM (#5371091)
    Interestingly, 300-400MHz is still relatively OK as long as you have enough RAM (what is that - 4-year-old technology?), a fast enough disk system, and you stay away from gaming. I bet 3GHz will last you even longer, given enough RAM.
  • spin (Score:3, Interesting)

    by suitti ( 447395 ) on Monday February 24, 2003 @01:51PM (#5371324) Homepage
    Intel says they're in no hurry, but they've been working on 64-bit processors for a while. The Itanium sounds like it ought to be a performer, but when they produce silicon, the benchmarks haven't shown it. Sounds like spin to me.

    I'd like to see one of two systems: either provide backward compatibility - like AMD with its 64-bit extensions - or start with a clean slate and produce a performer - like Digital's Alpha.

    The advantage of a 64-bit AMD is that the most-used architecture can migrate without dropping everything. My PII can still run DOS binaries that ran on my 8088. This is a GOOD thing. Even running Linux, I don't want to recompile all my apps if I don't have to. If that were the case, I might have gotten a PowerPC already.

    The advantage that the Alpha has is speed, and there is only one kernel system-call interface - 64 bits. For example, there's no lseek() and lseek64() on the Alpha. (For the history buff: first there was seek() for 16 bits, then lseek() for 32 bits. We've been here before. Now we have the off_t typedef, so it should be easier to simply change it to be 64 bits... yet some have added off64_t, in the name of backwards compatibility.) A small sketch of this off_t dance appears after this comment.

    Itanium may have the clean break (or it may not), but where's the speed? I'm not switching without something.

    Digital's Alpha is at least the third attempt that Digital made before getting a RISC system to perform. The Power architecture is IBM's 2nd attempt. Sometimes you design it, and it just doesn't deliver. Move on!

    When one looks at Digital's switch from 16 bits (PDP-11) to 32 bits (Vax 11/780), one notes that the new machines were more expensive, and about the same performance. I'd still rather have a Vax, because there are things that you can do in 32 bits that are painful in 16 (but not many).

    It should be noted that throwing the address space at problems often slows them down. For example, Gosling's Emacs was ported from the Vax to the PDP-11. On the Vax, the file being edited was read into RAM completely. On the PDP, just a few blocks of your file were kept in RAM, in a paged manner. On the PDP, an insert (or delete) caused only the current page to be modified. If the current page filled up, it was split, and a new page was created. On the Vax, inserts tended to touch every page of the file - which could make the whole machine page. It was quite obviously faster on the PDP-11. No one cares about this example anymore - since machines have so much more RAM and speed. But throwing the address space at video editing will show how bad this idea really is. Programmed I/O is smarter than having the OS do it: the program knows what it's doing, and the OS doesn't. Eventually, machines may have enough RAM and speed that no one will care, but it won't happen here at the beginning of the curve.

    One problem that has not been solved is the memory management unit's TLB. This is the table on the chip that translates between virtual and physical memory. With 16 bits of address, 256-byte pages required only 256 entries to cover the whole address space. For 32-bit processors, the page table just doesn't fit on the chip. So the TLB is a translation cache, and on a cache miss, the OS must be called to fill it.

    An alternative is to use extent lists. On my Linux system, the OS manages to keep my disk files completely contiguous 99.8% of the time. If this were done for RAM, then the number of segments needed for a typical process would be small - possibly as few as four: one for text (instructions), one for initialized read-only data, one for read/write data, BSS and the heap, and one for the stack. You'd need one for each DLL (shared library), but IMO shared libraries are more trouble than they're worth and ought to be abandoned. Removing any possibility of TLB misses would improve performance and take much of the current mystery out of designing high-performance software.

    For this to work, you need the hardware vendor to produce appropriate hardware, and have at least one OS support it. The risk factor seems to have prevented this from happening so far...
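    For readers who haven't run into the off_t history mentioned above, here is a minimal sketch of how the 64-bit file-offset situation is usually handled on 32-bit Linux/glibc builds (the file name is made up; illustrative only, not a claim about the Alpha's ABI):

        /* With _FILE_OFFSET_BITS=64 (a glibc convention), plain off_t and
         * lseek() carry 64-bit offsets even in a 32-bit build, so the
         * parallel lseek64()/off64_t names become unnecessary. */
        #define _FILE_OFFSET_BITS 64

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("sparse.dat", O_RDWR | O_CREAT, 0644);  /* hypothetical file */
            if (fd < 0) return 1;

            off_t five_gb = (off_t)5 * 1024 * 1024 * 1024;        /* > 4GB, needs 64-bit off_t */
            if (lseek(fd, five_gb, SEEK_SET) == (off_t)-1) {
                perror("lseek");
                return 1;
            }
            write(fd, "x", 1);   /* creates a sparse ~5GB file on most filesystems */
            close(fd);

            printf("sizeof(off_t) = %zu\n", sizeof(off_t));
            return 0;
        }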

  • by default luser ( 529332 ) on Monday February 24, 2003 @01:54PM (#5371344) Journal
    The 286 brought us 24-bit addressing (16MB).

    It took most desktop users a decade and the 486 to practically push this barrier. By that time, two generations of 32-bit capable chips had been introduced to the marketplace.

    If one puts this into perspective, then Intel may be quite correct that 64-bit will not make an impression on the desktop until nearly 2010, and that even waiting a few years to introduce 64-bit desktop solutions will not be too late. It may not be IA-64 that ends up on the desktop, but that doesn't change the timeline.

    Your average 286 buyer in the mid-80s had 1MB of RAM, or 1/16 of the maximum. Even though desktop 32-bit chips weren't available (the 386 was server-targeted at the time) when it was purchased, it was probably replaced with a 386 or 486 machine well before its RAM was upgraded to the maximum.

    Your average user now has around 256MB of RAM, or 1/16 of the maximum. Most likely, even with 64-bit desktop chips not released for a few more years, we will still have a couple of product generations before everyone needs 64-bit capability.
  • by Anonymous Coward on Monday February 24, 2003 @04:05PM (#5372504)
    One hard performance problem on the Itanium is compilation, since there is very little dynamic scheduling going on in the chip. This may indicate a good general VLIW compilation strategy or it may mean that the benchmarks are a sort of special case that is amenable to the VLIW optimizations used in the compiler (probably a bit of both).
  • by EuroChild ( 523969 ) on Tuesday February 25, 2003 @02:40AM (#5377045)
    That ought to be enough for anyone

    You should be careful when saying stuff like that. I dug up an '80s electronics magazine selling computers with "16k of RAM - All the RAM you'll ever need!"

    meep

  • by Anonymous Coward on Wednesday February 26, 2003 @09:15AM (#5385825)
    I've heard people say that 64-bit computing isn't necessary, or that the consumer doesn't need it. As a developer, all I can say to that is their opinion seems misinformed. 64-bit obviously allows greater addresses and more parallel data processing, as well as a host of other features. It's just a natural progression, and it's inevitable.

    As for the whole Itanium vs. Opteron/Athlon 64 thing: well, it kind of does look like AMD just made some modifications to the x86 Athlon and turned it into an Athlon 64. That is, it's an evolution and not a revolution. Itanium, on the other hand, is a completely different architecture.

    I guess you can't blame Intel for not implementing the Itanium in the consumer market, since that's not what it was designed for and it would probably produce very little profit for all the money they put into R&D for the thing.

    It looks like Intel just looked at their market and said, "Ok, we're entering the high-end server space of the whole market." AMD, on the other hand, seems to have looked at their market and said, "Ok, Intel is pouring resources into this one concentrated market, and we can take advantage of it. We're going to take a smaller step in technology and spread it out among a much larger market: desktop, workstation & server."

    AMD's logic makes more sense in my opinion. It might not be revolutionary, and it might "enhance" an already disliked instruction set, the x86. However, as markets overlap and merge more and more (i.e. workstations and desktops), this would be the optimal solution.

    Itanium could quite possibly win in the server sector, but it's very expensive, and one of the biggest hurdles is that software needs to be recompiled for it with an EPIC-optimized compiler. x86-64, if it comes out on time and is what it's supposed to be, should be a very tough competitor to the Pentium 4 in the desktop market, assuming developers start recompiling their apps for x86-64. Kudos to Tim Sweeney & Epic Games for developing a major product with a branch geared towards this new technology. They're basically watering the x86-64 plant.

    I'm not very informed when it comes to the server space, but my guess would be that it would come down to the form of software used on servers and what percentage of the market could use plain old x86/x86-64 based software for their solution. I mean the question going through my head is: Would I rather use one box with two Itanium processors, or would I rather use two boxes with four Opteron processors in each of them, and have the ability to run x86 code optimally?

    I hate to sound clichéd, but it basically comes down to the form of software used. It also comes down to the market segments and their changing cost-effective applications.
  • by BruceShankle ( 653745 ) on Wednesday February 26, 2003 @09:55PM (#5392124) Homepage
    I tend to agree with Tim, but for different reasons. I say we need 64-bit to save more lives! When we [datascoutsoftware.com] conduct studies of various pharmaceutical compounds, we end up with several gigabytes of data which we'd love to have all in memory at once to speed up our analysis process. We basically end up having to keep the data on hard drives and sort through it piecemeal. Unfortunately, our customers are just not gonna spend bazillions of bucks on expensive 64-bit equipment from Sun et al. because it is possible to do the work with kludgy 32-bit techniques. So, in essence, I could make a case for cheap 64-bit making new (better, more useful) compounds available to doctors and pharmacies, and ultimately making for a healthier world. So, I'll assert that we need cheap 64-bit now.

"If you want to know what happens to you when you die, go look at some dead stuff." -- Dave Enyeart

Working...