
IBM Itanium Based Systems and Linux 125
ErrantKbd writes "An
article at Infoworld discusses IBM's plans to release Itanium-based systems sometime in the January/February timespan. They will be building systems running Windows, of course, but also ready-made servers running RedHat, Caldera, TurboLinux, and SuSE. Should be pretty sweet provided everything goes smoothly with the 64-bit processor. Note: there is an error in the article; a 64-bit system can directly address approximately 1 billion times more than the article suggests." Those'll be some helluva desktop boxes.
Re:addressable memory (Score:1)
That's great and all, *but*... (Score:1)
It seems like having a very fast processor with current PC hardware is like putting a 1,000-horsepower engine in a Ford Pinto.
The I/O between devices needs a lot more work than the processor does.
They were right... (Score:2)
IBM? Linux?! (Score:1)
---
Re:Just what Linux and FreeBSD has been needing... (Score:2)
based, is that their hardware is rock solid in comparison
yeah, and swapping to a new CPU is going to change that? Face it, Intel and its cronies just want to sell commodity hardware at enterprise prices. As long as they continue to do so, they will not unseat Sun - unless Sun decides to try to do the same thing (hey, why did that plastic faceplate just snap off of my brand new $20k Sun E250?)
Re:Is 64 bit addressing practical? (Score:1)
steve
Re:The really interesting part... (Score:2)
The Itanium, on the other hand, will run 32-bit software like a one-legged garden slug; it will debut no higher than 800 MHz, and clock-for-clock will be terrible on 32-bit code (as in, much worse than any other Intel chip currently on the market). But if you must have a 64-bit chip now (for values of now equal to early next year), it's the only x86-ish game in town.
(Though given its performance shortfalls, that it will be a brand new chip -- with all the baggage that carries -- and the expense, I'm not sure why anyone who needs 64-bit now wouldn't go buy something from one of the big-box vendors...)
Don't forget the Alpha! (Score:1)
Re:Why 64 bits (Score:1)
Am I the only one... (Score:1)
I'm surely not the only one with a nice 64-bit CPU in my current computer (mind you: not an Intel one :) Alpha, G3/G4,... we entered the 64-bit scene a loooong time ago.
Okay... I'll do the stupid things first, then you shy people follow.
Re:Debunking 64 bit (Score:1)
Having a 64-bit register doesn't necessarily mean you must work with 64-bit types. You could also operate on eight 8-bit values at once, and see a commensurate speed gain over a narrower system. (remember rep stosb vs. rep stosd, assembly coders?)
This kind of thing already happens on today's systems, thanks to the hordes of Vector Instruction Sets With Silly Names™. In some ways, you could get away with calling a Pentium III with SSE a "128-bit" processor...
Re:Don't forget the Alpha! (Score:1)
Re:size_t (Score:2)
Basically C is full of assumptions that an integer can store the difference between pointers. You can change all the arguments that you know are "sizes" to size_t, but you will eventually find code that takes these values and passes them to functions (like math functions) where it is perfectly legitimate to pass an integer, and you don't want to change those to size_t, so you end up with impossible-to-remove type conflicts. size_t also causes all kinds of portability problems when moving between platforms that make it the same as or different from int, or that don't define it at all; for instance, I have to type in a prototype for the missing snprintf function a lot, and it is different on every machine.
The problem, of course, is the huge amount of code that assumes int == 32 bits. C should have defined some syntax to say exactly how many bits a variable has, perhaps "int var:32", much like a bitfield (the compiler would not be required to support all possible sizes, only 8, 16, 32, and sizeof(int)*8; it could round smaller sizes up and produce an error if something larger than the largest were requested).
Unfortunately that did not happen, and we are in the mess we are in now, with all these typedefs and the inability to do clean pointer arithmetic.
Re:addressable memory (Score:1)
64 bit is old news already (Score:2)
It really drives me nuts to see people screaming about how hot the "new" 64-bit Itanium is. Like it's never been done before.
The Alpha processors have been 64-bit for a long time already. I went through college thinking 64-bit was perfectly standard because we were using an Alpha. Then I graduated a few years back and found that the rest of the world was still stuck at 32 bits, waiting breathlessly for the Itanium.
I've been running 64-bit apps under a 64-bit OS on a 64-bit chip for quite a while (recent Solaris on a V9 UltraSPARC cpu).
Re:Two points (Score:1)
Re:Two points (Score:2)
OS/2 5.0 also has a morass of 16-bit code in system areas, still left over from OS/2 1.3. And there is a lot more Windows for Workgroups 3.11 code and architecture in Windows Me than there is OS/2 1.3 code and architecture in OS/2 5.0.
Re:Two points (Score:2)
Well, the 486SX wasn't supposed to exist. SXs were merely DXs whose FPUs failed in testing, and were shipped with the FPU disabled.
why a 64bit VM space is useful (Score:3)
Free information ;-) (Score:2)
For Clawhammer/Sledgehammer, you can run legacy 16- and 32-bit software under a new 64-bit x86 OS, or you can continue to run your 32-bit or 16-bit x86 OS on the chip.
Personally, I expect that the Itanium will wind up replacing Alphas running Linux and NT, and inheriting the current PA-RISC market. Intel will wind up creating server variants of its x86 chips to hold on to the current x86 server/workstation market, with marketing demanding that those stay confined to 32-bit instruction sets.
The Sledgehammer will thus have no real competition as it seizes the entire Linux-on-x86 server and workstation markets, with a 64-to-32 bit advantage. If Microsoft delivers an x86-64 NT, the NT-on-x86 market will certainly go Sledgehammer; otherwise, the high end will migrate to Itanium and the rest stay on Intel and AMD x86 chips running 32-bit NT.
If the marketers were to be shoved aside, Intel would crash-engineer and release its own 64-bit x86, and maintain unquestioned dominance. They won't be. Instead, Intel will enter a market where it will be one of four players (with Compaq, IBM, and Sun), and lose dominance of its current cash-cow market to a codominion with AMD.
Re:Itanium is 42-bit, not 64-bit (Score:2)
No, an individual instruction cannot carry a full 64-bit address - but then neither can a single 32-bit RISC instruction carry a full 32-bit value, nor a 64-bit RISC instruction a full 64-bit value. No difference from MIPS or SPARC.
If you need to load a new 64-bit address you probably have to do it in two instructions - one containing the lower 32 bits and one containing the upper 32 bits. But how often are you going to have an individual program with a global data segment in excess of 4 GB?
(btw, the instructions are 41 bits, not 42.)
cheers,
G
Why 64 bits (Score:2)
I'd say that transferring more data and having more registers to play with are the more important features, as well as being able to do 32-bit computations in parallel. (having 64-bit computations in hardware is nice too; that makes it all possible)
Also, remember that the Itanium is an architecture that's designed to grow. Much like how Transmeta's chips will improve in speed as the software is fine-tuned, the Itanium's software should show massive speedups once (a) the compiler is optimized, (b) everything is recompiled natively, and (c) code is rewritten (as needed) to exploit the architectural features.
I'd say that we've already seen a preview of what sort of difference this sort of thing can make with the Pentium 4. (if you missed it, it's on Tom's Hardware) It can make a huge difference. I'll be interested in seeing how Linux stacks up, and how optimized gcc is at the moment; I'm sure we'll have our work cut out for us.
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
Re:Address space less than 64-bits? (Score:1)
(I think the legacy OS/2 1.x 16-bit stuff was/is limited to a 16MB address space.)
Memory is not the Biggest Deal (Score:1)
Re:Only commercial distros? (Score:1)
I can understand their logic (though I disagree with it)
Re:Two points (Score:1)
Because Bill Gates told me so
If I had any intellectual curiosity
I'd have read "Undocumented Windows 95"
But that stuff is not for me!
VLIW is a bigger deal than people seem to realize (Score:1)
The Itanium is not about 64 bits, or more than 4 GB of RAM (the PPro and above can do that, as a lot of people missed in a few threads); it's all about VLIW, or at least, that's all that is *important* about it. As others mentioned, many 64-bit processors already exist.
The reason purely functional languages would be more optimizable is simply the fact that with purely functional languages, it is easy to find instructions to run in parallel, and the compiler can easily use the VLIW to its advantage, and put many instructions in parallel, whereas a typical C or C++ compiler would have a very difficult time finding things to run in parallel.
Rant: How much memory? (Score:1)
No less than half of the top posts declared that 64-bit architecture would not be useful in one way or another. 'The apps have to be 64 bit. The OS has to be 64 bit. The chipsets have to support 64 bit. blah blah blah!!'
Do any of you laugh at the guy that said we would never need more than 640k of RAM? Do you not remember 16bit processors? Do you not remember 40MB hard drives?
They will build the hardware that runs fast, we will make the software that uses the speed. We will expand the software to fill the available bit width.
Re:_Should_ be pretty cool (Score:1)
Re:Iridium's memory cap.. (Score:1)
Iridium is either an element or a satellite communications system devised by Motorola.
Though, if Itanium is a sign of a new naming scheme, I suppose its successor could be "Ridium"
SETI stats tell all! (Score:2)
Most of the top rated systems throughout the world, sending packets for SETI@Home [berkeley.edu], are Compaq servers running Tru64 Unix. Most of this is due to the scientific data using 64bit accuracy, for which the "contemporary" systems of 32 bits just aren't adequate.
Other applications that crunch 64 bits include high-quality graphics rendering, vast database addressing, and, oh yeah, NETSCAPE 6! ;-)
Arithmetic Shmarithemtic (Score:1)
on a desktop-type machine, most (90%+??) of the numbers traversing the registers are well within the 32-bit range
What you're missing is that calculations on numbers from a user's spreadsheet or personal finance program are something computers don't do much of. Most of the arithmetic processors do is (1) address arithmetic, where what's needed is for the registers to be the same size as the address bus, and (2) boolean logic, where 32-bit registers are already far too large.
What processors do do a lot of is moving data, and as the desktop becomes more and more a multimedia machine, the volumes of data that the processor has to load, crunch a little, and fire off to a peripheral will only increase. Think hard about what a CPU has to do to play high-quality streaming video (the kind our network connections aren't yet fast enough to support) and tell me there's no benefit in larger registers!
Re:Wait... Windows runs on Itanium? (Score:1)
Re:addressable memory (Score:2)
On a more serious note: Unless overall RAM bandwidth starts taking some major leaps soon, it will become an ever narrower bottleneck to overall system performance.
#include "disclaim.h"
"All the best people in life seem to like LINUX." - Steve Wozniak
Re:Two points (Score:1)
Intel's last true architectural change was with the introduction of the 386 processor. Since then, we have only had patchlevel additions of little things here and there. The 486 was a huge step up from the 386, but it was still only a patchlevel increase in functionality.
The fact of the matter is that we are hitting some logical and structural limitations in Intel's current 32-bit architecture that we simply must overcome. This has been even more apparent with the influx of flaky and poorly-performing motherboard chipsets from Intel, which had been a rarity until recently.
I don't think it's a matter of if we're ready to go forward - are we willing to stay where we are, on a backwards-designed architecture with design bottlenecks?
Re:Iridium's memory cap.. (Score:1)
Already being tested... (Score:1)
See this link [eltoday.com]. SGI had a cluster of 8 dual Itanium systems with Myrinet on the floor of Supercomputing 2000, last month. I know because my code was one of the ones they were demonstrating on it; they've loaned us (OSC) 4 dual Itanium boxes and Myrinet to do porting and development on.
My guess (given there's almost nothing to go on in the article) is that IBM will be selling the same Itanium workstation chassis that SGI, Dell, and everybody else will be.
Yup... (Score:1)
Re:Only commercial distros? (Score:1)
Neat .sig (Score:1)
I have a weapon more powerful than you can possibly imagine. Hand the money over and no one needs to get hurt
</piss-poor translation>
Unfortunately, I can't make my browser display the canonical response in Greek (and my Greek is pathetic anyway), so here it is in English:
<repeated chanting>Come and have a go if you think you're hard enough </repeated chanting>
Re:Only commercial distros? (Score:1)
-----
# cd /
Re:Two points (Score:2)
They released the 386 with the coprocessor onboard, then removed it and sold the SX as a cheaper model.
They did the same thing with the 486, releasing DX and SX models of it as well.
Want to make your IBM sales rep turn colors? (Score:2)
I managed to make one turn a fascinating shade of puce by asking him "So, are you actually confident that you'll be able to ship ia64 boxes in quantity by the end of Q1?" He managed to choke out something along the lines of "well, obviously we're somewhat constrained by other vendors here" before changing the subject back to how nice AIX5L was going to be.
If I were Scott McNealy, I would not be overly concerned.
Two points (Score:2)
Re:Two points (Score:2)
Wrong, that was only the 486. The difference between the 386SX and 386DX was that the latter had full 32-bit data paths and bus paths, while the 386SX had a mixed 32-bit/16-bit architecture (much like the Motorola m68000).
Ya they're sweet! (Score:1)
Re:I an't ait... (Score:2)
Maybe you should get a new keyboard first...
--
addressable memory (Score:3)
Oh come on... 16 gigabytes ought to be enough for everybody.
--
A night out campin... (Score:1)
I an't ait... (Score:1)
Re:Two points (Score:1)
Not exactly true. That was the original setup, but at some point Intel improved their quality control. They had chips that they could have shipped as DXs but did not. Likewise, they eventually had Pentium 90s they could have shipped as 100s, and on through time... it's the same reason they do not sell the Celerons as SMP-capable, even though they clearly are.
Why? Economics. They want to appeal to both the people who want a cheap computer and the people who will pay the extra buck for performance. They could sell them all cheaply, but they would not get the extra profit off the bleeding-edge people. So they create an artificial distinction by crippling the lower-end product in some way.
This is the sort of thing that goes away when you get a lot of competition.
Is 64 bit addressing practical? (Score:2)
Is there any practical application for a single system to require more than 4 GB of RAM? It seems to me that once a task becomes so huge as to require 4 GB of RAM, it might be time for a cluster or a mainframe-type solution rather than one massive system.
Don't get me wrong, I think the development of 64-bit technology is awesome; I just wanted to raise the question of practicality.
hmmmm (Score:2)
Actually, no they won't. Not unless all your apps are 64-bit, and even then....
-----
Re:Two points (Score:2)
If they weren't, how would I be able to use so much Windows 95/98 software in Windows 2000? 2000's a purebred, 32-bit OS.
Re:Debunking 64 bit (Score:2)
YOUR statements are baseless. I believe I have a basis for my statements. I believe I've repeated it enough times here: the vast majority of all uses of the registers will be for <=32-bit numbers... the wasted silicon and engineering could have been spent elsewhere... GET IT? Brandon
who wants one? (Score:1)
Will anyone buy one of these as a server that soon? I can see someone buying them as desktops for testing and evaluation.
There are so many unknowns that there is no way anyone running any "serious" servers will put them into service any time soon. There are bound to be issues with hardware/software interaction.
Mandatory comment... (Score:1)
Re:Debunking 64 bit (Score:2)
Address space less than 64-bits? (Score:4)
There's a difference between the architecture and the implementation. The architecture may allow for a 64-bit address space, but not require it. In many 64-bit processors, many of the address lines are hard-wired to zero. I would not be at all surprised if this is true for Itanium.
Also, even if the processor actually supports true 64-bit addresses, that doesn't mean the motherboard chipsets will support it. Hence, real systems may be limited in their memory configurations.
Re:64 bit is old news already (Score:2)
Please, to everyone who read this thread: Did you pay attention to my disclaimer??? I LOVE 64-BIT CPUs! Get it? I'm only arguing that they are a waste of silicon and effort on desktop PCs that run Microsoft Office, mostly not doing any more math than maybe an expense report that deals with 2 decimal places... oooohhh.
Good timing (Score:1)
Just what Linux and FreeBSD has been needing... (Score:2)
Re:Debunking 64 bit (Score:2)
Someday, when everyone's standard GUI interface is a full VR gear type of thing, 64-bitness will be necessary at the desktop, but not today. What I'm fighting against is the marketing of 64-bit CPUs as a great new feature for desktops.
Itanium, or AMD's 64-bit x86? (Score:2)
God does not play dice with the universe. Albert Einstein
Re:Debunking 64 bit (Score:1)
Quality Control???? (Score:1)
AMD vs. Intel, Compaq vs. Ibm, Dell vs. Gateway - "Who will be the first to market"....
I guess this explains all the "first posts" on /. too.
Re:Address space less than 64-bits? (Score:5)
It also has 51 bits of virtual addressing (51 address bits + 3 region index bits). 50 bits of virtual addressing are guaranteed by IA64, implementations are free to implement more.
Most general-purpose 64-bit processors implement between 40 and 44 bits of physical address.
The only 64-bit processor that I know of with a full 64-bit MMU (ie, 64-bit virtual addresses) is UltraSPARC III.
Re:Two points (Score:1)
Behind the smoke and mirrors of fast processors, lies the potential behind real processing. Is that a dumb thing to say?
Re:Two points (Score:1)
Re:Is 64 bit addressing practical? (Score:4)
here [hp.com] is a link to an HP server that supports up to 128 GB of memory in one box. I know it's a high-end unix server, but wasn't Itanium Intel's pathetic attempt to compete with these kinds of machines?
then there is the coveted Sun Enterprise 1000 [sun.com] which seems to support up to 68GB of RAM, plus a bunch of others from SUN [sun.com]
Then there is this bad-boy [ibm.com] from IBM, which supports up to 96GB
Of course there are the Alpha servers, of which the GS series [compaq.com] is an example. Up to 256 GB.
There are boards that support way more than 8 RAM slots, and there have been for some time. Hell, you can get a system that supports more than 16 GB from eBay.
PS, anyone who wants to donate one of the linked systems, please reply to this and we will arrange something
-----
# cd /
Re:Debunking 64 bit (Score:1)
Frankly, by that definition, a car over 50 HP is also impractical, because 50 HP can still handle the speed limits within cities and on most (non-interstate) highways.
There's a lot more available than just more data in the datapaths... smart assembly hackers will learn to pack and hack smaller bits of data through single registers to reduce processor ticks, just as they did when 32-bit processors became available. Current on-the-fly rendering (like desktop animations, and software and hardware DVD playback) will greatly benefit from the bus increases, datapath size, and capability of simply dumping bigger numbers through fewer cycles. It sure as hell beats chopping a 36-bit number into 32-bit segments, processing each segment of it, and then reassembling it for the user. Tasks like that will be several times faster, and you will see a whole lot more of them in the very near future.
GCC on itanium? (Score:1)
I read in "Open Source Development" that gcc's code generation on Itanium is pretty low quality due to problems with the GCC architecture.
The Itanium architecture is radically different from your typical x86 or SPARC, so getting fast code on it will not be trivial.
Re:That's great and all, *but*... (Score:1)
Re:Two points (Score:1)
Getting a good IA64 compiler is a lot more than a "little thing."
Pardon? Pentium? PPro? P4? MMX? SSE? Intel has really been a leader in pushing processor performance. The fact that they got such a clunky ISA to run fast is absolutely amazing.
That said, if Intel can make a smooth transition to a new ISA while keeping IA32 compatibility, that will be a very good thing for them. It's debatable whether Itanium will provide enough incentive for users to switch, however. I'm waiting for McKinley.
--
Re:Two points (Score:1)
The 386SX came out when Intel realized that the DX was too pricey. By trimming the address bus to 24 bits (16M of RAM), they were able to release a more economical CPU, and the "cripple" of 16M wasn't that big of an issue back then.
The 486DX added pipelining, one of Intel's first attempts at RISC-like behavior in a CISC chip. This was also the first point where Intel put an FPU on-chip. The 487 was merely a DX chip that took over the functions of the pitiful 486SX chip, a crippled CPU that probably had no right to exist.
P5/P6 architecture took on multiple pipes, and that's about it.
I'd have to pretty much agree that the IA64 architecture is the first big step in a long time, but that's also because most of the other advancements were hidden. The P6 architecture pretty much contains a 64-bit RISC chip with a CISC wrapper around it, so it's much faster than the older chips internally, but forced (in hardware, no less!) to act like its older siblings.
::Sigh::
Intel... did we actually expect them to make *sense???*
Raptor
Re:Ya they're sweet! (Score:1)
Re:Just what Linux and FreeBSD has been needing... (Score:1)
garc
64 bit? (Score:1)
Re:Two points (Score:2)
It's a goddamn PC for christsakes. There's no difference. Except the price tag. All part of the nifty little "market segmentation" thingie Intel dreamed up. Basically a scam to artificially constrain supplies in the market, while not suffering from the constraint in manufacturing, and exploiting that constraint for maximum profit.
Again though, if you want your 32 bit apps to run, you'll have to run them in SLOOOOW software emulation.
Unless, of course you pay even MORE $$$ so intel can set a jumper somewhere and enable the built-in hardware emulation. Just more bit-crunching goodness from Intel.
Re:Just what Linux and FreeBSD has been needing... (Score:1)
Re:Am I the only one... (Score:1)
Itanium (Score:2)
Linux is ready for IA64 - by the time I left, the compiler and OS were stable enough to compile most things. Though Intel still has a few things it needs to iron out in the hardware........
Most stuff in fact compiles directly - I used TurboLinux Frontier IA64 (http://frontier.turbolinux.com/ia64 - they got helixcode and stuff working!). There is a porting guide on that website as well, and those of you who have an open-source project on SourceForge should be able to use the sample hardware to try to recompile and test your software.
IBM is really big on Itanium - wait for more and more announcements
Re:hmmmm (Score:2)
Even then, they're unlikely to come with an AGP slot. They'll probably be PCI only, so you're not going to be putting a GeForce card in it any time soon. I think Matrox are doing a PCI version of the G450, but that's probably the best you'll manage for a desktop Itanium machine in the near future.
Re:GCC on itanium? (Score:2)
One might also use the Pro64 [sgi.com] compiler from SGI. This compiler does implement IA-64 specific optimizations and it even generates assembler code which is easily readable. The compiler does not come with an assembler or a linker, however, so you'll have to rely on GCC to do that part of the job for you.
Re:Is 64 bit addressing practical? (Score:3)
Re:Two points (Score:3)
Re:hmmmm (Score:2)
My PHB ain't gonna get ME one unless HE gets one, too. So, are there 64-bit versions of Solitaire and Minesweeper? ;^)
More than 4GB needed (Score:3)
I've seen low-end storage systems based on Linux in the one TB range. As these systems grow up, they'll quickly get into the >4GB range if they want any sort of performance.
Itanium only implements 50 address bits. (Score:2)
Re:Debunking 64 bit (Score:2)
Iridium's memory cap.. (Score:2)
from Sharky Extreme Article [sharkyextreme.com]
Re:Address space less than 64-bits? (Score:2)
I just looked it up [thinkquest.org]; the 286 apparently had 24 address bits; 2^24 == 16 MB.
Also, I seem to remember that under normal circumstances (real mode => backwards compatibility) you could only use 20 bits, which would bring you back down to 1 MB. But I could be wrong...
The 386 actually did have 32 address bits, though, which gives us the current 4GB limit...
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
Even a web browser (Score:2)
For a less pie-in-the-sky example, most any RDBMS will use up every byte of memory you can throw at it. Page cache, page cache, page cache. High-volume enterprise systems suck up RAM like no tomorrow, and put it to good use.
--
Re:Two points (Score:2)
OnTopic! IBM Support of Linux Distros (Score:2)
IBM's support of its own hardware choices for Linux systems is sketchy at best... ThinkPads are merely the best example, because they must use cutting-edge technology to provide the best performance per battery cost.
Just as the S3 video in a ThinkPad's Savage/IX is hard to configure, so it is with the majority of the S3 line IBM uses. Does IBM take notice? If you examine the servers on their website, they say they support their hardware, but in the asterisked footnotes, they say it is only tested to work as a plain-jane SVGA display.
Recently Dell announced that it would incentivize hardware manufacturers to be more forthcoming with their specifications for Linux drivers. Can't IBM do likewise? Is the crippled support they actually implement worth claiming as support at all?
Another site to check is Red Hat [redhat.com]. They sort supported systems by manufacturer, including IBM. There you can see which systems are "supported" for RedHat (which in turn should mean support for RedHat-compatible Mandrake), and in what ways the support falls short.
Debunking 64 bit (Score:3)
That being said... in many circumstances today 64-bit processors are a waste... especially in a desktop. 64-bit (and wider) data paths are certainly a big help, even on a consumer desktop. 64-bit registers and instructions to natively and atomically handle 64-bit values are not a gain; they are a loss. My reasoning here is that on a desktop-type machine, most (90%+??) of the numbers traversing the registers are well within the 32-bit range... and you've wasted a buttload of {silicon|power|heat|engineering_talent} on that 64-bit support that could've been spent elsewhere.
Given two machines with wide data paths and 4GB of memory (which fits in both architectures), a 32-bit processor would blow the socks off of a 64-bit processor, assuming both have an equivalent number of transistors, power input, and engineering input. And remember, I'm talking about desktop apps and games here.... Obviously everything I've said above is invalid when you do _real_ scientific computing, which regularly involves >32-bit numbers, or really needs direct access to >4GB of memory.
Re:Free information ;-) (Score:2)
A Pentium 2/3 core basically has an x86 decoder sitting in front of internal RISC-like execution units.
The only difference is that in the Itanium you have the choice either to execute x86 instructions as normal, or to switch off the x86 decoder and start fetching 128-bit VLIW bundles that break down into 3 x 41-bit RISC instructions, which execute directly on the internal execution units.
But the way x86 instructions are executed in the Itanium is in effect the same as in a Pentium 2/3.
Furthermore, the processor supports switching modes (64 -> 32 or vice versa) whenever it is interrupted, so an almost fully 32-bit OS can cheerfully support 64-bit apps, even servicing their system calls with 32-bit interrupt handlers. Conversely, a 64-bit OS can run 32-bit apps, servicing their system calls with 64-bit interrupt handlers.
One could speculate that Intel looked at the amount of 16-bit code still kicking around in Win 9x, and decided that it would be a long while after release before we saw a fully 64-bit Windows :-)
cheers,
G
Only commercial distros? (Score:2)
-----
# cd /
Re:Debunking 64 bit (Score:2)
I respectfully disagree. I'm virtually certain that if you administered truth serum to application writers who know anything about numerics, they would swear up and down that they never want to deal with anything other than IEEE 754 standard 64-bit double precision numbers, and are only forced to do so due to dorky efficiency concerns with stock (commodity) hardware.
The legacy of backward compatibility (which amounts to backward capability in many situations) is one of the biggest barriers to advances in consumer and desktop machines at this time. An interesting (and possibly vital) point about Free and/or Open software is that it's far quicker and easier to adapt older applications to new platforms because enough of the affected users are empowered to improve and change the legacy apps.
One other nit about the need for more precision and floating point: for slightly more than historical reasons, there is still at best squeamishness about using FP arithmetic for certain financial calculations, and a 32-bit unsigned integer quantity is only able to represent values in the range of milli-Gates or micro-GDP...
Re:Address space less than 64-bits? (Score:2)
Regardless, this is a correct assessment. Intel released an 8-bit processor that could address 640k and a 16-bit processor that could address what, 4MB? It definitely wasn't 2^8 or 2^16 in either case.
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].