IBM Itanium Based Systems and Linux

ErrantKbd writes "An article at Infoworld discusses IBM's plans to release Itanium-based systems sometime in the January/February time frame. They will be building systems running Windows, of course, but also ready-made servers running RedHat, Caldera, TurboLinux, and SuSE. Should be pretty sweet, provided everything goes smoothly with the 64-bit processor. Note: there is an error in the article; a 64-bit system can directly address approximately 1 billion times more than the article suggests." Those'll be one helluva desktop box.
This discussion has been archived. No new comments can be posted.
  • Oh, 640k should be enough RAM for anyone. -- Bill Gates, 1980
  • by Anonymous Coward

    It seems like having a very fast processor with the current PC hardware is like having a 1,000-horsepower engine in a Ford Pinto.

    The IO between devices needs to be worked on a lot more than the processor.

  • The first Itaniums WILL only be able to address 16-64 gigabytes, because of chipset limitations. A later release of the motherboard chipset will expand beyond that.
  • Now I've seen it all!
    ---
  • but the reason the Enterprise market has been so Solaris based is that their hardware is rock solid in comparison


    yeah, and swapping to a new CPU is going to change that? Face it, Intel and its cronies just want to sell commodity hardware at enterprise prices. As long as they continue to do so, they will not unseat Sun - unless Sun decides to try to do the same thing (hey, why did that plastic face plate just snap off of my brand new $20k Sun E250?)
  • Your servers can only hold 8 DIMMs? Sounds like weak servers to me. My SuperMicro 8050 has 16 slots, and it's not exactly top of the heap.....

    steve
  • AMD's Hammer chips (the Sledgehammer for servers and the Clawhammer for desktops -- the core is the same; the main difference, IIRC, is in the amount of cache) will (according to AMD, anyway) run 32-bit software just as quickly as a 32-bit chip. From what I have heard, this is actually a credible claim, and not just marketing blather. It is also expected to debut at speeds near 2 GHz. Unfortunately, not until 1Q2002. :-(

    The Itanium, on the other hand, will run 32-bit software like a one-legged garden slug; it will debut no higher than 800 MHz, and clock-for-clock will be terrible on 32-bit code (as in, much worse than any other Intel chip currently on the market). But if you must have a 64-bit chip now (for values of now equal to early next year), it's the only x86-ish game in town.

    (Though given its performance shortfalls, that it will be a brand new chip -- with all the baggage that carries -- and the expense, I'm not sure why anyone who needs 64-bit now wouldn't go buy something from one of the big-box vendors...)

  • Windows 2000 has been running in a 64-bit form for quite some time. Before Compaq killed NT-on-Alpha, it was going to be the first 64-bit Windows platform. Microsoft had been using the Alpha internally for the 64-bit development, and even continued since there was no Itanium chip to develop for. The Alpha allowed Microsoft to develop its 64-bit code long before Intel was ready to deliver its platform.
  • Think about it. Go the other way from 32 bits back to 16. That's a 286. How about back to 8 bits? An IBM XT with a max address space of 1 meg. 640K memory after you reserve some of the address space for I/O. You know, useless stuff like video, serial ports, floppy and hard drive controllers and such. Would you ever go back? I thought not. Now a chance to make the step from 32 bit to 64? Shouldn't even be a question.
  • ... who is smiling silently when seeing the 64 bit Intel CPU's being hyped as the great step forward in the future?

    I'm surely not the only one with a nice 64bit CPU in my current computer (mind you: not an Intel one :) Alpha, G3/4,... we've already entered the 64bit scene a loooong time ago.


    Okay... I'll do the stupid things first, then you shy people follow.

  • 64-bit registers and instructions to natively and atomically handle 64-bit values are not a gain, they are a loss. My reasoning here is that on a desktop-type machine, most (90%+??) of the numbers traversing the registers are well within the 32-bit range...

    Having a 64-bit register doesn't necessarily mean you must work with 64-bit types. You could also operate on eight 8-bit values at once, and see a commensurate speed gain over a narrower system. (remember rep stosb vs. rep stosd, assembly coders?)

    This kind of thing already happens on today's systems, thanks to the hordes of Vector Instruction Sets With Silly Names™. In some ways, you could get away with calling a Pentium III with SSE a "128-bit" processor...
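
    To make the rep stosb vs. rep stosd point concrete, here's a minimal C sketch of the same trick scaled up to 64 bits (my illustration, nothing Itanium-specific; it assumes unsigned long is 64 bits and that dst is 8-byte aligned):

        #include <stddef.h>

        /* Fill n bytes with the same value, 8 bytes per store instead of 1. */
        void fill_bytes(unsigned char *dst, unsigned char value, size_t n)
        {
            unsigned long word = value;
            size_t i;

            word |= word << 8;
            word |= word << 16;
            word |= word << 32;    /* replicate the byte across all 8 lanes */

            for (i = 0; i + sizeof word <= n; i += sizeof word)
                *(unsigned long *)(dst + i) = word;    /* one wide store */
            for (; i < n; i++)
                dst[i] = value;                        /* leftover tail */
        }

    The wide loop issues roughly one eighth the stores of a byte-at-a-time loop -- the rep stosb vs. rep stosd effect, doubled again.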

  • I thought Microsoft was the one who killed NT on the Alpha, and Compaq dropped support accordingly.
  • IMHO, size_t is an unfortunate mistake. C should define int and pointers and differences between pointers as all being the same size and losslessly convertible between each other.

    Basically C is full of assumptions that an integer can store the difference between pointers. You can change all the arguments that you know are "sizes" to size_t, but you will eventually find code that takes this and calls functions (like math functions) where it is perfectly legit to pass an integer, and you don't want to change those to size_t, so you end up with impossible-to-remove type conflicts. size_t also causes all kinds of portability problems when trying to go between platforms that make it the same as or different from int, or that don't define it; for instance, I have to type in a prototype for the missing snprintf function a lot, and it is different on every machine.

    The problem is of course the huge amount of code that assumes int==32 bits. C should have defined some syntax to say exactly how many bits a variable has, perhaps "int var:32", much like a bitfield (the compiler is not required to support all possible sizes, only 8, 16, 32, and sizeof(int)*8, and can round smaller sizes up and can produce an error if larger than the largest is requested).

    Unfortunately that did not happen and we are in the mess we are now with all these typedefs and the inability to do clean pointer arithmetic.
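
    For what it's worth, here's a tiny C program that makes the size assumption visible (my sketch; on an LP64 system, the model most 64-bit Unix compilers use, it prints int=4 long=8 void*=8, which is exactly why an int can no longer hold every pointer difference):

        #include <stdio.h>

        int main(void)
        {
            /* Print the basic type sizes; if int and void* differ,
             * the int==pointer assumption described above is broken. */
            printf("int=%lu long=%lu void*=%lu\n",
                   (unsigned long) sizeof(int),
                   (unsigned long) sizeof(long),
                   (unsigned long) sizeof(void *));
            return 0;
        }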

  • Incendo tuum catapultam. ("I set fire to your catapult.")

  • It really drives me nuts to see people screaming about how hot the "new" 64-bit Itanium is. Like it's never been done before.

    The Alpha processors have been 64-bit for a long time already. I went through college thinking 64-bit was perfectly standard because we were using an Alpha. Then I graduated a few years back and found that the rest of the world was still stuck at 32 bits, waiting breathlessly for the Itanium.

    I've been running 64-bit apps under a 64-bit OS on a 64-bit chip for quite a while (recent Solaris on a V9 UltraSPARC cpu).

  • Fervent wins. 95 and up have been 32.
  • Windows Me still has 16-bit system code necessary even if you run only 32-bit software. It's got less than 98 did, which had less than 95, which had less than WfWG 3.11 w/Win32s and 32-bit file access enabled did; but it's still around.

    OS/2 5.0 also has a morass of 16-bit code in system areas, still left over from OS/2 1.3, and far more Windows for Workgroups 3.11 code and architecture survives in Windows Me than OS/2 1.3 code and architecture survives in OS/2 5.0.
  • pitiful 486SX chip, a crippled CPU that probably had no right to exist

    Well, the 486SX wasn't supposed to exist. SXs were merely DXs whose FPUs failed in testing, and were shipped with the FPU disabled.
  • by paulbd ( 118132 ) on Wednesday December 06, 2000 @07:20PM (#576847) Homepage
    Nobody has noted the real virtue of a 64-bit address space, even if the Itanium itself only supports about 50 bits for VM. With a 64-bit address space, there is no longer any need to run applications in their own address space. You can finally recognize that protection is orthogonal to addressing, and start to gain the benefits of not having to invalidate the TLB and other parts of the VM system when you do a context switch. That is, all processes run in the same address space, so they can share memory with no effort whatsoever, and you use an explicit protection mechanism to avoid memory stomping.

    Opal was an experimental system that tried to explore some of these ideas. It was a PhD thesis at the University of Washington. The tech report notes that with a 64-bit address space, you can allocate 1MB/sec and not run out of VM space for a period of time larger than the estimated current life of the sun.

    The real benefits of 64-bit addressing have little to do with increasing the data width. Avoiding a TLB flush when doing a context switch will provide one of the most dramatic speedups for multi-tasking systems that you can imagine.
  • Actually, the Itanium will run x86 code (slowly); it has hardware emulation (which can't take full advantage of the Itanium's parallelism). You must have an OS compiled for the EPIC instruction set.

    For Clawhammer/Sledgehammer, you can run legacy 16- and 32-bit software under a new 64-bit x86 OS, or you can continue to run your 32-bit or 16-bit x86 OS on the chip.

    Personally, I expect that the Itanium will wind up replacing Alphas running Linux and NT, and inherit the current PA-RISC market. Intel will wind up creating server variants of its x86 chips to hold on to the current x86 server/workstation market, with marketing demanding those to stay confined to 32-bit instruction sets.

    The Sledgehammer will thus have no real competition as it seizes the entire Linux-on-x86 server and workstation markets, with a 64-to-32 bit advantage. If Microsoft delivers an x86-64 NT, the NT-on-x86 market will certainly go Sledgehammer; otherwise, the high end will migrate to Itanium and the rest stay on Intel and AMD x86 chips running 32-bit NT.

    If the marketers were to be shoved aside, Intel would crash-engineer and release its own 64-bit x86, and maintain unquestioned dominance. They won't be. Instead, Intel will enter a market where it will be one of four players (with Compaq, IBM, and Sun), and lose dominance of its current cash-cow market to a codominion with AMD.
  • It is a 64 bit processor because it has 64 bit registers, ALUs (execution units), and memory space.

    No, an individual instruction cannot carry a full 64 bit address - but then neither can a single 32bit RISC instruction carry a full 32bit value, nor a 64bit RISC instruction carry a full 64bit value. No difference on MIPS or Sparc.

    If you need to load a new 64 bit address you probably have to do it in two instructions - one containing the lower 32 bits and one containing the upper 32 bits. But how often are you going to have an individual program with a global data segment in excess of 4GB?

    (btw, the instructions are 41 bit, not 42.)
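
    In C terms, that two-instruction sequence amounts to something like this sketch (mine, purely illustrative; hi and lo are the halves each carried by one immediate-bearing instruction):

        /* Compose a 64-bit value from two 32-bit halves, the way a RISC
         * compiler materializes a constant too wide for one instruction. */
        unsigned long long make64(unsigned long hi, unsigned long lo)
        {
            return ((unsigned long long) hi << 32) | lo;
        }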

    cheers,
    G
  • Some people don't seem to get why a 64-bit architecture would be useful. Let me say that it's not just about doing 64-bit computations and having a larger address space.

    I'd say that transferring more data and having more registers to play with are more important features, as well as being able to do 32-bit computations in parallel. (having 64-bit computations in hardware is nice too; that makes it all possible)

    Also, remember that the Itanium is an architecture that's designed to grow. Much like how Transmeta's chips will improve in speed as the software is being fine-tuned, the Itanium's software should show massive speedups once (a) the compiler is optimized, (b) everything is recompiled natively, and (c) code is rewritten (as needed) to exploit the architectural features.

    I'd say that we've already seen a preview of what sort of difference this sort of thing can make with the Pentium 4. (if you missed it, it's on Tom's Hardware) It can make a huge difference. I'll be interested in seeing how Linux stacks up, and how optimized gcc is at the moment; I'm sure we'll have our work cut out for us.
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • OS/2 2.x could definitely access more than 16MB of RAM, as long as you were running 32-bit software. It was also the first '32-bit' version of OS/2, not Warp3, which was really more like OS/2 2.5.

    (I think the legacy OS/2 1.x 16-bit stuff was/is limited to a 16MB address space.)
  • Yes, the Itanium can address more than the 32-bit 4GB limitation. But that's not the only thing the Itanium is good for. The development of EPIC (Explicitly Parallel Instruction Computing), plus predication and instruction caching, are other BIG improvements over Intel's 32-bit architecture. It will be interesting to see how the newer 64-bit applications use these features. My guess is these CPUs will be uber for serving. I believe its ability to 'look ahead' and pre-cache instructions on this kind of scale will make it quite the chipset for the next couple years.
  • They're probably only using the commercial distros because IBM has more experience with commercial suppliers. I doubt IBM could handle a non-commercial project like Debian or Slackware. Additionally, a lot of their customers would want a commercial distro, if only because they're used to the idea of centralized supply. For IBM to design/support on these systems with Debian/Slack would cost money and not generate too much corporate interest.

    I can understand their logic (though I disagree with it)
  • Windows is 32-bit, this I know
    Because Bill Gates told me so
    If I had any intellectual curiosity
    I'd have read "Undocumented Windows 95"
    But that stuff is not for me!
  • .. as it may make functional languages so much more optimizable than C/C++ as to almost obsolete C/C++ completely (used solely for the simplicity of their optimization).

    The Itanium is not about 64-bits, or more than 4 GB of RAM (the PPro and above can do that, as a lot of people missed in a few threads); it's all about VLIW, or at least, that's all that is *important* about it. As others mentioned, many 64-bit processors already exist.

    The reason purely functional languages would be more optimizable is simply the fact that with purely functional languages, it is easy to find instructions to run in parallel, and the compiler can easily use the VLIW to its advantage, and put many instructions in parallel, whereas a typical C or C++ compiler would have a very difficult time finding things to run in parallel.
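
    A trivial C illustration of the kind of independence a VLIW compiler hunts for (my sketch, not Itanium-specific):

        /* x, y and z have no dependencies on each other, so an EPIC/VLIW
         * compiler can schedule them into the same instruction bundle;
         * only the final sum has to wait. Proving this kind of
         * independence in heavily stateful C code is much harder, which
         * is the point about purely functional languages. */
        int bundle_friendly(int a, int b, int c, int d)
        {
            int x = a + b;
            int y = c * d;
            int z = a - d;
            return x + y + z;
        }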
  • Begin Rant: How much memory do the people posting here have? I'm not talking about RAM here, I'm talking about the brain.

    Not less than half of the top posts declared that 64bit architecture would not be useful in one way or another. 'The apps have to be 64 bit. The OS has to be 64 bit. The chipsets have to support 64 bit. blah blah blah!!'

    Do any of you laugh at the guy that said we would never need more than 640k of RAM? Do you not remember 16bit processors? Do you not remember 40MB hard drives?

    They will build the hardware that runs fast, we will make the software that uses the speed. We will expand the software to fill the available bit width.

  • by Anonymous Coward
    Nice try. The Pentium 4 was NOT recalled. Some SYSTEMS built by a 3rd party that USE the Pentium 4 were recalled. The manufacturer did not have the correct/latest BIOS on the i850 chipset. If you still want to blame Intel, then you'd have to say it was a chipset problem, since the BIOS lives on the motherboard.
  • Ever hear of "Preview"?

    Iridium is either an element or a satellite communications system devised by Motorola.

    Though, if Itanium is a sign of a new naming scheme, I suppose its successor could be "Ridium" ;o)
  • 64 bit processors are LEAGUES ahead of 32 bit processors when it comes to number crunching.

    Most of the top rated systems throughout the world, sending packets for SETI@Home [berkeley.edu], are Compaq servers running Tru64 Unix. Most of this is due to the scientific data using 64bit accuracy, for which the "contemporary" systems of 32 bits just aren't adequate.

    Other applications that crunch with 64 bits include high-quality graphic rendering, vast database addressing, and, oh yeah, NETSCAPE 6! ;-)

  • on a desktop-type machine, most (90%+??) of the numbers traversing the registers are well within the 32-bit range

    What you're missing is that calculations on numbers from a user's spreadsheet or personal finance program are something computers don't do much of. Most of the arithmetic processors do is (1) address arithmetic, where what's needed is for the registers to be the same size as the address bus, and (2) boolean logic, where 32-bit registers are already far too large.

    What processors do do a lot of is moving data, and as the desktop becomes more and more a multimedia machine, the volumes of data that the processor has to load, crunch a little, and fire off to a peripheral will only increase. Think hard about what a CPU has to do to play high-quality streaming video (the kind our network connections aren't yet fast enough to support) and tell me there's no benefit in larger registers!

  • Is this in 64bit mode or in some lame-assed 32-bit emulation bullshit?
    It's 32-bit translation by the CPU. 64-bit Windows? Get real!
  • I get impatient enough as it is waiting for self-test of 4 GB RAM on some of my larger boxes. Imagine waiting around for 4 TB! Better hope they never have to reboot....

    On a more serious note: Unless overall RAM bandwidth starts taking some major leaps soon, it will become an ever narrower bottleneck to overall system performance.

    #include "disclaim.h"
    "All the best people in life seem to like LINUX." - Steve Wozniak
  • I don't think it really matters if we have to do little things like rework our compilers...

    Intel's last true architectural change was with the introduction of the 386SX processor. Since then, we have only had patchlevel additions of little things here and there. The 386DX was a huge step up from the 386SX, but it was still only a patchlevel increase in functionality.

    The fact of the matter is that we are hitting some logical and structural limitations in Intel's current 32-bit architecture that we simply must overcome. This has been even more apparent with the influx of flaky and poorly-performing motherboard chipsets from Intel, which has been a rarity until recently.

    I don't think it's a matter of if we're ready to go forward - are we willing to stay where we are, on a backwards-designed architecture with design bottlenecks?
  • I was suffering a buffer overflow error. Itanium, eh? What the heck were they thinking when they came up with that? I remember when things were named after what they did. Like the Commodore64, and the Mac512. Why not call the Itanium the Intel64? What they really ought to call it is the Intel-Finally-Got-Around-To-The-Stuff-They-Put-In-The-SparcII
  • See this link [eltoday.com]. SGI had a cluster of 8 dual Itanium systems with Myrinet on the floor of Supercomputing 2000, last month. I know because my code was one of the ones they were demonstrating on it; they've loaned us (OSC) 4 dual Itanium boxes and Myrinet to do porting and development on.

    My guess (given there's almost nothing to go on in the article) is that IBM will be selling the same Itanium workstation chassis that SGI, Dell, and everybody else will be.

    --Troy
  • My 43P has been 64 bit for years...
  • According to http://www.debian.org/ports/ there is no ia64 port. I would doubt Slackware has one either.
  • <piss-poor translation>
    I have a weapon more powerful than you can possibly imagine. Hand the money over and no-one needs to gets hurt
    </piss-poor translation>

    Unfortunately, I can't make my browser display the canonical response in Greek (and my Greek is pathetic anyway), so here it is in English:


    <repeated chanting>Come and have a go if you think you're hard enough </repeated chanting>

  • I have confessed (some) ignorance. See the comment below [slashdot.org] and my reply if you care. Hopefully, once Debian has an ia64 version, IBM will support it right alongside the four commercial distributions they are supporting. Same goes for Slack.

    -----
    # cd /
  • The math coprocessor was onboard during the 386DX as well. That was the difference between the DX and the SX.

    They released the 386 with the coprocessor onboard, then removed it and sold the SX as a cheaper model.

    They did the same thing with the 486, releasing DX and SX models of them as well.
  • IBM's enterprise server sales reps have been pushing these vapor-boxes (and the equally vaporous AIX 5L) really hard for the last six months, I guess to draw everyone's attention away from the fact that their low- and mid-range RISC boxes are getting roundly smacked by Sun, and are basically stalled speed-wise. (They're still using the 604e in many models.)

    I managed to make one turn a fascinating shade of puce by asking him "So, are you actually confident that you'll be able to ship ia64 boxes in quantity by the end of Q1?" He managed to choke out something along the lines of "well, obviously we're somewhat constrained by other vendors here" before changing the subject back to how nice AIX5L was going to be.

    If I were Scott McNealy, I would not be overly concerned.

  • Point 1: from what I've read, Itanium will only be offered in high-end server configurations for the first year. Desktops will come noticeably later. Point 2, more importantly: is Linux ready to take advantage of Itanium features? Support for P3 instructions in Linux has been slow at best, and Itanium will apparently be so different from x86 that wholesale recompilation of software and OSes will be necessary. Are we ready to take the plunge?
  • The math coprocessor was onboard during the 386DX as well. That was the difference between the DX and the SX.

    Wrong, that was only the 486. The difference between the 386SX and 386DX was that the latter had full 32-bit data paths and bus paths, while the 386SX had a mixed 32-bit/16-bit architecture (much like the Motorola m68000).

  • I've been using the IA64 on Compaq's site. They give you access to a whole slew of servers to play around with for 30 days. The Itanium server I've been playing with has been using "linux64". I wanted to run some programs on it, but nothing's ported to the Itanium yet. I could only make a few scripts to play with.
  • I ust an't ait to ave ne of ose ystems... the ought of unning inux on a uge-ass ocessor... an, I ust an't ait!

    Maybe you should get a new keyboard first...

    --

  • by Mike Schiraldi ( 18296 ) on Wednesday December 06, 2000 @11:14AM (#576876) Homepage Journal
    Note: there is an error in the article; a 64-bit system can directly address approximately 1 billion times more than the article suggests

    Oh come on... 16 gigabytes ought to be enough for everybody.

    --

  • Looks like I'll be settin' up camp outside the local computer shop the day before these bad boys are released. I've been waiting for this forever.
  • I ust an't ait to ave ne of ose ystems... the ought of unning inux on a uge-ass ocessor... an, I ust an't ait!

  • by Anonymous Coward

    Not exactly true. That was the original setup, but at some point Intel improved their quality control. They had chips that they could have shipped as DXs but did not. Likewise, they eventually had Pentium 90s they could have shipped at 100s, and on through time...it's the same reason they do not sell the Celerons as SMP-capable, even though they clearly are.

    Why? Economics. They want to appeal to both the people who want a cheap computer and the people who will pay the extra buck for performance. They could sell them all cheaply, but they would not get the extra profit off the bleeding-edge people. So they create an artificial distinction by crippling the lower-end product in some way.

    This is the sort of thing that goes away when you get a lot of competition.

  • I'm just wondering if having a 64-bit RAM address bus is really practical right now or in the near future. A typical server board can hold at most about 8 memory slots. The largest chip that I know of is 512MB. That's 4GB of RAM, which is within the 32-bit addressing scheme.

    Is there any practical application for a single system to require more than 4 GB of RAM? It seems to me that once a task becomes so huge as to require 4GB of RAM, it might be time for a cluster or a mainframe type solution rather than one massive system.

    Don't get me wrong, I think the development of the 64-bit technology is awesome; I just wanted to raise the question of practicality.
  • Those'll be one helluva desktop box.

    Actually, no they won't. Not unless all your apps are 64-bit, and even then....

    -----
  • Very little 16-bit code. Thunking (going from 16-bits to 32-bits and back by the system) is done as an option, not by default. There are still some trivial things handled by the system in 16-bit (changing the computer time/date, for example), but most of the other commands have been converted to 32-bit.

    If they weren't, how would I be able to use so much Windows 95/98 software in Windows 2000? 2000's a purebred, 32-bit OS.

  • You shut the fuck up, ANONYMOUS COWARD..damn

    YOUR statements are baseless. I believe I have a basis for my statements. I believe I've repeated it enough times here: the vast majority of all uses of the registers will be for numbers of 32 bits or fewer... the wasted silicon and engineering could have been spent elsewhere... GET IT? Brandon

  • Question:
    Will anyone buy one of these as a server that soon? I can see someone buying them as a desktop for testing and evaluation.
    There are so many unknowns that there is no way anyone running any "serious" servers will put them into service any time soon. There are bound to be issues with hardware/software interaction.
  • I want a Beowulf cluster of these!
  • I'm not arguing against VLIW.... whole other matter... your argument there makes me feel better about Itanium. I'm just arguing against 64-bit processors for non-engineering/scientific/heavy-visualisation tasks.
  • by crow ( 16139 ) on Wednesday December 06, 2000 @11:20AM (#576887) Homepage Journal
    The address space may be less than 64-bits wide.

    There's a difference between the architecture and the implementation. The architecture may allow for a 64-bit address space, but not require it. In many 64-bit processors, many of the address lines are hard-wired to zero. I would not be at all surprised if this is true for Itanium.

    Also, even if the processor actually supports true 64-bit addresses, that doesn't mean the motherboard chipsets will support it. Hence, real systems may be limited in their memory configurations.
  • I use 64-bit processors every day too. I have a reason to do so. I'm glad they exist.

    Please, to everyone who read this thread: Did you pay attention to my disclaimer??? I LOVE 64-BIT CPUs! Get it? I'm only arguing that they are a waste of silicon and effort on desktop PCs that run Microsoft Office, mostly not doing any more math than maybe an expense report that deals with 2 decimal places... oooohhh.

    1. read article
    2. place order
    3. goto previous article
  • I wouldn't rate Solaris over, say, Linux or FreeBSD, but the reason the Enterprise market has been so Solaris based is that their hardware is rock solid in comparison, and their use of SMP (Symmetric Multi-Processing). Linux and FreeBSD have come a long way in utilizing these features in their kernels... but hardware is where they have been lacking. When XFS or ext3 JFSes (journaling file systems) become stable, and with an Enterprise-class processor like the Itanium, I see a big change in the .com industry or any e-commerce industry... Comments? flames? questions? discussions? arguments?
  • It doesn't take an electrical engineer to see the problems I just stated. The problems I stated also don't invalidate the efforts of these engineers. 64-bitness is necessary in high-end tasks these days, and therefore the processors must be built.

    Someday, when everyone's standard GUI interface is a full VR gear type of thing, 64-bitness will be necessary at the desktop, but not today. What I'm fighting against is the marketing of 64-bit CPUs as a great new feature for desktops.

  • When I first heard about Intel's and AMD's plans for their 64 bit processors, I thought back to what I'd been told in my intro micro-architecture course, to paraphrase...
    So Intel decided to design a new ISA with 32 bits. They put all their resources into it, and almost all of their best people, but one guy tried something else: he worked on a 32-bit extension to the existing x86 architecture. Well, the new ISA failed miserably. They had compiler problems and couldn't get enough programs. But the 32-bit x86 chip, the 386, became one of the most popular and successful designs of all time.
    Well, they've got backwards compatibility with old x86 code this time, and they've already got 200+ programs to run... I think Intel might have done it right this time. We'll see. If not, we'll at least have AMD's 64-bit x86 to fall back on.

    God does not play dice with the universe. -- Albert Einstein

  • 64 bits means that numbers can be more precise without requiring more CPU cycles (it takes a lot longer to calculate 64-bit numbers on 32-bit systems). 64-bit datapaths also mean more bandwidth =)
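
    To see why it takes longer, here's roughly what a compiler for a 32-bit ALU has to emit for a single 64-bit addition (a hedged sketch, not actual compiler output; it assumes unsigned long is 32 bits, as on the systems being compared):

        /* One 64-bit add emulated as two 32-bit adds plus a carry check. */
        typedef struct { unsigned long lo, hi; } u64;

        u64 add64(u64 a, u64 b)
        {
            u64 r;
            r.lo = a.lo + b.lo;
            r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of low word */
            return r;
        }

    On a 64-bit ALU the whole thing is one instruction.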
  • This is another example of what we just discussed in the topic before: "Quality Control in Computer Companies" [slashdot.org].

    AMD vs. Intel, Compaq vs. IBM, Dell vs. Gateway - "Who will be the first to market"....

    I guess this explains all the "first posts" on /. too.

  • by Anonymous Coward on Wednesday December 06, 2000 @11:59AM (#576895)
    Itanium reportedly has 44 bits of physical addressing (16TB, just like the article said).

    It also has 51 bits of virtual addressing (51 address bits + 3 region index bits). 50 bits of virtual addressing are guaranteed by IA64, implementations are free to implement more.

    Most general-purpose 64-bit processors implement between 40 and 44 bits of physical address.

    The only 64-bit processor that I know of with a full 64-bit MMU (ie, 64-bit virtual addresses) is UltraSPARC III.
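
    For anyone checking the arithmetic behind those figures, a quick sketch (mine; it uses the widely supported long long extension):

        #include <stdio.h>

        int main(void)
        {
            /* 44 physical address bits: 2^44 bytes = 16TB, matching the
             * article's figure. */
            unsigned long long phys = 1ULL << 44;
            printf("2^44 = %llu bytes = %lluTB\n", phys, phys >> 40);
            return 0;
        }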
  • Pardon this for being a low-tech answer, but I seriously can't wait to see the outcome of this. May we finally bid farewell to 16/32 bit?

    Behind the smoke and mirrors of fast processors lies the potential for real processing. Is that a dumb thing to say?
  • I could be wrong (and offtopic), but I thought the 386DX came out first, and then Intel lobotomized it and released the SX?
  • by spectatorion ( 209877 ) on Wednesday December 06, 2000 @12:02PM (#576898)
    I know that VA Linux sells some systems that have 16 memory slots (yes, Intel machines!).

    here [hp.com] is a link to an HP server that supports up to 128GB of memory in one box. I know it's a high-end unix server, but wasn't Itanium Intel's pathetic attempt to compete with these kinds of machines?

    then there is the coveted Sun Enterprise 1000 [sun.com] which seems to support up to 68GB of RAM, plus a bunch of others from SUN [sun.com]

    Then there is this bad-boy [ibm.com] from IBM, which supports up to 96GB

    Of course there are the Alpha servers, of which the GS series [compaq.com] is an example. Up to 256GB.

    There are boards that support way more than 8 RAM slots, and there have been for some time. Hell, you can get a system that supports more than 16GB from eBay.

    PS, anyone who wants to donate one of the linked systems, please reply to this and we will arrange something :-).

    -----
    # cd /
  • the REASON why 64-bit usage isn't currently practical is because it is not currently available for developers to create optimized software for!

    Frankly, by that definition, a car over 50 HP is also impractical, because 50 HP can still handle the top legal speed in city limits and on most (non-interstate) highways.

    There's a lot more available than just more data in the datapaths... smart assembly hackers will learn to pack and hack smaller bits of data through single registers to reduce processor ticks, just as they did when 32bit processors became available. Current on-the-fly rendering (like desktop animations, software and hardware DVD playback, etc) will greatly benefit from the bus increases, datapath size, and capability of simply dumping bigger numbers through fewer cycles. It sure as hell beats chomping a 36-bit number into 32-bit segments to process each segment of it and then reassemble it for the user. Tasks like that will be orders of magnitude faster (that example would be 2^4, or 16 times, faster), and you will see a whole lot more of them in the very near future.
  • by Anonymous Coward
    What is the state of GCC's Itanium code generator?

    I read in "Open Source Development" that gcc's code generation on itanium is pretty low quality due to problems with the GCC architecture.

    The itanium architecture is pretty radically different from your typical x86 and sparc, so getting fast code on it will not be trivial.

  • at least I'm not the only fucking psycho who keeps spouting off about this...
  • I don't think it really matters if we have to do little things like rework our compilers...

    Getting a good IA64 compiler is a lot more than a "little thing."

    Intel's last true architectural change was with the introduction of the 386SX processor.

    Pardon? Pentium? PPro? P4? MMX? SSE? Intel has really been a leader in pushing processor performance. The fact that they got such a clunky ISA to run fast is absolutely amazing.

    That said, if Intel can make a smooth transition to a new ISA while keeping IA32 compatibility, that will be a very good thing for them. It's debatable whether Itanium will provide enough incentive for users to switch, however. I'm waiting for McKinley.

    --

  • Actually, the DX was first. That was Intel's chip capable of going to protected mode (and back!) without a hitch, which was the 286's main problem. It also was the first to have a full 32-bit address bus.
    The 386SX came out when Intel realized that the DX was too pricy. By trimming the address bus to 24 bits (16M of RAM), they would be able to release a more economical CPU, and the "cripple" of 16M wasn't that big of an issue back then.

    The 486DX added in pipelining, one of Intel's first attempts at RISC-like behavior in a CISC chip. This was also the first point where Intel made an onchip FPU. The 487 was merely a DX chip that took over the functions of the pitiful 486SX chip, a crippled CPU that probably had no right to exist.

    P5/P6 architecture took on multiple pipes, and that's about it.

    I'd have to pretty much agree that the IA64 architecture is the first big step in a long time, but that's also because most of the other advancements were hidden. The P6 architecture pretty much contains a 64-bit RISC chip with a CISC wrapper around it, so it's much faster than the older chips internally, but forced (in hardware, no less!) to act like its older siblings.

    ::Sigh::

    Intel... did we actually expect them to make *sense???*


    Raptor
  • testdrive.compaq.com ...Great program Compaq has. You should all head over there and see how badly an Alpha will kick an Itanium's ass.
  • You forget that Linux runs on Sun hardware now, so your argument is kinda without merit.

    garc
  • been there [compaq.com], done that
  • "High-end server configurations" ?!?!?!?

    It's a goddamn PC for christsakes. There's no difference. Except the price tag. All part of the nifty little "market segmentation" thingie Intel dreamed up. Basically a scam to artificially constrain supplies in the market, while not suffering from the constraint in manufacturing, and exploiting that constraint for maximum profit.

    Again though, if you want your 32-bit apps to run, you'll have to run them in SLOOOOW software emulation.
    Unless, of course, you pay even MORE $$$ so Intel can set a jumper somewhere and enable the built-in hardware emulation. Just more bit-crunching goodness from Intel.
  • Oh yeah, and there is a stable version for 64bit Ultra processors... I THINK NOT!!! THANK YOU VERY MUCH, and I have made my point quite well...
  • G3 is not 64-bit. G4 is also 32-bit, but its AltiVec instructions can be 128-bit (or two 64-bit, or four 32-bit).
  • I have been working at the IBM Java Tech Centre (http://developers.ibm.com/java) this summer (as an intern) with an Intel Itanium box, developing the Java VM for IA64.

    Linux is ready for IA64 - by the time I left, the compiler and OS were stable enough to compile most things. Though Intel still has a few things it needs to iron out in the hardware........

    Most stuff in fact compiles directly - I used TurboLinux frontier ia64 (http://frontier.turbolinux.com/ia64) - they got helixcode and stuff working! There is a porting guide on that website as well, and those of you who have an opensource project on sourceforge should be able to use the sample hardware to try to recompile and test your software.

    IBM is really big about Itanium - wait for more and more announcements ;-p
  • by Tet ( 2721 )
    Not unless all your apps are 64-bit, and even then....

    Even then, they're unlikely to come with an AGP slot. They'll probably be PCI only, so you're not going to be putting a GeForce card in it any time soon. I think Matrox are doing a PCI version of the G450, but that's probably the best you'll manage for a desktop Itanium machine in the near future.

  • Works (sort of). As far as I know, it still does not contain any IA-64 specific optimizations though (e.g., making use of register rotation). I think some independent groups have been working on IA-64 issues, but they have not yet merged these additions into the main development tree. Also, the release [cygnus.com] is just one big bundle with GCC, Binutils, GDB, and everything thrown into one big source tree. And it seems that nothing has been updated on that front since mid May.

    One might also use the Pro64 [sgi.com] compiler from SGI. This compiler does implement IA-64 specific optimizations and it even generates assembler code which is easily readable. The compiler does not come with an assembler or a linker, however, so you'll have to rely on GCC to do that part of the job for you.

  • by BZ ( 40346 ) on Wednesday December 06, 2000 @11:23AM (#576917)
    It's not just a matter of address bus... If you have a bunch of programs and you want to do virtual memory and you want each program to see the full address space.... well, you need 64bit addressing in your virtual memory system. It helps when that's just an int you can stick in a register....
  • by ibpooks ( 127372 ) on Wednesday December 06, 2000 @11:24AM (#576918) Homepage
    I think we're ready to take the plunge. All Itanium really is is just another platform. I don't see any difference in the relationship between Itanium and x86 and say the relationship between x86 and SPARC. Once the compilers are ported to the new architecture, I'd say a good portion of the existing code will compile nicely on the Itanium.
  • Those'll be one helluva desktop box.

    Actually, no they won't. Not unless all your apps are 64-bit, and even then....

    My PHB ain't gonna get ME one unless HE gets one, too. So, are there 64-bit versions of Solitaire and Minesweeper? ;^)

  • by crow ( 16139 ) on Wednesday December 06, 2000 @11:26AM (#576922) Homepage Journal
    At EMC, we sell high-end storage systems. They're essentially supercomputers dedicated to providing high-performance, ultra-reliable storage. We currently support up to 32GB of cache RAM in one system.

    I've seen low-end storage systems based on Linux in the one TB range. As these systems grow up, they'll quickly get into the >4GB range if they want any sort of performance.
  • This still allows for more than 16GB of RAM; however, the workstations probably only allow 16GB of RAM. This is probably not an error. It doesn't necessarily have to be a processor limitation; it can be a motherboard limitation...
  • We don't need more precision in desktop computing. 32 bits is arguably plenty for any floating point number in use on a desktop. I also covered the data path argument in the first post of this thread. Data paths are independent of the 32/64-bitness of a CPU.

  • The first generation of Itanium systems, using the 460GX chipset, will be expandable with up to 64GB of memory. Generations beyond that will be able to take more memory. Higher end Itanium systems designed by the likes of SGI, IBM and HP should eventually be able to take far more than 64GB. While it may be hard to imagine 4GB or even 64GB of memory being a bottleneck to performance, when you consider SGI has mentioned plans to eventually build machines using 512 Itanium processors accessing more than a terabyte of data in main memory, 64GB of memory, let alone 4GB, begins to look rather small

    from Sharky Extreme Article [sharkyextreme.com]

  • No, that doesn't follow at all.

    I just looked it up [thinkquest.org]; the 286 apparently had 24 address bits; 2^24 == 16 MB.

    Also, I seem to remember that under normal circumstances (real mode => backwards compatibility) you could only use 20 bits, which would bring you back down to 1 MB. But I could be wrong...

    The 386 actually did have 32 address bits, though, which gives us the current 4GB limit...
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • Any major app can be designed to make happy use of large volumes of RAM. Even Mozilla. Suppose you designed a desktop system to be left on forever, with the browser always in memory and all file cache memory resident instead. You'd have a blazing fast browser thanks to your mondo memory.

    For a less pie-in-the-sky example, most any RDBMS will use up every byte of memory you can throw at it. Page cache, page cache, page cache. High-volume enterprise systems suck up RAM like no tomorrow, and put it to good use.
    --

  • You're pretty close, but you're off by 16 bits.
  • How this got moderated down *twice* as offtopic is a mystery... the responsible moderators must have missed the significance:

    IBM's support of its own hardware choices for Linux systems is sketchy at best... ThinkPads were merely the best example, because they must use cutting-edge technology to provide the best performance per unit of battery cost.

    Just as the S3 video for a ThinkPad's Mobile/Savage IX is hard to configure, so it is with the majority of the S3 line IBM uses. Does IBM take notice? If you examine the servers on their website, they say they support their hardware, but in the asterisked footnotes they say it is only tested to work with a plain-jane SVGA display.

    Recently DELL made an announcement that it would incentivize hardware manufacturers to be more forthcoming on their specifications for Linux drivers. Can't IBM do likewise? Is the crippled support they actually implement worth claiming as support at all?

    Another site to check is Red Hat [redhat.com]. They sort supported systems by manufacturer, including IBM. There you can see which systems are "supported" for RedHat (which in turn should mean support for RedHat-compatible Mandrake), and in what ways the support falls short.

  • by photon317 ( 208409 ) on Wednesday December 06, 2000 @11:31AM (#576949)
    Let me preface this by saying: I'm all for the continued development of 64-bit processors. They are important.

    That being said... In many circumstances today 64-bit processors are a waste... especially in a desktop. 64-bit (and wider) data paths are certainly a big help, even on a consumer desktop. 64-bit registers and instructions to natively and atomically handle 64-bit values are not a gain, they are a loss. My reasoning here is that on a desktop-type machine, most (90%+??) of the numbers traversing the registers are well within the 32-bit range... and you've wasted a buttload of {silicon|power|heat|engineering_talent} on that 64-bit support that could've been spent elsewhere.

    Given two machines with wide data paths and 4GB of memory (which fits in both architectures), a 32-bit processor would blow the socks off of a 64-bit processor, assuming both have an equivalent number of transistors, power input, and engineering input. And remember, I'm talking about desktop apps and games here.... Obviously everything I've said above is invalid when you do _real_ scientific computing, which regularly involves >32-bit numbers, or really needs direct access to >4GB of memory.

    • Actually, the Itanium will run x86 code (slowly); it has hardware emulation (which can't take full advantage of the Itanium's parallelism).
    Hmmm - not sure about this "hardware emulation".

    A Pentium 2/3 core basically has

    1. an x86 -> RISC decoder
    2. a bunch of RISC execution units.
    An Itanium core basically has
    1. an x86 -> RISC decoder
    2. a bunch of RISC execution units.

    The only difference is that in the Itanium you have the choice to either execute x86 instructions as normal, or to switch off the x86 decoder and start fetching 128-bit VLIW bundles that break down into 3 x 41-bit RISC instructions, which execute directly on the internal execution units.

    But the way that x86 instructions are executed in the Itanium is in effect the same as in a Pentium 2/3.

    • You must have an OS compiled for the EPIC instruction set.
    Since the processor boots into x86 mode, provided that you have a backwards compatible system architecture, an IA64 machine should run DOS, Minix, x86 Linux, etc - any IA32 OS - without recompilation.

    Furthermore, the processor supports switching modes (64 -> 32 or vice versa) whenever it is interrupted, so an almost fully 32-bit OS can cheerfully support 64-bit apps, even servicing their system calls with 32-bit interrupt handlers. Conversely, a 64-bit OS can run 32-bit apps, servicing their system calls with 64-bit interrupt handlers.

    One could speculate that Intel looked at the amount of 16bit code still kicking around in win 9x, and decided that it would be a long while after release that we saw a fully 64bit windows :-)

    cheers,
    G

  • It is great that IBM is offering a choice of distributions, rather than just RedHat (which is what most OEMs do), but there doesn't seem to be any mention of Debian or Slackware, which I thought were very popular. I don't know if they count as "top 4", which is what the article says IBM is supporting, but I know they're very widely used. Is this a sign of corporate foul play or just financial necessity? It doesn't seem that if they're supporting (or at least installing) four different distributions it would hurt them terribly to install one or two more, especially since Slack users tend to be pretty Linux-savvy already, and one could probably say the same about Debian users, too. I'd be inclined to say that IBM is just afraid of non-commercial backing for the distributions it supports, which is unfounded if you ask me.

    -----
    # cd /
  • We don't need more precision in desktop computing. 32 bits is arguably plenty for any floating point number in use on a desktop.

    I respectfully disagree. I'm virtually certain that if you administered truth serum to application writers who know anything about numerics, they would swear up and down that they never want to deal with anything other than IEEE 754 standard 64-bit double precision numbers, and are only forced to do so due to dorky efficiency concerns with stock (commodity) hardware.

    The legacy of backward compatibility (which amounts to backward capability in many situations) is one of the biggest barriers to advances in consumer and desktop machines at this time. An interesting (and possibly vital) point about Free and/or Open software is that it's far quicker and easier to adapt older applications to new platforms because enough of the affected users are empowered to improve and change the legacy apps.

    One other nit about the need for more precision and floating point: for slightly more than historical reasons, there is still at best squeamishness about using FP arithmetic for certain financial calculations, and a 32-bit unsigned integer quantity is only able to represent values in the range of the milli-Gates or micro-GDP...

  • Yes; I thought it was 54 bits, but I could be wrong.

    Regardless, this is a correct assessment. Intel released an 8-bit processor that could address 640k and a 16-bit processor that could address what, 4MB? It definitely wasn't 2^8 or 2^16 in either case.
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
