New SGI Altix 3000
dlloyd writes "SGI has just publicly announced the Altix 3000 series of computers that can scale from 4 to hundreds of processors, with up to 64 processors per single system image. Processors come in C bricks of 4 CPUs each. I/O is done through IX and PX bricks (12 PCI slots per brick; IX bricks have a base I/O controller and two Ultra160 disks inside), just like on the Origin 3900 series. Anything more than 8 CPUs (2 C bricks) is connected by R bricks, which route the NUMAlink packets between nodes. The NUMAlink network is good for an aggregate 6.4 gigabytes/sec to *each* node, and that scales as you add more C and R bricks. Basically, you can think of this as SGI's Origin 3000 series, except that it runs Linux and has Itanium2 processors. The performance and scalability are like nothing that has ever run Linux and *far* ahead of the competition. For those of you who wonder why anyone would need a 64-processor Linux machine, many scientific and technical customers prefer running their code on large, single system image machines. Such machines are also less labor intensive to maintain and administer, and they work much better on code that needs to share memory and pass messages between threads (even Myrinet and MPI are glacial compared to SGI's NUMAlink network and running code multithreaded)."
  • by yppiz ( 574466 ) on Tuesday January 07, 2003 @02:00PM (#5033875) Homepage
    What is keeping SGI afloat? Service contracts on existing machines?

    --Pat /
    • From the NASDAQ Summary []:
      Revenue: $1.3 billion for fiscal year 2002
      servers accounted for 38% of fiscal 2002 revenues; Global services, 34%; Visual workstations, 18% and other, 10%

      To answer your question, the revenue from the sales of services is only about one-third of their total revenues. I don't know if this is considered a lot or not.
      IBM has a similar report: global services accounted for 41% of 2001 revenues. This is before the purchase of PWC, so it is probably going to be higher in 2002.

    • What is keeping SGI afloat?

      I think that they've been developing hardware for Sony []. ;)
  • Why Linux? (Score:4, Interesting)

    by HeelToe ( 615905 ) on Tuesday January 07, 2003 @02:00PM (#5033878) Homepage
    I still don't understand why SGI has foregone such a great OS as IRIX. Why go with Linux? Just trendy, or does it really offer advantages for scientific computing?
    • Re:Why Linux? (Score:5, Insightful)

      by larien ( 5608 ) on Tuesday January 07, 2003 @02:02PM (#5033916) Homepage Journal
      Yup, IRIX was good, but maintaining a full OS takes a lot of money. This way, they can piggy-back on investments made by other people & companies while still having a modern OS. They've already integrated XFS into linux, and it wouldn't surprise me to see other SGI/IRIX technologies coming into linux in the same way. Similarly, IBM have migrated JFS into linux.
      • Yup, IRIX was good, but maintaining a full OS takes a lot of money. This way, they can piggy-back on investments made by other people & companies while still having a modern OS
        That's exactly the reasoning SGI had the last time around, when it adopted Windows NT and bet the farm on it... and almost died.

        I'm afraid of these sudden changes in the direction of, let's face it, trendy technologies. Linux still has to prove itself in systems with many CPUs. There really isn't any reason to choose Linux over IRIX, performance-wise.
        • Re:Why Linux? (Score:2, Informative)

          by RageEX ( 624517 )
          "I'm afraid of these sudden changes in the direction of, let's face it, trendy technologies. Linux still has to prove itself in systems with many CPUs."

          This is not so sudden, they've been planning such a change for many years. Some of their delays have been tied to Intel's delays. SGI has had large development systems based on Itanium for a long time. And they've been trying to improve Linux (with some resistance) for some time.

          "There really isn't any reason to choose Linux over IRIX, performance-wise."

          Except that SGI has tied Linux to IA-64 at a certain price point. If you want a large SGI IA-64 system then you're stuck with Linux. If you need IRIX &/or > 64 CPUs in a single image system then you should buy an Origin.

          It was determined a long time ago that porting IRIX was way too costly and complicated.
    • Re:Why Linux? (Score:2, Insightful)

      by ozzee ( 612196 )

      Having worked on IRIX, I can say that Linux moves much faster and IRIX will eventually fall behind in features simply because the investment is huge.

      When SGI bought Cray and then started the fatal Win32 effort, it was obvious that they were not going to be able to make it succeed without increasing the cost dramatically.

      Customers also want Linux now; with IBM pushing Linux, customers want a simpler maintenance strategy. For a large outfit, a heterogeneous network running AIX, IRIX, HP-UX, etc. is harder to maintain than a homogeneous network all running Linux.

      • Re:Why Linux? (Score:2, Insightful)

        by Nexx ( 75873 )
        Let me speak as an example of a customer. The best server-side solution is one that is inexpensive to obtain and maintain.

        Of course, what the manufacturers want is something that is expensive to obtain, and almost impossible to maintain without an expensive support contract :)

        Seriously, Linux is cheap now, with the inexpensive talent around. What prevents most companies from deploying Linux is the (perceived) lack of quality *commercial* support (i.e. someone they can sue when something goes wrong :P)

    • Not foregone, by any means. Irix is still offered to customers. It's only logical, since Irix is the _only_ Unix variant that can run on a 1024-CPU single-image system. Yes, I am aware of the IBM and HP systems with a similar number of CPUs, but those are not single-image; those are partitioned into many separate smaller systems, each one running its own kernel.
      Irix can run a single kernel on 1024 CPUs simultaneously. It's the only one, until now. Linux can do only 64.
      • Linux can do only 64.

        this feeble OpenSource operating system has quite some shortcomings. someone call Bill, i'm sure he'll get his eXPerienced developers working on a 65 cpu OS, maybe 66.
    • Re:Why Linux? (Score:3, Informative)

      by drinkypoo ( 153816 )
      IRIX has a terrible name among geeks who have only dabbled with it (like me) and have noticed their amazing track record with security: amazingly goddamned stupid. Executing "xhost +" by default every time you log into X is the most idiotic thing I can think of. The patch cluster for IRIX 5.3 is bigger than IRIX 5.3! And so on. Given how bad they've been, it's hard to trust them now.

      On the other hand, Linux has two things which give it a better name than IRIX: it's open/free, which is obvious, and it's new, so it has an excuse to have some 'issues'. Meanwhile IRIX was around a long time and is still horribly, terribly insecure. It's not worth it to me to have Unix dressed up pretty and easy to use in exchange for security; I frankly want both.

      Linux is also growing in leaps and bounds, and implementations like this one only serve to prove it; support for vast numbers of processors is one place Linux has traditionally flailed, but as time goes by and manufacturers spend more money on making Linux scale, the last few blocks to running the same OS on a PDA and a supercomputer (i.e., from a common, unpatched codebase) are going away. That is undoubtedly powerful, because your code will (theoretically) work on any platform large enough to contain it with nothing more than a recompile.

      Granted, there are attempts to do that at the application level rather than the OS level -- and I'm talking about Java and .NET here -- But they fit a somewhat different need, and they will likely never be as compatible between disparate platforms as having the same operating system underneath your program (duh.)

      • Irix may not have the best track record when it comes to security, but using Irix 5.3 as a yardstick isn't really fair. It'd be like complaining about the security in Slackware 1 or Ultrix. All Unix vendors had security problems back then, and the way Irix tried to make things friendly to the end user didn't help.
        • I know that that's an old version, though when I played with IRIX (a few years ago) people were using exactly the same arguments in favor of IRIX, but that was the most recent version which would run on my Indigo R3000. The fact that it was fast at all is a testament to the beauty of Unix, let me tell you... but the case was just adorable! Anyway, what does IRIX have to offer that Linux doesn't have already or will have any day now (like in the next minor rev of the kernel?) Anything that couldn't be added or ported relatively trivially?
          • Lots of scalability things, mostly. Irix is built to scale to impossible sizes. Linux for the most part is still optimized for small machines (no more than 2 processors). It's all of the little things that do it: few arbitrary size restrictions (and none that aren't tweakable), select calls that run in O(n log n) instead of O(n^2) time, etc. It's all a matter of design. The price they pay is that Irix runs kind of slow on low-end machines compared to Linux, although it is hard to compare because nobody ever installs Irix on low-end PC hardware.
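The O(n log n) vs. O(n^2) point above is easy to see with a toy operation count (a sketch only; the cost models are illustrative, not measured from either kernel):

```python
import math

def ops_quadratic(n):
    # Naive model: each of n waiters rescans all n descriptors.
    return n * n

def ops_nlogn(n):
    # Tree-based model: each of n waiters costs about log2(n) work.
    return n * math.ceil(math.log2(n))

# The gap widens fast as the machine gets bigger:
for n in (64, 1024, 16384):
    print(n, ops_quadratic(n), ops_nlogn(n))
```

At 64 descriptors the two are within a small factor of each other; at supercomputer scale the quadratic version does orders of magnitude more work, which is why these "little things" matter on big iron.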
    • Why increase the perception that you're tying the end user down to a technology that will soon be unsupportable? The real question would be: what would SGI gain from spending the money on porting Irix? Would that expense be worth the benefit?
  • by RebelTycoon ( 584591 ) on Tuesday January 07, 2003 @02:00PM (#5033884) Homepage
    IX, C and R bricks

    The more expensive his LEGO gets...

  • Shoot me (Score:3, Funny)

    by Znonymous Coward ( 615009 ) on Tuesday January 07, 2003 @02:01PM (#5033892) Journal
    Imagine a... Bang!

  • What... (Score:1, Funny)

    by Anonymous Coward
    For a moment I realized I had the chance to make a silly first post remark. Then I hesitated a minute and realized I'd be more on topic with a mandatory Beowulf remark.
  • But (Score:1, Interesting)

    by Anonymous Coward
    How many keys/sec? []
  • nuff said. Makes a nice webserver one should think ;-)

    Seriously - sounds like SGI is trying to stop people moving away from their system. Maybe they'll succeed in the higher end of the market with this.
  • by 56 ( 527333 ) on Tuesday January 07, 2003 @02:03PM (#5033923)
    ...and pass messages between threads

    Is that something you would like to share with the class, Altix?

  • by ozzee ( 612196 ) on Tuesday January 07, 2003 @02:04PM (#5033931)

    Scientific computing has always been SGI's niche. They unfortunately stumbled around the time that Belluzzo took the helm and wasted the entire internet bubble recovering from the mess that he caused.

    It's great to see that they're finally back and doing some really serious new stuff.

    It's a shame, though, that they won't be running the AMD 64-bit chips, although I'll bet someone is looking into that.

    Congrats SGI !

    • I don't think you can call this in any way new. They are simply packaging other people's (Intel + Linux) products into boxes (yes, I know: not as easy as it sounds).
      • It is new. Find me another 64-processor shared memory IA-64 Linux system. I'd much rather develop for a shared memory system than a message passing system. And for a lot of supercomputing apps, a Beowulf cluster just won't cut it.

        • much rather develop for a shared memory system than a message passing system.

          Even for distributed memory message passing applications I kind of like the convenience of a single system image.

          Running on a Sun E10K was more convenient compared to running on the various clusters and going through a batch queueing system.

      • Actually, the packaging itself sounds interesting, something that others haven't done yet.

        Even if it does use chips made by someone else, there is a lot of work that goes into making those chips work together, and probably a fair amount of work adjusting the Linux kernel into something that scales so large.

        I bet they had their work cut out for them making this thing as modular as it seems to be. Probably a lot of work just in the chipset too.

        It is unfortunate that it uses Itanium2 chips. I don't think competing Opteron systems will be available for at least a little while yet. I think I heard from an AMD rep that Cray had committed to making systems based on Opteron.
  • Don't forget (Score:5, Insightful)

    by PD ( 9577 ) <> on Tuesday January 07, 2003 @02:05PM (#5033937) Homepage Journal
    These machines support 512 GB of RAM in one chunk. A Linux cluster might outperform this thing, but you'll need to chunk your data up to fit into the individual nodes' memory. Sometimes this can be a pain in the neck to do, hence the market for something like this.
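The chunking pain mentioned above can be sketched in a few lines (a hypothetical 1-D decomposition with one ghost cell of overlap per side; the function name and layout are made up for illustration):

```python
# Hedged sketch: splitting a 1-D dataset across cluster nodes with one
# "ghost" cell of overlap on each side -- the bookkeeping a big
# shared-memory machine lets you skip entirely.
def chunk_with_ghosts(data, nodes):
    size = len(data) // nodes
    chunks = []
    for i in range(nodes):
        lo = max(0, i * size - 1)                # one ghost cell on the left
        hi = min(len(data), (i + 1) * size + 1)  # and one on the right
        chunks.append(data[lo:hi])
    return chunks

parts = chunk_with_ghosts(list(range(12)), 3)
print(parts)
# [[0, 1, 2, 3, 4], [3, 4, 5, 6, 7, 8], [7, 8, 9, 10, 11]]
```

On a 512 GB single-image machine the whole array just lives in one address space; on a cluster you carry this ghost-cell bookkeeping (and the exchanges that refresh it) through your entire code.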
  • I/O is done through IX and PX bricks (12 PCI slots per brick, IX bricks have a base I/O controller and two ultra 160 disks inside)

    Looks like the Machine Planet is coming sooner than we realized! We just have to watch out for the bloody Tleilaxu.

    Pahwindah Dirt
  • by Anonymous Coward
    Don't believe me? Why don't you look at dlloyd's posting history []. There is none!
  • From the story posters website

    Work: Field Technical Analyst, SGI

    Now don't everyone go submitting their products at once. ;)
    • I was just about to post this.

      We all know /. is posting one advertisement a day (or thereabouts) as a news item. Today, it's this. Yesterday, it was the cool-your-pc-into-the-wall [] company.

      Does anyone else feel a disclaimer, or flag, or something should be used to mark these news-vertisements? Maybe a new topic icon?
    • Who cares? Would you have felt better if I had submitted it (I would have if I had noticed the press release sooner) or if some AC had done so?

      This is News for Nerds. Groundbreaking stuff for Linux in terms of performance and scalability. An article was already written at NewsForge about it. As long as the editors post interesting stories, who cares who submitted it? Isn't this better than a story about how Microsoft did stupid thing X today?

  • I just got my copy of Linux Journal, what, a week ago, and you guys are just now reporting on this? You didn't even steal your "news" from the right source!
  • i always wanted a nice, cozy, warm, ALTIX brick house... i bet i wont even need a fireplace!
  • Brick processor (Score:3, Interesting)

    by intermodal ( 534361 ) on Tuesday January 07, 2003 @02:13PM (#5034009) Homepage Journal
    I rather like this concept for when you run across a board someone is getting rid of a few years down the road and need to pair up older processors. I recall getting a couple of dual-PII workstations a year or so ago, and finding a pair of matching (and working) processors to put in them was hell... this way I could have just searched eBay or my parts stash for a single old part.
  • by mckwant ( 65143 ) on Tuesday January 07, 2003 @02:14PM (#5034015)
    I don't do this for a living, but it seems that $/MIPS is the only benchmark even worth discussing, so shouldn't one be able to put together massive clusters of boxen to do the same thing, only without the SGI price tag?

    Correct me, because I'm almost certain I'm wrong.
    • by XaXXon ( 202882 ) <> on Tuesday January 07, 2003 @02:20PM (#5034056) Homepage
      Yeah, you're wrong. This isn't a beowulf, it's a multiprocessor box. It runs standard software. That means you don't have to re-write everything to support clustered solutions.

      Lots of people don't understand that a 1024-processor beowulf won't run battlefield 1942 (if you've ever played it, you understand what I'm talking about), because it's not like a 2-processor workstation box. You have to write your software so that discrete pieces can be offloaded to other nodes and have the results posted back. A beowulf cluster is similar to SETI@Home or whatever distributed computing project you like. Though the interconnects are faster, the general idea behind how the software works is similar.

      With this SGI system, it's like a 2-processor workstation on steroids. You can run standard multi-threaded code on it and actually use 1024 processors (and could possibly run battlefield 1942).
      • Correction (Score:3, Informative)

        by XaXXon ( 202882 )
        Now that I've RTFM, let me correct my previous comment -- The Altix3000 runs a single Linux image over up to 64 processors and 512 GB RAM. After that, it's NUMA.

        It can, however, do high-speed shared memory over all nodes in the cluster, allowing you to store HUGE shared data sets. Here's a link [] to the info on the memory.

        • It's 64 processors with linux and 1024 with irix - so you are both right.

          (I wonder, however, if linux managed to support more processors whether this would just work with more processors (on a single image etc etc))
        • Re:Correction (Score:4, Informative)

          by fgodfrey ( 116175 ) <> on Tuesday January 07, 2003 @09:29PM (#5036686) Homepage
          I think you're confusing NUMA with message passing. NUMA stands for Non-Uniform Memory Access (actually, this machine is cc-NUMA, Cache Coherent NUMA, but I digress). NUMA means that when I do a standard memory reference it will go faster or slower depending on where in memory that reference goes. This is accomplished by having a group of processors and a group of RAM DIMMs tied to each other with a memory controller that is also a router. If you want someone else's memory, you go over the router to the other memory controller and it returns the answer. That takes longer than going to your local memory (access time is longer or shorter depending on locality, hence non-uniform), so the machine is NUMA. IIRC, this machine is NUMA above 2 processors, up to the max system size. Despite running multiple Linux kernels, all the memory is visible to all the processors, even outside your own kernel. It seems they've picked 64p as the maximum useful size for a single kernel.

          What about your quad P4? That's SMP: symmetric multiprocessing. Symmetric means that all memory accesses take the "same" amount of time, since there is only one pool of memory for all the processors and no processor is closer to it than any other. SMP systems larger than around 32 processors are rare, since your single memory subsystem needs to feed *all* the processors.

          So what is a Beowulf cluster then? A typical Beowulf cluster (well, just a cluster in general) is a group of nodes which can't directly address each other's memory and hence have to send a message to the other guy to read/write his memory. Cards like Myrinet exist to try to get some form of shared memory between the nodes in the cluster to varying success. Compared with this, they are low bandwidth and high latency. (Of course compared with a Cray X1, this machine is low bandwidth, but I'm biased :)

          There have been a variety of NUMA machines released over the years. Highlights other than this thing include the Thinking Machines boxes, the Cray T3D and T3E, the SGI Origin series, and the Cray X1.
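The non-uniform part can be made concrete with a toy average-latency model (the nanosecond figures below are invented for illustration, not SGI NUMAlink specs):

```python
# Toy cc-NUMA average-latency model; all numbers are made up
# for illustration, not taken from any real machine.
LOCAL_NS = 100    # assumed local DIMM access latency
PER_HOP_NS = 50   # assumed extra latency per router hop

def avg_latency_ns(remote_fraction, hops):
    # Average over local references and references that cross `hops` routers.
    local = (1 - remote_fraction) * LOCAL_NS
    remote = remote_fraction * (LOCAL_NS + hops * PER_HOP_NS)
    return local + remote

# All-local (SMP-like) vs. half the references going 3 hops away:
print(avg_latency_ns(0.0, 0))   # 100.0
print(avg_latency_ns(0.5, 3))   # 175.0
```

The further your references stray from the local DIMMs, and the more of them that do, the worse the average gets, which is exactly why NUMA-aware data placement matters on these boxes.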

    • by Zathrus ( 232140 ) on Tuesday January 07, 2003 @02:37PM (#5034190) Homepage
      No, because $/MIPS is a misrepresentation. Heck, MIPS alone is meaningless, because all it does is take a theoretical maximum of CPU speed. MIPS doesn't take into account anything beyond CPU speed - like memory speed, backplanes, drive arrays, etc.

      If you have heavily interrelated datasets, like in just about any thermal dynamics/plasma/weather problem, then there is so much interdependency between adjacent "cells" that each work unit needs information from adjacent work units constantly. Spread that system out on a cluster solution and you're DOA, because your communications between boxes are horrendously slow, with latencies measured in milliseconds instead of nanoseconds. So while you may have some absurd number of MIPS, the reality is that the CPUs are sitting idle 90% of the time, waiting for data from some other CPU/memory block.

      Take all those CPUs, all that memory, put them in a single box and do the backplanes and memory interfaces right (this is where the cost comes in by the way) and your latency becomes reasonable and you actually get all those MIPS.

      It boils down to what the problem set is. If you need an obscene amount of transactions or have a highly interdependant problem set then you're better off with a single large box. If you can break up the problem set and minimize interactions then clustering is your friend.

      There's also the issue of maintenance: while the hardware costs may be lower for a large cluster, the time spent fixing the hundreds of boxes may kill you. Have a single box that's designed for redundancy and you'll pay a fortune for the support contract, but you won't spend an appreciable amount of your time on hardware support on the rare occasions it actually needs something.
      • If you have heavily interrelated datasets, like in just about any thermal dynamics/plasma/weather problem, then there is so much interdependency between adjacent "cells" that each work unit needs information from adjacent work units constantly. Spread that system out on a cluster solution and you're DOA because your communications between boxes are horrendously slow, with latencies measured in milliseconds instead of nanoseconds.

        You've obviously never actually researched how distributed finite-element simulations work, because you're absolutely wrong.

        In most physical FE methods, each cell interacts only with its 6 nearest neighbors. Yes, the computation requires information that spans across cells, but there's no reason to assign a different CPU to each cell. The cells can be grouped into blocks and assigned to processors that way.

        Remember that surface area grows more slowly than volume. As the size of your cell groups increases, there is more volume within them per unit of surface area. And since data only needs to be communicated across the SURFACE of each cell group per simulation timestep, the method actually gets MORE efficient as you make the simulation bigger.

        So your "obscene" number of transactions turns out to be highly localized in space, which minimizes communication overhead. In fact, if your cell blocks are box-like in shape, then each block requires only 6 logical communication links to the adjacent boxes. This could be realized by a traditional switching fabric, or with actual physical links.
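The surface-vs.-volume argument above is easy to check numerically (a sketch for a cubic block of cells; the function name is ours, not from any FE package):

```python
def comm_to_compute_ratio(n):
    """For an n x n x n block of cells, per-timestep compute scales with
    the volume (n**3) while communication scales with the surface (6*n**2),
    so the ratio is 6/n and shrinks as blocks get bigger."""
    volume = n ** 3
    surface = 6 * n ** 2
    return surface / volume

for n in (8, 32, 128):
    print(n, comm_to_compute_ratio(n))
```

Double the block edge and the communication-to-compute ratio halves, which is why bigger per-node chunks make the cluster approach less painful (though never latency-free).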

  • Itaniums? (Score:1, Offtopic)

    Could someone shoot me some info in-regards to when Itaniums will be available in stores?

    If I understand correctly.. the new AMD hammer (x86-64) will be marketed towards regular consumers. Is the same not being done with Itaniums?
    • Re:Itaniums? (Score:2, Informative)

      by mrnick ( 108356 )

      My understanding is that the Itaniums do not contain a 32-bit instruction set. That being true, if it were released today on PC motherboards it would most likely not have an operating system to run on. You could run Linux, but not M$ Windows. Now, to this crowd that might sound nice, but to the masses it would be useless. AMD's hammer does have backward compatibility with the 32-bit x86 instruction set... so one could run 32-bit M$ Windows until there was a 64-bit version, and still use that machine to run enhanced 64-bit applications and/or operating systems.

      Just my 2cents...

      Nick Powers
      • That being true, if it were released today on PC motherboards it would most likely not have an operating system to run on.

        Windows XP 64-bit Edition [] is available for the Itanium architecture.

        It even has a service pack [] out.

        Moreover, the Itanium can run 32-bit programs. It just isn't very fast at doing so.

    • Re:Itaniums? (Score:5, Informative)

      by jbischof ( 139557 ) on Tuesday January 07, 2003 @02:48PM (#5034271) Journal
      Itaniums are marketed toward high-end servers only. Generally not available to, and too expensive for, the general public (if that is what you mean by regular consumer).

      In Intel's mind, the Itanium doesn't compete with the Opteron. Opteron will be at Xeon's throat, trying to tear up some of the 95% market share that Xeon has in corporate and other mid-range servers.

    • Besides the excellent points made by the poster, there is one detail left out. Itanium is on the way out (in spite of systems being sold with them today) as soon as hammer gets here, in the form of clawhammer even (let alone sledgehammer.) Itanium2 is faster but will still be expensive (Though I suspect not QUITE as) when hammer hits shelves. Sledgehammer will probably massacre itanium entirely and it is supposed to come out at the same time as clawhammer, though if motherboard manufacturers don't have enough 8 way motherboards cooked up by then I can see they might delay sledge until some more show up.

      intel is taking advantage of being the first (between them and AMD) to bring a 64-bit solution to the market. Since it's from intel, you know all of their partners and customers with the cash will roll out itanium systems; you know there's a market for anything from intel, no matter how shitty. Why these people aren't running ultrasparcs is beyond me; it's a mature 64-bit architecture and it really doesn't seem to be any more expensive than itanium.

      Anyway right now hammer isn't out, there's no competition, and itanium processor modules are still well up over a grand last I looked. When hammer debuts hopefully around $300-500 for the range -- supposedly hammer will be priced "comparatively" with athlon xp whatever that means -- itanium will wither and die for all but existing contracts, itanium2 will take a serious hit (but how serious?) and intel will probably announce something new on the horizon which will scale beyond hammer but not for a couple years. Sound familiar?

    • Itanium is not suitable for consumers because:
      1) It runs 32-bit software quite slowly.
      2) It has so many transistors that it's really big, and the cost of chips goes up with size.

      So it would hardly be easier to market to end-users than, say, Sun's Sparc chip.

  • Price? (Score:2, Interesting)

    by s3xyb17ch ( 638037 )
    Just curious, but did anybody notice an estimated price for various configurations of either the 3300 or 3700? I couldn't find any price info on their site.
    • The NewsForge link does mention the price for the 64-processor SGI Altix 3000. And the answer to your question is, if you need to ask you cannot afford it.

    • but did anybody notice an estimated price for various configurations of either the 3300 or 3700? I couldn't find any price info on their site.

      See This link [] and look at the bottom paragraph. Ouch.
      • Re:Price? (Score:3, Informative)

        by afidel ( 530433 )
        Those prices really aren't bad. $70,000 for 4 CPUs and 32GB of RAM is almost exactly what we paid for our 4-CPU, 32GB Sun V440 a couple of months ago, and this thing has more CPU power and a lot, lot, lot more expandability. $1.1 million for 64 CPUs is pretty cost competitive with Sun and IBM too.
        • Also remember that many Unix software vendors license their software by the CPU. If you can put a faster commodity CPU (Intel) into a serious enterprise chassis (IBM, Sun, SGI), then you could significantly reduce the number of CPUs that your application needs. The savings possible in that sort of situation are ENORMOUS.

          Just consider that Oracle Enterprise licensing costs more per CPU than Sun V series hardware.
        • But this being Slashdot... maybe an OpenSource plug is in order: what good is it to have a closed-source CPU like the Itanium, when you can have an open architecture where anybody can contribute, like the SPARC?

          But seriously, the Sun Fire series have some bells and whistles that make them rather attractive. What with the great backplane interconnect and the bandwidth-to-storage, the fibre-channel etc. Also remember: the SGI machine we're talking about is a NUMA architecture, which means that the software will need to be written for it. Unlike classic SMP (like the Sun Fire), it has a kernel image for each CPU. Besides, there isn't much 3rd party software for the Linux on Itanium yet, anyway.

          So, don't have sour grapes over your investment; I believe the V440 will get much more work done in the foreseeable future.
  • Looks nice too... (Score:4, Insightful)

    by Midnight Thunder ( 17205 ) on Tuesday January 07, 2003 @02:21PM (#5034058) Homepage Journal
    One thing that I have always liked about SGI systems, is that not only do I get a high performance system, but I also get something that looks good design wise. Other companies, such as IBM give me the feeling that I am buying, in equivalent terms an F1 car with the body of a Lada. If I pay top of the line prices, I also like to have something nice to show off.
  • Altix? (Score:5, Funny)

    by Gudlyf ( 544445 ) <(moc.ketsilaer) (ta) (fyldug)> on Tuesday January 07, 2003 @02:21PM (#5034062) Homepage Journal
    With all those bricks involved, maybe they should call it the SGI Tetrix.
  • Cool (Score:2, Interesting)

    by WindBourne ( 631190 )
    As Linux gets into high-end systems, it will drive the industry to compete. Almost certainly, Sun and HP will have to release high-end systems with Linux rather than trying to keep it low-end only. Otherwise, it will be SGI and IBM only.
    Now, if a major vendor would start using Linux in an innovative way rather than simply trying to lower their costs, that would help drive real sales.
    • Re:Cool (Score:3, Insightful)

      by Quill_28 ( 553921 )
      Why would Sun and HP have to release a version of Linux on their high-end systems?

      Sun's cost for Solaris wouldn't be any higher than SGI's cost to port Linux to their systems.

      It would seem that costs for people producing high-end systems would go down using an OS with no license fee, like Linux.
      • Sun should in no way drop Solaris. The only reason most people buy Suns nowadays is that they can run their CAD software. And the CAD companies only release their software on Suns because they know that not that many hackers can get their hands on them.
  • by cant_get_a_good_nick ( 172131 ) on Tuesday January 07, 2003 @02:44PM (#5034240)
    From the Register []
  • MPI? (Score:4, Interesting)

    by Alex Belits ( 437 ) on Tuesday January 07, 2003 @02:45PM (#5034255) Homepage

    even Myrinet and MPI are glacial compared to SGI's NUMAlink network and running code multithreaded

    Don't mix shitty parallel computation libraries and actual performance. Multithreaded applications without MPI are, of course, faster than anything with MPI, however it says absolutely nothing about:

    1. Multithreading vs. multiple processes.
    2. Myrinet
    3. Network programming
    4. Clustering
    5. NUMA implementations that reduce everything to SMP with a cache that gets flushed a lot
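Point 1 above is worth making concrete. Here is a minimal sketch of the two programming models, using Python threads for both (real codes would use pthreads/OpenMP vs. MPI; the function names here are illustrative only):

```python
import threading
import queue

DATA = list(range(1000))

# Shared-memory style: worker threads write partial sums straight into
# a list both sides can see -- no copying, but you must think about races.
def shared_sum():
    results = [0, 0]
    def worker(i):
        results[i] = sum(DATA[i::2])
    ts = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
    for t in ts: t.start()
    for t in ts: t.join()
    return sum(results)

# Message-passing style: workers own their slice and send the partial
# sum as a message; nothing is shared except the channel itself.
def message_sum():
    q = queue.Queue()
    def worker(i):
        q.put(sum(DATA[i::2]))
    ts = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
    for t in ts: t.start()
    for t in ts: t.join()
    return q.get() + q.get()

print(shared_sum(), message_sum())  # both 499500
```

Both get the same answer; the difference is that the shared-memory version touches a common structure directly (fast over NUMAlink, racy if done carelessly), while the message-passing version copies results through a channel, which is where cluster interconnect latency bites.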
  • by more ( 452266 )
    Don't get it installed in your office! :-)

    You can use the 64-processor version not only for the simulation of melting iron, but for the real-life purpose as well. A dual-Itanium2 HP ZX6000 is heating up my office like no computer before. When I turned the ZX6000 on, my daughters' self-made art (taped on the wall) started flapping in the warm air. To me it looks like Itanium2 is server room hardware, at least until we get the 130 nm version.

  • by LinuxParanoid ( 64467 ) on Tuesday January 07, 2003 @03:49PM (#5034884) Homepage Journal
    You guys are all missing the main point!

    SGI is the first billion-dollar systems vendor to move their totally high-end, million-dollar hardware to run Linux, and not just to run Linux poorly: their mega-boxes *require* Linux to perform excellently (unlike, say, IBM "Linux/390" mainframes, where Linux is not really the native OS supporting all the hardware features and is mostly a curiosity or a very expensive Apache server).

    The other vendors (Sun, HP, DEC, IBM) have not been nearly as aggressive and are depending on their own UNIXes to remain on their high-end boxes.

    SGI is depending on Linux and has tweaked it enough to run huge, 64-way complex NUMA systems. This is a major infrastructure bet on Linux, and (assuming this is a shipping, working product) a huge mark of progress for Linux that it can, today, support this sort of high-end scalable hardware.

    We all knew it *could*, in theory, but SGI has invested in making sure that *it does*!

    This marks a major shift of SGI to an Intel/Linux pure play. It's not just a bunch of low-end Linux server boxes (which they've done before, and Sun/HP/IBM also do), or boxes on which you can run either Linux or some proprietary UNIX. It's a full-scale, massive 64-way NUMA server that is optimized to run Linux.

    Hats off to SGI, I say.

    (I wish they had better business prospects, but that's hard with a niche product like high-end SMP/NUMA technical computing. We'll see if they can push it into a broader customer base with sufficient application support.)

    I wonder how Oracle would do on this sort of puppy?

  • If you look at their developer platform [] for this machine, you see Fortran, C++, and C listed. No Java.

    Just a thought for all the Java folk who got so defensive about my comparisons of their language to others. Java is a useful, powerful tool -- but if you want to develop for top-flight parallel hardware, you don't use Java.

  • SGI is a great technology company and like other technology companies they don't know how to market their way out of a paper bag. I really hope they can get the word out and sell enough of these systems and keep the doors open.
  • The Register also has a blurb on this []. I like the dig at Sun at the end, meow indeed.
