Silicon Graphics

SGI NUMAflex Linux System On Display @ SC2002

jarrod.smith writes "According to SGI will unveil its Intel® Itanium® 2 NUMAflex shared-memory supercomputer architecture (which runs Linux as its OS) at Supercomputing 2002, which runs this week in Baltimore, MD. The link at SGI says the system will be on display at the show. The exhibit floor opens this evening. Unfortunately I did not go this year. Can those lucky enough to be at the meeting scope it out and post comments?"
This discussion has been archived. No new comments can be posted.

  • by swordboy ( 472941 ) on Tuesday November 19, 2002 @06:32PM (#4710737) Journal
    Wow!

    NumaFLEX... And to think... All that AMD could come up with was Athlon 64 [com.com].

    You'da thunk that they'd at least stuck a period or an 'e' on there somewhere...

    eAthlon.64?
  • by Meat Blaster ( 578650 ) on Tuesday November 19, 2002 @06:33PM (#4710746)
    Having helped set things up, I was offered an opportunity to see the system in action. It's fast, much faster than previous offerings in the line, and apparently enough so (as marketing tells me) that it's well worth upgrading aging supercomputers or clusters.

    Additionally, it offers unparalleled scalability in the line of Linux supercomputing. This is a system built to grow with a business, although your business better be pretty much grown already to back the check you'd need to fill out to buy it.

    My conclusion: it's an excellent large-scale solution for academia seeking a more stable environment than can be achieved with Beowulf clustering, and an excellent price-wise solution for businesses seeking to expand without sinking a lot of money into unnecessary costs.

    • it offers unparalleled scalability in the line of Linux supercomputing

      Wow, sorry but that sentence smacked of marketing-speak. "Unparalleled scalability"? Hopefully this was just a play on words ;)

      businesses seeking to expand without sinking a lot of money into unnecessary costs

      What would those "unnecessary costs" be? (just asking).
      • What would those "unnecessary costs" be? (just asking).

        Proprietary software. The bulk of the cost of anything supercomputing falls on the non-standardized but more reliable hardware, the service contracts necessary in a mission-critical environment, and the software that runs on the system. Having Linux cuts back on that, although no doubt some software tailored to work in this environment will still be pricier than its counterparts on our x86 hardware because of the smaller customer base and ability to pay.

    • My problem with one of these is that Itanium2s are so damn hot, I wonder about the density of computational power. Granted, an individual Itanium2 smokes the hell out of an R14k, but since our 600MHz R14ks use on the order of 15W and I think an Itanium2 is up around 150W, we can get many times the computrons (R14ks aren't fast, but they aren't *that* slow) per Watt. Cooling is kind of a limiting factor, because these machines go in labs, not prepared computer rooms. (A rough per-watt comparison with these numbers is sketched after this comment.)

      Right now we have a couple of 8-way Onyx2s and we're in the process of building an 8-way Origin 300. For the kind of work we do (realtime simulations), where latency is king, we prefer to put each part of a task on its own CPU, so having 8 processors is nice, whereas having an 8-way Itanium2 would be prohibitively costly to cool, although it'd be nice for when we just crank: see 483/499 SpecInt/FP base for 600MHz R14k vs. 810/1350 SpecInt/FP base for 1GHz Itanium2.

      I *would* like to move to Linux from IRIX, I think. I really like the IRIX realtime support (all the REACT stuff), but I am tired of poor tool support and the lack of updates, etc. I think $500k worth of machines (in *that* lab) would warrant a better resolution of some issues we've had.

      Finally, I very much anticipate the day that these Linux scalability improvements filter down into something like a 4-way Clawhammer system. That system could very likely do a lot of our work (at what, maybe $20k for a system?) that we now drop $50k on for SGIs.
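      A rough sketch of that per-watt arithmetic, using only the figures quoted in the comment above; the wattages (15 W for the R14k, roughly 150 W for the Itanium 2) are the commenter's estimates, not vendor specifications:

          # Performance-per-watt from the SPEC base numbers and wattages quoted
          # above (the wattages are the commenter's estimates, not vendor specs).
          chips = {
              # name: (SPECint_base, SPECfp_base, watts)
              "R14000 @ 600 MHz": (483, 499, 15),
              "Itanium 2 @ 1 GHz": (810, 1350, 150),
          }

          for name, (si, sf, w) in chips.items():
              print(f"{name}: {si / w:.1f} SPECint/W, {sf / w:.1f} SPECfp/W")

          # R14000 @ 600 MHz: 32.2 SPECint/W, 33.3 SPECfp/W
          # Itanium 2 @ 1 GHz: 5.4 SPECint/W, 9.0 SPECfp/W

      Under these assumptions the slower MIPS part still delivers several times the SPECfp per watt, which is the cooling argument in a nutshell.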
    • SGI's tweaking allows them to achieve very close to linear scalability.

      Just remember that only a few months/years ago, Linux wasn't scaling well beyond 8 CPUs. SGI has a 128-CPU solution already. I have to admit that these guys did a VERY good job! (A rough Amdahl's-law sketch of what 'close to linear' means at 128 CPUs follows this comment.)
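      A minimal Amdahl's-law sketch of why "very close to linear" is a strong claim at 128 CPUs. The 99% parallel fraction is an assumed, illustrative number, not a measurement of SGI's kernel work:

          # Amdahl's law: speedup is capped by the serial fraction of the work.
          # The parallel fraction used here is assumed for illustration only.
          def amdahl_speedup(parallel_fraction, cpus):
              serial = 1.0 - parallel_fraction
              return 1.0 / (serial + parallel_fraction / cpus)

          for n in (8, 32, 128):
              print(f"{n:4d} CPUs: {amdahl_speedup(0.99, n):5.1f}x speedup")

          #    8 CPUs:   7.5x speedup
          #   32 CPUs:  24.4x speedup
          #  128 CPUs:  56.4x speedup

      Even with 99% of the work parallelised, 128 CPUs ideally yield only about a 56x speedup, which is why kernel-level lock contention dominates at these counts and why near-linear scaling is a notable result.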
  • According to SGI will unveil its

    According to who?

    I demand an answer!
    • According to who?

      SGI. The sentence uses the practical reverse-bananana [tuxedo.org] compression idiom, the original of which was first revealed in the holy book of HAKMEM [inwap.com].

      Please add the string "I am not a" in front of your Geek Member Card.

    • My original submission had the URL inserted between "to" and "SGI". So it read: According to http://www.sgi.com/features/2002/nov/hpc/ SGI will unveil...
  • yep, I was slated to go, and then got told "no, we don't have the money in the budget"...and to top it off, even /. is rubbing it in...
    puts head down and weeps as images of shiny, multi-colored SGI systems float through my head
  • by Anonymous Coward on Tuesday November 19, 2002 @06:39PM (#4710790)
    (which runs Linux as its OS)

    WRONG! It runs Linux as its kernel.
  • WHY LINUX!? (Score:5, Funny)

    by stevejsmith ( 614145 ) on Tuesday November 19, 2002 @07:02PM (#4710948) Homepage
    LINUX!? WHY LINUX!? Why not a stable OS...




    Like Windows ME!
  • by erroneus ( 253617 ) on Tuesday November 19, 2002 @07:14PM (#4711045) Homepage
    ...okay so Linux is being applied to all these terrific projects of scale both large and small. Is it because it's an open system with seemingly hyperactive development or is it because it's simply better than anything else out there?

    I'm trying my best to maintain a level of respect for the MS operating system product so I'd like to know if anyone knows of any amazing projects MS OSs are being used for. For that matter, what about other OSs in general?

    I think it's terrific that Linux is used this way but I wonder if it's because of its availability or because of its technology. I tend to think it's for its availability but I'm no expert. I think answers and other points of perspective from others in the Slashdot community would help to show some objectivity here.
    • by sql*kitten ( 1359 ) on Tuesday November 19, 2002 @07:43PM (#4711270)
      ...okay so Linux is being applied to all these terrific projects of scale both large and small. Is it because it's an open system with seemingly hyperactive development or is it because it's simply better than anything else out there?

      Linux is being used because there's no x86/Itanium port of Irix. SGI use Irix, which as of 6.5 is a superb Unix implementation, on their MIPS hardware. IBM use Linux because of all the software that's available for it, but Linux runs within a virtual machine on top of their proprietary zOS.

      XFS has already made it into Linux, maybe some other Irix stuff like GRIO will be next.
    • by ChaosDiscord ( 4913 ) on Tuesday November 19, 2002 @07:47PM (#4711294) Homepage Journal
      I think it's terrific that Linux is used this way but I wonder if it's because of its availability or because of its technology.

      I'm involved with a number of high energy physics experiments around the world (from a "physicist needs an obscene amount of computer power but a minimal budget, I try to give it to them" standpoint). Everyone is using Linux clusters at the moment. Why? Two reasons.

      The first is price. None of these projects are rolling in money. Saving a few thousand dollars while setting up a hundred-node cluster is a big win. The people working on the projects are technically skilled enough that a Unix variant is not significantly harder to use than a Windows variant, so there is no increase in TCO due to support.

      The second is trust. They've been repeatedly burned by proprietary software. They run into a problem and the publisher isn't inclined to help (or wants more money than they have to fix it), and they're forced to find another solution. Linux may not be perfect, but they're free to fix their own problems. They don't view it from a "Free Software is Ethical" view, but from a pragmatic "we've been repeatedly screwed and it isn't happening again" view.

      • by Anonymous Coward
        Exactly. I've been burnt once by proprietary OSes, not for supercomputing but for astronomical data acquisition systems (not very fast embedded systems, the real-time properties being almost irrelevant).

        The story:

        - We got a copy of some software, found a showstopper bug, and reported it once we were certain it was in the OS and not in our code.

        - We received an upgrade less than 2 weeks after reporting the bug (there was a release every 3 or 4 months and we had been developing our own software in the meantime).

        - We installed the upgrade in a very optimistic mood, since the first line in the release notes said that the specific bug we mentioned had been fixed.

        - We tested again. Bummer: the bug had not been fixed, or at least not properly. Waiting another 3 months for the next upgrade would have been a major setback.

        - I disassembled the relevant part of the software, ended up with about 50 MB of text files on the disk, and found the bug in about 2 days. I patched the code with a hex editor (6 bytes to patch).

        This was in 1992, and the system worked fine with my patch until we upgraded it with faster processors in 1999. Guess what: the new systems run Linux. There may be bugs, but at least I have the source code, so fixing or tuning the code is easy.

      • I think your two reasons are somewhat conflicting. An institute that will choose Linux to save a few thousand dollars is unlikely to be able to spare the manpower to debug/develop Linux. To put this another way, how many months of Linux-developing does your few thousand dollars buy you?

        Also, in my experience, institutions with this much computing power (and they _do_ buy a lot of power), have a lot of clout when it comes to getting OS problems fixed.

        I think the main reason is the cost of the hardware - it's simply much cheaper to rack up cheap IA32 boxes than to buy RISC workstations/servers.

        The result is, as you say, a total dominance of Linux in the HEP computing world.
    • by raehl ( 609729 ) <(moc.oohay) (ta) (113lhear)> on Tuesday November 19, 2002 @07:56PM (#4711348) Homepage
      You pay a lot of money to get a very large computer that can do very large tasks very fast.

      Wasting 20-40% of the resources of your $2k desktop on your OS's feature bloat may not be too bad, but wasting 20-40% of the resources of your $5 mil supercomputer is a lot of money.

      Or put another way, Linux is used in supercomputers because it can be set up to do exactly what you want it to, and ONLY that - which for most HPC applications is compiling and running custom code to solve Big Problems.

      You're not going to use a 512 processor supercomputer to Save Christmas by being able to get those pictures off your digital camera without spending 3 hours trying to download the drivers.

      • ...that some people DON'T CARE ABOUT THE PRICE! All they want is serious kickass performance. They want the answer NOW!

        This is what sets SGI apart, their performance...

        Even if the Dodge Viper is a fast car, how come you don't see it racing against F1s...? It's simply because F1 teams don't care about the price. They don't mind spending $20k on a friggin' steering wheel...!
      • Wasting 20-40% of the resources of your $2k desktop on your OS's feature bloat may not be too bad, but wasting 20-40% of the resources of your $5 mil supercomputer is a lot of money.

        <sarcasm>Well, it's a pity that the supercomputer vendors haven't realised that then, isn't it?</sarcasm>

        Come on now, you think that the vendors don't optimise their OSs for every last bit of performance? Of course they do. They're all in competition here, and every one of them is struggling to get every last FLOP out of their processors.

        The reason we're only just now starting to see large CPU counts in Linux boxen is that it's only just now becoming viable to scale Linux. Other vendors' UNIXes mostly scale much better than Linux.

        I believe that SGI's Linux will (at first) scale to only 32 CPUs, and I'll be interested to see what sort of performance they get on such machines. I'd wager that Linux will not scale nearly as well as, say, IRIX.

        OTOH, the recent slashdot thread on the performance of the 2.5 kernel is reassuring - way better performance on parallelised tasks. I suspect that these improvements are being driven by all the vendors who want to use Linux on their high-end, high CPU count machines.
    • http://www.microsoft.com/windows2000/hpc/

      Cornell has some Windows clusters that they seem to like OK.

      http://www.tc.cornell.edu/

    • by tyler_larson ( 558763 ) on Tuesday November 19, 2002 @08:17PM (#4711532) Homepage
      Linux is great for many projects like this because it possesses some qualities you won't find in many other places. In particular:
      • No royalties. They can use it, hack it, sell it. Whatever they want, and never have to cut a check to anyone.
      • The resources. The Linux development community is unlike any other. Using Linux means you have access to all sorts of development and product resources for absolutely free. The newsgroups are friendly, the documentation is deep. And if you're doing something weird, do it with Linux and chances are someone will help you.
      • The name. If you need to impress the suits and get funding, Linux is a name you want to include. For a lot of people, Linux=cutting-edge technology. They don't understand it, but they know it's powerful, and they know it's gaining ground fast.
      • The power. There's no two ways about it. Linux is a powerful and flexible system. You can push it, pull it, tune it and tweak it to do just about anything. Unlike some other OSes, the kernel was written to stand on its own, not necessarily part of any prefab package. There's no GUI code in the TCP/IP stack, and it's just as happy in a PDA as it is in a supercomputer. Could you honestly imagine LLNL buying a Windows-based clustered supercomputer? Yeah. Me neither.
      Using Linux helps companies keep from having to re-invent the wheel while at the same time keeping their options open and their money in their own pocket. It works so well it's a wonder more companies don't use it.

      For those afraid of the GPL, BSD presents a tempting alternative. But again, you lose a bit of the development resources and don't have the name to use to get your funding. For most people, though, GPL isn't a problem.

    • SGI = best throughput/bandwidth, Database = Need throughput, Oracle != IRIX but Oracle = Linux.

      Obviously, SGI is in need of money. They know how to do the right thing with performance so this will open some doors that were closed before...

      Also, you can bring machines closer to the admins w/o IRIX experience. Some people are scared to learn, you know...

      Now remember, SGI offers up to 128 CPUs in a single system image. This is serious compute power and no one else can offer it. You're not talking Beowulf, clusters, etc. It's *1* machine. (A small shared-memory sketch of what a single system image buys you follows this thread.)
      • Quoth the poster:
        Also, you can bring machines closer to the admins w/o IRIX experience. Some people are scared to learn, you know...

        Sssssh ... don't tell anybody this ... but a competent Linux admin can probably get up to speed on Irix in about one day ... well ... maybe not on hardware like the Origin 3000, but on SGI workstations the system layout is so similar it's scary ...
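      A minimal sketch, in plain Python, of the programming-model difference a single system image makes; it is illustrative only and assumes nothing about SGI's actual toolchain. With one shared address space, workers simply write into common memory; on a Beowulf-style cluster the same exchange needs explicit message passing (MPI, sockets, ...):

          # Shared-memory style: workers deposit results into one shared buffer
          # that every CPU can see -- no explicit messages, no serialisation.
          from multiprocessing import Array, Process

          def partial_sum(shared, index, chunk):
              shared[index] = sum(chunk)   # write directly into shared memory

          if __name__ == "__main__":
              data = list(range(1_000_000))
              nworkers = 4
              chunks = [data[i::nworkers] for i in range(nworkers)]
              results = Array("d", nworkers)   # one buffer visible to all workers
              workers = [Process(target=partial_sum, args=(results, i, c))
                         for i, c in enumerate(chunks)]
              for w in workers:
                  w.start()
              for w in workers:
                  w.join()
              print("total =", int(sum(results[:])))   # 499999500000

      The worker count and chunking scheme here are arbitrary; the point is only that the results array is one piece of memory every worker can touch, which is what a 128-CPU single-image machine gives you at full scale.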
  • what kernel version?
  • by isaac ( 2852 ) on Tuesday November 19, 2002 @07:20PM (#4711080)
    I've been hanging out at SC2002 all day, and I can tell you that nearly every booth on the show floor is showcasing Linux. Of course all the Linux cluster vendors have it, but so do SGI, Sun, IBM, Intel, AMD, HP, Compaq (separate booths - guess the merger isn't *really* done yet), and all the smaller vendors, to say nothing of all the research labs, etc.

    Large Linux systems and clusters are really all the rage right now in SC circles. I think the only booths I saw here not using Linux were the Apple booth (though they did have one gorgeous brand-new G4 running XFree86 and twm, the sick bastards!) and the Japanese manufacturers NEC and Fujitsu (off in their own worlds, as always).

    Linux isn't a big surprise to the SC set, though - this is a group that's used to UNIX. Hell, Microsoft doesn't even have a booth here, and they were at the last LinuxWorld conference.

    -Isaac

    • Windows can scale to what? 32 CPUs?
      Are there any x86 solutions that offer 32-CPU systems anyway?
    • Apple Computer has a booth there? Wow. Guess they really are trying to live down that whole "it's just a toy" complex they got a long time ago. :-)

      I saw on the floor plan that this is indeed correct. Amazing how times change. Maybe they really are getting their act together. The new XServe looks like a nice 1U server.

      Wish it were time to upgrade our main servers again.
  • I want to SEE it. What does it look like? Give me some JPGs to ogle over. I'm sick of just imagining these supercomputers and Beowulf clusters, let's see them now! Or don't they have digital cameras at these fancy affairs?
  • ...I had a visit from my friendly SGI representative and he was trying to sell me... this thing after I asked about Linux clusters. I didn't pay too much attention but he was all hush-hush about it, saying that it hadn't been announced yet. It seems impressive. The smallest machine will have 16 nodes and NUMAflex certainly beats the shit out of Gb Ethernet. He couldn't quote prices, not that I wanted to know...
    • by leeet ( 543121 )
      Having had the chance to work with an IRIX version, I can tell you that you can actually do TCP over NUMA at 3.2 Gbps, so it's more than 3 times faster.

      They also have a faster interconnect that allows 6.4 Gbps, so you can copy the equivalent of a whole CD-ROM per SECOND(!).

      I wish my ISP could do that, unfortunately, my PC is decades behind that kind of performance...
      • by green pizza ( 159161 ) on Tuesday November 19, 2002 @11:39PM (#4712803) Homepage
        The current generation of SGI NUMAflex based machines use a mesh of full duplex 3.2 GByte/sec interconnects. That's 25.6 Gbit/sec.

        That's way more than 3 times. Plus the latency is several orders of magnitude less.

        The tradeoff is cost. A fully populated rack (32 Itanium2 CPUs or 128 MIPS R1x000 CPUs) starts at $1M and can easily run upwards of $4M. If your task is CPU bound, then a homebrew cluster will be almost as good. If your task is I/O bound, you can't beat the Origin. Until the Cray X1 ships, anyway.

        Also keep in mind that while an Origin system can be partitioned, they are typically run as one single-image system. The beasts easily expand from 2 CPUs up to 512 (even 1024 with special support from SGI). The cross-system memory latency increases with the larger configurations, but the net cross-section bandwidth/throughput increases linearly with the CPU count. Very efficient design.

        Pretty sweet machine. Again, until the Cray X1 ships! :)
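        A quick unit-conversion check of the bandwidth figures quoted in this thread; the link rates are the posters' own numbers, and the ~700 MB CD-ROM size is an assumption:

            # Decimal (vendor-style) units: 1 GB = 1e9 bytes, 1 Gbit = 1e9 bits.
            GBIT = 1e9
            GBYTE = 1e9

            # 3.2 GByte/s per direction, as quoted for the NUMAflex interconnect
            print(f"one direction: {3.2 * GBYTE * 8 / GBIT:.1f} Gbit/s")   # 25.6

            # TCP at 3.2 Gbit/s over the interconnect vs 1 Gbit/s Ethernet
            print(f"vs gigabit Ethernet: {3.2 / 1.0:.1f}x")                # 3.2

            # a ~700 MB CD-ROM over a 6.4 Gbit/s link
            cd_bytes = 700e6
            print(f"CD-ROMs per second: {6.4 * GBIT / 8 / cd_bytes:.2f}")  # ~1.14

        So the "25.6 Gbit/sec", "more than 3 times" and "a CD-ROM per second" claims are all consistent with the quoted link rates.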
  • by Leebert ( 1694 ) on Tuesday November 19, 2002 @11:00PM (#4712569)
    It's funny, I talked with an SGI rep there and he said "We haven't announced this yet, so you don't see it sitting there." To which I replied "Why don't you not tell me about what's not sitting there." "Sure, I won't", he said, and proceeded to tell me all about it. :)

    Seriously, it looks pretty sweet, but I was more excited by the Origin 3900 -- 16 processors in one C-Brick (4U).
    • Seriously, it looks pretty sweet, but I was more excited by the Origin 3900 -- 16 processors in one C-Brick (4U).

      Shh. Don't tell anybody. The new server is nothing more than an Origin 3900 (a.k.a. SN2) with Itanium2 processors instead of R14000 processors.

      Wrong, the Origin 3900 is just a repackaged Origin 3000. It doesn't use the SN2-related ASICs at all.

      The inside of a MIPS CX-brick is mostly empty space. It was designed this way, back in the mid-1990s, specifically to accommodate IA-64 CPUs.

      The CX-brick is the new C-brick for the Origin 3900. The old C-brick was mostly empty space.
  • For if there is a sin against life, it consists perhaps not so much in
    despairing of life as in hoping for another life and in eluding the
    implacable grandeur of this life.
    -- Albert Camus

    - this post brought to you by the Automated Last Post Generator...
