Cray Wins $52 Million Supercomputer Contract

The Interfacer writes "Cray and the U.S. Department of Energy (DOE) Office of Science announced that Cray has won the contract to install a next-generation supercomputer at the DOE's National Energy Research Scientific Computing Center (NERSC). The systems and multi-year services contract, valued at over $52 million, includes delivery of a Cray massively parallel processor supercomputer, code-named 'Hood.'"
This discussion has been archived. No new comments can be posted.

  • by edwardpickman ( 965122 ) on Thursday August 10, 2006 @09:44PM (#15886006)
    Hood is within specs for Vista. A big relief for Cray, since they weren't sure it'd meet the memory specs for Vista.
  • by boner ( 27505 ) on Thursday August 10, 2006 @09:44PM (#15886009)
    52 million dollars over a couple of years.... Not much to keep a high-end computer company running on.
  • Cash Machine (Score:0, Flamebait)

    by Doc Ruby ( 173196 ) on Thursday August 10, 2006 @09:46PM (#15886014) Homepage Journal
    Boy am I glad that Bush has destroyed socialism.
  • by __aaclcg7560 ( 824291 ) on Thursday August 10, 2006 @09:48PM (#15886027)
    Oh, wow! Does it run Virtual Machines?
  • by Anonymous Coward on Thursday August 10, 2006 @09:49PM (#15886031)
    That the DOE isn't hoodwinked by using such an energy consuming device to research energy consumption.
  • Pinky... (Score:3, Funny)

    by Null Nihils ( 965047 ) on Thursday August 10, 2006 @09:50PM (#15886033) Journal
    are you pondering what I'm pondering?

    I think so, Brain! NERSC! POIT!
  • by free space ( 13714 ) on Thursday August 10, 2006 @09:50PM (#15886037)
    The system uses thousands of AMD Opteron processors...


    Because of its power requirements, Cray's only possible customer was the Department of Energy :)

  • by Jerk City Troll ( 661616 ) on Thursday August 10, 2006 @09:51PM (#15886048) Homepage
    Cray finally figured it out. I have been saying for years: HPC/Beowulf clusters are about building machines around problems.

    That is why clusters are such a powerful paradigm. If your problem needs more processors/memory/bandwidth/data access, you can design a cluster to fit your problem and only buy what you need. In the past you had to buy a large supercomputer with lots of engineering you did not need. Designing clusters is an art, but the payoff is very good price-to-performance. A good article on this topic is Cluster Urban Legends [clustermonkey.net], which explains many of these issues.
  • Hood? (Score:5, Funny)

    by nsushkin ( 222407 ) on Thursday August 10, 2006 @10:09PM (#15886142)
    It's named "Hood"? What are they going to calculate, protein folding in ice cream? ;)
  • by Loconut1389 ( 455297 ) on Thursday August 10, 2006 @10:31PM (#15886275)
    I thought SGI owned Cray now? Wouldn't that mean they made a deal with SGI?

    Even so, I doubt $52 million is enough to save SGI in the long haul, especially if anything more than a few percent goes to actual hardware/research costs (and it will).
  • good to see... (Score:3, Interesting)

    by Connie_Lingus ( 317691 ) on Thursday August 10, 2006 @10:39PM (#15886331) Homepage
    the Cray brand making a comeback in the supercomputer arena. I fondly remember my CS student days, looking longingly at those Cray supercomputers (was that a couch around it?!? COOL!) in awe and just wondering what they could possibly be computing with 512M of RAM and a 2G super-cooled processor. SUPER COOLED!

    Then it was back to my PDP-11 ...reality bit.
  • by vistic ( 556838 ) on Thursday August 10, 2006 @10:47PM (#15886377)
    I don't have any concept of scale when it comes to price for these things. Is this a big contract as far as supercomputing contracts go? The biggest? Average?

    Will this thing be cooled with that cool nonconductive liquid goo stuff that it all just bathes in?
    • by Brett Buck ( 811747 ) on Thursday August 10, 2006 @11:05PM (#15886473)
      $52 million is ultra-cheap in the supercomputer world.

                Brett
      • by afidel ( 530433 ) on Friday August 11, 2006 @12:59AM (#15886991)
        Huh? The COLSA G5-based supercomputer, currently ranked #21 in the world, only cost $5.8 million, so I wouldn't call 10x that ultra-cheap. The IBM JS21 at #23 is in the same ballpark, with a retail cost per CPU of $2,500 and 2,048 processors (I couldn't find exact pricing for the unit, only the blades). Sure, breaking into the top 10 is expensive, but that's to be expected when even #10 has over 5K processors.
    • by Anonymous Coward on Friday August 11, 2006 @12:04AM (#15886750)
      I work in the industry. 'Course it's easy for an AC to say that, isn't it?

      $52M is rather large nowadays. At least, for a 'commodity' part cluster it is. For a 'vector' supercomputer, it may be only medium-sized.

      You can easily break the top 50 for less than $10M. A couple thousand nodes, each with two dual-core Opterons or Xeons, InfiniBand or Myrinet (maybe 10GigE), and a compiler that optimizes better than gcc... no problem.

      That being said, NERSC is a pathologically tough customer. Cray will have to work very hard to earn each and every penny they get. It may very well be a 'live or die' deal for Cray.
  • specmarks? (Score:1, Funny)

    by Anonymous Coward on Thursday August 10, 2006 @10:55PM (#15886430)
    Back in the day, one of the selling points of the soon-to-be-released Cray 3 was that it was so fast, it could do an infinite loop in only 4 days! How have things progressed since '91: does an infinite loop take only a day or a few hours now?

  • by sotweed ( 118223 ) on Thursday August 10, 2006 @10:57PM (#15886437)
    Does anyone know who else bid on this contract? Was BlueGene a contender?

    It would be interesting to know the other bids and their performance ...
    • Re:Who else bid? (Score:3, Interesting)

      by Kadin2048 ( 468275 ) <slashdot.kadin@xox y . net> on Thursday August 10, 2006 @11:40PM (#15886641) Homepage Journal
      That was my first reaction: somebody at IBM is in deep shit.

      It seems like they had a lock on the last few big DoE supers (and supercomputer sales in general); now all of a sudden we see Cray getting back in there. I wonder if IBM stepped on somebody's toes and got given the boot on this one (it's small, maybe this is just a spanking), or if they've gotten behind in the research and power/dollar worlds because they were doing so well for so long? Or is this just the government trying to spread the love around, giving a small project to somebody else for a change?

      Reminds me a little of the whole Thinking Machines business a few years ago; they were the real darlings of the govt.-contract world, and then Cray and IBM started to get upset that TM was eating out of their rice bowl and lobbied Congress to even things out. Given that they're not around anymore, I think we can all figure out how that went.
      • by afidel ( 530433 ) on Friday August 11, 2006 @01:02AM (#15887011)
        Nah, DoE and DoD like to spread the wealth around enough to keep a couple of suppliers alive and at least somewhat healthy. They don't like the idea of having only one supplier to turn to, because they know that would cost them more than throwing some contracts at the less robust suppliers. Btw, it's not just in computing that this happens, but in all defense contracting.
      • by flaming-opus ( 8186 ) on Friday August 11, 2006 @12:06PM (#15889622)
        Well, the Thinking Machines story is a little more complicated than that.
        http://www.inc.com/magazine/19950915/2622.html [inc.com]

        Basically, IBM and Cray got caught by surprise when the MPPs, of which TMC was just one, came onto the market. Eventually they got their act together and put out the SP and the T3D, which were both good products. Thinking Machines got hit by the same post-cold-war lull in supercomputer buying that hit everyone else, and they just weren't big enough to ride it out. Even at their peak, they were a $100 million/year (inflation-adjusted) company. The corporate landscape is littered with the corpses of supercomputing companies that rose and fell, particularly those that rose in the late 80's and disappeared in the 90's.
    • Re:Who else bid? (Score:3, Informative)

      by cannonfodda ( 557893 ) on Friday August 11, 2006 @04:21AM (#15887589)
      I would imagine that IBM probably did bid. They would be crazy not to for $52M.

      But....... "the Hood system installed at NERSC will be among the world's fastest general-purpose systems".

      NERSC is looking for a general-purpose computing system to fill the needs of 2,500 users. Blue Gene is blindingly fast at some things, but general purpose it ain't. I've benchmarked both the XT3 and Blue Gene with a set of general scientific codes, and the Opteron delivers much better general price/performance for a representative set of tasks. Blue Gene will fly if you have the time to get REALLY low-level in your optimisation, but most scientists don't have the time or knowledge to start dealing with that kind of thing.
    • by Rabid Cougar ( 643908 ) on Friday August 11, 2006 @01:08PM (#15890005)

      This is actually related to this story [slashdot.org] that ran on Slashdot a month ago. Turns out the Inquirer article that everyone ripped to shreds for being light on details was right all along. (I saw sanitized excerpts from e-mails regarding the incident, so I can tell you that Intel's Woodcrest chips performed abysmally in the DOE's testing compared to the Opterons.) The competitor that lost was IBM and the reason was because of problems with Woodcrest. The supercomputer in question will be running on 24,000 quad core Opterons. I will leave it up to the rest of you to draw your own conclusions from this.

      • by sotweed ( 118223 ) on Friday August 11, 2006 @01:43PM (#15890236)
        Wait a minute! Are you saying that IBM bid a machine based on Woodcrest? If IBM bid anything here, it would have been either a BlueGene or perhaps something like the ASCI machines, which are conventional PowerPCs with a fast interconnect. Hard - no, impossible - to believe that IBM would have bid an Intel processor.
        • by Rabid Cougar ( 643908 ) on Friday August 11, 2006 @02:06PM (#15890386)

          I agree that it sounds crazy. I'm just passing along the information I was given. Your impression that there's no way IBM would bid an Intel chip makes a lot of sense; it's not been their standard M.O. in the past. All I know is that Cray won the bid with Opterons, the e-mails I read gave unfavorable reviews of Woodcrest chips, and Woodcrest is supposed to kick the snot out of Opteron. In any case, the fact that Cray won the bid with an Opteron-based supercomputer should be more than a little eye-opening.

          In any case, based on the things I have witnessed with my own eyes, I stand by my assertion that IBM used Intel chips in their bid.

  • by mikek2 ( 562884 ) * on Thursday August 10, 2006 @11:12PM (#15886503)
    ...systems and multi-year services contract, valued at over $52 million

    Ummm... no offense to Cray, but that's pretty f*ing lame.

    This, ladies and gentlemen, is why we need to 'encourage' our kids to desire scientific jobs.
    • by boethius ( 14423 ) on Friday August 11, 2006 @03:24AM (#15887452)
      That's a nice sentiment but you can't teach your kids to desire scientific jobs. You can teach your kids about science and see if they take to it. No matter your enthusiasm, your kids might lean to the artsy-fartsy, literature, or driving a bus.
  • by Anonymous Coward on Thursday August 10, 2006 @11:46PM (#15886669)
    Unfortunately this seems to be one of those topics where the Slashdot bias and ignorance come out in full force.

    * Clusters can not compete with supercomputers. They aren't even in the same market space. Cray doesn't make clusters, and clusters have not taken away their business.

    * Cray doesn't take off the shelf hardware and sell it as fancy clusters. Actually look into the details of these machines. While processors sometimes are off the shelf much of the surrounding hardware and software is custom.

    * This $52 million contract is one of many that Cray has. They also recently won a $200 million contract, and they are a contender in the DARPA HPCS program, which could be worth a lot more if they win it. They aren't dying.

    * They aren't owned by SGI any longer. They were bought from SGI by Tera, who renamed themselves Cray.

    * The top500 list is nonsense. It is based on one benchmark (Linpack). That benchmark doesn't stress the interconnect much and can allow clusters to appear to compete with supercomputers if you manage to ignore all the other factors. The number of teraflops has very little to do with performance. For a more well-rounded and thought-out measurement of top systems, check out HPCC's website: http://icl.cs.utk.edu/hpcc/hpcc_results.cgi [utk.edu]

    * Blue Gene doesn't kick Cray's ass. See the above, then see how it really performs overall. In some areas it does better, and in others it just gets destroyed. Depending on the real-world problem, a full-size Blue Gene may not even be able to perform as well as a much smaller Cray.

    If you don't know what you are talking about, look it up before posting. Just because it's the common belief doesn't mean there is any truth to it!
    • by compupc1 ( 138208 ) on Friday August 11, 2006 @12:55AM (#15886976)
      THANK YOU for setting the record straight. You are correct; clusters are very different. Some types of problems can be broken into mostly separate chunks of work; these work well on clusters. But for those types of problems which depend on a lot of inter-processor communication (e.g. the results of one computation are required for a significant number of subsequent computations), conventional clusters don't cut it. In these cases everything comes down to the bus between processors -- how they are connected together, how they share memory, etc. Without a specially designed network between processors (even off-the-shelf processors), your large "cluster" of processors won't perform all that much better than a small number of them.
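
      To make that dependency concrete, here is a minimal 1-D halo-exchange sketch in MPI/C (an illustration under assumed names -- halo.c, N, u[] are all invented, not anyone's real code). Every rank must trade boundary values with both neighbours before each step, so interconnect latency sits directly on the critical path:

        /* halo.c -- illustrative 1-D halo exchange; build with: mpicc halo.c */
        #include <mpi.h>

        #define N 1024                        /* cells owned by each rank */

        int main(int argc, char **argv) {
            double u[N + 2], next[N + 2];     /* local cells + two ghost cells */
            int rank, size;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
            int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

            for (int i = 0; i < N + 2; i++) u[i] = (double)rank;

            for (int step = 0; step < 100; step++) {
                /* Each step blocks on the interconnect: ship my edge cells
                   out, pull the neighbours' edge cells into my ghosts. */
                MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                             &u[N + 1], 1, MPI_DOUBLE, right, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
                             &u[0], 1, MPI_DOUBLE, left, 1,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

                /* Every new value needs freshly received neighbour data,
                   so computation cannot get ahead of communication. */
                for (int i = 1; i <= N; i++)
                    next[i] = 0.5 * (u[i - 1] + u[i + 1]);
                for (int i = 1; i <= N; i++)
                    u[i] = next[i];
            }

            MPI_Finalize();
            return 0;
        }

      On a gigabit-Ethernet cluster those two Sendrecv calls dominate the step time; on a purpose-built interconnect they cost microseconds, which is exactly the gap being described.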
    • It is always good to see a /.er bust out with a few facts rather than the usual bad puns, stale jokes, half-baked opinions and misconceptions.

      Cray making a comeback!? Now if that don't beat all.

      What's next? Borland selling a good, cheap Pascal compiler again?

    • by bobcat7677 ( 561727 ) on Friday August 11, 2006 @01:15AM (#15887068) Homepage
      I dunno man. First of all, asking people to mod you up is kinda lame.

      Secondly, whether the computers that Cray sells are "off the shelf" can be argued depending on how you look at it. Today's Crays are not the fully proprietary machines of yesteryear. They all use AMD Opteron processors and leverage the onboard memory controller and HyperTransport bus to keep the processor fabric simple. The main custom items in the system are the "interconnect routers" that tie all the HyperTransport buses together. Even the FPGA components that facilitate handling specific custom tasks in hardware are somewhat "off the shelf" and just woven into the greater HyperTransport happiness fabric.

      Sure, the average person is not going to be able to build a "supercomputer" like this with stuff they bought off the Fry's shelf. But are we talking about "off the shelf" as in the average electronics store? Or "off the shelf" as in parts that are pre-existing, available on some shelf somewhere, and have published documentation?

      Benchmarks of any multiuse system are never universal. The best they can do for a large list like that is to use a benchmark that can reasonably represent a common use of such systems. Cray has been good about having systems that can be configured to perform exceptionally well for very specific applications. Modern offerings like the XD1 are no different in that respect, as they offer that in the FPGAs. To say they are not in the same market space as clusters is like saying MySQL isn't in the same market space as PostgreSQL. They both have their strong points, but there are many instances where a user has to decide which to go with.

      I'm going to stop there...time for sleep.
      • by mjsottile77 ( 867906 ) on Friday August 11, 2006 @02:33AM (#15887308)
        "Benchmarks of any multiuse system are never universal. They best they can do for a large list like that is to use a benchmark that can reasonably represent a common use of such systems."

        The Linpack benchmark used for the top500 list is a basic dense solver algorithm. (See Wikipedia: http://en.wikipedia.org/wiki/LINPACK [wikipedia.org]) This algorithm used to be the core of most scientific codes, back in the days when you would simply use the computer to solve a simple (but large) set of equations. In the last decade(s), the scientific world has moved to unstructured problems where the solvers are no longer solely matvec operations. Adaptive mesh methods, multigrid, and other similar "modern" methods in scientific computing do NOT have the same behavior as a basic dense matvec - a simple case would be a matvec problem where one deals with sparse matrices. Life gets even worse if you try to use Linpack to reason about how a machine would perform on something highly data-dependent, such as an n-body code or molecular dynamics simulation.

        Linpack is really a relic of the past, and it is NOT a benchmark of a multiuse system. It is a benchmark of a supercomputer from 15+ years ago. This is not news in the parallel computing world -- many efforts such as ParkBench, the NAS Parallel Benchmarks, the Livermore loops, etc. have been proposed as replacements for Linpack to better cover the sorts of applications that real "multiuse" systems will run. Unfortunately, the fact that most procurement folks and politicians who help fund these big govt. machines do not understand that Linpack is a total waste of time has caused it to persist, contrary to the desires of the people who either use the systems or spend their careers studying performance issues in big parallel systems.
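
        For illustration, a small C sketch of the contrast (simplified, assumed kernels -- not any benchmark's actual code): a dense matrix-vector product with the regular, unit-stride accesses that Linpack-style workloads reward, next to the same product on a CSR sparse matrix, whose indirect loads are what unstructured codes actually generate:

          /* matvec.c -- dense vs. CSR sparse matrix-vector product (sketch) */
          #include <stddef.h>

          /* Dense y = A*x: regular, unit-stride accesses that caches,
             prefetchers, and Linpack-style benchmarks all reward. */
          void matvec_dense(size_t n, const double *A,
                            const double *x, double *y) {
              for (size_t i = 0; i < n; i++) {
                  double sum = 0.0;
                  for (size_t j = 0; j < n; j++)
                      sum += A[i * n + j] * x[j];
                  y[i] = sum;
              }
          }

          /* CSR sparse y = A*x: the indirect load x[col[k]] gives the
             irregular, data-dependent access pattern of unstructured-mesh
             and adaptive codes -- behaviour Linpack never measures. */
          void matvec_csr(size_t n, const double *val, const int *col,
                          const int *rowptr, const double *x, double *y) {
              for (size_t i = 0; i < n; i++) {
                  double sum = 0.0;
                  for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                      sum += val[k] * x[col[k]];
                  y[i] = sum;
              }
          }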
        • by 6031769 ( 829845 ) on Friday August 11, 2006 @06:22PM (#15892016) Homepage Journal
          You are correct to argue that using a LINPACK-based benchmark for a molecular dynamics problem is foolish.

          However, the arena of molecular dynamics is one in which clusters and MPP in general are easily the better choice than a monolithic supercomputer. On the one hand, you can make each node an automaton describing a single particle in a very object-oriented fashion. On the other hand, you can make each node represent a spatial cell whose boundaries interact with those of its nearest-neighbour nodes. It is particularly in this second scenario that the MPP approach wins hands down (and scalably so); a sketch of this cell scheme follows below.

          So, just because the benchmark is biased (which it clearly is), do not assume that this means that it undervalues one architecture or the other for solving an entirely different problem.
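
          As a sketch of the spatial-cell scheme described above (1-D, softened pair potential, invented data layout -- an illustration, not a real MD code):

            /* cells.c -- 1-D spatial-cell decomposition sketch for an
               MD-style code: each cell interacts only with itself and
               its nearest neighbours. */
            #include <math.h>

            #define NCELLS 64
            #define MAXP   32          /* assumed max particles per cell */

            typedef struct { int count; double x[MAXP]; } Cell;

            double total_energy(const Cell cells[NCELLS]) {
                double e = 0.0;
                for (int c = 0; c < NCELLS; c++) {
                    const Cell *a = &cells[c];
                    const Cell *b = &cells[(c + 1) % NCELLS]; /* periodic box */
                    for (int i = 0; i < a->count; i++) {
                        /* pairs inside the cell: purely local work */
                        for (int j = i + 1; j < a->count; j++)
                            e += 1.0 / (fabs(a->x[i] - a->x[j]) + 1e-9);
                        /* pairs with the right-hand neighbour: on an MPP
                           this becomes one boundary message per timestep */
                        for (int j = 0; j < b->count; j++)
                            e += 1.0 / (fabs(a->x[i] - b->x[j]) + 1e-9);
                    }
                }
                return e;
            }

          On an MPP, each cell becomes a node, the inner pair loops stay local, and the neighbour-cell access is a single boundary message per timestep, which is why this mapping scales so well.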
    • * Clusters can not compete with supercomputers. They aren't even in the same market space. Cray doesn't make clusters, and clusters have not taken away their business.

      That's not entirely true. Supercomputers will have a solid market into the foreseeable future, but they certainly are facing competition from improvements in clusters.

      Sometimes interconnect speed can reasonably be traded off in exchange for a significantly reduced price, or for additional CPU power, local RAM, etc. Often, problems that are generally considered single-threaded can be parallelized with a performance hit, and the result still turns out cheaper because of the huge price difference between clusters and supercomputers.

      Claiming there is no competition between the two is nonsense.
    • by cannonfodda ( 557893 ) on Friday August 11, 2006 @04:38AM (#15887637)
      The AC has a point about benchmarks, but I'd push back on any claim that the XT3 doesn't qualify as a supercomputer.....

      To quote a colleague of mine: "The interconnect IS the machine!"

      The primary difference between a supercomputer and a cluster is the degree of integration between the computing elements. You can demand that a supercomputer MUST have a crossbar switch or similar closely coupled interconnection method, but that only scales so far. For a good example, have a look at the Earth Simulator [jamstec.go.jp]; you're not telling me THAT is not a supercomputer? The XT3 is similar in that it has a customised high-bandwidth, low-latency interconnect; it just doesn't have the SMP elements that the Earth Simulator has.

      As an aside, Cray gets a LOT of contracts that we never hear about. They are actually fairly healthy financially.

    • The top500 list is nonsense. It is based on one benchmark (Linpack).


      The other problem with it is that it only counts systems that people want you to know exist. For example, it's a safe bet that the NSA has multiple systems that would qualify but are not listed. There are probably a significant number of systems like that in the world - so calling it the 'top 500' is just silly.
    • by moosesocks ( 264553 ) on Friday August 11, 2006 @08:34AM (#15888225) Homepage
      I'd say go ahead and mod him up.

      He's right. For *ALL* computing tasks, using the right tool for the job can increase performance dramatically. Slashdotters should know this -- a 400 MHz GPU can outperform a 3 GHz CPU on vector and matrix operations by huge leaps and bounds.

      Clusters are just another tool that works very well for very specific jobs, and very poorly for others. These jobs are mainly those that can be massively parallelized (i.e. brute-forcing a math equation -- Computer A should try these values, Computer B should try those values, etc.; see the sketch below). Anything more complex than that puts a huge strain on the system being used to interconnect the machines. Once you start incorporating a fast interconnect system, the cluster begins to resemble an extremely inefficient supercomputer with multiple points of failure. At this point, it makes more sense to just use a Cray.
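
      A minimal MPI/C sketch of that brute-force split (the predicate and numbers are invented, purely illustrative): each rank searches its own slice, and the only communication is one reduction at the end, so interconnect quality barely matters for this class of job:

        /* split.c -- embarrassingly parallel search (sketch) */
        #include <mpi.h>
        #include <stdio.h>

        /* stand-in for whatever is being brute-forced */
        static int test(long long v) { return (v * v) % 1000003 == 17; }

        int main(int argc, char **argv) {
            long long total = 100000000LL, local_hits = 0, hits = 0;
            int rank, size;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* "Computer A should try these values, Computer B those..." */
            long long lo = total * rank / size;
            long long hi = total * (rank + 1) / size;
            for (long long v = lo; v < hi; v++)
                if (test(v)) local_hits++;

            /* One message per rank, at the very end: the interconnect
               sits idle all run, which is why cheap clusters win here. */
            MPI_Reduce(&local_hits, &hits, 1, MPI_LONG_LONG, MPI_SUM,
                       0, MPI_COMM_WORLD);
            if (rank == 0) printf("hits: %lld\n", hits);

            MPI_Finalize();
            return 0;
        }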

      Over the past few years, for the first time, it's been possible to use the same chips in supercomputers as in desktops -- specifically the Opteron and the PPC970. As a result, consumers got more powerful chips, and supercomputers got a lot cheaper due to economies of scale. As an added bonus, now that the R&D is combined into one architecture, we're getting faster chips on a more regular basis.

      AMD did a lot of things right with the Opteron. They made a series of consumer chips that were inexpensive and blazing fast. They then took the same architecture and made enterprise-grade chips that were rock solid, equally fast, energy-efficient, and still pretty cheap. HyperTransport is also an incredible technology, in that it's suitable for inexpensive machines and supercomputers alike. Itanium was none of these things.

      I, for one, am glad to see supercomputing coming back into fashion. The DOE's working on a lot of good science that will be essential for our survival in the long run, and the government seems to be providing ample funding. Sure, NASA may do some cool science, but it's the DOE that's working on more meaningful things that can be put to use here on Earth for the betterment of mankind. Perhaps the only positive thing to come out of the current political mess is that the world is quickly realizing how desperately we need to move away from an oil-based society.
    • Unfortunately this seems to be one of those topics where the Slashdot bias and ignorance come out in full force.

      I agree completely.

      * Clusters can not compete with supercomputers. They aren't even in the same market space. Cray doesn't make clusters, and clusters have not taken away their business.

      This is not exactly a wrong statement, but it is incredibly broad. First off, Cray does make clusters. At a fundamental level, the basic separate-box clusters connected by Ethernet are the exact same thing as a big massively parallel system. They are on different ends of the spectrum, certainly. The sort of interconnects used by Cray certainly make their systems much more suited to certain workloads than more basic clusters. In practice, even Single System Image vs. separate boxes isn't that big a distinction. And, basic clusters certainly do compete with and take business from Cray. If basic clusters weren't an effective means of computing, then there would be a much larger market for the supers. If I refer to "clusters" in this post, I am probably referring to separate-box basic clusters -- like the parent poster seems to be. As unclear as this terminology can be, it is the way the term is usually used.

      * Cray doesn't take off the shelf hardware and sell it as fancy clusters. Actually look into the details of these machines. While processors sometimes are off the shelf much of the surrounding hardware and software is custom.

      This point I fully agree with. The high-end interconnects and whatnot that you see in supers are on a very different level from what you see in the more basic clusters. For the workloads where the supers kill the basic clusters, it's usually related to comms latency between the nodes, which is all about the crazy interconnects.

      * This $52 million contract is one of many that Cray has. They also recently won a $200 million contract, and they are a contender in the DARPA HPCS program, which could be worth a lot more if they win it. They aren't dying.

      I'll take your word for it. I haven't specifically kept up with Cray's contracts, though it wouldn't surprise me if they are doing pretty well.

      * They aren't owned by SGI any longer. They were bought from SGI by Tera, who renamed themselves Cray.

      Yup, no argument there. (See, I may be a jerk, but at least I'm not arguing with everything! ;) )

      * The top500 list is nonsense. It is based on one benchmark (Linpack). That benchmark doesn't stress the interconnect much and can allow clusters to appear to compete with supercomputers if you manage to ignore all the other factors. The number of teraflops has very little to do with performance. For a more well-rounded and thought-out measurement of top systems, check out HPCC's website: http://icl.cs.utk.edu/hpcc/hpcc_results.cgi [utk.edu]

      I wouldn't go so far as to call top500 "nonsense." It is a very specific benchmark. People do tend to look at a very narrow, specific piece of information, and generalise it completely. *That* is nonsense. You have to be aware of what you are reading when you see stuff like benchmark numbers. Benchmarking can be very complex.

      That said, there are some real-world workloads that work quite a lot like Linpack. Consequently, there are a lot of very real-world tasks where a cluster is an appropriate tool. My personal interest in HPC tends to focus on 3D rendering performance. This tends to need a lot of FLOPS and relatively little bandwidth. For the guys who are doing really bandwidth/latency-intensive stuff, the basic clusters are useless. (I'm told that stuff like weather sim falls into this category, but I can't comment on the details.) Without specifying a workload, saying that one kind of system beats the other is meaningless.

  • by Van Cutter Romney ( 973766 ) <sriram.venkatara ... com minus author> on Friday August 11, 2006 @12:01AM (#15886735)
    The Hood supercomputer at NERSC will consist of over 19,000 AMD Opteron 2.6-gigahertz processor cores...

    The Ultimate Gaming Machine!!!
  • Just think (Score:1, Troll)

    by Slithe ( 894946 ) on Friday August 11, 2006 @12:26AM (#15886858) Homepage Journal
    Imagine a Beowulf cluster of ... oh wait!
  • by Heembo ( 916647 ) on Friday August 11, 2006 @02:58AM (#15887381) Journal
    Holy cow, I didn't even know Cray was still in business! And, does it run Linux?
  • Hood? (Score:2, Funny)

    by Tavor ( 845700 ) on Friday August 11, 2006 @03:20AM (#15887442)
    Will it be promptly sunk by a German supercomputer named Bismarck?
  • by sinntel ( 994628 ) on Friday August 11, 2006 @04:17AM (#15887581)
    "The system uses thousands of AMD Opteron processors" I will create a new Supercomputer using the new Intel chips and call it the Bismark
  • by 140Mandak262Jamuna ( 970587 ) on Friday August 11, 2006 @09:11AM (#15888416) Journal
    Cray has failed miserably in the marketplace. The solutions it produces are completely out of whack with the cost of the problems being solved. This $52 million is just welfare for PhDs in Computational Fluid Dynamics, Computational Electromagnetics, etc. People building ivory towers in the sky with their heads in the clouds ...

    America would be better served if we sank the money into creating interoperability standards and ways to increase competition in the computational industries. Every company from Microsoft to Apple to Parametric Technology to SDRC to Oracle to ANSYS to itsy-bitsy-prof-and-grad-student-garage startups works to build vendor lock-in into every one of its products. The market creates rich rewards for locking the user into one software product and preventing the user from migrating to a more efficient competitor.

    Promote interop and competition, and supercomputers will become a dime a dozen.

    • by flaming-opus ( 8186 ) on Friday August 11, 2006 @12:37PM (#15889833)
      Well, supercomputers have become a dime a dozen. Or rather, clusters have become a dime a dozen. However, a lot of the really demanding tasks that high-end supercomputer users need to do are not terribly well served by clusters. NERSC has cluster systems; they know how to use them and what they are good for. The fact that they are buying a Cray indicates that their needs were not well met by the clusters.

      Furthermore, programming a modern Cray is not very much like programming a Y-MP. The code is structured very much the same on an XT3 as it is on Blue Gene, or on a cluster. There really is a lot of interoperability in the HPC space.

      The real flaw in your logic is the assumption that there will be a lot of competition. The supercomputing marketplace is really tiny: about a $4 billion worldwide industry. That sounds like a lot, but it's really tiny compared to the greater computer hardware industry. Why compete for supercomputer dollars when there are so many corporate customers with more money and simpler demands? IBM and HP already own most of the HPC market by selling clusters of their business-class servers, so there's really only a tiny slice left over for the real innovators. If your particular need is not met by a cluster of IBM Unix servers, you are in the tricky situation of forking over a bundle for a Cray/NEC/SGI box.

      Niche markets have always been expensive. JP-7 fuel costs $30/liter. Modern-day fighter jets cost $100 million each. The government buys expensive stuff. Not really news.
