Microsoft Supercomputing

Cray Co-Founder Joins Microsoft 169

ergo98 writes "Burton Smith, co-founder and chief scientist at Cray (The Supercomputer Company), has jumped ship. He's joining Microsoft to help them with their clustered computer initiative. Burton joins Microsoft as a technical fellow."
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward on Saturday November 26, 2005 @10:21AM (#14118702)
    Microsoft also announced Windows Vista will require a Cray supercomputer to run.
    • Re:In other news (Score:1, Redundant)

      by headkase ( 533448 )
      My old computer sitting next to me is about 3 times faster, just in megahertz rating, than a Cray from the mid-'80s (IIRC ~300 MHz). And that's before you factor in architectural speed gains. What I would really love to have kicking around is the software that ran on those old Crays... 2D nuclear explosion sims 'n stuff.. ;)
      • Re:In other news (Score:3, Informative)

        by Alien Being ( 18488 )
        A 1985 Cray-2 could do about 4 GFLOPS. That's about the same as today's most powerful CPUs.
        • So how's the Kool-aid?
      • by RodgerTheGreat ( 905510 ) on Saturday November 26, 2005 @10:55AM (#14118831)
        and computing power. Before I get on a rant about the megahertz myth and why I love PowerPCs: massive parallelism and the use of path optimization (premeasured cables and careful circuit designs that made the distance electrons had to travel equal between parts of the machine) were the real reasons a Cray was a Cray.

        Just because your machine is *faster* doesn't mean it's anywhere near as powerful! How many CPU cores does your machine have? I bet the Cray had more. Clock speed means *nothing*. The reason those applications don't exist for your machine is that they would take an order of magnitude longer to calculate on your "old computer".

        I recommend you do some reading on supercomputing: http://en.wikipedia.org/wiki/Supercomputing [wikipedia.org]

        "Supercomputers traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times--in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy design and componentry. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing."
        • by fingusernames ( 695699 ) on Saturday November 26, 2005 @12:23PM (#14119186) Homepage
          Back when I worked at Cray, one project I worked on was the Fortran 90 compiler. It was developed on Sun SPARC machines and cross-compiled to the Cray. Crays, even the mighty C-90 back then, weren't that great interactively, and were pretty slow to compile code. Not to mention the fact that Cray CPU time was far more valuable than the Sun machine's. Pre-SGI/Tera Cray machines came in two flavors: the original vector processors (C-90 up to 16 or 32 processors?) and the later massively parallel T3 series (with HUNDREDS of DEC Alpha processors). Both were specialized machines that excelled at particular tasks. Wickedly fast at those tasks.

          Too many people these days work only on PC architectures, and have no/little exposure to other, superior architectures. The PC was and is designed as a cheap, mass produced general purpose desktop device. It in no way compares to supercomputers, mainframes, or true server architectures. A computing environment is more than the sum of the raw megahertz and bandwidth claims of its disparate parts.

          Larry
        • The real reason that early Crays were powerful was that they were very fast (high-speed devices), and their main memory was SRAM (very low latency, but smaller in size) instead of DRAM (higher latency, larger size), so memory requests were serviced quickly. A friend once said that a Cray was a great Lisp machine because it had low-latency memory.

          The vector registers were interesting, but only of utility for linear algebra problems (Matrix operations), and then only when the vector sizes were fairly l
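          As a rough aside (an illustrative sketch, not anything from the comment above): SAXPY, y = a*x + y, is the kind of linear-algebra kernel those vector registers were built for, because every iteration is independent and can be streamed through long registers. The code below is plain, self-contained C; the array size and values are made up, and a modern compiler will usually auto-vectorize the loop on its own.

          #include <stdio.h>
          #include <stdlib.h>

          /* y[i] = a * x[i] + y[i]: no loop-carried dependence, so the loop
             maps naturally onto vector hardware. */
          static void saxpy(long n, float a, const float *x, float *y)
          {
              for (long i = 0; i < n; i++)
                  y[i] = a * x[i] + y[i];
          }

          int main(void)
          {
              const long n = 1L << 20;
              float *x = malloc(n * sizeof *x);
              float *y = malloc(n * sizeof *y);
              for (long i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

              saxpy(n, 3.0f, x, y);
              printf("y[0] = %f\n", y[0]);   /* 3*1 + 2 = 5 */

              free(x);
              free(y);
              return 0;
          }

          On short vectors, by contrast, startup cost dominates, which is the point the comment makes about vector length.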

          • RAM latency is not much of an issue for proper multi-threading CPUs... having one thread stalled due to I/O and instruction dependencies simply leaves more execution resources for the other threads, and as long as at least two out of four or more threads have ready-and-able instructions, the execution unit can operate near full load most of the time and completely hide latencies as far as sustained throughput is concerned. Latency is not as much of a killer now as it was in the '80s, thanks in large part to o
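            A minimal sketch of that idea (my illustration, not the poster's code; the thread count and array size are arbitrary), assuming a C compiler with OpenMP, e.g. gcc -fopenmp -O2: several threads each chase pointers through a large array, so while one thread is stalled on a cache miss the others keep the execution units and the memory system busy.

            #include <omp.h>
            #include <stdio.h>
            #include <stdlib.h>

            #define N (1L << 24)   /* big enough to defeat the caches */

            /* Dependent loads: a single thread doing this spends most of its
               time waiting on memory. */
            static long walk(const long *next, long start, long steps)
            {
                long i = start;
                while (steps--)
                    i = next[i];
                return i;
            }

            int main(void)
            {
                long *next = malloc(N * sizeof *next);
                for (long i = 0; i < N; i++)
                    next[i] = (i * 2654435761L + 1) % N;  /* a permutation of 0..N-1 */

                long sink = 0;
                double t0 = omp_get_wtime();
                /* Eight independent walks: their memory stalls overlap, so
                   sustained throughput stays high even though each walk is
                   latency-bound on its own. */
                #pragma omp parallel for reduction(^ : sink)
                for (int t = 0; t < 8; t++)
                    sink ^= walk(next, t * (N / 8), N);
                printf("checksum %ld, elapsed %.2f s\n", sink, omp_get_wtime() - t0);

                free(next);
                return 0;
            }

            Timing the same program with OMP_NUM_THREADS=1 and then 8 shows the effect the poster describes: wall-clock time shrinks even though each individual walk is no faster.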
  • Crazy? (Score:4, Funny)

    by HugePedlar ( 900427 ) on Saturday November 26, 2005 @10:26AM (#14118723) Homepage
    I read this as "Crazy co-founder joins MS"

    I was thinking "How crazy do you have to be? Crazy enough to throw a chair?"
  • Great News (Score:4, Funny)

    by Essef ( 12025 ) on Saturday November 26, 2005 @10:31AM (#14118744)
    Windows Cluster Edition System Requirements:

        - 128 CPUs
        - 100 GB RAM
        - 30 square metres of floorspace
        - Liquid Nitrogen cooling system

    ... and they will still claim it has lower TCO then Linux!

    --
    Don't read between the lines, the real interesting stuff
    is below the line you just read.
    • Sorry couldn't leave it...

      Quote: "and they will still claim it has lower TCO then Linux!"

      "then" is time based comparison. "than" is a subject based comparison.

      So the proper usage is:

      "and they will still claim it has lower TCO than Linux!"
      • Utterly off-topic, but were you aware that the etymology of then and than is identical? They were, in early English, the same word. It's only chance that they ended up being clearly distinct in more recent times.
      • Sorry couldn't leave it...

        Quote: ""then" is time based comparison. "than" is a subject based comparison."

        "time based comparison" is a time comparison and a based comparison. "time-based comparison" is a comparison based on time. Ditto with "subject based".

        So the proper usage is:

        ""then" is time-based comparison. "than" is a subject-based comparison."
        • ""Time based comparison" is a time comparison and a based comparison" is incorrect as well. When there exists no comma between two adjectives the first adjective describes the second. Therefore, "time based comparison" describes a based comparison (whatever that means) but not both a time and based compairson. A comparison based on both time and based would be "time, based comparison". When two adjectives are used to describe comparison the adjectives should be seperated with a comma. (Three or more adjecti
    • Only 100G of RAM to 128 CPUs? That's less per CPU than we specify for development boxes today. Or is that L2 cache?
      And if you had 30 sq metres of floorspace, you wouldn't need the liquid nitrogen cooling. You could use blown air.

      By the time Windows Cluster System Edition comes out, your spec will be considered on the low side for a PDA.

  • by Red Samurai ( 893134 ) on Saturday November 26, 2005 @10:32AM (#14118749)
    Bill Gates: The Microsoft Side is a pathway to many abilities some consider to be unnatural...
    Burton Smith: Is it possible to learn this power?
    Bill Gates: Not if you stay at Cray...
    • by kesuki ( 321456 ) on Saturday November 26, 2005 @12:05PM (#14119107) Journal
      So now the race is on: who can build Skynet first? Google? [slashdot.org] Microsoft? Or Linux users? [wikipedia.org]

      I think humanity's last, best hope is that it's Microsoft.. humanity is saved by a BSOD (or perhaps by a gaping security hole that allows users to set Terminators to target Skynet)... of course Google will never take Skynet out of beta, and Linux users would make Skynet overly complicated, and abandon the project halfway to completion.. when the lead developer gets a real job.

  • by Durinia ( 72612 ) on Saturday November 26, 2005 @10:33AM (#14118752)
    Burton was the co-founder of "Tera", the supercomputer company that purchased the old Cray division from SGI in their 1999 restructuring.

    Tera was founded to develop massively multithreaded machines. After their big purchase, they took the Cray name for continuity with Cray's old customers and products, along with the fact that it's a much more viable "commercial" supercomputing name.

    • by mpg ( 220657 ) on Saturday November 26, 2005 @01:08PM (#14119386)
      Burton Smith was responsible for the architecture of the Tera MTA series and, much earlier, the Denelcor HEP -- both of which were ahead of their time technically but complete failures commercially. (Indeed, Tera Computer had significant financial problems and some corporate governance issues in the years leading up to the Cray purchase. I don't know the financials of Cray today, however.)

      Some thoughts, in no particular order:

      * The MTA and the HEP, together with Multiflow, represent the commercial roots of the multithreading (MT) work still going on in academia today. Note, however, that the "real" MT work is different by an order of magnitude from what we see in the threaded commercial chips emerging now from Intel, etc.

      * The rumor as of a year or so ago was that Burton and a few of the Tera old guard had been pretty much sidelined from the larger Cray operation into unfunded R&D projects being pitched to organizations like ARPA. It would be nice to believe that someone in the commercial arena is going to fund traditional MT ideals, but I'm skeptical.

      * What is Microsoft doing hiring him? Is this largely a PR move, to improve their HPC image? I have a hard time believing Microsoft is going to spend any money doing parallel architecture work; the list of companies that have tried and failed is long and impressive. Supercomputing today is either custom stuff or high-end-but-nonetheless-stock hardware running Linux clusters. What's their angle?

      * Back in the day, Tera had one of the hottest compilers on the planet; indeed, their compiler IP was pretty much the only valuable stuff left from the MTA project. [Ditto for Multiflow, whose compiler served as the base for Intel's compiler, way back when.] It would be interesting to see who else from the original Tera team follows him over to Redmond -- compiler folk? Architecture folk? Surely not hardware folk?

      * If Microsoft wanted Burton, did Google make a play for him too? Now that would have been interesting -- one could have a fun time speculating about massive parallelism and large-grained work tasks across Google's distributed network...

      [disclaimer: I briefly worked at Tera in the late 90's.]
    • Branded! (Score:3, Interesting)

      by fm6 ( 162816 )

      After their big purchase, they took the Cray name for continuity with Cray's old customers and products, along with the fact that it's a much more viable "commercial" supercomputing name.

      "Continuity" is kind of the wrong word, since the SGI had little use for the Cray name. While they owned Cray, the name appeared only on minor products such as the Craylink bus.

      Despite SGI's neglect, the Cray name did and does have a lot of name recognition. So when Tera bought SGI's Cray division, they did so not just

  • by Nigel_Powers ( 880000 ) on Saturday November 26, 2005 @10:34AM (#14118755)
    Isn't Cray hardware and software completely proprietary? If so, no wonder MS is interested in teaming up with Burton Smith. However, as this article [slashdot.org] suggests, Linux is way ahead of the curve in this arena.

    Linux may not ever truly catch on in the desktop environment, but in high-end computing, it's a proven winner.
    • Not exactly right (Score:5, Informative)

      by Anonymous Coward on Saturday November 26, 2005 @11:00AM (#14118850)
      The Cray XD1 uses Opteron processors and runs a variant of SUSE Linux, but uses a custom interconnect. The Cray XT3 uses Opterons and runs Linux on service nodes, and the Catamount lightweight OS on compute nodes. The Cray X1 series has proprietary CPUs, interconnect, and OS. So you're only partly right. Cray does not hesitate to use Linux where it is appropriate. However, when you are doing something like designing your own vector processor from scratch, porting Linux to it just doesn't make sense.

      Linux has certainly proven itself to be a winner in lots of HPC computing applications, and Microsoft has a tough uphill battle to fight if they want to break into this market.

      You do seem to be implying that Linux-based computers running commodity hardware always make more sense than using things like proprietary interconnects. It can certainly be more cost-effective, but if performance is your main goal (this is "high performance computing" after all), custom-designed hardware like the interconnect on the XT3 is always going to smoke the off-the-shelf stuff, which does not exclusively target the high end.
      • by sluke ( 26350 )
        I agree very much with your post, but would just like to point out that all of this really depends on your application. As an example, many high-performance applications use the Metropolis method to do Monte Carlo, and in that case (as in many other "embarrassingly parallel" applications) the interconnect hardly matters at all.
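        To make that concrete, here is a hedged sketch (mine, not the parent poster's, with a plain Monte Carlo estimate of pi standing in for a Metropolis sampler, which has the same independence structure): every thread draws samples from its own seed, and the only communication is a single reduction at the end, which is why the interconnect hardly matters for this class of job. Assumes a C compiler with OpenMP and POSIX rand_r; the sample count is arbitrary.

        #include <omp.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const long per_thread = 10000000;   /* samples per thread */
            long hits = 0;
            int nthreads = 1;

            #pragma omp parallel reduction(+ : hits)
            {
                #pragma omp single
                nthreads = omp_get_num_threads();

                /* Each worker has an independent random stream, so no
                   communication is needed until the final reduction. */
                unsigned int seed = 12345u + 977u * omp_get_thread_num();
                for (long i = 0; i < per_thread; i++) {
                    double x = (double)rand_r(&seed) / RAND_MAX;
                    double y = (double)rand_r(&seed) / RAND_MAX;
                    if (x * x + y * y <= 1.0)
                        hits++;
                }
            }

            printf("pi ~= %f\n", 4.0 * hits / (per_thread * nthreads));
            return 0;
        }

        Spread the same loop across cluster nodes and the picture is unchanged: one small reduction at the end, so a cheap interconnect costs essentially nothing for this kind of workload.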

      • Cray does not hesitate to use Linux where it is appropriate. However, when you are doing something like designing your own vector processor from scratch, porting Linux to it just doesn't make sense.


        My guess is that porting Linux is way less work than writing a new OS from scratch. However, in this case Cray already had UNICOS running on the predecessor of the X1 vector computer (the SV1, IIRC), so it was probably easier to port that, and less pain for existing customers, than to port Linux.

        If you were to start fro
      • by jd ( 1658 ) <`imipak' `at' `yahoo.com'> on Saturday November 26, 2005 @02:09PM (#14119649) Homepage Journal
        These days, a "high speed interconnect" means doing InfiniBand better. Many of the exhibits at the SC2005 show were using Linux, OpenIB and InfiniBand, which is a good start - but slow, because InfiniBand is generally implemented as a pseudo-bus run on top of PCI or PCI Express. The added layering adds a lot of latency, and it is latency that is killing a lot of high-end applications. That, and the fact that fat trees saturate so easily, killing performance.
        • Going through PCI Express is about as close as you are going to get without a standard north bridge specification that everyone supports. It is very InfiniBand-like... pretty much IB without the networking. Intel seemed to go this way when they didn't get their 4X HCA out the door. Then again, Advanced Switching Interconnect kind of gets you back to IB-like fabrics.
          IBM has 12X HCAs for their high end lines that do not use PCI or PCI-Express. I suspect Sun is working on one as well given the all 12X switch
          • The problem with PCI Express is that it adds about 4 ms latency. Many PCI Express network cards quote latency of about 2.5 ms, but that's component latency, not integrated latency. Integrated, you have to add them together (6.5 ms). But that's still best-case, as it doesn't include any latencies in transferring data off the bus and onto the card, as that is not going to be included in either specification.

            Now, onto fat trees. Fat trees are simple enough - any two layers of the network have the same bandwidt

    • Are you suggesting that the beloved Linux is proprietary?

      Shame on you.

      <Flame-On>
  • by Skiron ( 735617 ) on Saturday November 26, 2005 @10:39AM (#14118773)
    Burton Smith took a two-week training course in several stages for this:

    1. The mouse - what is it?
    2. How to use the mouse.
    3. Learn to click [OK] without thinking.
    4. Timing - measure your bogomips with the mouse hourglass icon spinning after you click [Cancel].
    5. How to reboot when the mouse hourglass icon is still there after 45 minutes.
  • by Anonymous Coward
    Imagine a Beowulf cluster of Microsoft Cray supercomputers... Wait!
  • by CSHARP123 ( 904951 ) on Saturday November 26, 2005 @10:46AM (#14118798)
    Burton Smith was a co-founder of Tera Computer Company, not Cray Inc. He could help MS in improving their threading architecture as well.
    • by Logger ( 9214 )
      I was going to make this point as well. He does not represent the Cray you are probably familiar with. The brains of the Cray you all know and love still live on in Chippewa Falls, WI. Although Seymour Cray has long since left and, unfortunately, died, they still retain their lead vector computing architects.

      They've fallen on some hard times as of late. When Tera acquired the remnants of Cray from SGI, they continued Tera's parallel processing work, which never turned out to be much of a business suc
  • by Dread_ed ( 260158 ) on Saturday November 26, 2005 @10:58AM (#14118840) Homepage
    ...wouldn't you just love to spend Bill's seemingly unlimited resources to fund your pet project?

    The guy is in the business of developing the biggest/fastest/floppiest computers he can. Having the deep-as-the-Pacific pockets of Microsoft to dig into can't hurt his chances of implementing all his pie-in-the-sky ideas.

    Smart move if you ask me.
    • ...wouldn't you just love to spend Bill's seemingly unlimited resources to fund your pet project?

      Indeed. I have sometimes wondered what it would take for me to become a Microsoftie. Despite my loathing for the company, if Bill offered me my own lab and a serious budget to hire staff, to do my own research until my retirement, I would probably get over my aversion quickly.

    • I just hope this day will be the one remembered as the day MS started to lose money and drop in red ink below Cray's numbers :P


    • The guy is in the business of developing the biggest/fastest/floppiest computers he can.

      Yes, notice that you did not mention software in that sentence.

      The HPC market is tough if not next to impossible for software to make money in. Unless MS is going to pull an Xbox-like thing (and lose money), I don't see where any of the people in the HPC market want a Microsoft-style system.

      HPC people want source code. They do stuff like modify the TCP/IP stack, modify the scheduler, modify the kernel, and so on. Th
  • by djupedal ( 584558 ) on Saturday November 26, 2005 @11:01AM (#14118856)
    Smith: "One more thing" [taborcommunications.com] is that the uniprocessor has pretty well run out of steam. Parallelism to date has been a nice strategy for HPC users and an afterthought for microprocessor vendors. Now, it is becoming a matter of business survival for all processor vendors. Parallelism is going to be taken more seriously, starting with the idea of exploiting multi-threading and multiple cores on a single problem. This is a major change. Imagine if Microsoft wanted to write Office in a parallel language. What would that language be, and what would be the architecture to support it? We don't have good answers to these questions yet'

    Imagine if you got paid to answer that question? Which, by the way comes out as 'parallel' and 'parallel language' (don't mix them up) ...the other shoe drops.
    • by Richard Mills ( 17522 ) on Saturday November 26, 2005 @11:33AM (#14118984)
      A good guess might be that the parallel language in question will be something like the in-development "Chapel" language that Burton has been championing. And Burton certainly has a lot of experience working with threading (google Tera's MTA, the "Multithreaded Architecture" supercomputer). This hire may turn out to make sense for Microsoft.
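      For a feel of what "exploiting multi-threading and multiple cores on a single problem" looks like at the simplest level, here is a hedged sketch (C with OpenMP rather than Chapel, and a made-up "recalculate a spreadsheet column" kernel, not anything Office actually does): each output cell depends only on its own row, so the iterations can be spread over however many cores are available. Build with something like gcc -fopenmp -O2 sketch.c -lm.

      #include <math.h>
      #include <omp.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define ROWS (1L << 22)

      int main(void)
      {
          double *in  = malloc(ROWS * sizeof *in);
          double *out = malloc(ROWS * sizeof *out);
          for (long i = 0; i < ROWS; i++)
              in[i] = (double)i;

          /* Independent per-row work: the runtime spreads the iterations
             across the available cores. */
          #pragma omp parallel for
          for (long i = 0; i < ROWS; i++)
              out[i] = sqrt(in[i]) * 1.05;   /* stand-in per-cell formula */

          printf("out[42] = %f (up to %d threads)\n", out[42], omp_get_max_threads());
          free(in);
          free(out);
          return 0;
      }

      The easy part is regular loops like this one; the open question in the quote is what language and architecture make the irregular, dependency-heavy cases just as natural.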
    • A more burning question is why you would want to expend the massive R&D effort to make a parallel version of Office or any of the basic desktop apps.

      I imagine most of the new work this guy was hired for is targeted at servers, not the desktop. Excel, yes, I guess I could maybe see an advantage if it's a really monster spreadsheet, though I imagine you would be better off just compiling it for starters. Word and Powerpoint just aren't CPU-intensive enough that a parallel version would yield enough benefi
  • Wodehouse (Score:4, Funny)

    by IainMH ( 176964 ) on Saturday November 26, 2005 @11:03AM (#14118864)
    "Burton joins Microsoft as a technical fellow."

    Was this article submitted by Bertie Wooster?
  • Non-compete (Score:2, Funny)

    by Anonymous Coward
    Due to Burton's non-compete agreement with Cray, for his first year as a Microsoft Fellow, he's going to read Cryptonomicon over the company intercom and fix broken chairs in the CEO's office.
  • Burton, [huh, hew] You are my son! [huh, hew] Come to the Dark Side, my son! [huh, hew] (Bill Gates during the interview with Burton Smith) (Coming soon: Bill Gates in "The Matrix") NoMorePoints.com
  • Burton joins Microsoft as a technical fellow.

    What an oddly old-fashioned way to say he's a tech guy.

    • What an oddly old-fashioned way to say he's a tech guy.

      If you did not know, a "fellow" is someone who is funded in a particular way. Usually a fellow is someone whose salary is guaranteed and who is allowed a certain budget for research, and has no obligations to produce anything. The idea is that fellowships are awarded to people who will produce the most valuable stuff if you give them free rein. Although I know of an IBM fellow who, after receiving the fellowship, went to lie on a beach for the rest of

  • Don't know what he is going to do for them. I have a friend who used to work for them, and the new Cray up here in Seattle is working on clustered supercomputers running Linux. I don't think that's going to translate.
    • Q:
      What do you call the conjunction of Cray/GNU/linux with Wine and ActiveX and MSIE and MS Windows For Clusters(TM)?

      A:
      It's called a "cluster-fuck".

      Q:
      How will this be different from the original MS Windows For Clusters(TM)?

      A:
      No difference, except that the BSODs will also be available on the SSH terminals.

      Ohh..., and the MS "Shared Source(TM)" will require not only a soul-snatching NDA, but also an implanted RFID-protected DRM scheme.
  • No doubt... (Score:4, Funny)

    by mtec ( 572168 ) on Saturday November 26, 2005 @11:25AM (#14118956)
    ...to help develop a supercomputer version of the BSOD.
  • by Anonymous Coward on Saturday November 26, 2005 @11:27AM (#14118962)
    I think it was a foregone conclusion that Smith would eventually leave Tera^H^H^H^HCray once they dropped the MTA as a product.

    Some people have acted as if Burton Smith is the second coming of Seymour Cray. To be blunt, I just don't see it. The MTA was Smith's baby, and by most accounts it was a failure. The first version of the machine was based on gallium arsenide technology and was very problematic to manufacture; fewer than 5 were built. Tera bought Cray largely for their CMOS design experience, because they wanted to convert the MTA from GaAs to CMOS, but even that wasn't enough to fix its performance problems. While the massive multithreading capability is cute in theory, the MTA architecture simply doesn't have enough memory bandwidth to handle the scientific codes that cause people to spend 7-8 figures on a supercomputer.

    It does seem weird that Burton would go to a software company like Microsoft, though. OTOH, Microsoft Research also employs Jim Gray and Gordon Bell...

  • by deadline ( 14171 ) on Saturday November 26, 2005 @11:28AM (#14118966) Homepage
    This is kind of odd. Burton Smith is not really a cluster guy, although he probably knows his way around HPC (High Performance Computing). Cray is not really a cluster company [newsforge.com] (except for the system they got in the Octiga Bay [technewsworld.com] deal). If you want to read a review of what Bill Gates said at the recent Supercomputing conference, check out Where is the Cluster? [clustermonkey.net] at Cluster Monkey.
  • by Anonymous Coward on Saturday November 26, 2005 @11:34AM (#14118989)
    The new supercomputing fellow position will be a great complement to Microsoft's existing technology fellowships:
    • Menu drop shadow fellow
    • Tail-recursive Windows-Update/reboot dependency cycle fellow
    • Cartoon balloon notification fellow
    • CD-ROM executable file autorun fellow
    • Animated dog search technology fellow
    • Cool full screen color effect fade fellow
    • File replacement/deletion semantics fellow
    • Marketshare defensive game theory fellow

    Truly exciting research and development is in store at Microsoft!

  • Unix (Score:2, Interesting)

    by Dollyknot ( 216765 )
    Perhaps he is going to teach Billy boy Unix, the de facto clustering OS. It could be said the internet is Unix-based: Google is Unix, Apple is Unix, Amazon is Unix, I could go on - Beowulf, anyone? 'Tis a shame Billy boy did not complete his computer science education.

    • Re:Unix (Score:4, Informative)

      by CharlesEGrant ( 465919 ) on Saturday November 26, 2005 @12:19PM (#14119168)
      Umm, perhaps this was before your time, or perhaps you're just going for the wry comment, but back in the day Microsoft had its own version of UNIX: XENIX. They originally sold it on the Tandy and later ported it to the 386. They gradually sold their UNIX business off to *shudder* SCO. In fact, I believe at one point AT&T had to buy the rights to sell UNIX on the Intel x86 architecture back from Microsoft. Whatever Bill Gates' many sins, not knowing UNIX is not one of them.
  • by icepick72 ( 834363 ) on Saturday November 26, 2005 @12:23PM (#14119185)
    Now all Cray has to do is sue Microsoft because the guy is bringing over trade secrets.
  • Did you say "Windows Cluster"? I've already got several in my data center. In fact, every desktop in my network that runs Windows can be considered a "Cluster" - "Cluster F#%$^", that is.... HA HA HA HA, I SO FUNNY!
  • by computerDr ( 226122 ) on Saturday November 26, 2005 @12:57PM (#14119329)
    but whatever it is, it will be interesting. Burton Smith is a very bright guy who pioneered multithreaded computing, first at Denelcor and then at Tera, which bought Cray from SGI and adopted its name. He is the founder of the company that is today called Cray, but the original Cray company was, of course, founded by Seymour Cray.

    Burton always reads broadly and thinks broadly. When designing a supercomputer he deals with every issue, from VLSI technology and architecture to operating systems, compilers, and applications. He enthusiastically interacts with many experts, in many areas, and attains a very deep understanding of the issues.

    Burton, best of luck at Microsoft.

    Jon Solworth
  • 1. create a supercomputer
    2. ...
    3. profit
  • Burton Smith... (Score:5, Informative)

    by eXtro ( 258933 ) on Saturday November 26, 2005 @01:17PM (#14119422) Homepage
    isn't a founder of Cray. He's a founder of Tera Computer, which acquired Cray in the late '90s. He's a proponent of their multithreaded architecture - an architecture that has failed abysmally commercially. Since 1988 they've had only one actual cash sale of their system. What this probably means is that Cray is returning to its strength of vector supercomputers, such as the Cray-1, Cray-2, X-MP, Y-MP, J90, SV1 and SV2, or possibly massively parallel systems such as the T3E and T3F.
  • Steve Scott, Cray's CTO, also left Cray recently [taborcommunications.com]. Dr. Scott's bio from Cray's management team [cray.com] web page:

    Steve Scott serves as Chief Technology Officer responsible for designing the integrated infrastructure that will drive Cray's next generation of supercomputer. Dr. Scott, who joined Cray in 1992, was formerly the chief architect of the Cray X1 scalable vector supercomputer and was instrumental in the design of the Cray/Sandia Red Storm supercomputer system. Dr. Scott holds fourteen US patents in the areas

  • by Anonymous Coward
    Microsoft tried to partner with Unisys on multiple-CPU architecture but, unlike other partnerships, Unisys screwed M$ instead of the other way around. Gotta give credit to those sleazy Univac bastards! Old age and experience trumps eagerness and youthful vision every time.

    Microsoft needs someone or something to get their multiple CPU and clustering architecture working halfway well. They don't seem able to do it themselves. But I predict this effort will fail too.

  • Take one Microsoft hellbent on becoming the only game in town even if they are sued by various governments around the world. Add one supercomputer corporation. Add one easily manipulated United Nations to strike out any mention of Open Source software. Mix in a little Big Brother and voila! You've got yourself the world's largest serving of "oh crap."
  • Since when is Cray on the bleeding edge of anything? Not since Seymour Cray ran things. Sorry, but the Cray name doesn't have any cachet any more and no one cares. And Cray's technological guts barely put them in the Top 500 list compared to all the others. So let MS have 'Cray'.
  • Wikipedia suggests to me that he's really the co-founder of the Tera Computer Company [wikipedia.org], which merged with Cray in 2000.

"I've finally learned what `upward compatible' means. It means we get to keep all our old mistakes." -- Dennie van Tassel
