Windows Compute Cluster Server 2003 Released

grammar fascist writes "According to an InformationWeek article, on Friday Microsoft released Windows Compute Cluster Server 2003." From the article: "The software is Microsoft's first to run parallel HPC applications aimed at users working on complex computations... 'High-performance computing technology holds great potential for expanding opportunities... but until now it has been too expensive and too difficult for many people to use effectively,' said Bob Muglia, senior vice president of [Microsoft's] Server and Tools Business unit, in a statement."
This discussion has been archived. No new comments can be posted.

  • by Weaselmancer ( 533834 ) on Sunday June 11, 2006 @02:09AM (#15511813)

    Imagine a Beowulf cluster of those.

  • by Umbral Blot ( 737704 ) on Sunday June 11, 2006 @02:14AM (#15511830) Homepage
    Hasn't running "parallel HPC applications aimed at users working on complex computations" traditionally been done under Unix, and Linux as well? Seeing how Linux is free, it's hard to see how "it has been too expensive" or "too difficult" (since, unlike your home user, the people running these systems are rocket scientists, I'm sure a little command-line use doesn't stump them).
    • Well, the hardware to set these systems up is one of the major cost problems. Most researchers have limited funding. I doubt using Windows is going to help the price problem at all, though.
      • As I understand it, this sort of thing can be done on just about any kind of computer, and at every university I've ever been to there are stacks of old PCs lying around.
        • by geoff lane ( 93738 ) on Sunday June 11, 2006 @02:45AM (#15511887)
          You can build an HPC cluster from random PCs, but it will be crap because the PC-to-PC interconnects will be too slow. Real HPC needs high-speed, low-latency internal interconnects, and these are expensive. But I fail to see how paying a "Windows" tax will make matters cheaper, or easier.
          • by kylegordon ( 159137 ) on Sunday June 11, 2006 @03:17AM (#15511951) Homepage
            You can still have real HPC with slow interconnects; it all depends on the application. If your data has a high scatter rate that requires large amounts of data transfer all the time, then you need fast interconnects. On the other hand, if your data can be sent off to a node to be crunched on for 2 hours, then a bog-standard gigabit Ethernet interconnect will do you just fine.
            • You would usually call something that just assigns each computer its own job and leaves them to it a grid, not a cluster. "Cluster" implies a collection of computers acting like a single one.
              • That's a pretty simplistic definition of a grid. A grid is typically a pool of resources that is not under a single administrative domain, which is transparently accessible as a utility. Resources on the grid can be clusters, file systems, single machines, etc. I think you're thinking that this would be a distributed application, where everything is under a single administrative domain.

                A cluster is going to be managed as a single machine, true. But you're not necessarily even requiring communication t

            • A lot of the time it's more a matter of latency than data volume... But still, you might be surprised by how low the latency can be over raw 100 Mbit Ethernet, or using VI over Ethernet. Fancy interconnects are typically only for people who need high data rates *and* low latency, or for people who didn't do their homework. (The classic ping-pong probe sketched below is how that latency usually gets measured.)
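
              A minimal ping-pong sketch in C (illustrative only, not from this thread; assumes a generic MPI implementation and the usual mpicc/mpiexec tooling): rank 0 bounces a 1-byte message off rank 1 and averages the round trip.

                /* pingpong.c - classic MPI latency probe (illustrative sketch).
                   Build: mpicc pingpong.c -o pingpong
                   Run:   mpiexec -n 2 ./pingpong                               */
                #include <mpi.h>
                #include <stdio.h>

                int main(int argc, char **argv)
                {
                    const int reps = 1000;
                    int rank, i;
                    char byte = 0;
                    double start;

                    MPI_Init(&argc, &argv);
                    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

                    start = MPI_Wtime();
                    for (i = 0; i < reps; i++) {
                        if (rank == 0) {        /* send, then wait for the echo */
                            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                                     MPI_STATUS_IGNORE);
                        } else if (rank == 1) { /* echo everything straight back */
                            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                                     MPI_STATUS_IGNORE);
                            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                        }
                    }
                    if (rank == 0)
                        printf("avg round trip: %.1f us\n",
                               (MPI_Wtime() - start) / reps * 1e6);

                    MPI_Finalize();
                    return 0;
                }

              For tightly coupled codes it's this round-trip number, not raw bandwidth, that separates commodity Ethernet from the expensive interconnects.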
          • Real HPC needs high-speed, low-latency internal interconnects, and these are expensive.

            It really depends on what you are doing with it. If the jobs parallelize well, then gigabit networking is plenty. I'm looking after a couple of clusters used to interpret seismic data, and in some cases each node only reports back when you want it to do a daily checkpoint of where it is up to in a job.

            Microsoft haven't even noticed clusters until now, so it will be a few years before anything of note is written to

          • Some companies are in lockstep with Microsoft, so this product will be beneficial for maintaining an 'all Microsoft' deployment. Yeah, the slow interconnect speed limits the kinds of clustering problems that can be done, but that's where all the Linux clustering started :)

            If you're working at a Microsoft shop and need clustering, this would be a product worth looking at. I don't know how many people that applies to, but I'm sure Microsoft must have had customers asking for this, or else they wouldn'
            • by crovira ( 10242 ) on Sunday June 11, 2006 @03:47PM (#15513520) Homepage
              Now that supercomputing has been turned into clustering and there are lots of people doing it (like it hit >$x billion), it has appeared on Microsoft's radar.

              Unfortunately for Microsoft, the terrain's already covered by Linux, and those systems are a moving target with cost-benefit lines that Microsoft CAN'T possibly overtake. (The software is $-free and open source, and the users WANT collaboration.)

              It's a technological death trap for Microsoft. (I can just hear the SNAP. :-)
        • by joib ( 70841 )

          As I understand it, this sort of thing can be done on just about any kind of computer, and at every university I've ever been to there are stacks of old PCs lying around.


          As opposed to running email and Word, HPC is one of those things where CPU power actually matters. Those 500 MHz PCs aren't worth the hassle to set up and maintain. Not to mention that heterogeneous hardware (which a random bunch of discarded PCs probably is) is a nightmare to maintain and program efficiently in parallel.

          Most cluste
          • by Antique Geekmeister ( 740220 ) on Sunday June 11, 2006 @10:21AM (#15512719)
            There are exceptions: some folks do wind up digging up racks of old servers, at rock bottom prices or even for free, as their data centers or deployed installations decommission them. You can inherit quite a lot of slightly outdated hardware this way: if you can justify the electrical expense of running them, they're quite convenient for massive, lengthy computing jobs.

            A lot of cluster managers also mistake "really expensive, physically robust servers" for "will always be working". The complexity of such setups and the general frequency of failure of "high availability" software itself mean that the much-vaunted 99.99% uptime of such systems is usually based on serious cooking of the numbers, not any metric actually used in the field. After the crops of failures of things like the old IBM Deskstar drives, the run of bad tantalum capacitors in Dell motherboards, and other failures that strike entire classes of brand-new hardware, it's often better to use older, cheaper, burned-in hardware that's had the BIOS updates and the kinks worked out, and save the extra money for the next round of upgrades in six months or a year.
        • Sure, it can be.

          In addition to the issues with interconnects, raw performance of individual nodes, and heterogeneous clusters...

          Reliability becomes a big deal with such old computers. Sure, a well-designed cluster will be able to route around a significant number of failed nodes, but computing efficiency will plummet and won't be terribly predictable (often, predictability becomes more important than raw burst performance). You might have 20 nodes working today, 12 tomorrow, back up to 20 for a few weeks,
      • I doubt using Windows is going to help the price problem at all, though.

        If they have to use Vista it'll probably exacerbate the price problem.
      • by Weh ( 219305 )
        Most researchers have limited funding

        University researchers may have limited funding, but a lot of researchers at large corporations (oil/med/etc.) don't have much trouble getting funds for their research. Bear in mind also that not everything is research; for instance, engineers may simply want to run some large numerical models. I have personally seen parallel processing on Windows clusters implemented at a large corporation, with plenty of funding there.
    • by supun ( 613105 ) on Sunday June 11, 2006 @02:36AM (#15511870)
      Your response is the same as mine.

      When I worked at Motorola and was part of their LUG, one of the members talked about a Beowulf cluster his team had made. Through bad management, they'd ordered a bunch of desktop PCs they couldn't use, and no one authorized their return. So the PCs sat around in unopened boxes until his team decided to make a Beowulf cluster so they could model the electron flow around traces in an 8-layer circuit board before they had it actually pressed.

      Each prototype board cost around $10,000 to create, and after that you have to test to make sure the electron field around one trace does not affect another trace. Done manually, that takes a long time and is prone to errors. So if there is a problem, it's another $10,000, and another, until you get it correct.

      With this Beowulf cluster they could model the electron flow around a trace and then only make one prototype, saving a ton of money and time. And this was all done with an ISO off the net and a bunch of forgotten computers.
      • Uh, sorry to be pedantic, but I think by "the electron flow around a trace" you mean the electromagnetic field around the trace. Electrons themselves never leave a printed circuit trace, i.e. the copper cladding "wire," but instead travel inside of it (or on its surface at high frequencies, resulting in "skin effect"). It sounds like they were modeling the transmission-line and crosstalk characteristics of the PC board.
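
        For reference, the skin depth driving that surface-current behavior is the standard result (the worked numbers below are mine, for copper):

          \delta = \sqrt{ \frac{2\rho}{\omega\mu} }

        With \rho = 1.7e-8 ohm-m for copper and f = 1 GHz (so \omega = 2\pi f, \mu = 4\pi \times 10^{-7}), \delta works out to roughly 2 microns, meaning that at the frequencies where crosstalk modeling matters, essentially all the current rides in the top few microns of the trace.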
    • by 0racle ( 667029 ) on Sunday June 11, 2006 @02:45AM (#15511888)
      Not too many people are using Fedora or Slackware on some white box with parts from Best Buy to do HPC. The Linux systems used for HPC have been altered to run on hardware that was made specifically for this, and even then management of it is not exactly simple. Not that I believe 2003 Server will suddenly change that, but just using Linux somewhere does not automatically make it the cheapest way.

      And I believe the correct answer to your question is: traditionally it has been done by tuned versions of commercial Unices, which added to the base cost of the OS over and above the very expensive custom-built hardware. Recently Linux has become able to do many of these tasks by being similarly modified, at significant cost, running on the same expensive custom hardware. The recent HPC installation using mostly off-the-shelf parts (they didn't use Ethernet) was the one at Virginia Tech, and that ran OS X, not Linux.
      • by joib ( 70841 )

        Not too many people are using Fedora or Slackware on some white box with parts from Best Buy to do HPC. The Linux systems used for HPC have been altered to run on hardware that was made specifically for this, and even then management of it is not exactly simple. Not that I believe 2003 Server will suddenly change that, but just using Linux somewhere does not automatically make it the cheapest way.


        The "standard" cluster these days is standard rack servers from a reputable vendor, along with a Linux distro tailor-made for clu
      • Ummm... I know of several clusters on our campus (VT) that are made of white boxes running Fedora, Gentoo, or SUSE. One is a 200-node (400-CPU) Opteron cluster with a Myrinet interconnect named Anantham, built by Dr. Varadarajan's graduate students. There are other, smaller clusters (16-32 nodes) of various designs running on GigE. All of them are built of white boxes and other off-the-shelf components ordered from mail-order companies. In the case of Anantham, all parts were ordered separatel
    • by jd ( 1658 ) <imipak@ y a hoo.com> on Sunday June 11, 2006 @04:32AM (#15512078) Homepage Journal
      If OpenMOSIX is compiled into the kernel, the total effort required to set up a Linux cluster is virtually nil - you need to tell it what nodes are in the cluster and it transparently takes care of the rest (see the sketch below). A home user with ZERO programming experience who has two or more computers, a hub, and a working knowledge of what an IP address is can configure a rudimentary cluster in under five minutes. It may not be optimal, it'll rely totally on OpenMOSIX to do the process migration, and without any apps that can take advantage of it, it would be a little pointless, but it could be done. It requires no expert knowledge or significant intelligence. If you can operate vi, you can operate a cluster at that level.


      Difficulty, therefore, is NOT a significant factor in all of this. OK, what about expense? Well, you're right that Linux is free. So are OpenMOSIX, OpenMPI (and many other MPI implementations), PVM (another messaging library), Lustre (a very high-performance network file system), many scientific and mathematical applications for clusters, etc. There are clustering patches for POV-Ray, and it's always possible to write a script to have multiple machines render parts of images anyway. I'm sure there are other applications out there that I'm not thinking of right now, and it's only a matter of time before more "mundane" applications can take advantage of clustered environments. They already do, on Plan 9, to some degree. Oh, Plan 9 is also free.


      Cost would appear not to be a major problem either, then. Optimizing is the only thing that is in any way difficult, and a GUI system that doesn't let you get to the really fine detail won't help there. More time, effort and money is spent on optimizing than on anything else, and I simply can't see any possible way that an OS designed for ease of use by hiding the intricacies can help with that.
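
      To make "tell it what nodes there are" concrete: the node map is just a text file. A hypothetical three-node /etc/openmosix.map (addresses invented; the exact format can vary between openMosix releases, so check the docs for your version):

          # node-number   base-IP        range-size
          1               192.168.1.10   3
          # i.e. nodes 1-3 sit at 192.168.1.10 through .12;
          # openMosix handles process migration from there

      That single line really is the whole cluster definition at this rudimentary level.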

    • And last I heard, Microsoft's effort requires a 64-bit CPU, so you have two choices:
      Buy a whole heap of new 64-bit systems, and then buy Microsoft licenses for them.
      Use a whole heap of old 32-bit systems, and put a free install of Linux on all of them.

      Doing it Microsoft's way requires significant up-front investment, while Linux clusters can be constructed out of a collection of old/spare machines at no cost...
      I built a cluster at work a few months ago; it was built using recently obsoleted desktop systems tha
      • Or upgrade something else you need a lot of for another project, and use the old 32-bit or 64-bit hardware with Linux as a free software testbed before pursuing a big hardware purchase. That gives you some experience with the tools before spending huge amounts of software and hardware money up front. And you can upgrade when you need to, since you've already got the cooling and power in place for the old hardware.
  • by Bananatree3 ( 872975 ) on Sunday June 11, 2006 @02:14AM (#15511831)
    "but until now it has been too expensive and too difficult for many people to use effectively"

    And what about the site licence needed for this baby, huh? For us mere basement-cluster builders, there is a cheaper, open-source alternative: The OSCAR Project [openclustergroup.org] (Open Source Cluster Application Resources). Yes, it runs on Linux, but it is a nearly step-by-step system for setting up HPC-level clusters. It is being used on many 100+ CPU high-performance clusters around the world, and it is free, without those pesky site licences.

    • by joib ( 70841 ) on Sunday June 11, 2006 @03:04AM (#15511926)
      Another, perhaps even more popular Linux cluster distro is Rocks Clusters [rocksclusters.org].

      While I don't have personal experience with OSCAR, Rocks is really good. These days, doing a cluster with a "normal" distro is insane. I think MS will have to think long and hard before they come up with something equally easy to install and manage as Rocks.

      That being said, I think MS is not targeting Win CCS at academic supercomputing, which has a long history of using Unix/Linux; rather, they want to expand HPC to business customers who otherwise have a 100% MS environment.
      • Nice to see the Rocks Clusters project is still going strong. A buddy of mine F. David Sacerdoti did a lot of work for it back in the day at UC San Diego / the San Diego Supercomputer Center.
      • I'll give the Rocks guys a plug (and not just because they were great guys when I was setting my first clusters up) for the elegance and portability of their solutions. Configuring nodes through XML scripts, the scalable authentication system, simultaneous support for i386/x86-64/IA-64, auto-configuring drivers for Myrinet, a correctly configured Sun Grid Engine with no end-user interaction to set it up, and funky solutions such as the Display Wall make them a stand-out distribution.

        On the othe
      • "MS will have to think long and hard before they come up with something equally easy to install and manage as Rocks"

        MS introduces New Windows 2003 Paper(tm)!

        (just watch out for Mosix Scissors)

    • Considering I work at one of the companies the article lists MS as working with, and all our HPC clustering runs on Linux and will for the foreseeable future, I think this is primarily just a PR attempt. We haven't even remotely considered running anything on Windows for our tasks.
  • What the? (Score:2, Insightful)

    by killjoe ( 766577 )
    "but until now it has been too expensive and too difficult for many people to use effectively,' "

    So the MS solution is cheaper than Linux and easier to use than Mac clusters? I don't think so.

    MS is the "me too!" guy from Usenet. Every time anybody else comes up with something, MS comes in afterwards and says "Me too!"
    • Re:What the? (Score:3, Informative)

      Not that I disagree with you on this topic, but your post is almost word for word what the industry said of Microsoft when it entered the server market competing against Novell.

      Microsoft was considered the 'me too' server vendor of the time, yet as it turned out the MS servers did offer features that the Novell servers of the time didn't, and application servers progressed to the point that MS kicked Novell's butt.

      The push for application servers also opened the door for *nixes to enter back into the mainstr
      • Re:What the? (Score:2, Interesting)

        by killjoe ( 766577 )
        While MS has had success with its servers, it has not been able to achieve the same monopoly status in that market that it has on the desktop and in office software. Since then they have tried repeatedly to attain dominance in other markets by dumping software for free, forcing downloads with Windows, paying customers to use them, etc. Despite all that, they have not been able to do better than third place in anything: Xbox, SQL Server, SharePoint, Great Plains (whatever it's called now), their CRM
        • I might be inclined to agree, but if you pay attention to Microsoft's R&D, both what is publicly published and the mass of money poured into work that is unseen, I wouldn't count them out on anything...

          They still put more into R&D than any other company, even IBM at its height ($$ or %).

          This also isn't money flipped to crazy projects or ideas, but to some of the best engineers and theorists in the world.

          Everyone here should drop by the Public portion of the MS R&D site once in a while, s
          • MS has been cutting their R&D budget to make its numbers look better. It's a recent trend and it's not likely to reverse itself. MS profitability is suffering due to the deep discounts it has to make to combat Linux and due to the flattening of their stock price (MS makes a ton of money buying and selling its own stock). At this point they cannot allow their stock price to drop at all, because it would set up a chain reaction spiraling the stock downward and sending their profits to hell.

            In order to a
          • Everyone here should drop by the public portion of the MS R&D site once in a while; some of the concepts showcased are seeds of things that could be crucial in society next month or even 10 years from now.

            Yes, they have some clever people working for them, but this is Microsoft we're talking about. They're a business, not an academic institute, and that worries me. And the claim that current R&D could be "crucial" in ten years' time is somewhat debatable; Microsoft seems to "make" things "crucial" r

        • MS has clearly lost a lot of its mojo. They are not the MS of old. Sure, they can still sink billions into products with no market share till kingdom come, but that may not be enough anymore.

          I disagree. I think Microsoft has changed somewhat, but they are still the Microsoft of old (same goals, etc.). It's the rest of the world that has changed the most (and that, perhaps, is why Microsoft has lost a bit of their mojo, as you say). There are *many* more computer users today than there were in 1995
  • but until now it has been too expensive

    That's a good one. I doubt that the expense of HPC is something Microsoft is going to solve. MS's marketing folks could have at least thought of something that *sounds* like it might be true.
  • by nurhussein ( 864532 ) on Sunday June 11, 2006 @02:21AM (#15511844) Homepage
    ...the HPC community of scientists and engineers continue to not care.

    The same folks who operate nuclear accelerators probably don't have so much of a problem operating computers that they need Clippy and pretty colours to help them out "in case they get confused".

    • Heh! (Score:5, Interesting)

      by jd ( 1658 ) <imipak@ y a hoo.com> on Sunday June 11, 2006 @04:17AM (#15512053) Homepage Journal
      Microsoft delivered the keynote speech at the supercomputing conference SC|05 in Seattle last year and had a huge stand in the main hallway. Most people were walking straight past it and gathering round the stands offering free Linux CDs. (I would really like to see at least one of the BSDs add something like bproc; Linux has some amazing capabilities that can be and are used in many applications, but NetBSD has better I/O throughput, and there are many cases where cluster applications are I/O-bound rather than CPU-bound or feature-bound.)


      Microsoft's MPI implementation is, if I understand their materials correctly, based on MPICH (a BSD-licensed open-source product) with some in-house fine-tuning. MPICH is a good reference implementation, but it is not terribly fast and is getting long in the tooth. As far as I know, it doesn't have much in the way of fault tolerance, either. LA-MPI and OpenMPI are built for speed (although I've found OpenMPI has room for substantial improvement) and have some fault-tolerance support. So they don't seem to be using an amazing architecture.


      Last, but by no means least, Microsoft's freebies were limited to an Opteron-specific Windows 2003 Cluster Edition beta and a cookie. By comparison, many others had booklets on what their products did, papers on the theoretical work being done, working demos (the molecular modeler with force feedback was amazing) and some highly knowledgeable geeks to answer detailed technical questions.


      Microsoft may - someday - be an interesting player in the cluster market. Right now, though, they really don't seem to get what it is all about. I'm not trying to bash Microsoft here; they really don't have a product that is useful for the high-performance market, and they seem to have the wrong libraries and interfaces for using the servers in a load-balancing, fail-over or distributed-storage environment. This isn't to say the other vendors were perfect - I saw many areas that were horribly inefficient and poorly implemented - but rather that Microsoft would have done better to come back from the show and re-think what it was that they wanted the Cluster Edition to do.

  • Too expensive? (Score:4, Informative)

    by onlysolution ( 941392 ) on Sunday June 11, 2006 @02:25AM (#15511851)
    "...but until now it has been too expensive and too difficult for many people to use effectively..." According to their licensing model EVERY machine costs 469 dollars... Meaning a 20 machine cluster would have a 10,000 dollar overhead just on the OS alone. Not to mention the fact that you'd be compelled to buy it again as Longhorn Cluster Ed. in just a couple of years... It seems like a little work setting up a free OS cluster would be a vastly preferable option, is there really any need or reason for this (at this cost anyway)?
      It seems like a little work setting up a free-OS cluster would be a vastly preferable option; is there really any need or reason for this (at this cost, anyway)?

      Because nobody ever got fired for going Microsoft? There are certain people that will have warm fuzzies now that Microsoft has offered a solution.

      I can't see that the intersection between that group and the one doing computationally-intensive research will be very large, though.
  • 2003=2006? (Score:5, Insightful)

    by John_Booty ( 149925 ) <johnbooty@booty p r o j e c t . o rg> on Sunday June 11, 2006 @02:35AM (#15511867) Homepage
    It takes some serious marketing balls (and/or a lack of marketing brains) to release a product branded "2003" when we're already halfway through 2006.

    I actually have to applaud the naming move; it accurately lets everybody know that this product is based on Windows Server 2003. It would have been quite misleading if they'd passed it off as "Windows Compute Cluster Server 2006".

    Wonder what the meetings between the marketing team and the engineering team were like for this one. :)
  • by asky ( 815613 ) on Sunday June 11, 2006 @02:56AM (#15511912)
    "...but until now it has been too expensive and too difficult for many people to use effectively."

    I know many people take exception to that remark, but not everyone knows how to build Beowulf clusters. Some of us thought it was insane when, in the 90s, Microsoft said they were going to enter the server market. Yet here they are. And who in their right mind would run their web services on IIS? (Then again, Apache now runs on Windows.)

    The point is, just because the idea is absurd doesn't mean it won't happen. If corporate consolidations put support for technical computing under the IT department, and support for Linux is considered too difficult for the IT folks, it's only a matter of time before the decree comes down to port technical computing applications to Windows.

    The fact is, M$ has access to software vendors, hardware vendors, and large customers in ways that Linux companies do not. They can create markets where they shouldn't be justified (unless you think all operating systems really require anti-virus software).

    I'd love to be wrong about this. But I've finally come to the conclusion that sound technical judgement does not stop absurdity from happening.
    • And who in their right mind would run their web services on IIS?

      eBay for one.
       
    • by mikeb ( 6025 ) on Sunday June 11, 2006 @03:50AM (#15512004) Homepage
      "But I've finally come to the conclusion that sound technical judgement does not stop absurdity from happening"

      Something that the majority of Slashdot readers seem not to understand (and with justification) is that purchasing decisions are not rational.

      A basic training course in sales techniques will, unless it's totally bogus, emphasise the fact that purchasing is based on emotion, not rationality. Some 80-90% of all sales are emotion-driven and then sometimes justified post facto by selectively picking facts.

      As the world becomes a more complex place and huge amounts of information become available to prospective purchasers, a kind of paradox is emerging that will horrify economists who cling to the theory that perfect markets are based on rational purchasers with perfect information: the reverse is happening.

      Most purchasers are not analytic personalities. People who hang around Slashdot underestimate how much they have (in general) honed their own analytic skills with years of practice, while most middle-tier managers in corporates never did. For those non-analytic people, being asked to rationally evaluate a mass of facts and statistics is a SCARY proposition. That's not how they got their job; they got it by looking good in a suit and licking backsides more or less assiduously while being OK at judging how the politics are shaping up. Their skillset is way different from yours, and they react differently.

      The more information you make available to those people, the less of it they will use and the more they will look around for 'safe' decisions. This is especially true if their promotion prospects may depend on the outcome. THEY ARE NOT SPENDING THEIR OWN MONEY, it's the company's. Their decision will be based on the likelihood of retaining their job or getting promoted before their mistakes are discovered.

      So, figure it out for yourself. On the one hand, some technical guy they distrust because he's smart says he can 'download an ISO from the interweb and build a cluster myself'; on the other, they can 'buy from Microsoft'.

      The first bit of irrational figuring will be 'the Microsoft stuff costs tens of thousands but the geek says it's free - that does not compute, he must be wrong'. The second will be 'if it goes wrong who will get the blame'. Guess the outcome of that one for yourself.

      The result is fairly predictable IF you understand the parameters. Microsoft's marketing does understand where it's operating and will be well aware that its customer base is heavily loaded with irrational people. Most likely they are hearing squeals from that customer base asking where Microsoft's compute cluster solution is, because 'we want to buy one'. It would be foolish not to give them one, surely?
      • Something that the majority of Slashdot readers seem not to understand (and with justification) is that purchasing decisions are not rational.

        No, something that the majority of Slashdot readers do not understand is that purchasing decisions are rarely made on the basis of up-front component costs.

      • by hackstraw ( 262471 ) * on Sunday June 11, 2006 @09:18AM (#15512583)
        ...emphasise the fact that purchasing is based on emotion, not rationality. Some 80-90% of all sales are emotion-driven and then sometimes justified post facto by selectively picking facts.

        Maybe with things like cars and clothes, but clusters are merely machines to crunch numbers. Kinda like a big calculator; little emotion goes into designing and using them. It's bang/buck. That's it.

        Microsoft _may_ be able to sell this HPC edition out of emotion to some PHB who is completely clueless and has clueless admins as well, but an OS has little to do with an HPC system. In fact, the less of the OS the better. Most of the time, HPC apps live in userland. The OS does basic memory management and I/O, but that is it.

        Almost all clusters are Linux. Why? It's good and cheap. You don't need the scalability and robustness of, say, Solaris, because you (typically, almost 100% of the time) only have one thread per processor. Yes, I know that with large SMP machines the OS can and does matter, but those rarely have the bang/buck ratio of clusters. The two big guys that have done this over the years (large SMP/NUMA/ccNUMA, etc.) are SGI and Cray, and both of those companies are hard up for cash right now. IBM probably does not make money, or much money, off of their large number-crunching systems, but those are probably viewed as R&D, not a "for profit" good or service (I could be entirely wrong here regarding IBM, but that's my hunch).

        I don't know what Microsoft is doing with this product. Like someone else said, it's probably just a "me too!" thing. Looking at their "details", they do not mention using desktop machines at night. This is a BIG miss, because that would be one of the only things that could make this a marketable item for an already primarily-MS outfit.

        The more I think about this, the sillier it sounds. Yeah, I'm an anti-MS guy, but I try to give them the benefit of the doubt, and this product still seems completely worthless. Actually, now that I've learned this is a 64-bit-only offering, I believe it's a way for MS to sell a product for beta/stress-testing their 64-bit server offerings.

        To close this post, from the FAQ:

        Q. How does a Windows-based compute cluster compare with a cluster running UNIX or Linux?
        A. *There is little substantive difference*, but UNIX-based solutions *should be fully ported to Windows to realize the full benefits of the Windows operating system*. There are several differences between UNIX-based operating systems and Windows. For example, I/O operations and threading are different on UNIX-based systems than they are on Windows. I/O intensive applications will benefit from using Windows native I/O APIs rather than UNIX style I/O APIs.

        Emphasis mine. The second bolded part is important: that porting is expensive and time-consuming, especially when it's common for codes to be 30+ years old and designed for UNIX systems. Sounds like vendor lock-in to me. Wow, typical Microsoft.
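
        To make the I/O point concrete, here's a sketch (mine, not the FAQ's) of the same blocking read written both ways. Porting decades-old UNIX codes means rewriting every call of the first kind into the second:

            /* Illustrative only: one blocking file read, two APIs. */
            #ifndef _WIN32
            /* UNIX style: POSIX file descriptors */
            #include <fcntl.h>
            #include <unistd.h>

            long read_unix(const char *path, char *buf, unsigned long len)
            {
                int fd = open(path, O_RDONLY);
                long n;
                if (fd < 0)
                    return -1;
                n = (long)read(fd, buf, len);
                close(fd);
                return n;
            }
            #else
            /* "Windows native I/O": HANDLEs via Win32 */
            #include <windows.h>

            long read_windows(const char *path, char *buf, unsigned long len)
            {
                HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ,
                                       NULL, OPEN_EXISTING,
                                       FILE_ATTRIBUTE_NORMAL, NULL);
                DWORD n = 0;
                if (h == INVALID_HANDLE_VALUE)
                    return -1;
                ReadFile(h, buf, (DWORD)len, &n, NULL); /* NULL = synchronous */
                CloseHandle(h);
                return (long)n;
            }
            #endif

        The bigger wins the FAQ hints at come from going further, to overlapped (asynchronous) I/O and completion ports, which have no widely used POSIX equivalent and hence need the most porting work.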

        • should be fully ported to Windows to realize the full benefits of the Windows operating system

          I got the impression from TFA that the major feature HPC Edition had to offer was its handy and clean interface with Visual Studio, allowing all the CLI masses to write software for a compute cluster all their own (without having to deal with learning a different platform).

          Would this have any effect on the HPC market? I can't see people with existing installations+software biting, but I can see it tempting to

    • I'd love to be wrong about this. But I've finally come to the conclusion that sound technical judgement does not stop absurdity from happening.

      Fortunately, I think the bottom line will prevent the absurd from happening in this case. If I understand correctly, licensing for each node will cost something like $465. That can really add up. And given existing free and relatively mature alternatives (Rocks, OSCAR), I don't see the draw of the MS solution. The main reason MS was able to penetrate the server

  • by nixascyborg ( 940774 ) on Sunday June 11, 2006 @03:09AM (#15511934)
    The idea behind Windows clusters being 'cheaper' has nothing to do with the individual price of the OS (versus, for instance, free Linux); the named price is low, not free, but that is not the point of your savings with a Windows HPC cluster. The point is that most programmers work on a Windows platform and have experience with it. If you program with/for Windows and, for instance, VS 2005, MS counts on the effort of building programs that run on HPC being considerably less than it is on a Linux (or Xgrid) cluster. Making existing Windows 'hits' clusterable (I heard mention somewhere of image, movie and 3D processing software) is easier because of this too; making them work on other clusters is a pain because there you would have to work in an environment you are not used to. Like all things with MS, they count on familiarity and ease of use to make this all run. That is what makes it cheaper: you cannot get a Linux HPC programmer, and if you find him/her, he will be godawful expensive; for WinHPC it will just be 'another VS programmer', of which there are a lot. Look for MS to add testing, debugging and development aids for HPC in upcoming versions of VS.
    • You have presented the MS argument masterfully. Are you a professional PR person? Do you work for a PR or ad agency? I suspect you do.

      Anyway, your points are not really that relevant because they are not that true. HPC applications tend to be very complex. They are not the types of applications you are likely to trust to a bunch of VS monkeys who draw GUIs with bound controls and call it an application. As for administration, there is a mountain of evidence that *nix administration is cheaper than MS administr
      • :)
        I am just saying what MS uses to sell this (and their other) product(s). Whether it is true or not is simply not relevant; their way of winning market share works like this, and it works well, because a lot of people are ignorant of what they buy, even when the cost is in the millions; we see it happening every day in significantly large companies.
        I don't work in PR; I am a programmer working mostly with Java, Ruby and PHP. My background is C, and in university I did program a Unix HPC environment (based on Solaris at
        • You have never worked with VS and yet you think it will be easier? Why?

          Managers are not influenced by rationality, reason or facts. They are influenced by golf games, junkets, and gold watches. That's how MS wins business. See GoDaddy, for example: MS paid GoDaddy to switch their parked domains to IIS.

          Money talks and the CIOs love gold watches and golf junkets in Hawaii.

          • "You have never worked with VS and yet you think it will be easier?"

            Try reading a post before you reply to it. It begins with "The idea behind..." and discusses arguments that MS uses to sell, noting that the arguments often work and MS gets the sale. At no point does he make the opinions out to be his own, or give any real indication of whether he agrees or not. Just "What MS says == What PHBs like to hear", which is actually true a whole lot of the time.

    • for WinHPC it will just be 'another VS programmer', of which there are a lot

      And those "another VS programmer" types will know all the pitfalls that are relevant to distributed computations in a cluster environment because of... what? It is not the development environment that is important. XCode for Mac is not really hard to learn. Heck, even at MS on campus, most large projects eschew the IDE in favor of command files that invoke nmake directly. (And I worked in Visual Studio Enterprise Edition, so have
  • Like this will ever fly.

    It's rare that I've heard anything as ridiculous as this. There's nowhere UNIX is more entrenched than in big-iron computing. Not only that, but the users, by definition, aren't dumb. Microsoft doesn't have a snowball's chance in hell in that market space, and I expect this is going to make Microsoft Bob look like a hot seller in comparison.
  • Finally, an alternative operating system for that darned Playstation 3 personal computer. I thought I'd never see the day.

    Seriously though. Anyone who buys into this needs to go sit in their failbox and think about what they've done. This kind of stuff has been done on the cheap with Unix and Posix operating systems for ages. I guess this could come in handy for experiments observing how artificially bloated clusters function, though.
    • I've strongly suggested Xbox-based Linux clusters: besides the fun of making Microsoft lose money on every console purchased, you then buy no games for 200 machines. Unfortunately, the heat and poor venting of those power supplies is a problem: I wouldn't want to run more than a few in a small space, but it could also make for a fun cluster-computing demo. Dual-boot one of them for games to draw people to your booth at a trade show, and you can have a bit of fun irritating Microsoft while doing actual c
  • by WinEveryGame ( 978424 ) on Sunday June 11, 2006 @03:50AM (#15512005) Homepage
    Per Microsoft's documentation:

    "Microsoft Message Passing Interface (MS-MPI) implementation is fully compatible with the reference MPICH2"

    I guess, given the fact that Microsoft is pathetically behind Linux when it comes to high-performance computing, they may actually play by the rules here.

    Anyone have any insight on this one? Do they have an API lock-in strategy here as well?
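
    For what it's worth, if the MPICH2 compatibility claim holds, source-level lock-in should be limited: a program that sticks to the MPI standard compiles unmodified against either implementation. A trivial sanity check (my sketch; the build lines are assumptions, not taken from Microsoft's docs):

        /* hello_mpi.c - standard MPI only, no vendor extensions.
           MPICH2:  mpicc hello_mpi.c -o hello
           MS-MPI:  cl hello_mpi.c msmpi.lib   (assumed; see the CCS SDK) */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            printf("node %d of %d reporting in\n", rank, size);
            MPI_Finalize();
            return 0;
        }

    Any lock-in would more likely live in the scheduler and job-launch tooling around the MPI core than in the API itself.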

  • by WoTG ( 610710 ) on Sunday June 11, 2006 @03:52AM (#15512011) Homepage Journal
    OK, this isn't exactly going to be a hit with the scientific HPC community, who already have all the clustering software they need. But think about MS's best customers: corporations. Imagine a new scheduling module for an ERP. If the model is complex enough, and if it has enough components and rules, it can easily become a major burden for a single server. And no, database clustering isn't necessarily the answer: not everything can be coded as a SQL statement, and even when it can, that isn't necessarily a smart way to apply a particular algo to a set of data. A Microsoft Windows-based HPC unit would be perfect for an independent software vendor to use to power their new module, assuming of course that the ERP itself runs on Windows. Odds are good that at least the client-side application is Windows-compatible.
  • by hpcanswers ( 960441 ) on Sunday June 11, 2006 @04:39AM (#15512091)
    I think Microsoft's reason for pushing into HPC is to provide better software-development tools for clusters. Can you imagine being able to program in VB.NET instead of C99? After all, physicists are there to do science, not write code. Plus, MATLAB (Distributed Computing Toolbox) and Mathematica (gridMathematica) will both be available for Windows CCS, and I imagine Star-P may be out before too long. All in all, I'm cautiously optimistic about getting better development environments for supercomputing. Of course there is still the concern about license costs and the resource-hogging GUI.

    I blogged about these topics a while back, both MS in HPC and better programming tools for supercomputing:

    http://hpcanswers.com/plog/index.php?op=ViewArticle&articleId=27&blogId=1 [hpcanswers.com]
    http://hpcanswers.com/plog/index.php?op=ViewArticle&articleId=25&blogId=1 [hpcanswers.com]
    • by san ( 6716 )
      Your comment should be modded funny, not insightful.

      The only reason you need an HPC cluster is -- indeed -- High Performance Computing. That means you're going to use as many cycles (or messages passed) as you can get from that $50K+ cluster you've just bought. This easily precludes anything but a fast compiled language like Fortran or C (and no, Java with JIT and .NET are *not* fast enough; I've tried).

      The comment in your blog about domain-specific languages for HPC purposes is true: Fortran is exactly that.
    • After all, physicists are there to do science, not write code.

      That depends entirely on the physicist. For some cluster-jockey physicists, writing code (usually C++, in my experience) is part and parcel of doing the science, because MATLAB and Mathematica (and Maple and, well, for some people maybe even IDL) are great for some domains, but not all, and C or C++ will get you to those other domains.

      Plus, MATLAB (Distributed Computing Toolbox) and Mathematica (gridMathematica) will both be available for Win

  • Dooom (Score:2, Funny)

    by MrPsycho ( 939714 )
    Ahh, I am haunted by the vision of thousands of BSODs running in perfect parallel.
  • by 10Ghz ( 453478 ) on Sunday June 11, 2006 @08:17AM (#15512483)
    I made this comment on Ars Technica, and I'll just repeat it here as well.

    I just can't see any future in this. There are a few things working against MS here:

    a) Price. At large scale, the price they are asking would mean fewer nodes. Instead of paying for Windows, customers could just use Linux and add extra nodes to the cluster.

    b) Source. Yes, it does matter. In markets like this, the people running the cluster do fiddle with things in order to make them go faster. They can't do that with Windows.

    c) Ease of use. Well, the people who build clusters are usually not morons, so I don't really see any real need for a "point 'n click" GUI for creating clusters. And maybe that GUI would impose a bit more overhead on the system? Besides, creating Linux clusters is relatively easy.

    d) Momentum. Linux has companies like SGI, Cray, IBM and others using and improving it, and there are universities involved as well. Those companies really know Linux, and they REALLY know HPC. Microsoft has no real know-how regarding HPC.

    e) Familiarity. This time, people know Linux. MS is trying to beat an entrenched competitor. MS has succeeded in doing this before, but they did it by undercutting the competition. This time they are competing against something that is free, and their competitor has the advantages mentioned in a, b, c and d, all of which matter to the target audience.
  • The "on special with new OEM machine" price of Win2k3 Pro from a typical local wholesaler is less than the price of an extra CPU and a motherboard to add it to. Half as much again for a non-OEM "upgrade" version. I shudder to imagine what an HPC version will cost.

    Put another way: I can run twice as many CPUs and save money if I avoid paying the Microsoft tax. I presume that the win is even larger if the licence says "per CPU".

    I can probably run four CPUs and twice the RAM for less $$$, given the real HPC pr
    • ...motherboard, 4 x CPUs, 2 x RAM and a digital still camera, plus I save over AUD$100.00 per system if I don't buy it but install Linux instead.

      For the HPC version, I could probably score a DVD camera and a friggin' partridge in a pear tree as well. FOR REFUSING TO BUY W2k3

      <SARCASM>I wish all decisions were so hard.</SARCASM>
  • by asky ( 815613 )
    I think MS has a lot more in-roads into technical computing than Slashdotters realize.

    While working at NASA Ames, I found a lot of technical work done on Windows by people who are mechanical, electrical, and aerospace engineers. Applications included structural/mechanical CAD for fabrication, programs to drive test-and-measurement equipment, MATLAB to derive simulation models, and VxWorks embedded RTOS development.

    The universal applications that I saw every engineer use were MS Word and PowerPoint. (LaTeX and trof
  • They have at least decided to implement the MPI (Message Passing Interface) standard for this cluster system. Hopefully it will be somewhat compatible with other MPI implementations, like the C++ one I have used (I also used Java, but so far I'm not impressed by Java MPI implementations).
  • "...on Friday Microsoft released Windows Compute Cluster Server 2003."
    I just wonder if MS realises that it's the year 2006....
  • How fast will Spybot Search & Destroy run on it?

    Seriously, by the time you add virus scanning, spyware-removal tools and all the other crapware you need just to keep one node running, half your cluster's clock cycles are gone.
