
NCSA To Build $53 Million, 13-Teraflop Facility

Quite a few readers submitted news of a distributed system to be built by four U.S. institutions (mostly) out of IBM computers, and paid for with a whopping grant. DoctorWho and november write: "'The National Science Foundation has awarded $53 million to four U.S. research institutions to build and deploy a distributed terascale facility...' A link to the press release is here." An anonymous reader contributed a link to coverage on Wired, and GreazyMF sent one to the New York Times' story.
  • Intel had an article on this on their internal web site today; it went on and on about their Itaniums used in this system. But not once did they mention the OS used!! I don't think Intel wants to be associated with Linux.
  • What surprises me is that although the scientific community has fully embraced the flexibility, power, and openness of Linux, Microsoft continues its efforts to paint it as a "toy" operating system.

    13 teraflops is a pretty big toy.

    • Microsoft continues its efforts to paint [Linux] as a "toy" operating system.

      They can try all they want, but it is a futile effort. The recent Linux GUI desktops are gorgeous. Take away the GUI and you have a hard-core workhorse. Either way these "toy" OSes are pretty damn serious.

      What scientists and engineers appreciate about Linux and always have about UNIX in general is the sheer flexibility and modularity offered. I have never felt Windows offers such flexibility.

    • That's funny.... I, and most folks in my particular line of work (sysadmins who work for various state and local govts), consider MS to be the toy operating system; at best it's "consumer-grade", not "commercial-grade", due to the lack of stability, arbitrary and capricious "upgrades" and dubious bugfixes that tend to wreak havoc with already-installed apps, forced premature obsolescence... and an exasperating void where security should be present.
    • Look at XP's new interface ... windows is the toy OS here.
      It wouldn't look bad in a teletubbies episode.
  • In 2003 IBM brings into operation its petaflop Blue Gene computer for a cost of $100 million... that would bring the price of a quality 10-teraflop machine into the $1 million range. I think that makes for some interesting possibilities...
  • by Anonymous Coward

    This project will flop most terably

    sorry

  • With 450 Terabytes, we can give almost 7 generations of people music for a lifetime, without repeating...

    Let us examine:
    450 terabytes
    4.5 MB per MP3 (average)

    That's 100,000,000 MP3s.

    Let's take the average length of a song as 3.5 minutes. That's:
    350,000,000 minutes
    5,833,333.33 hours
    243,055.55 days
    34,722.22 weeks
    667.73 years


    Are there any venture capitalists interested in this idea? I think this could be one great consumer service!!
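The arithmetic in the comment above checks out and is easy to reproduce (a sketch; the 450 TB capacity, 4.5 MB per file, and 3.5 minutes per track are the poster's own assumed averages):

```python
# Listening-time estimate for 450 TB of MP3s, using the
# parent comment's assumed averages (4.5 MB and 3.5 min each).
STORAGE_TB = 450
MB_PER_MP3 = 4.5
MINUTES_PER_TRACK = 3.5

tracks = STORAGE_TB * 1_000_000 / MB_PER_MP3  # 1 TB = 1,000,000 MB
minutes = tracks * MINUTES_PER_TRACK
years = minutes / 60 / 24 / 7 / 52            # flat 52-week years

print(f"{tracks:,.0f} tracks, about {years:,.0f} years of music")
```

The 667.73-year figure above comes from dividing the weeks by a flat 52.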
  • is that I can't remember the last time I heard the words "microsoft" and "windows 2000" and "cluster" used in the same sentence.
    It's all linux baby!
  • I should be able to build a single machine this fast for about $1,000 in 10 years. Do you think they'll be done by then?
  • If they're IBM machines, most likely they're going to use Linux... IBM is making a company-wide push to the Linux platform.
    Now my only question is... where can I get a Beowulf cluster of these babies? That would be sweet...
  • Linux (Score:2, Insightful)

    by halftrack ( 454203 )
    The first thing I looked for was what OS it used. Linux seemed like a good choice, but being no expert I wonder if even Linux can efficiently utilize 1300+ Itanium processors. I realise that Linux (me being a big supporter myself) will have the wanted customizability, but wouldn't making an OS from scratch (Linux-like if that's best) be better? After all, Linux isn't tested nor built for clusters this big.
    • Saying Linux isn't tested nor built for clusters this big is a little like saying that sand isn't meant to go in car windows.

      Linux has ten years and millions of manhours' worth of development and refinement that has gone into it. You wanna do WHAT from scratch?? PASS!

      A cluster is still a machine-by-machine entity, which is the level that the OS is working at; it's the "hooks" you create that facilitate cluster behavior. If you want to write an actual "cluster OS," i.e., one that does not have a context on a single machine, then by all means, go for it, but don't blame these guys for building something by integrating mostly pre-existing parts in order to get the behavior they seek.

      Forgive me for the harsh subject line; it's been a long week!

    • My gut instinct is that IBM has done some tweaking to the OS. IBM knows about as much as anyone else how to make really, really big systems work. I'm not a huge fan of them, but they have always had a decent portion of their team devoted to big iron.
    • by Anonymous Coward
      Linux doesn't have to efficiently use all 1300 processors - they're not even going to try to do that. All Linux has to do is efficiently manage one CPU. (Well, for space concerns the nodes are probably going to be dual-proc SMP machines...). You're thinking of something called Single-System-Image (SSI), and IMHO it's the wrong approach to take with this many machines.

      This is not a big SMP machine - the kernel does not have to manage all 1300 CPUs at once. Instead, there will be 1300 copies of Linux running (in the long run, you don't really want the OS involved much anyway).

      It totally depends on exactly what they'll run on it, but based on what's currently running on the NCSA machines the concerns will be a high-speed, low-latency network (which they got in Myrinet - note that I didn't say cheap) and a good MPI implementation to take advantage of it. Both LAM and MPICH have Myrinet-aware implementations, and they're both pretty fast.

      • I have visited the Shell cluster in The Hague and it was 1024 dual-CPU machines running Linux. The new cluster will be based on 16-processor machines. The 16-headed machines use a custom IBM chipset that allows SMP operation.
      • Myrinet is not cheap. If you look at their prices, 16 cards and a 16-port hub will set you back around $30 grand. Assuming dual-proc systems, that's only a 32-processor node. :P It does, however, have killer bandwidth (254 megaBYTES/second, 1.96 Gbps) and extremely low latency that makes me drool. The KLAT2 cluster (I think that's what it was called) that used a genetic algorithm to design the network, with 3-4 cheap NICs in each machine and wire-speed switches, was a pretty good idea. Semi-low cost ($20 NICs, plus the switch), and the speed rivalled gigabit solutions for a lot lower cost.
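The division of labor described in this thread - each node runs its own OS copy on its own slice of the data, and only a message-passing layer ties them together - can be sketched in miniature. This toy uses Python threads and a queue as a stand-in for MPI ranks and messages; it illustrates the idea, and is not the DTF's actual software stack:

```python
import threading
import queue

def node(rank, chunk, mailbox):
    # Each "node" works only on its local slice of the data,
    # then sends its partial result back as a message;
    # no shared global state, much like MPI ranks.
    mailbox.put((rank, sum(x * x for x in chunk)))

data = list(range(1000))
nnodes = 4
mailbox = queue.Queue()
workers = [
    threading.Thread(target=node, args=(r, data[r::nnodes], mailbox))
    for r in range(nnodes)
]
for w in workers:
    w.start()
for w in workers:
    w.join()

# The "head node" reduces the partial results into one answer.
total = sum(mailbox.get()[1] for _ in workers)
```

The kernel never has to know about the other "nodes"; all coordination lives in the messaging layer, which is the point the posters above are making about MPI.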
    • Re:Linux (Score:2, Informative)

      by Jeff Knox ( 1093 )

      You seem to be slightly confused about how such clusters work. Linux is more than just a good choice; it is the definitive best choice in the supercomputing industry for clusters. If you ever go to the SuperComputing conferences, you'll notice there are many dozens of cluster companies, and they all use Linux. Clustering is what supercomputing is all about now.

      Linux does not need to efficiently utilize 1300+ Itanium processors. This isn't a singular machine, it's a cluster. The Linux kernel needs to be able to handle its individual node (consisting of a couple of processors or so) efficiently, not all the processors. The distribution and parallelization is handled by other software, such as message-passing interfaces like MPI. To be honest, Linux is tested on many clusters with this many processors, and it has been customized and hardened for use in large-magnitude clusters. But like I said, it really isn't a kernel thing; it's the other software in the package that controls distribution of processing payloads to the individual nodes.

      Building an operating system from scratch is just a bad idea for something like this. They are not exactly something that can be built in a couple of weeks. Look at all the other OS projects out there besides Linux. Even with a few dozen contributors, a lot have been years in the making, and are nowhere near the level of Linux, or of an OS that could be used in such a fashion. Basically, it would take a very long time to build an OS from scratch that would do all the things necessary and have the stability requirements for such a project.

  • by karb ( 66692 ) on Friday August 10, 2001 @10:46AM (#2123429)
    From wired:

    to eliminate the tyranny of time and space limitations.

    This time and space flaming has got to end. Granted, time and space have a monopoly on time and space, but it is a *benevolent* monopoly, which is ok with every legislative body in the world except the EU. Time and space have prevailed as the primary purveyors of time and space through quality, perseverance, and generous donations to any political party that would take their money. So, lay off, slashdot!

    • by karb ( 66692 )
      "Karb, you've just hit 50 karma despite receiving a -1 commenting on skylarov's predicament. Where are you going next?"

      karb: I'm going to kuro5hin!!

    • This time and space flaming has got to end.

      No kidding. If this keeps up, it could spell the end of innovation as we know it. Next we'll have the EU hauling time and space off to court, saying that they tried to extend their monopolies into other markets through predatory practices and hiding their API's. (They'll be saying things like, "We didn't even KNOW about relativity until the 20th century, for crying out loud! Where were those API's?")

      On the other hand, with a little competition, I might finally have enough time to finish my work and space on my desk to keep all this idiotic paperwork...
  • "Scientists involved in the project said the facility would help researchers understand the origins of the universe, cure cancer, unlock secrets of the brain, predict tornadoes, and save lives in an earthquake" yeah, but can it find me pr0n?
  • $53 million for a cluster to provide that power is dramatically less than it would cost from a vector outfit like Tera, NCR, Fujitsu, SGI, Cray, etc. However, you can get more bang for the buck. I priced building a cluster, with gigabit switches and all that, for 13 teraflops around 8 months ago, at around $20-25 million. Prices on processors have dramatically dropped since then. As mentioned in a previous post, use cheaper processors; Itaniums don't have the price/performance ratio an Athlon 1.4 GHz or an Intel P3 would have. Sometimes using the newest technology isn't always worth it.
    • Custom systems - whether completely novel, or a scale-up of a commercial system - always have very high overheads.

      First, you have a dedicated hardware and software support crew. A production system amortises this over multiple deliveries.

      Second, you are pushing the envelope. Though it looks possible on paper, you don't always know what won't scale up properly in a cutting-edge system.

      Third, educational institutions (U of I) charge large overheads (~50%) for existing buildings/staff.

      The largest systems just don't get built unless the government subsidizes some of the costs. If you are lucky, the contracting company learns new things to help its commercial side.
      • It is true that hardware is only a fraction of the cost. There is staff cost, building cost (it takes quite a few man-hours to build hundreds to thousands of machines), support contracts, etc. I agree. You've still got to admit IBM, or whoever wins the contract, is still making a pretty nice profit; otherwise they would not be in the business. I still wager that you could get the hardware done, pay Scyld (Donald Becker's company) to do the software and support, and do it cheaper than IBM will sell it to you.
  • "Quite a few readers submitted news of a distributed system to be built by four U.S. institutions..."

    Looks like our "Slashdot Distributed Story Submission" (SDSS) is working quite nicely.

  • how long will it take this thing to decode one data block from SETI@home?
  • We REALLY need to find a different term for measuring floating-point operations. Anyone from the country, or who has spent time there, can tell you that a cowflop, sometimes shortened to just flop, is the result after a cow is finished with the grass it ate. I see the term teraflop and frankly I reach for my boots, figuring this is going to be a big one....
  • Can you? (Score:1, Offtopic)

    by Mononoke ( 88668 )
    Imagine a Beow....

    Aww, shit. nevermind.

  • It's going to be obsolete as soon as they get it working, so why go with bleeding edge (expensive) hardware? Why can't they crank it back a bit, use cheap 1Ghz processors, and have 3-4 times as many of them? It seems they could get twice the bang for the buck that way.

    --Mike--

    • It's a $53 million ($US) project. Using about a thousand CPUs at, say, $1000 ($US) each, you have an expense of $1 million. Clearly, the cost of the CPUs is not going to be where the project will be limited. It's the cost of integration. More processors which are cheaper individually will likely have a higher integration cost and therefore be more expensive, not less. The real question is why they chose Itanium, which is really an unproven technology.
      • The current number of procs is 3300. The machines they are using are 16 headed machines and they are installing 1024 machines. That means they are only populating 3 procs in each box. This means that over time they can expand the system to over 52000 procs. Sounds like a good way to go. It will allow expansion with time and money.
    • Because most (if not all) of the applications of this super-cluster will probably be research. Scientific research. Scientific research that requires insanely high-precision numbers. 64-bit processors go a long way toward high precision without using any scary high-precision math libraries. Or the scary high-precision math libraries that you do use can be tweaked for 64-bit processors, thus resulting in faster math. That's the name of the game here. Faster math.

      Beyond that, you really need as high a processing-power to memory-transfer cost ratio as possible. When you are dealing with highly coupled simulations (such as wireless simulations) you pay dearly for cross-processor memory IO.
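The precision point is easy to demonstrate. Here is a generic sketch (not code from this project) that emulates a 32-bit accumulator by rounding through `struct`, alongside a normal 64-bit Python float:

```python
import struct

def to_f32(x):
    """Round a 64-bit Python float to 32-bit precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Add 0.1 one million times; the exact answer is 100,000.
n = 1_000_000
f32_tenth = to_f32(0.1)
acc64 = 0.0
acc32 = 0.0
for _ in range(n):
    acc64 += 0.1
    acc32 = to_f32(acc32 + f32_tenth)

err64 = abs(acc64 - 100_000)
err32 = abs(acc32 - 100_000)
# The 32-bit accumulator drifts far more: once it dwarfs each
# 0.1 increment, part of every increment is rounded away.
```

For long-running simulations that accumulate billions of such terms, that drift is exactly why native 64-bit math matters.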
    • Actually, it would probably cost more per instruction/sec. to use cheaper processors since each one (pair? 8-way?) will need its own motherboard, RAM, etc.
    • If you want to see bleeding edge tech bleed onto your desktop sooner, you should thank the early adopters who overpay for that temporary edge.

      So why complain?

    • Myrinet (on a per-node basis) is actually a more significant cost than the CPUs for many clustering projects. It runs around $2,000 per machine. Furthermore, the balance between fast processors/many processors depends strongly on the problem at hand.

      When you attempt to distribute a problem (a non-trivial problem, one that isn't embarrassingly parallel), you have to strike a balance between the load on a single processor (or single SMP machine) and the overhead associated with message passing. Many research groups who build their own clusters go through extensive analysis of their particular problems to find the appropriate "sweet spot". For this machine, which will no doubt be used for many dissimilar projects, I don't know how they determined how much per-node power they needed. With a $53 million grant, I bet they just went with the simple solution: as much as they can get :).

      My point is just that the assumption that the CPU is the biggest expense is erroneous when dealing with specialized networking equipment like Myrinet, and that trading processor power for numbers isn't always a good bet.
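That balance can be captured in a toy cost model: per-node compute shrinks as nodes are added, while message-passing overhead grows. The constants here are invented purely for illustration; real groups fit such models to measurements of their own codes:

```python
def runtime(nodes, work=1000.0, comm=0.5):
    """Toy model of a distributed job: compute time divides
    across nodes, communication overhead grows with them."""
    return work / nodes + comm * nodes

# Sweep node counts to find the model's "sweet spot".
best = min(range(1, 201), key=runtime)
# Past the sweet spot, adding nodes makes the job *slower*:
# message-passing overhead outgrows the compute savings.
```

With these made-up constants the minimum lands at 45 nodes, near the analytic optimum of sqrt(work/comm) ≈ 44.7; a chattier problem (larger comm) pushes the sweet spot toward fewer, beefier nodes.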
  • Then maybe the government would discover some intelligent life, because they obviously don't have any.
  • OS/software (Score:1, Troll)

    by clinko ( 232501 )
    Check this out: the software they're running [globus.org]
    • Re:OS/software (Score:4, Informative)

      by Anonymous Coward on Friday August 10, 2001 @10:24AM (#2152964)
      That's NOT the OS software they're using; they're using Linux. Globus is NOT an OS. It's an add-on, and one that's been around for years and years now.
      • That's NOT the OS software they're using; they're using Linux. Globus is NOT an OS. It's an add-on, and one that's been around for years and years now.


        You mean the Globe is not an OS? Think about it for a while - you can set your own environment in which you operate and it is a complex system.

        ...and now they will tell me the Globe is only an add-on... What's next? It's not a bug, it's a feature??!!
  • AIX (Score:2, Funny)

    by macdaddy ( 38372 )
    Let's just hope it doesn't run AIX. When you don't understand Unix, you probably run AIX.
    • I assume you've never really used AIX?
      • Actually it's in use where I work and personally I can't stand the damned thing.
    • I used to think AIX was a lame flavor of Unix... until after I'd been the sysadmin for 5 years for a govt organization that runs a mixture of AIX, Solaris, HP-UX, Linux, *BSD, and NT. I used to prefer Solaris, but now AIX is my favorite. It's the most stable by far, and performance is top-notch for the hardware it runs upon. True, it's got its quirks and weirdnesses, but they all do. You just get used to them over time. The AIX LVM/JFS and memory management are the finest of all.
    • And you would prefer what business class OS with an LVM, JFS, on-the-fly kernel reconfiguration, and fantastic (albeit expensive) support?

  • but it probably won't pay their first loan payment on that behemoth. skye
  • I wonder if they would let me run my Illuminati(tm) software. I stayed up all night last night, coding like a maniac on speed, and have come up with something pretty special:
    1. Win the RSA factoring challenge [rsasecurity.com], put the money in a swiss bank account, and feed Illuminati(tm) back the account number.
    2. Use genetic programming to predict the stock market, making billions of dollars from the $500,000 won in the factoring challenge.
    3. Buy and sell people's lives, based on loyalty to myself and Illuminati(tm).
    4. Voila, world domination
    Pinky will probably screw it up, as usual.
  • by Cytlid ( 95255 )
    ... and I will be using this cluster for my distributed.net client...
  • The title on this article is a bit misleading. As the press release [npaci.edu] says, NCSA [uiuc.edu] is just one of the four institutions involved in this project. The others are SDSC [sdsc.edu], Argonne National Laboratory [anl.gov], and Caltech's [caltech.edu] CACR [caltech.edu] (Center for Advanced Computing Research).

    NCSA is certainly an important part of this partnership, but they're neither the only part nor the lead site.

  • so if there's a distributed client app that also allows for file sharing, everyone could download it and we'd all have supercomputers. I saw 40 TB of content on LimeWire yesterday; granted, it was mostly music and not scientific data. But after decoding a music stream and loading a webpage, what do you do with all those extra clock cycles anyhow? How about providing a Globus interface in the major Linux distros, so you could subscribe to the grid along with system updates and support options? Sure, it'd piss off my ISP, but what the hell do I care?
  • Weather forecast (Score:2, Interesting)

    by halftrack ( 454203 )
    I've heard that the algorithms to calculate tomorrow's weather exist, but today's supercomputers take two days to calculate it ("And yesterday's weather was: %s" % calculate_weather()). Will this do it? If so, they'll need two: one for the weather and one for all the stuff they planned to use this for.
  • Is it just me or does it strike you too that NSF is very busy funding the next big iron but not funding initiatives to teach the masses how to program massively parallel systems?

    Every cluster I know of (around 20 systems, 14 sites) is not hurting for cycles; they need programmers to write the code to eat the cycles. There are not enough small 'education' clusters to allow everyone the education & experience.

    Even just $1m of that could be much better spent in education instead of feeding the 0.0001% of computer problems that currently need this class of hardware.

    -- Multics

    • I do have a problem I am looking to solve with a mini-Beowulf. It has to do with real-time music recognition. I've been in the process of setting up a 4-node Beowulf (as cheaply as possible) with one controller PC.

      One could also calc pi on a Beowulf now. So yes, I would like to see a Beowulf programming class in the college courses. Have it as intro, hardware setup, software setup, programming, advanced topics (weather).

    • Interesting points, but you do have to remember that massively parallel systems aren't for the masses anyway, and normal programmers don't wrestle with these "0.0001%" of problems that demand this kind of power. The fact is that those small percentage of problems aren't always trivial theoretical problems that don't have impact on our lives, but are more often things of practical importance to scientists and the military. Nuclear reaction simulations (both weapons and energy), protein folding, DNA sequencing, molecular simulations...all very very intense computing problems that demand powerful computers to produce better and better simulations.

      We need more programmers to program the machines? Maybe. This is an important but niche market, and throwing billions into education so that kids with bachelor's can call themselves super-computer programmers isn't the answer. The systems are already programmed by brilliant people researching these problems, doctorates all around. This isn't work for your average 15 year old 3r33t haXor, you know?
      • Alas, more and more programmers DO have to program highly parallel environments. Since Seymour Cray's untimely departure [www.csc.fi] there are few advances in faster computers at the top end that don't have multiple CPUs in them.

        So since we don't have faster processors (relatively) we will have more and more processors.

        I do not advocate spending Billions on teaching how parallel programming works and how to use PVM and MPI effectively, but I do think it is time that it become a standard theme at the college level CS world. That means that the professors learn how it works and then have access to equipment that allows everyone to have the experience.

        -- Multics

  • Seems to me... fast networking, collaborative computing, peer-to-peer information sharing, autonomous virus communities. We're heading towards a massive parallel global computing system controlled by no single entity.
  • for comparison (Score:4, Informative)

    by Alien54 ( 180860 ) on Friday August 10, 2001 @10:23AM (#2145593) Journal
    For comparison there is the Cosmology Machine [dur.ac.uk] in Britain, which among other things consists of an integrated cluster of 128 UltraSPARC III processors and a 24-processor SunFire, and has a total of 112 gigabytes of RAM and 7 terabytes of data storage. With all of this power it can perform up to 456 billion arithmetic operations in a second (228 billion floating point and 228 billion integer operations).

    This is impressive, but the NCSA machine will blow it out of the water.

    • The machine in Britain would barely rank #48 on top500.org [top500.org], so what's your point?
    • If you want to compare, a better match is what NCSA is already running [uiuc.edu]. 1024 processors, over half a TFLOP sustained, a full TFLOP at peak.
    • Re:for comparison (Score:2, Informative)

      by ajiva ( 156759 )
      Big deal; the article claims it's going to use McKinley-based Itanium processors, which are at least 2 years away from production. Plus they are using 1300 processors, while the one in Britain only has 152. Quite a bit of a difference if you ask me :)
      • Hmmm, take a look at this Slashdot story [slashdot.org], also on today's front page. It looks like somebody just built the first Itanium cluster. That's really impressive if the chip doesn't exist for another two years...
        • Re:for comparison (Score:2, Informative)

          by Anonymous Coward
          McKinley is the second generation Itanium CPU which is at least a year away from production. The SGI cluster is using the first generation Itanium CPU (also known as "Merced") which is actually just a technology demonstration, and not a full-blown product from Intel.
    • the "cosmology machine" is small fry compared to the Cray T3e we have in manchester (www.csar.cfs.ac.uk)

      and that's our old machine...
    • I wonder how much of that power goes wasted into the regular administration of the site, idle time, everyone's web and email traffic, and storing employees' pr0n pix and mp3s, instead of the science it is intended for. It is my experience in a corporate environment that no one ever cleans up disk or mail boxes and they don't consider the impact of running non-essential processes on compute servers.

      Also, what are they doing to protect and backup that much data?

      • Very little of the system time is likely to go to waste. I'd say likely downtime is only a couple of percent, since there's always a long queue waiting to get on and there's a lot of stuff being done at the moment in this area. Or to put it another way - it's not going to waste.

        If you look at the details of the system, it doesn't handle email or web traffic, just physics programs, which will be submitted through a single node which then distributes them out to the 128 processors. So there won't be any user data on the machines, just temp files from the data being run on each processor.

        Backing up data is likely to occur through the huge amount of storage currently being purchased for the UK-GRID, and tape. What is there to protect? Monte Carlo simulations of cosmology experiments? This isn't personal or corporate data; one bogus result is unlikely to throw the experiment off.

        Anyway, this is only one of a few new systems in the UK which are getting announced at the moment, so although they aren't as large as the ones being *talked* about in the States, they're here, now, and working, while it'll take 2 years before the American ones come online.
  • Lemme tell ya, it was exciting to be in the room with the press conference. Well, that's a bit misleading, because many of the institutions participating in the DTF (Distributed Terascale Facility) were holding the press conference over the Grid, via the AccessGrid, with Intel and IBM and their invited press people dialed in over the phone. I'm at ANL, where the AG was born... albeit before I started here. ;)

    Actually, I didn't stick around for much of the press conference, 'cuz I had WORK to do! Many press releases on the DTF make it sound like it's one cluster in one lab's basement, and that ain't right. As important as the distributed nature of the project is, look at what each institution is contributing - this isn't a homogeneous wide-area cluster. I don't have a big part in it, and my internship is almost over, but I'd like to think that what I've been working on for over a year may become well-known soon. So yeah, while the press conference was going on I was in the next room working on enhancing a visualization library to work on tiled displays (which has been news on /. recently). Too bad few managed to find our work here - we gots neet stuph.

    Now an obligatory Oh, puh-leeze! RC-5 cracking? Quake? We've already seen Quake3 in the CAVE [visbox.com]. Listening to conversations at the reception, there are much cooler things coming..

    Cryptomancer, working the magic on code

  • Yes, but... (Score:2, Funny)

    by baptiste ( 256004 )
    will it be running the NCSA server software or will they finally switch to Apache? ;) ;)
  • Because that's some powerful encryption breaking power... if you know what I mean...
  • Someone out there must be able to do their maths and figure out how fast you can crack 128-bit encryption at 13 teraflops? (And you know the US govmt. has a centre like this already, right?) Matt
    • 2**128 = 3.4e38
      13 teraflops = 1e13 instructions per second

      Assume 1 trial decryption per instruction, which is of course unrealistically low.

      You still need 3.4e25 seconds, or about 1e18 years, to search that keyspace.

      Sorry, no cigar...
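The estimate above, reproduced as a script. It adopts the parent's (wildly optimistic) assumption of one trial decryption per instruction at the machine's full 13 teraflops:

```python
keyspace = 2 ** 128          # possible 128-bit keys: ~3.4e38
rate = 13e12                 # 13 teraflops, 1 trial per instruction

seconds = keyspace / rate
years = seconds / (365.25 * 24 * 3600)
# Roughly 8e17 years to exhaust the keyspace; even finding the
# key at the halfway mark on average hardly helps.
```

So a brute-force attack on a 128-bit key is out of reach for this machine by some seventeen orders of magnitude, before even charging more than one instruction per trial.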
  • by pgpckt ( 312866 ) on Friday August 10, 2001 @10:26AM (#2156812) Homepage Journal
    The hope is that -- as an open source network using Linux and standard IBM servers -- it will be easily expandable and able to follow a similar trajectory to the Internet.

    "The only way to do this project is open source," project director Stevens said.


    Interesting that researchers know that open source projects are the only way they can control all the variables. After all, if you don't control the OS, you can't be sure some little bug in the code is screwing with your data. Universities have long understood this principle, which is why Unix is so popular. Now our millions of taxpayer dollars will be spent on research rather than licensing costs, plus the research is controlled, scalable, and open to peer review. Always nice to see professionals understand the benefits of open source that no closed-source movement could possibly replicate.
    • Always nice to see professionals understand the benefits of open source that no closed source movement could possibly replicate.

      While I am in broad agreement, do not take the announcement of this machine as another blast in the direction of Micro$haft, or another nail in their corporate coffin. If a closed-source system is built correctly, and presents consistent and well-documented interfaces to the outside world, then it can be just as effective.

      Business didn't employ Unix because they could get the source code; they bought it because it followed interface standards, and it was thus easier to get your Unix boxes to talk to your S390s and your Unisys 2200s and your VAXes etc etc etc

      If Microsoft had offered common external interfaces in the first release of NT, and not those bloated, buggy proprietary standards years later, they might actually have managed to produce a usable OS that enterprises could then integrate into their existing data centres, rather than boxes that perform tasks in independent installations.
      • "If Microsoft had offered common external interfaces in the first release of NT, and not those bloated buggy propriety standards years later, they might actually have managed to produce a useable OS that enterprises could then integrate into their existing data centres, rather than boxes that perform tasks in independant installations."

        Ah, but then there would be no incentive in the future to replace those machines. Microsoft, as the subscription based licenses show, cannot merely sell a product and live off the income. That's not how you maximize profit. You keep them paying, and make sure they can't pay anyone else. That's how a monopoly works - you don't play nice with anyone else.
    • While open source is useful here, you shouldn't use this argument as a justification for the GPL; the BSD license would more than suffice for these purposes.

      The GPL seriously undermines the commercial viability of software.

    • Yes - using Linux is all very fine and well, but it has some nasty surprises. For example, on Red Hat 6, upgrading to the next version of Sun's JDK (in this case 1.3) requires an upgrade to a new version of certain libraries and the recompiling of most of the software on the system.

      While this is fine on a home hobbyist machine it is not very good if you have multiple users and especially not if you are selling computer time to companies. And why do you need Java 1.3 you ask? You need it because the Globus CoG toolkit [globus.org] needs it.
