IBM Wants CPU Time To Be A Metered Utility

kwertii writes "IBM CEO Samuel J. Palmisano announced a sweeping new business strategy yesterday, pledging $10,000,000,000 towards redefining computing as a metered utility. Corporate customers would have on-demand access to supercomputer-level resources, and would pay only for time actually used. The $10 billion is slated for acquisitions and research to put the supporting infrastructure in place. Will this model revolutionize the way companies compute, or is this plan doomed to be another PCjr?"
  • by Perianwyr Stormcrow ( 157913 ) on Thursday October 31, 2002 @03:54PM (#4573314) Homepage
    Ballpoint pens proclaimed "the wave of the future".
  • by ellisDtrails ( 583304 ) on Thursday October 31, 2002 @03:54PM (#4573320) Homepage
    It will be tough getting quarters and dimes in the floppy slot. Or is that a cupholder?
  • Revolutionize? (Score:5, Insightful)

    by Mike Markley ( 9536 ) <madhack&madhack,com> on Thursday October 31, 2002 @03:55PM (#4573329)
    This won't revolutionize anything... I remember this when it was called timesharing on mainframes. The revolution was moving away from that model...
    • by Anonymous Coward on Thursday October 31, 2002 @04:04PM (#4573458)
      See, at that time ubiquitous networking was not a way of life. Also, software engineering was not as mature as it is now WRT virtual machines, encapsulation, OO design, etc.

      Of course, all those technologies did exist then, but they can be counted on to be everywhere now. The reason mainframe timesharing gave way to PCs is that PCs could provide a more flexible and convenient sandbox to compute in, rather than the cumbersome interface of working with the mainframe in the company basement.

      These days returning to the idea of computing power as a fluid resource is a good idea because the landscape has changed and the world might actually be better prepared to accept the tradeoffs since the tradeoffs are much less significant now.
      • by n9hmg ( 548792 ) <n9hmg@@@hotmail...com> on Thursday October 31, 2002 @05:45PM (#4574324) Homepage
        I've been a hired gun for IGS (IBM Global Services, the outside contracting arm). I'm probably not in the minority here. You know how they are with hourly billing stuff. The way to get ahead in IGS is to maximize your billable hours. Their greatest hotshot project manager had the internal nickname "the assassin", because she would get into a project and halt progress while dramatically increasing billable hours, sucking all the cash she could out of the customer, who got out of the contract as soon as they could, and she'd move on to a new one. We see the same sort of short-term thinking often in business. That Cringely story a few days ago (I can't find it... /. search is broken) gave a really nice analysis.
        This kind of utility is going to allow the "service providers" to obfuscate the costs of the service, much the way fiber providers keep their "dark" lines secret, for negotiation purposes. Also, they will require some sort of compliance with their systems, allowing them to dictate what sort of software runs on their system, thus giving them the opportunity to insert inefficiencies there, too. Unless they can arrange to lock people into this model somehow, it'll never work. Nobody wants to let a vendor control both the rate and volume purchased. If they try to push customers into this model, maybe by restricting the availability of their hardware to outside customers, most will just migrate to another platform.
    • please, please (Score:4, Interesting)

      by EEgopher ( 527984 ) on Thursday October 31, 2002 @04:05PM (#4573463) Homepage
      don't ever let this happen. The car design scenario creeped me out. I work for an automotive supplier, and we ALREADY have to wait in line to use test equipment, testing chambers, etc. I can only imagine the local supercomputing hub monopolizing the speedy machine, creating more lines to wait in, and IBM bringing its supercomputer prices out of reach for anyone but their own subsidiaries to purchase. Could be a disaster, indeed.
      • Re:please, please (Score:5, Informative)

        by Ian Wolf ( 171633 ) on Thursday October 31, 2002 @04:21PM (#4573642) Homepage
        "We view this as Palmisano's coming-out party," said Thomas Bittman, an analyst at Gartner Research. "The industry will be measuring IBM against this as a benchmark for years."

        Well, here is Gartner Group, missing the boat again. SimUtility [simutility.com] has been doing this for years now, but because IBM is getting into the market, it's news?

        Timesharing of computers is a very valid, and far from dead, market for computing. There are a lot of companies that do not want to buy their own supercomputers, which would likely sit unused the majority of the time. As for the example of a car manufacturer doing testing on a new model, this already happens, and plenty of other organizations do the same:

        - America's Cup boat designers
        - Racing teams
        - Natural Resource Explorers
        - scientific organizations
        - and many many more

        We're not exactly talking about a new or even revived paradigm. Timesharing never died.
    • Re:Revolutionize? (Score:3, Insightful)

      by Xentax ( 201517 )
      The difference, I'm guessing, is that they're trying to make it easier/cheaper to get access to this sort of power if you only have a limited or periodic need for it.

      I'm not sure how many companies out there only need "a little" time on a "supercomputer" though...

      Xentax
      • Testing how a new drug interacts with a nearly complete biological model of a human cell? Or possibly entire tissues or organs?
        • Re:Revolutionize? (Score:3, Insightful)

          by Xentax ( 201517 )
          I guess there are companies that have their sights set on a *single* drug (like ImClone). But most such companies are always researching *something*, and usually several things at once, so they almost always need such capability (companies like Glaxo).

          Xentax
        • Re:Revolutionize? (Score:3, Insightful)

          by jaoswald ( 63789 )
          His point was that this kind of problem isn't going to take "a little time." You are going to want supercomputer time in rather large chunks, not little bits at a time.

          If you had a problem that a supercomputer could solve in just a few minutes, you could probably use a much cheaper computer for a few hours/days instead. If this is an infrequent problem, just use the much cheaper computer full time, and avoid paying any IBM bill.

          The only advantage of the supercomputer would be the turnaround time. In the end, you get what you pay for.
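
          A rough back-of-the-envelope sketch of that tradeoff, with entirely made-up prices and a hypothetical workload (nothing below is an actual IBM rate):

# Toy comparison: rent a much faster machine briefly vs. run a cheap box longer.
# Every number here is invented, purely to illustrate the parent's point.

def rental_cost(cpu_hours_on_cheap_box, speedup, rate_per_hour):
    """Cost of renting a machine `speedup` times faster than the cheap box."""
    return (cpu_hours_on_cheap_box / speedup) * rate_per_hour

def owned_cost(cpu_hours_on_cheap_box, box_price, useful_life_hours):
    """Amortized cost of running the job on a box you own outright."""
    return box_price * (cpu_hours_on_cheap_box / useful_life_hours)

job = 200.0  # hypothetical job: 200 CPU-hours on a commodity box
print(rental_cost(job, speedup=50, rate_per_hour=400.0))             # 1600.0, done in ~4 hours
print(owned_cost(job, box_price=2000.0, useful_life_hours=20000.0))  # 20.0, done in ~8 days

          Under assumptions like these, the rented time only wins when the eight days of turnaround cost you more than the price difference, which is exactly the parent's point.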
    • Re:Revolutionize? (Score:4, Insightful)

      by WatertonMan ( 550706 ) on Thursday October 31, 2002 @04:10PM (#4573528)
      While it is hardly a revolution certain applications might benefit from this approach. Basically you get processing power on demand. Clearly this isn't going to be used by the secretary, accountant or those who use their computers only for spreadsheets and word processing.

      Those who, for example, might need a rendering farm but only for a short time might benefit. Consider that you only pay for the processing you need. If IBM comes up with some clustering software that is good and distributed this might work. However it is clearly aimed at the markets that are already buying very large IBM computers. It won't help, for instance, for the typical internet server.

      Having said that, though, who are these customers? The main rendering farms are being used fairly consistently. So for them, having a bunch of Suns or equivalent systems is more efficient. They can then just add computers. So who is it that would need this sort of thing?

      And if they did try to foist it on the general public it would obviously fail immediately. After all, the heaviest users of processing time on general computers are games. And most people aren't quite willing to pay for the processing the latest Halo or equivalent might use. (Not to mention the fact that Dell, Apple, and Compaq wouldn't follow IBM)

    • A CRAY supercomputer in 1980 had the equivalent processing power of a 500MHz processor. By the time IBM gets people to switch to this "pay for cycles" method, computers will have surpassed it in the cost/performance arena.

      It's not always how fast you go, it's how efficiently you get there. I could fly on the shuttle from Kennedy Space Center to Edwards (assuming NASA would lighten up the travel-for-free restrictions on Italians in Oklahoma! lol), but as fast as the shuttle goes, I could drive there faster (although not nearly as stylishly).
    • Y'know, that's what I thought, too. But this might not really be such a bad move, when one considers, for example, the recently reported standardization of the U.S. military. From the article:
      IBM, he said, hoped to fashion a computing grid that would allow services to be shifted from company to company as they are needed. For instance, a car company might need the computing power of a supercomputer for a short period as it designs a new model but then have little need for that added horsepower once production begins. Other services could be delivered in much the same way, assuming IBM can pull together the networks, computers and software needed to manage and automate the chore. Palmisano said the industry would first need to embrace greater standardization.
      I haven't ever worked in a place where the need for computing power has varied wildly over time (which is the only scenario for which this model seems to make any sense) so I don't know how common this market is or how valuable the service will be. But the part that creeps me out is that last sentence, talking about greater standardization. As a developer, I'm in favor of standard data interchange formats, but somehow I suspect that what this really means is a standardization on a single suite of software tools, and that's more along Microsoft's line of thinking.
      • Re:Revolutionize? (Score:3, Insightful)

        by rgmoore ( 133276 )
        I haven't ever worked in a place where the need for computing power has varied wildly over time (which is the only scenario for which this model seems to make any sense) so I don't know how common this market is or how valuable the service will be.

        It might be more common than you think. In my work, for instance, we have occasional spurts where we generate a large amount of scientific data that needs to be processed, followed by long periods where we don't. We're limited to running on the fastest box we can reasonably afford, but it might be cheaper and faster to buy just the clock cycles we need. One thing that's unclear, though, is whether our processed data would stay on our own machine or would sit on IBM's farm. We'd definitely need to keep it on our own machine.

        The other thing to consider is that it's possible that there are lots of applications where computer use might vary wildly from time to time, but nobody is thinking about them because they're uneconomical. Most places can't afford to have a supercomputer sitting around idle 95+% of the time, so instead they buy a machine that can process all of their data without the long idle times. The result is that there's a long lag between when the processing starts and when it finishes. If a system were available where they could buy the power to process that data rapidly when needed, though, it might make more sense to do it that way. They might very well wind up with about the same total cost but much faster results.
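
        As a small worked example of that idle-time argument (all figures invented for illustration):

# Effective cost per *useful* CPU-hour when a big machine you own sits mostly idle,
# versus a hypothetical metered rate where you pay only for the hours you use.

machine_price = 2_000_000.0      # invented purchase price
lifetime_hours = 5 * 365 * 24    # five-year service life
utilization = 0.05               # busy only 5% of the time

owned_cost_per_used_hour = machine_price / (lifetime_hours * utilization)
metered_rate_per_hour = 500.0    # invented metered price

print(round(owned_cost_per_used_hour, 2))  # ~913.24 per busy hour on the owned machine
print(metered_rate_per_hour)               # renting wins whenever its rate stays below that

        Of course the comparison flips as soon as the machine stays busy, which is why how bursty the workload really is matters so much.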

    • Re:Revolutionize? (Score:2, Insightful)

      by Cyberia ( 70947 )
      Hmm...
      sounds like when ISDN first came out... remember when, for the first time, it didn't disconnect, and you were billed by the hour past your included x hours? How many people got bills for 100's if not 1000's of dollars? I wonder how long it would take for IBM to recoup its costs after a few locked-up applications hang a CPU thread? Not long if you run M$-type applications.

      ^-Alt-Del (Werd Not responding, but still billing)

    • Re:Revolutionize? (Score:5, Insightful)

      by scoove ( 71173 ) on Thursday October 31, 2002 @04:24PM (#4573667)
      This won't revolutionize anything...

      No, in fact, it may be a good indication the end is near for IBM, and that the past decade of "reinvention" was only an anomaly. Clayton Christensen's Innovator's Dilemma has only been delayed.

      One of the things I like about Christensen's model is that it illustrates the fallacy of normalizing your product on the top 5% of customers. Lucent, Digital, Wang, Nortel, etc. all fell prey to this issue. They listened to their very best-paying customers and shifted more and more of their product design to please them.

      Think about which IBM customers need supercomputer timesharing access. Probably their top 10% - or less. Can these folks already access timesharing? Certainly. So what's the hype about here?

      It'd be one thing if it were a minor effort with big PR fanfare, sending a polite message to IBM's favorite customers that they think about them frequently.

      But designating this kind of money and strategic focus? Especially when the focus appears to be a large, centralized and proprietary model (which flies in the face of low-cost, decentralized distributive models e.g. distributed.net, SETI@Home, etc.)?

      Time to prepare for the fall... hey, maybe there will be some nice Enron-quality assets at the auction.

      *scoove*
    • Back to the future (Score:3, Insightful)

      by dcavanaugh ( 248349 )
      I agree; we won't see the revival of timesharing anytime soon.

      The PC revolution was based on the desire to replace dumb terminals with something that could do color graphics, fancy fonts, and WYSIWYG word processing. This evolved into a more user-friendly interface for data manipulation.

      For data-intensive applications, timeshare computing was economical, and it worked over low speed connections. Back in the 80's, it didn't take much data to qualify as "data intensive", either. I seem to remember something about a 32MB hard disk limit, for those PC users lucky enough to have hard drives. In general, data was never shared with anyone unless a mainframe was involved. File servers eventually brought data sharing to the PC, but even then, record locking was a joke compared to mainframe capabilities. You could run quite a few dumb terminals over a 9600 bps line, but that is inadequate for even one web surfer today.

      OK, what has changed? Is there some new generation of CPU-intensive applications that requires far more CPU power than desktop computers have? I think this is yet another case of a solution in search of a problem. The NetPC was supposed to run apps without the need for a hard disk. The concept died when people discovered that hard disks were cheap and broadband Internet was not living up to the advertising claims. Along the same lines, who needs supercomputer resources when none of our applications are really CPU-bound in the first place? Aside from specialized stuff like ray tracing, animation, and possibly busting DRM algorithms, I don't know how timesharing would become a mainstream product.
    • Re:Revolutionize? (Score:5, Insightful)

      by snookerdoodle ( 123851 ) on Thursday October 31, 2002 @04:52PM (#4573887)
      I agree. This whole business is so crazy: Timesharing with Service Bureaus that are now called Application Service Providers (ASP's). IBM needs to come up with a Truly Kewl Name for it if they want it to take off.

      I guess there'll always be some tensions here that aren't really technology per se: In this case, it's in-house vs outsource.

      Joe does an analysis that shows if he outsources all of IT, it will save $X,XXX,XXX, so they do it. Joe gets promoted. Three years pass. Sam does an analysis that shows if he brings all IT functions in-house, it will save $X,XXX,XXX, so they do it and he gets promoted...

      IBM and Microsoft make money no matter what. Kinda like lawyers. Oh, I forgot they ARE lawyers.

      'Sorry for the cynicism. ;-(

      Mark
    • by Usquebaugh ( 230216 ) on Thursday October 31, 2002 @05:04PM (#4574012)
      The revolution will be in revenue.

      Currently IBM's big customers buy a new machine every four years or so, and they pay a yearly maintenance bill. IBM has trouble predicting its revenue quarter to quarter; in a downturn everyone stops capital expenditure and IBM mainframe sales plummet.

      Under this model everyone should pay less but they'll pay every month like clockwork.

      Computer Associates has a similar scheme for software. You rent your software on a monthly basis.

      On a technical level I'm all for it. I have a suite at my current site that is run yearly and takes forever. Currently IBM has a big box sitting here and we just sip from it, until year end when we max it out for like two weeks. Let me rent time on a huge box and I'll be happy. Guarantee my data and response time and I'll be ecstatic.
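
      And the revenue-smoothing point above is easy to see with invented numbers: the same total customer spend looks completely different to IBM's forecasts depending on whether it arrives as a lump every four years or as a metered monthly bill.

# The same hypothetical spend over four years, booked two different ways.
# All figures are made up for illustration.

total_spend = 4_800_000.0
capex_model = [total_spend] + [0.0] * 47   # one big purchase, then nothing for 47 months
metered_model = [total_spend / 48] * 48    # steady monthly payments

def by_quarter(monthly_revenue):
    return [sum(monthly_revenue[i:i + 3]) for i in range(0, 48, 3)]

print(by_quarter(capex_model)[:4])    # [4800000.0, 0.0, 0.0, 0.0] -- lumpy and hard to predict
print(by_quarter(metered_model)[:4])  # [300000.0, 300000.0, 300000.0, 300000.0] -- clockwork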
  • Reminds me of .... (Score:2, Insightful)

    by Tink2000 ( 524407 )
    "Timesharing" back in the early days of computing. I always assumed too that this was the bank's way of justifying the abominable practice of charging ATM fees.
  • by elliotj ( 519297 ) <slashdot&elliotjohnson,com> on Thursday October 31, 2002 @03:56PM (#4573348) Homepage
    I really think IBM may be onto something here. Timesharing could totally change the way we use computers.

    Just imagine the office of the future. Instead of a computer on every desk you could have just one computer per department. And that computer would just dial-up to one of these IBM supercomputers.

    I think this could be big.
    • It's not timesharing (Score:5, Interesting)

      by scenic ( 4226 ) <sujal@s u j a l .net> on Thursday October 31, 2002 @04:07PM (#4573484) Homepage Journal
      RTFA.

      Seriously, beyond reading the article, I've been at several talks given by engineers from IBM as well as some marketing oriented presentations on this. They're focused directly on supercomputer type applications. In other words, the type of computing that requires expensive data centers, maintenance, and various processes around them. They want you to have computers on your desk so you can access the remote computing power of The Grid (TM).

      One example from the article involves car design/testing where resources are needed for a relatively short period of time, with lots of down time. Just keeping a cluster running is an expensive proposition, so the theory is that companies with limited needs for massive compute power can just rent it.

      It makes sense and sounds like a good idea, but I'm not sure how big that market is. For example, is it worth $10 billion? I don't work in an industry with this type of requirement, but maybe some folks that work in high end research, design, or engineering can share how much idle time their big machines rack up each year.

      The other problem is that companies may find benefits in having extra compute power around all the time. I mean, when I was in college, students often found a way to make use of all the idle time on machines (now whether that was research or just playing quake might be debatable :) ). Having a system in house means that instead of running 100 permutations on a design, you can run 10000 (or whatever), right?

      Sujal

  • PCjr was doomed? (Score:2, Interesting)

    by phraktyl ( 92649 )
    My first computer was a PCjr, and it served us very well. Still have it somewhere. CGA graphics, 3-note polyphonic sound, and 128K of RAM. It doesn't get any better than that, Baby!
    • Yep, my first computer was a PCjr too. Great little machine. Who cares if it didn't sell that well, it was just a smaller, cheaper PC, and could run IBM stuff fine.

      And it even had a cartridge slot!
    • Re:PCjr was doomed? (Score:3, Informative)

      by nucal ( 561664 )
      North America's Last IBM PCjr Computer Club [efn.org]

      We were founded in 1985, and ceased our active role in 2001. The Eugene PCjr Club was the last club left in the nation that was devoted solely to the IBM PCjr Computer. We were a member-owned, not for profit, support group, and supported all PCjr users no matter where they lived for over sixteen years. The Club was organized for educational and recreational purposes only. We no longer hold meetings on a regular schedule, but we do still maintain a fairly large library of public domain and user-supported software and still have some PCjr parts available.

  • More Info Here... (Score:3, Interesting)

    by GT_Alias ( 551463 ) on Thursday October 31, 2002 @03:57PM (#4573358)
    USA Today Article [usatoday.com]

    Wow...what a security nightmare....

    • I'm sure businesses are just lining up to send their proprietary data like financials off to a public compute farm. Most business databases don't even need supercomputer CPU power; it's more about storage and I/O. And if you're talking about supercomputing clusters for things like science and engineering simulations, the price/performance ratio of x86 beats the pants off IBM PowerPC.
  • Revolution.... Mosix (Score:5, Interesting)

    by m0rph3us0 ( 549631 ) on Thursday October 31, 2002 @03:58PM (#4573370)
    It will only be a revolution until Linux becomes mainstream on the desktop and every computer on the corporate LAN is part of a cluster: when users log off, the computer re-joins the cluster. Companies should look at what they already have before shelling out more money.
  • As long as IBM doesn't lower its processing rates in step with Moore's law, even as the processing keeps getting cheaper on their end, there could be quite a boost in profits compared with their current business model of simply selling the hardware.

    But then again, one of the reasons that Enron went down is that they quit selling real, hard, physical commodities and instead went directly to a more ethereal model of paper sales and transactions.
  • Computer companies have traditionally veered away from accepting liability for failures of their products. For an example, look towards Mordor/Redmond, etc.

    So will this move the ball towards corporate responsibility in this area?

    I am certain that a lot of companies will try to avoid it if at all possible. Of course, this would be controversial, especially re: open source, etc. but it is not the most common practice now.

  • This isnt new (Score:4, Insightful)

    by nurb432 ( 527695 ) on Thursday October 31, 2002 @04:00PM (#4573406) Homepage Journal
    In the mainframe world cpu cycles are already a potential billable transaction..

    So the concept is old and crusty..
  • Is this a hint that the industry has finally figured out how to get the bugs out of software?
  • by Jaguar777 ( 189036 ) on Thursday October 31, 2002 @04:01PM (#4573410) Journal
    Samuel J. Palmisano announced a sweeping new business strategy yesterday, pledging $10,000,000,000

    I think Samuel has been watching Austin Powers way too much.
  • Doomed! (Score:4, Interesting)

    by Bilestoad ( 60385 ) on Thursday October 31, 2002 @04:01PM (#4573415)
    If the huge false start that was Application Service Providers showed anything it is that corporate customers don't trust computing resources that are outside their control. It doesn't matter if IBM can provide a better service or a more reliable one, it just doesn't feel that way - and the IT guys will never report favorably on something that will put them out of a job.

    It's PCjr, it's Gavilan, it's all kinds of failures. And $10,000,000,000!
  • by f97tosc ( 578893 ) on Thursday October 31, 2002 @04:02PM (#4573424)
    for companies and institutions that use a lot of heavy computation.

    It takes a lot of time, space and know-how to own and maintain big-@ss computers. With broadband connections being commonplace, you could run your own program remotely, and let a specialist (like IBM) handle all that stuff. And of course, there is value unlocked by having multiple users share common resources.

    Of course, the vast majority of companies and institutions (not to mention individuals) use their machines mostly for word processing and surfing the net - and thus they will have little use for this kind of service.

    Tor
  • by ChuckMaster ( 595275 ) on Thursday October 31, 2002 @04:02PM (#4573432)
    ...in an age where processors are dirt cheap. I mean really, if I saw a P2 400 chip and a quarter lying side-by-side on a street corner, I'd pick up the quarter.
  • To win back that initial 10 BILLION [pinky finger to lips] investment? $1 per GHz?
    $.01 per GHz?
  • by Anonymous Coward

    It took the rest of the computing world YEARS to match the color & sound of that baby. What, you don't remember CGA and speaker music? Tisk.
  • This reminds me of how scientific supercomputers/clusters are leased out to various researchers for computing time. It works very well for that target audience, or so I am told. With a well formulated design and clever marketing IBM could find another audience for such "metered computing". I can't think of any other industry outside of the scientific community that would be looking for such a solution though. In the long run, wouldn't it be cheaper just to hire a consulting firm to build you a cluster?
    • Re:Clever, perhaps (Score:2, Informative)

      by jonatha ( 204526 )
      In the long run, wouldn't it be cheaper just to hire a consulting firm to build you a cluster?

      IBM does that, and they're not making the kind of money they'd like to at it recently. Neither is anybody else (e.g., EDS).

      This appears to be IBM's bid to claim a larger share of a shrinking IT pie.
  • ...but, as the article points out, only for corporate/supercomputer types of situations.

    After all, the PC revolution demonstrated that individual users want unrestricted computer usage on their own terms, and were/are willing to pay a fairly generous amount for it.

    I only see this project working out as long as companies see it as cheaper than building their own solutions. Linux-based clusters can provide a fairly low-cost solution for a lot of high-end computing needs (like rendering tanks) -- that's what IBM will be competing with.

    I think it'll boil down to how greedy IBM gets on pricing. If it's too pricey, companies with a fairly regular need for lots of computing power will deploy their own internal computing clusters -- which is ironic, considering that IBM will probably stay very interested in supplying such solutions. It sounds like they're just trying to play all sides of the game: Sell the big/pricey hardware, sell time on the big pricey hardware, sell the lower-cost alternatives -- or at least the contract to deploy and maintain them.

    Xentax
  • "Will this model revolutionize the way companies compute, or is this plan doomed to be another PCjr?"

    yes. Doomed to be another PCjr. People want expensive goods that they can brag about. Plus, let's see you game on it. Personally, metered utilities are bad enough on their own without extending into my computer.
  • by twfry ( 266215 ) on Thursday October 31, 2002 @04:06PM (#4573469)
    This is a great concept. If you guys actually read the (many) articles on Sam's speech, you'd see it's nothing like timesharing either.

    The concept IBM is going for is to treat IT as another utility. Instead of some small company having to keep an expensive IT staff and maintain their own computers/network/storage, IBM says that it will do this for you. IBM will essentially replace the IT department and let an organization concentrate on running its own business.

    The cost savings of such a model (if successful) are quite substantial and will save everyone money in the end.

    I think IBM is on the right track with this and they are the only company really positioned to do so.

    • This is a great concept. If you guys actually read the (many) articles on Sam's speech, you'd see it's nothing like timesharing either.


      No, it's an Application Service Provider, the next step in outsourcing. The idea wasn't all that popular during the dotcom craze; is it any better now?

    • Didn't companies try that towards the end of the dot com boom? I seem to recall that many of those internet applications were supposed to move IT responsibilities from the department to the service provider. How many of them survived?

      The problem is that most reasonably sized departments need an IT staff anyway. Having them run a mail server or the like isn't that big a deal. While some things can be efficient to subcontract out (e.g. your web server), often it is easier to have it on site.

      There are exceptions. But I think that only a few IT functions can really reasonably be contracted out. I think IBM's marketing strategy will work - but only for a small niche.

    • Unfortunately, it's not a good idea nor is it going to happen. In fact, with the costs of IT workers so low right now, I have seen evidence of people moving away from the ASP model. Frankly, I believe there will always be a mix of outsourced IT development, in-house maintenance and development, and Application Service Providers (ASPs) who will fit in for appropriately commoditized applications.


      The real world has a huge diversity of applications - most enterprise applications can't just be outsourced for maintenance, ongoing development and so forth, unless it's to the people who developed them in the first place. Exodus and the many colocation facilities of the late 90s and early 00s wanted to offer services sort of like this, but it just doesn't work - they don't have the talent in shop to do it, and can't learn everybody else's apps.


      If by "IT department" you mean IBM will operate databases, Apache web servers and J2EE app servers and other commodity applications in their own datacenters, then I do believe it, but again that is what a lot of high-service colos were doing several years back (many of whom went under). The economies of scale aren't there - the only people who would think they are are those who think of "IT" as some mythical blob of computer operators, and don't realize the mix of trained sysadmins, developers, and so-on that make up "IT".


      And the ASP model - well, the problem there is that though the company that developed an app is well suited to actually host and operate the app, if a corporation adopts that model, then their apps will be hosted and operated all over hell and high water. I mean, this is fundamentally the web services model, and it's nice for a lot of things, but I don't think anybody believes it is going to make corporate IT departments go away and allow the centralization of all computing work into a big IBM datacenter. I'll believe that when I see it.

  • Hooray IBM! (Score:2, Interesting)

    by GraZZ ( 9716 )
    If this isn't an idea for a killer app, I don't know what is.

    With IBM's continuing support of Linux in the commercial and high end server space, I have no doubt that this will be a GNU/Linux friendly project, if not composed entirely of GNU/Linux software.

    And just imagine the possibilities for breaking the MS monopoly. I can just imagine companies with hundreds of cheap, dumb, never-needing-to-be-upgraded X terminals connected to this computing "utility" for all their office/CAD/research/calculation/accounting/etc needs.

    Why not combine your computing "utility" bill with your software "utility" bill? IBM's supercomputers could always have the latest versions of Sun/Open/IBM/etc office suites. It would be the natural extension of the software subscription model.

    This project is going to make MS quake in its boots.
  • Nowadays the need for computing is everywhere. For instance, where I'm studying, almost every research group has its own Linux cluster. The use of this hardware comes in bursts--often it sits idle, but when it is needed, it turns out to be under-powered. Besides, the maintenance of these machines costs too much manpower and money. After a couple of years/projects, these machines go out of date anyhow. Of course any given lab could always negotiate the use of supercomputers at large research centers, but IMHO having the infrastructure in place will be very useful.

  • They'll sell you a gigantic hard drive array and only bill you for the space you actually use. If you need more space you call your friendly salesperson and say "I need more space" and then he'll say "Then move to texas!" and... oh wait... Anyway he'll give you a disk that you stick in the machine and it'll turn on the extra drives. It's a little bit strange but the big customers seem to like it.

    Likewise you can get a machine with a big ol' batch of CPUs, most of them disabled. Over, say, the Christmas rush you call your salesperson and have the other CPUs turned on for a month. Again: strange, but the corporate customers seem to like it.

    I doubt it'd affect Joe Average Desktop user all that much. Your average desktop has more processing power than he'll ever need and is already dirt cheap. It's only when you start talking machines worth millions of dollars that this sort of thing makes sense. The same people who go for this stuff pay out tens of thousands of dollars a month for support costs and they get some very good value for their money.

    Many of you youngsters might not be all that hip to mainframe culture or mentality, but it's a pretty good deal and those machines are still amazingly fast. A lot of shops haven't been able to get rid of their big iron because PC clusters just couldn't deliver as promised. Our VM box back at school routinely had 5000 users on it 13 years ago, and that machine never even hiccupped.

  • by Pig Hogger ( 10379 ) <pig.hogger@g[ ]l.com ['mai' in gap]> on Thursday October 31, 2002 @04:07PM (#4573489) Journal
    This must be a Hallowe'en story about the ghost of Thomas Watson sr....

    The last 35 years of development in computers were precisely about moving away from the "metered service" model which made IBM's fortune.

    One will recall that IBM's data-processing customers since the 1920's were charged by units of information stored/processed, by way of forcing customers to buy Hollerith (punch) cards solely from IBM and run them in rented machines whose rental price was directly proportional to their throughput (a card reader that processed 600 cards per minute cost twice as much as one that processed 300, yet the only difference was the size of the pulley off the main motor - and you could upgrade by having an IBM tech come and change the pulley for a bigger one).

    So is it that the ghost of Thomas Watson sr has made a comeback to IBM's board of directors????

    • the only difference was the size of the pulley off the main motor - and you could upgrade by having an IBM tech that came and changed the pulley for a bigger one

      Uh... They still do that. Almost all of their z- and iSeries boxes other than the bottom-of-the-line models come equipped with multiple CPUs that can be unlocked in software. It's an easy way to do an upgrade - send IBM a check and they call your computer and unlock another few MIPS. No downtime, either. Actually, I'd be surprised if Sun didn't do something similar for its large clusters.

  • by kriston ( 7886 )
    We had metered cpu usage at college. It was a constant, annoying nightmare. Though the "money" was supposedly "fake" to students, you had to beg the admin assistants in the CS department to get more when your account ran low. The administrators of the Computer Center claimed it was actual money charged to each department. The school also gave out free accounts to students with small money allocations in them which gamers borrowed and stole to play GalTrader on the VAX.

    I thought it all went away until I started working for IBM. Every time you logged out of the mainframe, the computer told you how much money your session had cost the company. That turned out to be real money that was charged to the department you worked for. We eventually reverted to using X terminals connected to massive, rack-sized RS/6000 machines instead of the mainframes after that.

    Kris

  • by Servo ( 9177 ) <dstringf@noSPam.tutanota.com> on Thursday October 31, 2002 @04:07PM (#4573493) Journal
    If my history books and gathered information are correct, that was a business model used with early computers. A company would lease CPU time to users, generally because the end users couldn't afford the massive costs involved in purchasing and maintaining them.

    Now, I'm relatively young (mid 20's), but I recall people not even a half generation older than I telling stories about getting in trouble for running up large bills on their school's timeshare account.

    I could see where this might be useful, but only for a small handful of customers. There are not very many users of supercomputers out there right now. I can't see that number increasing much just by servicing new customers who could benefit from a supercomputer but couldn't otherwise justify it for a short term project.

    If they are dumping 10 billion dollars into this, they must think they are going to get at least that much out of it. I seriously doubt that they could do so, not without ridiculously overpricing their service. For small time users who don't need supercomputer levels, there are much cheaper ways to go. (Buy your own gear, lease your gear, etc.)

    I work for a specialized outsourcing outfit that manages storage for large customers (internet datacenters primarily). I know how much of a pain in the ass it is to accomplish what we do now. I could just see the mess people would get into by getting into a timeshare system like this.
  • Computing will be made into a utility like hydro, and the closest most users will get to a computer as we know it will be a wall socket. This will happen because big business and government want it to happen; like bank robbers with banks, they know that's where the money is. The only recourse will be to go off the grid. Like the many farmers now going off the hydro grid and turning to wind and solar power, we will have to go off the grid, perhaps forming co-ops, credit unions and other institutions to let us access the big-business-and-government-run net without being subject to its strictures. Maybe an independent satellite grid?
  • One of my first computers was an IBM PC JX (which I understand is similar to the PCjr). This was the first personal computer (ie. box with drives, monitor, keyboard) that I ever encountered - a big step up from the old cpu-under-keyboard micros. The 256K RAM was a big step up too.
    It was made in 1980 AFAIK, but had a 3.5" disk drive and a cordless keyboard (features which didn't show up in other computers for several years).

    Why does the article talk about it as if it was a bad thing?
  • So how will corporate customers be convinced to trust sending their data to and processing their data on IBM's grid of computers?

    But it'd be nice for running multiplayer gaming servers.
  • Is that we continue to see companies like Microsoft and IBM looking to change their revenue model to subscription-based services. It makes sense: just today I was talking to a friend about what parts I was planning to order to build his computer, and thinking about it, the average user can run most of their average software on a 1GHz Intel or Athlon board. Microsoft is having a problem getting people to continue upgrading simply because the lifespan of the software as-is is good enough for most. Naturally, the hardware demand will slow when software isn't written in such a way that it requires more horsepower. I think these companies see the writing on the wall. I'm just disappointed that instead of revolutionizing they would rather rope consumers into some sort of model that doesn't require any extraordinary effort on their part.
  • This reminds me of UNIX's parent, Multics [multicians.org], which had similar goals but never achieved widespread acceptance.
  • So does that mean I can put the IBM remote client on my idle machines and sell them back my own spare cycles? If this is truly a mirroring of the power industry, then I should be able to, as I can add solar cells and wind turbines on my property to offset what I take from the grid, and even sell my energy to the power company if I have a surplus.


    Now I just need to get a solar array to power my array of older computers so I can sell back their CPU cycles to IBM and maybe, just maybe, earn enough to pay for the solar cells.

  • No, this is _way_ more expensive than the PCjr was. $10 billion? Sheesh... Lot of money. I wonder how they came to that figure? Why not 9 billion, or 11 billion?

    At least the PCjr wasn't doomed to begin with -- the only way to make CPU time valuable is to limit the amount available. With Moore's law and economies of scale (how long till we have an 8-way 5GHz CPU system? How much longer until we have the same with 10GHz?) I find it difficult to conceive of any way to beat it, other than absolute domination.
  • PCjr [magnaspeed.net]

  • Everyone's so incredibly focused on their quarterly/monthly/daily/hourly earnings numbers that companies don't want to sink big bucks into big IT projects right now. Give them a chance to rent something by the hour/MIP/whatever, and even if they pay more over the long run it keeps the 'up front' expenditure down and doesn't hit this quarter's numbers... makes the numbers look better, keeps the investors happy, lets the execs sleep a little better at night. Plus if you rent MIPS for a month on some new super-duper project and realize it's a dud, you can walk away without having invested too heavily.
  • Utilities are great! (Score:3, Interesting)

    by DrinkDr.Pepper ( 620053 ) on Thursday October 31, 2002 @04:16PM (#4573587)
    I don't know what everyone is complaining about. Take a look at the wonderful user-oriented, monopolistic services companies like the Phone and Cable companies currently provide (Qwest, Verizon, Cox, etc). This is a terrific model to emulate. And think of all the lovely intelligible Taxes the government could add to your monthly computing bill.
  • If all it is is code for a price hike, yes, it will fail.
  • You save money by outsourcing to IBM. But then you have other things to look at.

    First you'll still need some sort of helpdesk staff. Internal or outsourced to IBM.

    Second you're going to be spending more money on telecom circuits. Now you'll need enough bandwidth out to the internet to support all of your "knowledge workers."

    Third security. Who will own the data? How will the data be secured against competitors who might also be IBM's customers?

    Fourth is backups. What is the liability if IBM can't restore a deleted file or email? What about redundancy and downtime? Who is responsible for lost revenue?

    Fifth it won't save as much money as IBM is hyping. Every company has tons of data that is rarely used, but still sits on file servers taking up space. This model won't change this. You will still be paying for storage that rarely gets used.
  • wow, bad programming would _REALLY_ cost you!

    there goes the wintendo TCO
  • There are some good reasons for selling CPU time as an on-demand service. I'm sure IBM knows what those reasons are and will use them to try and sell this concept.

    But there are two, possibly three, very powerful forces working against them here.

    First, computing power is very cheap these days. It's not precious. People have 2 GHz Pentium 4 processors sitting around waiting for their next keystroke in Word and they don't feel guilty about wasting CPU cycles.

    Second, the price keeps dropping at about a 40% annual rate. That same cheap PC waiting for the next keystroke would have been worth tens of millions of dollars to a scientific establishment in 1974. Not now. With a market where the supply of computing power is constantly increasing, it will be very difficult to peg any kind of price that people can use to make buying decisions, because those decisions will look foolish a year from now when someone asks why they didn't just buy a couple more PCs, or even a rack of PCs to do the task.

    Third, the rented computing power needs to be connected very well with the data it will be processing or producing. If the rented machine is on the customer's site next to his SAN warehouse, then everything's fine and this may not be a real problem. But if the big machine is in Fishkill and the customer's 10 TB of data are sitting in a weird database inside a firewall connected via T1 to the Internet, then there may be a problem.

    If I were IBM, I'd look into ways of increasing demand for computing power. Protein folding simulations for new pharmaceuticals are one way, financial scenario analyses are another, and database mining is yet another. They have to make customers want to buy extra computing power because they can easily see a business need for doing so.

    The other thing is they need to increase demand for the ultra-high-reliability mainframes. For some of those computing needs, a rack of cheap PCs is going to be a much more economical choice for their customers. However, there are some applications, like VoIP telephony, video streaming, or credit card approvals, where people would get upset by downtime.

  • In other news, IBM announces a sweeping initative to implement a standardized dress code of Blue Suits for all employees. Each IBM worker will also receive 2 punched cards, a pewter Employee-Number pin, and a Betamax tape about the evils of Communism.

    The initiative is expected to cost $1.86e+93 Kabillion dollars.

  • I think many people are missing the point. This is not a return to Mainframe-style time-sharing (although the technical descriptions and business model might seem that way).

    What IBM is proposing is that companies should not have to deal with running an IT department, when all they want to run is their business. They can simply pay for CPU cycles just as they pay for electricity, and their applications will simply use those cycles to perform their desired computation/storage.

    Think about this: No more dealing with hardware. No more huge IT staff. No more complex budgeting for IT. No more upgrade nightmares.

    Also, companies with a weak IT department will now be confident that the IBM (or whoever) datacenter folks will handle all the security concerns for their application (user access, encryption, authentication, DoS, hackers, etc). Likewise, they will feel confident that the datacenter folks will mirror and back up their data offsite in the event of a catastrophe, something only large companies today can afford to do.

    Once companies realize the benefits of this, not only will they rent CPU cycles, they might even decide to rent applications as well. Today the Application Service Provider model has not taken off due to the lack of a killer app. I think Grid Computing is that killer app.

  • by grid geek ( 532440 ) on Thursday October 31, 2002 @04:23PM (#4573652) Homepage

    What IBM has said is that it hasn't got anything new to report, but that it's still here. If you look at their figures, the $10bn works out at $3.5bn for the consultancy firm they purchased, a few billion for Grid computing, and I guess a couple of billion for Linux. With a bit of spare change for research.

    Why are they doing this? My guess is that CFOs keep complaining about the cost of computing resources. A multinational with 10,000 desktops still has to ask for clusters and supercomputers for serious work while TFlops of processing sit idle on the secretaries' desktops. Hard disks, which used to be able to just about hold the OS, office suite and files, now have tens of GBs of wasted storage.

    If you're serious about using computers you want to use resources efficiently. And from IBM's perspective, how does this idea sound...

    IBM sells computers to a firm, then sells the software to turn all their hard disks into a P2P file storage system so that you never lose that important document ever again. Instead of a new cluster, set all the desktops to process data overnight as a massively distributed system (using IBM software), installed by IBM engineers under the direction of their new consultants. And of course the only real option for this is Linux.

    A single, nice, neat package. A single point of contact and massive economies of scale. Now assume that their contract allows them to use/sell spare cycles and their revenue stream suddenly improves a lot.

  • During his address, Palmisano said he saw signs that the global economy may have hit bottom and is flattening out. But he also said the tech sector would be slow to rebound because of the enormous growth and overinvestment of the Slate 1990s.

    One online magazine [slate.com] did all that? Now I know who to blame!

    In any case, I'm not sure how far this return-to-the-mainframe idea will take us; we've had the technological framework for doing this for years -- think RPC, OpenStep's Distributed Objects, Sun's GRID engine -- but where's the real value to the department's bottom line?

    I spent a number of years working on an extremely computationally-intense business process for the not-so-late, not-so-lamented WorldCom. For about half of that time, I was running the systems architecture and administration group, so performance management was a huge concern. We chewed up a lot of user time, but we were primarily hampered at every layer of the process by I/O (disk and network) and memory constraints. The same has been true of the accounting and provisioning systems I've worked with since then: the enterprise-level bottlenecks these days are things that can't be purchased on demand.

    I'm sure there's a market for these kinds of services -- medical imaging, for example, though the network costs would be high -- but something to bet the Big Blue (computing) Farm on? I just don't see it. *shrug*



  • A lot of people here are pooh-poohing this as the "time-share" computing that was around back in the day, saying we've moved away from that concept. I think it could certainly be a viable option for companies that want more computing power but are also looking to cut costs.

    Also, consider that the companies making use of this would never have to upgrade their own clusters. I constantly see newer clusters being planned by companies and governmental agencies. It's always more processors, more MHz per processor and more nodes per cluster. Why not offload all of this onto a company (IBM in this case) that can put in the resources (both the hardware and the personnel intimately familiar with that hardware) necessary to maintain and grow ever larger, more powerful clusters?

    IMHO, it seems like a great idea. It will give far more companies access to "super-computers" than ever before, and at a significant savings.
    It seems that once again IBM is being a very forward-thinking company and will probably end up making a pile of cash because of a little foresight and some guts to act on it.

  • by caesar-auf-nihil ( 513828 ) on Thursday October 31, 2002 @04:42PM (#4573814)
    Let's say IBM is able to set up a way to do what they propose; here are some basic utility concepts I'm curious how they will address:
    1. Transferring product from generator (IBM supercomputer) to location. If you've just used 1 month of supercomputer time to model DNA folding, how will IBM transfer that data back to you? What if the computations and use are faster than the transmission rate? [Modem vs. DSL vs. T1 line]
    2. Dependency - you rely upon natural gas and electricity to be there, and yes, they go down, but can they guarantee their utility won't have worse problems - especially if it's Windows-run and goes down once a week, cutting into your bought utility time?
    3. Regulation. Most utilities are regulated, and those that were deregulated have not always worked out for the consumer. Let's say company A gets rid of its expensive infrastructure for computing resources and uses IBM's utility. What if IBM becomes the only utility and charges way more than it should - there's no competition, so Company A can't shop around. Along this same vein, if Company A is smart enough, they'll never enter into a utility agreement with IBM if they can generate their own computing cycles and be sure that they'll always be there, versus putting all their eggs in one basket.

    IBM's idea may have merit, but anytime someone throws out the idea of a new Utility, that suggests that the resource they're selling is mainstream and essential, and therefore treated as a commodity. Those commodities are regulated and made reliable so that they never go down. I can't see supercomputing cycles being a commodity, or for that matter, something I (or any company) needs to buy on a metered basis.
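
    On point 1, the transfer question, a little arithmetic shows why the link matters; the result size below is made up, and the line speeds are just the nominal ones for the era:

# How long it would take to pull a hypothetical result set back from the provider
# over various links. Sizes and speeds are illustrative only.

result_gigabytes = 50.0
links_bits_per_second = {
    "56k modem": 56_000,
    "DSL (1.5 Mbit/s)": 1_500_000,
    "T1 (1.544 Mbit/s)": 1_544_000,
    "100 Mbit/s LAN": 100_000_000,
}

for name, bps in links_bits_per_second.items():
    seconds = (result_gigabytes * 8e9) / bps
    print(f"{name}: {seconds / 3600:.1f} hours")   # ~1984 h on a modem, ~1.1 h on the LAN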
    • by wsloand ( 176072 ) on Thursday October 31, 2002 @05:29PM (#4574209)
      1. Transferring product from generator (IBM supercomputer) to location. If you've just used 1 month of supercomputer time to model DNA folding, how will IBM transfer that data back to you? What if the computations and use are faster than the transmission rate?

      Well, all you would need at your location would be the equivalent of an X terminal. Why would you need more than visualization of the data at your location? If it is a metered utility, you should be able to access it from anywhere, negating the need for data transfer from their cluster of supercomputers.

      ...especially if it's Windows-run and goes down once a week, cutting into your bought utility time.

      I doubt that they would use a system that goes down. Often supercomputers are clustered and use a common set of storage space that would allow migration of users and processes between systems. There should be minimal downtime in the final system -- the equivalent of current utilities. Also, they would likely only go down when your other utilities went out (lines cut, etc).

      What if IBM becomes the only utility and charges way more than it should - there's no competition so Company A can't shop around. Along this same vien, if Company A is smart enough, they'll never enter into a utility agreement with IBM if they can generate their own computing cycles and be sure that they'll always be there, versus putting all their eggs in one basket.

      If IBM did this and was successful, I'd feel sure that Sun, MS, Intel, and maybe others (does Tera still exist?) would start their own shops as competition. And companies are already putting their eggs all in one basket, but now it's just a basket that is their IT department.

      I can't see supercomputing cycles as being something that is commodity, or for that matter, something I (or any company) needs to buy on a metered basis.

      So, suppose that as your desktop you have access to this system. Maybe you are using only 20 CPU minutes per month as a standard desktop user. Imagine a company that has 10k users who each only use 20 CPU minutes per month. I'd think it would make sense in that case. Similar systems already exist - they're called ASPs (Application Service Providers) - and they work on the same concept.

      The DOD and others already sell supercomputer CPU hours. I had a friend who had ~100000 CPU hours available to him on ASCI Red (for rocket and combustion fluid dynamics simulations). IBM is just formalizing it a bit more.
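
      Just to make that 10k-user arithmetic concrete (the per-hour rate is invented; nothing here is a published price):

# The parent's scenario: lots of users, each consuming only a few CPU-minutes a month.

users = 10_000
cpu_minutes_per_user_per_month = 20
rate_per_cpu_hour = 2.0          # hypothetical metered rate, dollars

total_cpu_hours = users * cpu_minutes_per_user_per_month / 60
monthly_bill = total_cpu_hours * rate_per_cpu_hour

print(round(total_cpu_hours))   # ~3333 CPU-hours a month across the whole company
print(round(monthly_bill))      # ~6667 dollars -- versus buying and caring for 10,000 desktops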
  • by alispguru ( 72689 ) <bob@bane.me@com> on Thursday October 31, 2002 @05:05PM (#4574024) Journal
    Bad news: the rates charged per byte/cycle/whatever ought to drop by 50% every generation (12-18 months these days).

    More bad news: typical supercomputer code is usually bummed (at least a little) for the particular hardware it runs on, to get the last factor of two or so for performance. If you rent crunchons, can you afford to rent generic crunchons and give up that last bit of optimization?

    Good news: if you can get around the bad news above, this could turn supercomputing into a lease-vs-buy situation, and when the computer you buy essentially depreciates 50% every generation, leasing might be a win.
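
    A minimal lease-versus-buy sketch of that last point, assuming the 50%-per-generation depreciation mentioned above and an invented lease rate:

# Buy a machine that loses half its value every 18-month generation,
# or lease equivalent capacity by the month. All prices are hypothetical.

purchase_price = 1_000_000.0
monthly_lease = 25_000.0     # invented lease/metered cost for comparable capacity
months = 36                  # two generations

residual_value = purchase_price * 0.5 ** (months / 18)
buy_net_cost = purchase_price - residual_value
lease_cost = monthly_lease * months

print(round(buy_net_cost))   # 750000: purchase price minus what the box is still worth
print(round(lease_cost))     # 900000: leasing loses at this rate, but drop the rate and it wins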
  • by mikeage ( 119105 ) <.slashdot. .at. .mikeage.net.> on Thursday October 31, 2002 @05:08PM (#4574058) Homepage
    Well, sure, the Frinkiac-7 looks impressive [to student] Don't touch it! [back to class] But I predict that within 100 years computers will be twice as powerful, 10,000 times larger, and so expensive that only the five richest kings in Europe will own them.
    --Prof. Frink
  • by dacarr ( 562277 ) on Thursday October 31, 2002 @05:13PM (#4574090) Homepage Journal
    This might work well on the corporate level, but clearly not at the home user level. The big thing with home users is that the computer becomes a very personal thing in many cases - while your typical home luser will run a Gateway or a Dell (DUDE!), many geeks here on slashdot have probably built their own box from the parts level (or in a few cases, for all I know, with a soldering iron involved). But the point here is that it is their computer - unless they're running (say) SETI or hosting their own web page off of their DSL, they probably don't want other people sharing their user space. It's sort of a possessiveness thing - they don't want to run on somebody else's hardware. Besides, you go to some LAN party: what's more impressive, that big ol' honkin' tower, or something that looks like a dumb terminal?

    (*nix bigots and such note: Yes, I know, your defined user space on foobox is restricted unless you've chmod'ed your ~ to 777 (which is of course bombastically stupid), but do keep in mind that a typical home luser is running Windows, and accordingly sees their computer as their ersatz "user space".)

  • by gentlewizard ( 300741 ) on Thursday October 31, 2002 @05:21PM (#4574147)
    I know people who have generators or windmills and are connected to the electrical grid. When power demands are high, the power company actually pays THEM for their surplus power.

    If I have a nice Linux cluster that meets the "standards" for the grid (whatever they are), can I sell cycles back to the provider? Or is it just one way, in which case I'm trapped into doing whatever the grid wants me to do?
  • by PrimeNumber ( 136578 ) <PrimeNumberNO@SPAMexcite.com> on Thursday October 31, 2002 @05:23PM (#4574161) Homepage
    This would (did) work in the early days of computing, when it was virtually unheard of for anyone except large Fortune 500 companies and the US gov't to have access to computing power.

    Why would anyone be tempted to return to this model? How many sub-$500 or even sub-$200 computers will it take for IBM to realize that computing power isn't rare or expensive?

    And if a company or organization needs incredibly massive computing power, it can turn to companies like this [uniteddevices.com] to provide the solution, again using cheap generic PCs.

    To sum it all up, this is stupid, and now Palmisano looks like another idiotic buzzword-chanting CEO. This will be yet another blow to IBM, and it will soon (IMHO) join the growing stable of companies (Compaq, HP, and the "new" Cisco) that have been screwed by clueless, greedy CEOs. Somebody needs to cancel his subscription to Business 2.0.
  • by spun ( 1352 ) <loverevolutionary@@@yahoo...com> on Thursday October 31, 2002 @05:29PM (#4574211) Journal
    <rant mode=paranoid>
    During the Cold War, the CIA used IBM to provide cover for operatives. In exchange, IBM got access to intelligence relating to the competition.

    Fast forward to today. Dozens of high quality encryption schemes foil the CIA's spying. What to do? Their friends at IBM can help again: create a new paradigm that leaves IBM in charge of all corporate data security.
    </rant>
  • Could work (Score:4, Interesting)

    by dh003i ( 203189 ) <`dh003i' `at' `gmail.com'> on Thursday October 31, 2002 @05:41PM (#4574290) Homepage Journal
    As long as they keep it within reasonable bounds.

    People do not want "shared computing"; they do not want to put their data on "borrowed computers" nor do everything on "rented computing power" or "rented space". IBM should realize that most people will still want their applications and most of their processes and files on their own computers.

    What IBM should be offering -- and what it seems like they're offering -- is renting out supercomputer time (for a price) for specific tasks that people can't accomplish in a reasonable amount of time on their own computers. This is a reasonable and useful idea; however, it is hardly new. At the University of Rochester, there are shared computers within biology labs where people dump heavy-duty computing operations and pick up the results later. The same thing went on during the '60s, when computers were so expensive that few could afford them. In short, this is hardly new or revolutionary, though IBM may be putting a new twist on it by using it as a business model.

    It makes sense. After all, most people don't need supercomputing power for the majority of their tasks; why spend money on a supercomputer when it'll sit unused 90% of the time? What IBM can do is maximize supercomputer utilization by selling a percentage of its resources to various customers; these customers save money because they pay on a per-need basis.

    For example, I often run Bayesian phylogenies. Recently, I ran a Bayesian phylogeny with about 50 taxa in it. This took 7 days on a dual G4 (2x 800MHz) Mac, with all of the computer's power focused just on that. The time required to complete the trees increases at a steep rate as one adds more taxa: if I were doing 200 taxa, it would have taken two or three months.

    So this can offer a great service to many people.
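
    A rough way to quantify the "unused 90% of the time" argument is to compute the utilization at which owning breaks even with renting; the prices below are made up for illustration:

      # Break-even utilization: below this, renting cycles beats owning the machine.
      # All prices are hypothetical.

      owned_cost_per_month = 20_000     # amortized machine + admin, assumed
      capacity_cpu_hours = 720          # one node running 24x7 for a month
      metered_rate_per_hour = 100       # assumed metered price per CPU-hour

      break_even_hours = owned_cost_per_month / metered_rate_per_hour
      break_even_utilization = break_even_hours / capacity_cpu_hours

      print(f"Owning only pays off above {break_even_utilization:.0%} utilization "
            f"({break_even_hours:.0f} CPU-hours/month)")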
  • by Mittermeyer ( 195358 ) on Thursday October 31, 2002 @06:51PM (#4574820) Homepage
    Yes, this concept is Timesharing on Steroids, but check what this CEO guy has already done: sold the commodity hard-drive biz and gone for Linux in a big way. He is clearly not risk-averse, and assuming we all agree Linux is A Good Thing (and certainly a way to beat on Sun and Microsoft), he is not stupid. So what is he doing here?

    Posters who are focusing on the U-word (utility) need to see that IBM doesn't want Joe Citizen using this. The profit levels for dealing with the general public just aren't there for IBM; Big Blue is all about the corporate or government cash.

    In a word, cost savings for premier customers, i.e. the kind of people who will run up huge MIPS but not on a constant daily basis. Scenarios that come to mind beyond the car engineering ones are banks/companies/bureaucracies who have monster End Of Month/End Of year processing but reduced needs otherwise, websites that have a lower average use threshold except when the Super Bowl commercial airs, and disaster recovery (keep your disks mirrored offsite, if a disaster occurs call IBM, get your virtual mainframe up and switch to the offsite array).

    With IBM's sysplexing and workload algorithms in play, tying in 'outside' 'puters will waste few resources.

    I suspect that IBM's ultimate goal is disk farms on user sites and CPUs at IBM's Grid Ranch. With the CPUs under IBM's care they can really drop the TCO for the machines themselves.

    That reminds me, the real cost of operating mainframes nowadays beyond the staff is the third-party licenses for the support software- security, tape libraries, etc. That's because traditionally the software vendors license by MIPS on the machine, not MIPS actually used in your LPAR (logical partition, a carved out virtual machine on a mainframe). Whenever you increase the MIPS of your machine, the third-party vendors will bleed you dry (which ultimately loses IBM customers as they go to cheaper alternatives).

    IBM is beating on these vendors by competing in their arena to drive TCO down, and is also trying to get them to meter their actual usage under z/OS. So this grid thing is just a logical extension of what they are trying to do to not get run over by Moore's Law and the cost of running The Big Box.
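
    The licensing arithmetic above is easy to see with a toy comparison; the MIPS counts and per-MIPS price below are invented, not actual vendor pricing:

      # Third-party software licensed by total machine MIPS vs. by LPAR MIPS actually used.
      # Numbers are hypothetical.

      machine_mips = 2_000          # capacity of the whole box after an upgrade
      lpar_mips_used = 400          # what your partition actually consumes
      price_per_mips = 50           # assumed annual license price per MIPS

      print(f"Licensed by machine capacity: ${machine_mips * price_per_mips:,}/year")
      print(f"Licensed by LPAR usage:       ${lpar_mips_used * price_per_mips:,}/year")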
  • Multics? (Score:4, Interesting)

    by ameoba ( 173803 ) on Thursday October 31, 2002 @07:01PM (#4574881)
    Maybe it's just that I'm reading at +4, but I'm surprised that nobody's mentioned Multics yet. The first thing that popped into my mind when I read "computing as a public utility" was Multics. I mean, the whole point of the system was to make computing a metered utility. Not that any significant conclusions can be drawn from this, since Multics' failure had nothing to do with the business model, but more to do with overly ambitious goals for the project.
  • by SerialHistorian ( 565638 ) on Thursday October 31, 2002 @07:01PM (#4574882)
    A business will always choose a fixed cost over a variable cost. But there are many points of view.

    From a system administrator's point of view:
    I work in the data processing industry, and we have a 12-way NUMA box as our mainframe. Earlier this year we moved from a 16-way SE-70 that we'd had for seven years, and our software has already expanded to max out the capacity of the new NUMA unit -- to the point where we've upgraded it several times.

    We'd continue to expand if the perception was that we have unlimited resources.

    From a business point of view:
    Even if we could do our dp activities on someone else's mainframe, we would still have our system administration costs for systems that can't be moved out of the building, so our costs don't go down. We would also have to maintain in-house development machines, because we wouldn't want to pay someone else for the endless compiles that we would need while developing new software.
    Additionally, we already have a huge, unamortized investment in fixed dp assets.
    Currently, our systems process for 24 hours per day to meet our needs. If we were to do these same activities on a metered system, we would probably not have to process as long, but if metered costs run over $5,000 per month, it's not worth it, especially since there are no cost savings except for the amortization of our main hardware.

    Corporations buy unmetered data lines because they don't want to have to deal with variable (and, in the case of a slashdotting, extremely high and extremely unstable) costs. Selling a service that has a variable cost structure is good for a company, but buying a service that has a variable cost structure is bad for a company. The only time buying becomes good is when the company can't provide the service for itself, as with electrical power and telecom. But it's easy to buy or build your own mainframe-class computer for less than $10,000.
  • by Avla ( 523352 ) on Thursday October 31, 2002 @07:17PM (#4574969)
    Palmisano pointed out that businesses already have excess capacity. IBM is willing to go in and turn a business's computers into a grid. It could operate as an intra-grid. But it could also sell excess cycles to other grids. Thus, a business could conceivably pay for IT equipment by renting it to others.

    This is not just about paying the meter. It is about utilizing all the wasted CPU cycles.

  • by mgoff ( 40215 ) on Thursday October 31, 2002 @08:52PM (#4575476)
    No matter how many articles I've read, it always amazes me how few Slashdotters read the article before they feel compelled to post their (usually misguided) opinion. I'm sure plenty do, but there sure are a lot who don't.

    IBM is working on the commercialization of Distributed Computing (henceforth, DC). This effort has been around for a while (in a related area called Grid Computing, which some people use interchangeably with DC) in the form of the Globus [globus.org] project, amongst others.

    The concept behind DC is essentially a next-gen timeshare -- a distributed timeshare with an abstraction layer, if you will. Unlike traditional timeshare, you don't specify where your processing will occur. Unlike existing projects (like folding@home and distributed.net), DC doesn't require that you have a parallel, segmentable computing problem.

    Let's say (in your best Police Squad [imdb.com] voice) I'm a mechanical engineer who's designing a car engine with a few thousand parts. I want to run some simulations on my model to inspect heat flows, vibration, whatever. Car companies (or the little guy with a copy of CATIA and a great idea) don't necessarily have dedicated computing resources to run my simulation. So, until now, I had to band together with a bunch of other mechanical engineers with jobs similar to mine and try to justify a giant simulation node. Or, I might convince management to outsource the computation, which requires a bunch of red tape, NDAs, contracts, negotiation, etc.

    Now consider IBM, one of the largest commercial web hosts. IBM maintains giant server farms to support these services. Consider the amount of excess processing capacity sitting in these server farms because (a) a lot of servers are spitting out static pages and (b) extra capacity necessary to cover peak loading for special events.

    Expand this idea to include thousands of people who need computation power for discrete, isolated projects and thousands of companies with excess computational capacity. The consumers don't care precisely where or when their computations get completed; they only care that they get done in a "reasonable" amount of time. An intermediary, which is what IBM looks like it wants to be, can accept jobs from the consumers, break each job into as many pieces as it can, farm those pieces out to whichever of its suppliers has excess capacity at any particular moment, combine the results, and return them to the customer.
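
    Here is a minimal sketch of that intermediary role: a broker that splits a job into chunks, hands each chunk to whichever supplier currently reports the most spare capacity, and combines the partial results. The class and function names are invented for illustration, not any real IBM or Globus API:

      # Toy grid broker: split a job, dispatch chunks to suppliers with spare capacity,
      # then combine the partial results. Structure and names are hypothetical.

      from dataclasses import dataclass

      @dataclass
      class Supplier:
          name: str
          spare_capacity: int          # how many chunks it can accept right now

          def run(self, chunk):
              # Stand-in for real remote execution on this supplier's machines.
              return sum(chunk)

      def broker(job, suppliers, chunk_size=4):
          chunks = [job[i:i + chunk_size] for i in range(0, len(job), chunk_size)]
          partials = []
          for chunk in chunks:
              # Send each chunk to the supplier with the most spare capacity right now.
              best = max(suppliers, key=lambda s: s.spare_capacity)
              if best.spare_capacity == 0:
                  raise RuntimeError("no spare capacity anywhere on the grid")
              best.spare_capacity -= 1
              partials.append(best.run(chunk))
          return sum(partials)         # the combine step for this toy job

      if __name__ == "__main__":
          farms = [Supplier("web-farm-A", 3), Supplier("web-farm-B", 5)]
          print(broker(list(range(20)), farms))    # -> 190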

    Even more, IBM can charge more if you want a high priority on your computation or if your job is not symmetric and must be run on fewer nodes.

    Actually, if you think about it, IBM is hurting their server sales by advancing this project. Right now, they sell a lot of excess capacity to companies to cover their peak loading. If companies can dynamically purchase exactly the amount of processing they need, that's money IBM's leaving on the table. Now, companies with high-availability requirements will still purchase their own systems with enough extra capacity to cover their own needs. But, when they're not using that capacity, they'll sell it.

    I think IBM saw that the train was leaving the station. They know this technology is coming. And they see that the chance to be the intermediary in this market is worth more than the money they'll lose in hardware sales. And, they know if they don't, someone else will.
