IBM Wants CPU Time To Be A Metered Utility
kwertii writes "IBM CEO Samuel J. Palmisano announced a sweeping new business strategy yesterday, pledging $10,000,000,000 towards redefining computing as a metered utility. Corporate customers would have on-demand access to supercomputer-level resources, and would pay only for time actually used. The $10 billion is slated for acquisitions and research to put the supporting infrastructure in place. Will this model revolutionize the way companies compute, or is this plan doomed to be another PCjr?"
In other news (Score:5, Funny)
It will be tough. (Score:5, Funny)
Revolutionize? (Score:5, Insightful)
Things are a little different now. (Score:5, Interesting)
Of course, all those technologies did exist then, but they can be counted on to be everywhere now. The reason mainframe timesharing gave way to PCs is that PCs provided a more flexible and convenient sandbox to compute in, rather than the cumbersome interface of working with the mainframe in the company basement.
These days, returning to the idea of computing power as a fluid resource makes sense: the landscape has changed, and the world might actually be prepared to accept the tradeoffs, since they are much less significant now.
Re:Things are a little different now. (Score:5, Interesting)
This kind of utility is going to allow the "service providers" to obfuscate the costs of the service, much the way fiber providers keep their "dark" lines secret, for negotiation purposes. Also, they will require some sort of compliance with their systems, allowing them to dictate what sort of software runs on their system, thus giving them the opportunity to insert inefficiencies there, too. Unless they can arrange to lock people into this model somehow, it'll never work. Nobody wants to let a vendor control both the rate and volume purchased. If they try to push customers into this model, maybe by restricting the availability of their hardware to outside customers, most will just migrate to another platform.
Re:Things are a little different now. (Score:3, Interesting)
I took one last semester and am in another one right now - these courses are basically singing the praises of OO, and they are not taught by idiots. OO gives us a lot of desirable things in a language, while the tradeoffs are becoming less and less significant (mostly speed).
Just like the other guy said, just because you don't understand it doesn't mean it is bad. There are good reasons why OO is becoming so popular - and it has nothing to do with trendiness. Computer scientists sit around and think about these issues day in and day out. They write papers and publish empirical evidence on these subjects all the time; there has been mass peer review of OO going on right now and over the last several years. Please don't just shrug it off as a trend.
Derek
please, please (Score:4, Interesting)
Re:please, please (Score:5, Informative)
Well, here is Gartner Group, missing the boat again. SimUtility [simutility.com] has been doing this for years now, but because IBM is getting into the market, it's news?
Timesharing of computers is a very valid, and far from dead, market for computing. There are a lot of companies that do not want to buy their own supercomputers, which would likely sit unused the majority of the time. As for the example of a car manufacturer doing testing on a new model, this already happens, as it does for many other organizations:
- America's Cup boat designers
- Racing teams
- Natural Resource Explorers
- scientific organizations
- and many many more
We're not exactly talking about a new or even revived paradigm. Timesharing never died.
Re:Revolutionize? (Score:3, Insightful)
I'm not sure how many companies out there only need "a little" time on a "supercomputer" though...
Xentax
Re:Revolutionize? (Score:2)
Re:Revolutionize? (Score:3, Insightful)
Xentax
Re:Revolutionize? (Score:3, Insightful)
Re:Revolutionize? (Score:3, Insightful)
Lots -- for one, try massive internal bandwidth. A great many parallel apps won't work decently on more conventional (inexpensive) clustering systems for that reason alone.
Further, there's a point where it may be cheaper to have one big, expensive, extremely reliable and well-supported machine to administer than hundreds and hundreds of little, unreliable ones with hardware that can't be trusted and poor connections (in terms of latency as well as bandwidth) between them.
Re:Revolutionize? (Score:3, Insightful)
If you had a problem that a supercomputer could solve in just a few minutes, you could probably use a much cheaper computer for a few hours/days instead. If this is an infrequent problem, just use the much cheaper computer full time, and avoid paying any IBM bill.
The only advantage of the supercomputer would be the turnaround time. In the end, you get what you pay for.
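To put rough numbers on that (every figure below is invented for illustration, not a real IBM rate):

    # Back-of-the-envelope: rent supercomputer minutes vs. own a cheap box.
    # All prices and times here are made-up assumptions.
    RENT_PER_MIN = 50.0     # $/minute of metered supercomputer time (assumed)
    CHEAP_BOX = 2000.0      # $ for a commodity machine, owned outright (assumed)
    MINUTES_RENTED = 10     # supercomputer turnaround per job
    DAYS_OWNED = 3          # cheap-box turnaround per job

    for jobs_per_year in (1, 4, 12, 52):
        rent_total = jobs_per_year * MINUTES_RENTED * RENT_PER_MIN
        print("%2d jobs/yr: rent $%6.0f vs. own $%6.0f (but wait %d days per run)"
              % (jobs_per_year, rent_total, CHEAP_BOX, DAYS_OWNED))
    # Past a handful of runs per year the owned box wins on cost; what the
    # metered service is really selling is the ten-minute turnaround.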
Re:Revolutionize? (Score:4, Insightful)
Those who, for example, might need a rendering farm but only for a short time might benefit. Consider that you only pay for the processing you need. If IBM comes up with some good, distributed clustering software, this might work. However, it is clearly aimed at the markets that are already buying very large IBM computers. It won't help, for instance, with the typical Internet server.
Having said that, though, who fits that profile? The main rendering farms are used fairly consistently, so for them, owning a bunch of Suns or equivalent systems is more efficient; they can then just add computers. So who is it that would need this sort of thing?
And if they did try to foist it on the general public, it would obviously fail immediately. After all, the heaviest users of processing time on general computers are games, and most people aren't quite willing to pay for the processing the latest Halo or equivalent might use. (Not to mention the fact that Dell, Apple, and Compaq wouldn't follow IBM.)
Exactly, here's why. (Score:3, Insightful)
It's not always how fast you go, it's how efficiently you get there. I could fly on the shuttle from Kennedy Space Center to Edwards Air Force Base (assuming NASA would lighten up the travel-for-free restrictions on Italians in Oklahoma! lol), but as fast as the shuttle goes, I could drive there faster (though not nearly as stylishly).
Re:Revolutionize? (Score:2)
I haven't ever worked in a place where the need for computing power has varied wildly over time (which is the only scenario for which this model seems to make any sense) so I don't know how common this market is or how valuable the service will be. But the part that creeps me out is that last sentence, talking about greater standardization. As a developer, I'm in favor of standard data interchange formats, but somehow I suspect that what this really means is a standardization on a single suite of software tools, and that's more along Microsoft's line of thinking.
Re:Revolutionize? (Score:3, Insightful)
It might be more common than you think. In my work, for instance, we have occasional spurts where we generate a large amount of scientific data that needs to be processed, followed by long periods where we don't. We're limited to running on the fastest box we can reasonably afford, but it might be cheaper and faster to buy just the clock cycles we need. One thing that's unclear, though, is whether our processed data would stay on our own machine or sit on IBM's farm. We'd definitely need to keep it on our own machine.
The other thing to consider is that it's possible that there are lots of applications where computer use might vary wildly from time to time, but nobody is thinking about them because they're uneconomical. Most places can't afford to have a supercomputer sitting around idle 95+% of the time, so instead they buy a machine that can process all of their data without the long idle times. The result is that there's a long lag between when the processing starts and when it finishes. If a system were available where they could buy the power to process that data rapidly when needed, though, it might make more sense to do it that way. They might very well wind up with about the same total cost but much faster results.
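A sketch of that buy-vs-burst tradeoff (the workload and rates are invented for illustration):

    # Same total CPU-hours, bought two different ways. Figures are invented.
    CPU_HOURS = 2000.0          # work per burst of data (assumed)
    OWNED_CPUS = 1              # our one box, running around the clock
    RENTED_CPUS = 200           # metered farm rented just for the burst
    COST_PER_CPU_HOUR = 1.0     # assume equal $/CPU-hour either way

    owned_days = CPU_HOURS / OWNED_CPUS / 24.0
    rented_hours = CPU_HOURS / RENTED_CPUS
    total_cost = CPU_HOURS * COST_PER_CPU_HOUR

    print("Owned box:   ~%.0f days wall-clock, $%.0f" % (owned_days, total_cost))
    print("Rented farm: ~%.0f hours wall-clock, $%.0f" % (rented_hours, total_cost))
    # If the per-CPU-hour price really were the same, you'd pay the same
    # total and get answers ~200x sooner -- faster results at the same cost.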
Re:Revolutionize? (Score:2, Insightful)
sounds like when ISDN first came out... remember when, for the first time, it didn't disconnect and you were billed by the hour over x hours? How many people got bills for hundreds if not thousands of dollars? I wonder how long it would take for IBM to recoup its costs after a few locked-up applications hanging a CPU thread? Not long if you run M$-type applications.
Ctrl-Alt-Del (Werd not responding, but still billing)
Re:Revolutionize? (Score:5, Insightful)
No, in fact, it may be a good indication the end is near for IBM, and that the past decade of "reinvention" was only an anomaly. Clayton Christensen's Innovator's Dilemma has only been delayed.
One of the things I like about Christensen's model is that it illustrates the fallacy of normalizing product design on the top 5% of customers. Lucent, Digital, Wang, Nortel, etc. all fell prey to this: they listened to their very best-paying customers and shifted more and more of their product design to please them.
Think about which IBM customers need supercomputer timesharing access: probably their top 10% - or less. Can these folks already access timesharing? Certainly. So what's the hype about here?
It'd be one thing if it was a minor effort with big PR fanfare, sending a polite message to IBM's favorite customers that they think about them frequently.
But designating this kind of money and strategic focus? Especially when the focus appears to be a large, centralized and proprietary model (which flies in the face of low-cost, decentralized distributed models, e.g. distributed.net, SETI@Home, etc.)?
Time to prepare for the fall... hey, maybe there will be some nice Enron-quality assets at the auction.
*scoove*
Back to the future (Score:3, Insightful)
The PC revolution was based on the desire to replace dumb terminals with something that could do color graphics, fancy fonts, and WYSIWYG word processing. This evolved into a more user-friendly interface for data manipulation.
For data-intensive applications, timeshare computing was economical, and it worked over low speed connections. Back in the 80's, it didn't take much data to qualify as "data intensive", either. I seem to remember something about a 32MB hard disk limit, for those PC users lucky enough to have hard drives. In general, data was never shared with anyone unless a mainframe was involved. File servers eventually brought data sharing to the PC, but even then, record locking was a joke compared to mainframe capabilities. You could run quite a few dumb terminals over a 9600 bps line, but that is inadequate for even one web surfer today.
OK, what has changed? Is there some new generation of CPU-intensive applications that requires far more CPU power than desktop computers have? I think this is yet another case of a solution in search of a problem. The NetPC was supposed to run apps without the need for a hard disk. The concept died when people discovered that hard disks were cheap and broadband Internet was not living up to the advertising claims. Along the same lines, who needs supercomputer resources when none of our applications are really CPU-bound in the first place? Aside from specialized stuff like ray tracing, animation, and possibly busting DRM algorithms, I don't know how timesharing would become a mainstream product.
Re:Revolutionize? (Score:5, Insightful)
I guess there'll always be some tensions here that aren't really about technology per se: in this case, in-house vs. outsourcing.
Joe does an analysis that shows if he outsources all of IT, it will save $X,XXX,XXX, so they do it. Joe gets promoted. Three years pass. Sam does an analysis that shows if he brings all IT functions in-house, it will save $X,XXX,XXX, so they do it and he gets promoted...
IBM and Microsoft make money no matter what. Kinda like lawyers. Oh, I forgot they ARE lawyers.
Sorry for the cynicism.
Mark
It's revenue silly... (Score:5, Interesting)
Currently, IBM's big customers buy a new machine every four years or so, and they pay a yearly maintenance bill. IBM has trouble predicting its revenue quarter to quarter; in a downturn, everyone stops capital expenditure and IBM mainframe sales plummet.
Under this model, everyone should pay less, but they'll pay every month like clockwork.
Computer Associates has a similar scheme for software. You rent your software on a monthly basis.
On a technical level I'm all for it. I have a suite at my current site that is run yearly and takes forever. Currently IBM has a big box sitting here and we just sip from it, until year end when we max it out for like two weeks. Let me rent time on a huge box and I'll be happy. Guarantee my data and response time and I'll be ecstatic.
Reminds me of .... (Score:2, Insightful)
Wow. Timesharing. What a novel concept. (Score:5, Funny)
Just imagine the office of the future. Instead of a computer on every desk you could have just one computer per department. And that computer would just dial-up to one of these IBM supercomputers.
I think this could be big.
It's not timesharing (Score:5, Interesting)
Seriously, beyond reading the article, I've been at several talks given by engineers from IBM as well as some marketing oriented presentations on this. They're focused directly on supercomputer type applications. In other words, the type of computing that requires expensive data centers, maintenance, and various processes around them. They want you to have computers on your desk so you can access the remote computing power of The Grid (TM).
One example from the article involves car design/testing where resources are needed for a relatively short period of time, with lots of down time. Just keeping a cluster running is an expensive proposition, so the theory is that companies with limited needs for massive compute power can just rent it.
It makes sense and sounds like a good idea, but I'm not sure how big that market is. For example, is it worth $10 billion? I don't work in an industry with this type of requirement, but maybe some folks that work in high end research, design, or engineering can share how much idle time their big machines rack up each year.
The other problem is that companies may find benefits in having extra compute power around all the time. I mean, when I was in college, students often found a way to make use of all the idle time on machines (now whether that was research or just playing quake might be debatable :) ). Having a system in house means that instead of running 100 permutations on a design, you can run 10000 (or whatever), right?
Sujal
Re:Could be pretty good (Score:3, Informative)
Sujal
PCjr was doomed? (Score:2, Interesting)
Re:PCjr was doomed? (Score:2)
And it even had a cartridge slot!
Re:PCjr was doomed? (Score:3, Informative)
We were founded in 1985, and ceased our active role in 2001. The Eugene PCjr Club was the last club left in the nation that was devoted solely to the IBM PCjr Computer. We were a member-owned, not for profit, support group, and supported all PCjr users no matter where they lived for over sixteen years. The Club was organized for educational and recreational purposes only. We no longer hold meetings on a regular schedule, but we do still maintain a fairly large library of public domain and user-supported software and still have some PCjr parts available.
Re:PCjr was doomed? (Score:3, Funny)
Actually, I know for a fact that if he dropped his PCjr on your Athlon XP 1600+, the Athlon would probably be the thing crushed. Even more certain would be dropping the PCjr on the SoundBlaster 16. How can that little card possibly have a chance?
More Info Here... (Score:3, Interesting)
Wow...what a security nightmare....
Re:More Info Here... (Score:2)
Revolution.... Mosix (Score:5, Interesting)
Re:Revolution.... Mosix (Score:2)
How??? What will you likely be doing that requires such enormous computing cycles at home? Your most likely cycle sink will be games, and even heavy 3D games are approaching the day where they don't saturate a consumer CPU anymore (especially given a decent video card). So basically you're going to have a "terminal" at home that connects to a remote computer cluster to rent processing time for... word processing, which the terminal itself has more than ample processing power for. For all but a very select few supercomputer-type applications, this entire model is flawed and merely a symptom of the fact that IBM simply can't break with its past.
Re:Revolution.... Mosix (Score:4, Interesting)
Dangerous, but it could mean profits (Score:2, Troll)
But then again, one of the reasons that Enron went down is that they quit selling real, hard, physical commodities and instead went directly to a more ethereal model of paper sales and transactions.
Moving Towards the Telco Model (Score:2)
So will this move the ball towards corporate responsibility in this area?
I am certain that a lot of companies will try to avoid it if at all possible. Of course, this would be controversial, especially re: open source, etc. but it is not the most common practice now.
This isnt new (Score:4, Insightful)
So the concept is old and crusty...
With all these rental/pay as you go/subscription.. (Score:2)
Does he have a doctorate in Evil? (Score:5, Funny)
I think Samuel has been watching Austin Powers way too much.
Doomed! (Score:4, Interesting)
It's PCjr, it's Gavilan, it's all kinds of failures. And $10,000,000,000!
There is value in this... (Score:4, Insightful)
It takes a lot of time, space and know-how to own and maintain big-@ss computers. With broadband connections being commonplace, you could run your own program remotely and let a specialist (like IBM) handle all that stuff. And of course, there is value unlocked by having multiple users share common resources.
Of course, the vast majority of companies and institutions (not to mention individuals) use their machines mostly for word processing and surfing the net - and thus they will have little use for this kind of service.
Tor
Seems kinda silly.... (Score:5, Funny)
Wonder what the rates will be? (Score:2)
$.01 per GHz?
Don't knock the PCjr! (Score:2, Informative)
It took the rest of the computing world YEARS to match the color & sound of that baby. What, you don't remember CGA and PC-speaker music? Tsk.
Clever, perhaps (Score:2)
Re:Clever, perhaps (Score:2, Informative)
IBM does that, and they're not making the kind of money they'd like to at it recently. Neither is anybody else (e.g., EDS).
This appears to be IBM's bid to claim a larger share of a shrinking IT pie.
It might work... (Score:2)
After all, the PC revolution demonstrated that individual users want unrestricted computer usage on their own terms, and were/are willing to pay a fairly generous amount for it.
I only see this project working out as long as companies see it as cheaper than building their own solutions. Linux-based clusters can provide a fairly low-cost solution for a lot of high-end computing needs (like rendering tasks) -- that's what IBM will be competing with.
I think it'll boil down to how greedy IBM gets on pricing. If it's too pricey, companies with a fairly regular need for lots of computing power will deploy their own internal computing clusters -- which is ironic, considering that IBM will probably stay very interested in supplying such solutions. It sounds like they're just trying to play all sides of the game: Sell the big/pricey hardware, sell time on the big pricey hardware, sell the lower-cost alternatives -- or at least the contract to deploy and maintain them.
Xentax
Doomed. (Score:2)
yes. Doomed to be another PCjr. People want expensive goods that they can brag about. Plus, let's see you game on it. Personally, I think metered utilities are bad enough on their own without extending them into my computer.
You're all missing the point (Score:5, Insightful)
The concept IBM is going for is to treat IT as another utility. Instead of some small company having to keep an expensive IT staff and maintain its own computers/network/storage, IBM says that it will do this for you. IBM will essentially replace the IT department and let an organization concentrate on running its own business.
The cost savings of such a model (if successful) are quite substantial and will save everyone money in the end.
I think IBM is on the right track with this and they are the only company really positioned to do so.
Re:You're all missing the point (Score:3, Interesting)
No, it's an Application Service Provider, the next step in outsourcing. The idea wasn't all that popular during the dotcom craze; is it any better now?
Re:You're all missing the point (Score:3, Interesting)
The problem is that most reasonably sized departments need an IT staff anyway. Having them run a mail server or the like isn't that big a deal. While some things can be efficient to subcontract out (e.g., your web server), often it is easier to have it on site.
There are exceptions. But I think that only a few IT functions can reasonably be contracted out. I think IBM's marketing strategy will work - but only for a small niche.
Re:You're all missing the point (Score:3, Interesting)
The real world has a huge diversity of applications - most enterprise applications can't just be outsourced for maintenance, ongoing development and so forth, except by the people who developed them in the first place. Exodus and the many colocation facilities of the late 90s and early 00s wanted to offer services sort of like this, but it just doesn't work - they don't have the talent in shop to do it, and can't learn everybody else's apps.
If by "IT department" you mean IBM will operate databases, Apache web servers and J2EE app servers and other commodity applications in their own datacenters, then I do believe it, but again that is what a lot of high-service colos were doing several years back (many of whom went under). The economies of scale aren't there - the only people who would think they are are those who think of "IT" as some mythical blob of computer operators, and don't realize the mix of trained sysadmins, developers, and so-on that make up "IT".
And the ASP model - well, the problem there is that though the company that developed an app is well suited to actually host and operate the app, if a corporation adopts that model, then their apps will be hosted and operated all over hell and high water. I mean, this is fundamentally the web services model, and it's nice for a lot of things, but I don't think anybody believes it is going to make corporate IT departments go away and allow the centralization of all computing work into a big IBM datacenter. I'll believe that when I see it.
Re:You're all missing the point (Score:3, Informative)
"Irving Wladawksy-Berger, vice-president of technology and strategy for IBM's Server Group, is a 32-year IBM veteran whose career has included stints in research, product development, business management, and strategic planning. In 1995 he was handpicked by CEO Lou Gerstner to figure out how to make the Internet a core part of IBM's business. He is still on that mission, although his latest focus is on two next-generation technologies: grid computing and autonomic systems. Wladawksy-Berger believes the Internet is on the verge of becoming a global virtual computer, like a utility power grid, with computing resources available on demand."
Hooray IBM! (Score:2, Interesting)
With IBM's continuing support of Linux in the commercial and high-end server space, I have no doubt that this will be a GNU/Linux-friendly project, if not composed entirely of GNU/Linux software.
And just imagine the possibilities for breaking the MS monopoly. I can just imagine companies with hundreds of cheap, dumb, never-needing-to-be-upgraded X terminals connected to this computing "utility" for all their office/CAD/research/calculation/accounting/etc. needs.
Why not combine your computing "utility" bill with your software "utility" bill? IBM's supercomputers could always have the latest versions of the Sun/Open/IBM/etc. office suites. It would be the natural extension of the software subscription model.
This project is going to make MS quake in its boots.
might actually be useful (Score:2, Interesting)
They're On That Path Now (Score:2)
Likewise you can get a machine with a big ol' batch of CPUs, most of them disabled. Over, say, the Christmas rush you call your salesperson and have the other CPUs turned on for a month. Again: strange, but the corporate customers seem to like it.
I doubt it'd affect Joe Average Desktop User all that much. His desktop already has more processing power than he'll ever need and is dirt cheap. It's only when you start talking about machines worth millions of dollars that this sort of thing makes sense. The same people who go for this stuff pay out tens of thousands of dollars a month for support costs, and they get some very good value for their money.
Many of you youngsters might not be all that hip to mainframe culture or mentality, but it's a pretty good deal and those machines are still amazingly fast. A lot of shops haven't been able to get rid of their big iron because PC clusters just couldn't deliver as promised. Our VM box back at school routinely had 5,000 users on it 13 years ago, and that machine never even hiccuped.
The ghost of Thomas Watson sr??? (Score:5, Interesting)
The last 35 years of development in computers were precisely about moving away from the "metered service" model which made IBM's fortune.
One will recall that IBM's data-processing customers since the 1920's were charged by units of information stored/processed, by way of forcing customers to buy Hollerith (punch) cards solely from IBM and run them in rented machines whose rental price was directly proportional to their throughput (a card reader that processed 600 cards per minute cost twice as much as one that processed 300, yet the only difference was the size of the pulley off the main motor - and you could upgrade by having an IBM tech come out and change the pulley for a bigger one).
So is it that the ghost of Thomas Watson Sr. has made a comeback to IBM's board of directors?
Re:The ghost of Thomas Watson sr??? (Score:3, Interesting)
Uh... they still do that. Almost all of their z- and iSeries boxes other than the bottom-of-the-line models come equipped with multiple CPUs that are soft-unlocked. It's an easy way to do an upgrade - send IBM a check and they call your computer and unlock another few MIPS. No downtime, either. Actually, I'd be surprised if Sun didn't do something similar for its large clusters.
We had this in college (Score:2, Interesting)
I thought it all went away until I started working for IBM. Every time you logged out of the mainframe, the computer told you how much money your session had cost the company. That turned out to be real money that was charged to the department you worked for. We eventually reverted to using X terminals connected to massive, rack-sized RS/6000 machines instead of the mainframes after that.
Kris
Didn't they try that already? (Score:4, Informative)
Now, I'm relatively young (mid 20's), but I recall people not even a half generation older than I telling stories about getting in trouble for running up large bills on their school's timeshare account.
I could see where this might be useful, but only for a small handful of customers. There are not very many users of supercomputers out there right now. I can't see that number increasing much just by servicing new customers who could benefit from a supercomputer but couldn't otherwise justify it for a short-term project.
If they are dumping 10 billion dollars into this, they must think they are going to get at least that much out of it. I seriously doubt that they could, not without ridiculously overpricing their service. For small-time users who don't need supercomputer levels, there are much cheaper ways to go. (Buy your own gear, lease your gear, etc.)
I work for a specialized outsourcing outfit that manages storage for large customers (internet datacenters primarily). I know how much of a pain in the ass it is to accomplish what we do now. I could just see the mess people would get into by getting into a timeshare system like this.
Welcome to the Machine (Score:2)
PCjr is a bad thing? (Score:2)
It was made in 1984 AFAIK, and had a cordless infrared keyboard (a feature which didn't come along on other computers for several years).
Why does the article talk about it as if it was a bad thing?
Trust us - we're IBM (Score:2)
But it'd be nice for running multiplayer gaming servers.
What is interesting.. (Score:2, Insightful)
Reminds me of UNIX's parent, Multics (Score:2)
Augmenting the power grid... (Score:2)
Now I just need to get a solar array to power my array of older computers so I can sell back their CPU cycles to IBM and maybe, just maybe, earn enough to pay for the solar cells.
No PC Jr (Score:2)
At least the PCjr wasn't doomed to begin with -- the only way to make CPU time valuable is to limit the amount available. With Moore's Law and economies of scale (how long till we have an 8-way 5GHz CPU system? How much longer until we have the same with 10GHz?), I find it difficult to conceive of any way to beat commodity hardware, other than absolute market domination.
PCjr owners are offended (Score:2, Interesting)
Corporate Accounting (Score:2)
Utilities are great! (Score:3, Interesting)
Metered, rented, ASP, .Net, you name it... (Score:2)
On the surface it looks pretty good (Score:2)
First, you'll still need some sort of helpdesk staff, internal or outsourced to IBM.
Second, you're going to be spending more money on telecom circuits. Now you'll need enough bandwidth out to the Internet to support all of your "knowledge workers."
Third, security. Who will own the data? How will the data be secured against competitors who might also be IBM's customers?
Fourth, backups. What is the liability if IBM can't restore a deleted file or email? What about redundancy and downtime? Who is responsible for lost revenue?
Fifth, it won't save as much money as IBM is hyping. Every company has tons of data that is rarely used but still sits on file servers taking up space. This model won't change that: you will still be paying for storage that rarely gets used.
Mind those infinite loops! (Score:2, Funny)
there goes the wintendo TCO
Death by (1) Glut (2) Decreasing Costs (Score:2)
There are some good reasons for selling CPU time as an on-demand service. I'm sure IBM knows what those reasons are and will use them to try and sell this concept.
But there are two, possibly three, very powerful forces working against them here.
First, computing power is very cheap these days. It's not precious. People have 2 GHz Pentium 4 processors sitting around waiting for their next keystroke in Word and they don't feel guilty about wasting CPU cycles.
Second, the price keeps dropping at about a 40% annual rate. That same cheap PC waiting for the next keystroke would have been worth tens of millions of dollars to a scientific establishment in 1974. Not now. With a market where the supply of computing power is constantly increasing, it will be very difficult to peg any kind of price that people can use to make buying decisions, because those decisions will look foolish a year from now when someone asks why they didn't just buy a couple more PCs, or even a rack of PCs to do the task (rough numbers below).
Third, the rented computing power needs to be connected very well with the data it will be processing or producing. If the rented machine is on the customer's site next to his SAN warehouse, then everything's fine and this may not be a real problem. But if the big machine is in Fishkill and the customer's 10 TB of data are sitting in a weird database inside a firewall connected via T1 to the Internet, then there may be a problem.
If I were IBM, I'd look into ways of increasing demand for computing power. Protein-folding simulations for new pharmaceuticals are one way, financial scenario analyses are another, and database mining is yet another. They have to make customers want to buy extra computing power because they can easily see a business need for doing so.
The other thing is they need to increase demand for the ultra-high-reliability mainframes. For some of those computing needs, a rack of cheap PCs is going to be a much more economical choice for their customers. However, there are some applications, like VoIP telephony, video streaming, or credit card approvals, where people would get upset by downtime.
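Putting rough numbers on that second force (the 40% decline is the figure above; the launch rate is invented):

    # Why pegging a metered price is hard when hardware falls ~40%/year.
    # The $1.00/CPU-hour launch rate is an invented illustration.
    ANNUAL_DECLINE = 0.40
    METERED_RATE = 1.00                       # $/CPU-hour, "fair" at launch
    for year in range(4):
        diy = METERED_RATE * (1 - ANNUAL_DECLINE) ** year   # do-it-yourself cost
        print("Year %d: locked-in metered $%.2f/hr vs. DIY-equivalent $%.2f/hr"
              % (year, METERED_RATE, diy))
    # By year 2 the in-house option costs roughly a third of the metered
    # rate -- the "why didn't we just buy a rack of PCs?" moment.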
It's Nostalgia Time! (Score:2)
The initiative is expected to cost $1.86e+93 Kabillion dollars.
Grid Computing is the Killer App for App Ser. Prov (Score:2)
What IBM is proposing is that companies should not have to deal with running an IT department, when all they want to run is their business. They can simply pay for CPU cycles just as they pay for electricity, and their applications will simply use those cycles to perform their desired computation/storage.
Think about this: No more dealing with hardware. No more huge IT staff. No more complex budgeting for IT. No more upgrade nightmares.
Also, companies with a weak IT department will now be confident that the IBM (or whoever) datacenter folks will handle all the security concerns for their application (user access, encryption, authentication, DoS, hackers, etc.). Likewise, they will feel confident that the datacenter folks will mirror and back up their data offsite in the event of a catastrophe, something only large companies can afford to do today.
Once companies realize the benefits of this, not only will they rent CPU cycles, they might even decide to rent applications as well. Today the Application Service Provider model has not taken off due to the lack of a killer app. I think Grid Computing is that killer app.
Re:Grid Computing is the Killer App for App Ser. P (Score:4, Informative)
There is no fire-able individual that gets a performance review from the company. If they're unhappy with the outsourced datacenter performance, they have only two recourses: cancel the contract or sue, and I assume that contract agreements would most likely preclude the latter. The human element is completely being overlooked in these equations. Managers like pulling their staff together into a conference room and whipping their butts in times of crisis, making them feel in control of the company. Outsourcing precludes that. Sure, it will be (and has been) tried anyway, and will (and pretty much has in the case of ASPs) fail. But be my guest.
> What consumes most of the bandwidth in an internal company network is actually "raw" data.
[...]
> This means that the only bandwidth being used by your company will be to display web pages.
Hmm? Database queries are actually quite network efficient and in many respects very similar to HTTP. You send a query and get back a recordset. If you used a thin client instead, most of the information inside the recordset would likely travel across the network anyway, only in the form of more bloated ASCII-inside-HTML (to be displayed say inside an HTML table). And if the web server and database server don't reside on the same machine, you'd actually DOUBLE the network traffic.
Many cases can be made for browser-based thin client computing, but reduced network traffic definitely isn't one of them. There's nothing network efficient about stateless gobs of ASCII and graphics.
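A crude size comparison of the two cases (the byte counts are rough assumptions, for illustration only):

    # Same 100-row x 5-column result, shipped two ways. Byte figures are
    # rough assumptions, not measurements.
    ROWS, COLS = 100, 5
    recordset = ROWS * COLS * 12                       # ~12 bytes/field incl. wire overhead
    html = ROWS * COLS * (12 + len("<td></td>")) \
         + ROWS * len("<tr></tr>") + 2000              # tags plus ~2 KB of page boilerplate

    print("binary recordset: ~%.1f KB" % (recordset / 1024.0))
    print("HTML table page:  ~%.1f KB" % (html / 1024.0))
    # And with separate web and database servers, the recordset crosses the
    # wire anyway before the HTML does -- total traffic roughly doubles.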
Another thing is that, as you mentioned, the ASP model is mainly suitable for web applications. Unfortunately, that is still not the majority of applications in many corporations. There are still no satisfactory web versions of office applications, and there probably never will be, because they're intrinsically client-side; if you insist on serving them via a browser, they will still end up mostly executing code (ActiveX, Java, JavaScript, etc.) on the client side, but inside a sandbox, adding much headache and little benefit (think saving and printing).
What it really means ... (Score:4, Interesting)
What IBM has said is that it hasn't got anything new to report, but that it's still here. If you look at their figures, $10Bn works out at $3.5Bn for the consultancy firm they purchased, a few billion for Grid computing, and I guess a couple of billion for Linux, with a bit of spare change for research.
Why are they doing this? My guess is that CFOs keep complaining about the cost of computing resources. A multinational with 10,000 desktops still has to ask for clusters and supercomputers for serious work while TFlops of processing sit idle on the secretaries' desktops. Hard disks, which used to just about hold the OS, office suite and files, now have tens of GBs of wasted storage.
If you're serious about using computers, you want to use resources efficiently. So from IBM's perspective, how does this idea sound...
IBM sells computers to a firm, then sells the software to turn all their hard disks into a P2P file storage system so that you never lose that important document ever again. Instead of a new cluster, set all the desktops to process data overnight as a massively distributed system (using IBM software), installed by IBM engineers under the direction of their new consultants. And of course the only real option for this is Linux.
A single, nice, neat package. A single point of contact and massive economies of scale. Now assume that their contract allows them to use/sell spare cycles, and their revenue stream suddenly improves a lot.
Who to blame? What to buy? (Score:2, Insightful)
One online magazine [slate.com] did all that? Now I know who to blame!
In any case, I'm not sure how far this return-to-the-mainframe idea will take us; we've had the technological framework for doing this for years -- think RPC, OpenStep's Distributed Objects, Sun's Grid Engine -- but where's the real value to the department's bottom line?
I spent a number of years working on an extremely computationally-intense business process for the not-so-late, not-so-lamented WorldCom. For about half of that time, I was running the systems architecture and administration group, so performance management was a huge concern. We chewed up a lot of user time, but we were primarily hampered at every layer of the process by I/O (disk and network) and memory constraints. The same has been true of the accounting and provisioning systems I've worked with since then: the enterprise-level bottlenecks these days are things that can't be purchased on demand.
I'm sure there's a market for these kinds of services -- medical imaging, for example, though the network costs would be high -- but something to bet the Big Blue (computing) Farm on? I just don't see it. *shrug*
Viable option... (Score:2)
A lot of people here are pooh-poohing this as the "time-share" computing that was around back in the day, saying we've moved away from that concept. I think it could certainly be a viable option for companies that want more computing power but are also looking to cut costs.
Also, consider that the companies making use of this would never have to upgrade their own clusters. I constantly see newer clusters being planned by companies and governmental agencies. It's always more processors, more MHz per processor and more nodes per cluster. Why not offload all of this onto a company (IBM in this case) that can commit the resources (both in hardware and in personnel intimately familiar with that hardware) necessary to maintain and grow ever larger, more powerful clusters?
IMHO, it seems like a great idea. It will give far more companies access to "supercomputers" than ever before, and at significant savings.
It seems that once again IBM is being a very forward-thinking company, and will probably end up making a pile of cash because of a little foresight and some guts to act on it.
Computing as a utility - will it be regulated? (Score:4, Insightful)
1. Transferring product from generator (IBM supercomputer) to location. If you've just used a month of supercomputer time to model DNA folding, how will IBM transfer that data back to you? What if the computation outruns the transmission rate? [Modem vs. DSL vs. T1 line -- rough arithmetic below.]
2. Dependency - you rely upon natural gas and electricity to be there, and yes, they go down, but can IBM guarantee their utility won't have worse problems - especially if it's Windows-run and goes down once a week, cutting into your bought utility time?
3. Regulation. Most utilities are regulated, and those that were deregulated have not always worked out for the consumer. Let's say Company A gets rid of its expensive infrastructure for computing resources and uses IBM's utility. What if IBM becomes the only utility and charges way more than it should - there's no competition, so Company A can't shop around. Along this same vein, if Company A is smart enough, they'll never enter into a utility agreement with IBM if they can generate their own computing cycles and be sure that they'll always be there, versus putting all their eggs in one basket.
IBM's idea may have merit, but anytime someone throws out the idea of a new utility, that suggests the resource they're selling is mainstream and essential, and therefore treated as a commodity. Those commodities are regulated and made reliable so that they never go down. I can't see supercomputing cycles as being a commodity, or for that matter, something I (or any company) needs to buy on a metered basis.
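Rough arithmetic for point 1 (the result size is invented; line rates are nominal and ignore protocol overhead):

    # Time to pull a hypothetical 50 GB result set back from the utility.
    RESULT_GB = 50
    bits = RESULT_GB * 8e9
    for name, bps in (("56k modem", 56e3),
                      ("DSL (1.5 Mbps)", 1.5e6),
                      ("T1 (1.544 Mbps)", 1.544e6)):
        days = bits / bps / 86400.0
        print("%-16s ~%5.1f days" % (name, days))
    # ~83 days by modem, ~3 days even over a T1: if your month of CPU time
    # produces bulk output, the wire, not the meter, is the bottleneck.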
Re:Computing as a utility - will it be regulated? (Score:5, Informative)
Well, all you would need at your location would be the equivalent of an X terminal. Why would you need more than visualization of the data at your location? If it is a metered utility, you should be able to access it from anywhere, negating the need for data transfer from their cluster of supercomputers.
I doubt that they would use a system that goes down. Often supercomputers are clustered and use a common set of storage that would allow migration of users and processes between systems. There should be minimal downtime in the final system -- the equivalent of current utilities. Also, they would likely only go down when your other utilities went out (lines cut, etc.).
What if IBM becomes the only utility and charges way more than it should - there's no competition so Company A can't shop around. Along this same vein, if Company A is smart enough, they'll never enter into a utility agreement with IBM if they can generate their own computing cycles and be sure that they'll always be there, versus putting all their eggs in one basket.
If IBM did this and was successful, I'd feel sure that Sun, MS, Intel, and maybe others (does Tera still exist?) would start their own shops as competition. And companies are already putting all their eggs in one basket; it's just that the basket is their own IT department.
I can't see supercomputing cycles as being something that is commodity, or for that matter, something I (or any company) needs to buy on a metered basis.
So, say as your desktop you have access to this system, and as a standard desktop user you use only 20 CPU-minutes per month. Imagine a company with 10k users who each use only 20 CPU-minutes per month. I'd think it would make sense in that case. Similar systems already exist - they're called ASPs (Application Service Providers) - and they work on a similar concept.
The DOD and others already sell supercomputer CPU-hours. I had a friend who had ~100,000 CPU-hours available to him on ASCI Red (for rocket and combustion fluid-dynamics simulations). IBM is just formalizing it a bit more.
Moore's Law and this idea... (Score:4, Interesting)
More bad news: typical supercomputer code is usually bummed (at least a little) for the particular hardware it runs on, to get the last factor of two or so for performance. If you rent crunchons, can you afford to rent generic crunchons and give up that last bit of optimization?
Good news: if you can get around the bad news above, this could turn supercomputing into a lease-vs-buy situation, and when the computer you buy essentially depreciates 50% every generation, leasing might be a win.
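A sketch of that lease-vs-buy arithmetic (the purchase price and lease rate are invented; the ~50%-per-generation depreciation is from above):

    # Net cost of buying (price minus resale value) vs. cumulative leasing,
    # with value halving each hardware generation. All dollar figures invented.
    PRICE = 10e6              # $ to buy the big box (assumed)
    LEASE_PER_GEN = 3e6       # $ to lease it for one generation (assumed)

    value = PRICE
    for gen in range(1, 4):
        value *= 0.5
        print("Gen %d: buying has cost $%.2fM net, leasing $%.2fM"
              % (gen, (PRICE - value) / 1e6, gen * LEASE_PER_GEN / 1e6))
    # Gen 1: $5.00M vs $3.00M; Gen 2: $7.50M vs $6.00M; Gen 3: $8.75M vs
    # $9.00M -- the steep early depreciation is exactly where leasing wins.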
In 100 years... (Score:4, Funny)
--Prof. Frink
Interesting idea.... (Score:3, Funny)
(*nix bigots and such note: Yes, I know, your defined user space on foobox is restricted unless you've chmod'ed your ~ to 777 (which is of course bombastically stupid), but do keep in mind that a typical home luser is running Windows, and accordingly sees their computer as their ersatz "user space".)
Can we sell cycles back to the grid? (Score:3, Interesting)
If I have a nice Linux cluster that meets the "standards" for the grid (whatever they are), can I sell cycles back to the provider? Or is it just one way, in which case I'm trapped into doing whatever the grid wants me to do.
More than a doomed PCjr. (Score:3, Interesting)
Why would anyone be tempted to return to this model? How many sub-$500 or even sub-$200 computers will it take for IBM to realize computing power isn't rare or expensive?
And if a company or organization needs incredibly massive computing power, it can turn to companies like this [uniteddevices.com] to provide the solution, again using cheap generic PCs.
To sum it all up, this is stupid, and now Palmisano looks like another idiotic buzzword-chanting CEO. This will be yet another blow to IBM, and it will soon (IMHO) join the growing stable of companies (Compaq, HP and the "new" Cisco) that have been screwed by clueless, greedy CEOs. Somebody needs to cancel his subscription to Business 2.0.
Hello corporate spynet (Score:3, Flamebait)
During the Cold War, the CIA used IBM to provide cover for operatives. In exchange, IBM got access to intelligence relating to its competition.
Fast forward to today. Dozens of high quality encryption schemes foil the CIA's spying. What to do? Their friends at IBM can help again: create a new paradigm that leaves IBM in charge of all corporate data security.
</rant>
Could work (Score:4, Interesting)
People do not want "shared computing"; they do not want to put their data on "borrowed computers", nor do they want to do everything with "rented computing power" or "rented space". IBM should realize that most people will still want their applications and most of their processes and files on their own computers.
What IBM should be offering -- and what it seems like they're offering -- is loaning supercomputer time to people (for a price) for specific tasks which they can't accomplish in a reasonable amount of time on their own computers. This is a reasonable and useful idea; however, it is hardly new at all. At the University of Rochester, there are shared computers within biology labs, where people dump some heavy-duty computing operations and pick them up later. The same went on during the '60s, when computers were so expensive that no one could afford their own. In short, this is hardly new or revolutionary, though IBM may be putting a new twist on it by trying to use it as a business model.
It makes sense. After all, most people don't need supercomputing power for the majority of their tasks; why spend money on a supercomputer when it'll be unutilized 90% of the time? But what IBM can do is maximize supercomputer utilization by selling a percentage of its resources to various customers; these customers save money because they pay on a per-need basis.
For example, I often run Bayesian phylogenies. Recently, I ran a Bayesian phylogeny with about 50 taxa in it. This took 7 days on a dual-G4 (2x 800MHz) Mac, with all of the computer's power focused just on that. The time required to complete the trees increases at a steep rate as one adds more taxa. If I were doing 200 taxa, it would have taken two or three months.
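Extrapolating from that one data point (and assuming, purely for illustration, that run time grows with the square of the taxon count, which lands near my two-to-three-month figure):

    # Scale the 50-taxa / 7-day observation, assuming ~quadratic growth in
    # run time with taxon count (an illustrative assumption, not a fit).
    BASE_TAXA, BASE_DAYS = 50, 7.0
    for taxa in (50, 100, 200):
        days = BASE_DAYS * (taxa / float(BASE_TAXA)) ** 2
        print("%3d taxa: ~%3.0f days on the same dual-G4" % (taxa, days))
    # 7 -> 28 -> 112 days: exactly the kind of occasional, steeply growing
    # job where renting a big parallel box beats owning one.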
So this can offer a great service to many people.
Grid Ranches, Disk Farms and TCO Gullies (Score:4, Interesting)
Posters who are focusing on the U-word (utility) need to see that IBM doesn't want Joe Citizen using this. The profit levels for dealing with the general public just aren't there for IBM - Big Blue is all about the corporate or government cash.
In a word: cost savings for premier customers, i.e. the kind of people who will run up huge MIPS but not on a constant daily basis. Scenarios that come to mind beyond the car-engineering one are banks/companies/bureaucracies with monster end-of-month/end-of-year processing but reduced needs otherwise, websites with a lower average use threshold except when the Super Bowl commercial airs, and disaster recovery (keep your disks mirrored offsite; if a disaster occurs, call IBM, get your virtual mainframe up and switch to the offsite array).
With IBM's sysplexing and workload algorithms in play, tying in 'outside' 'puters will waste few resources.
I suspect that IBM's ultimate goal is disk farms on user sites and CPUs at IBM's Grid Ranch. With the CPUs under IBM's care they can really drop the TCO for the machines themselves.
That reminds me: the real cost of operating mainframes nowadays, beyond the staff, is the third-party licenses for the support software - security, tape libraries, etc. That's because traditionally the software vendors license by the MIPS on the machine, not the MIPS actually used in your LPAR (logical partition, a carved-out virtual machine on a mainframe). Whenever you increase the MIPS of your machine, the third-party vendors will bleed you dry (which ultimately loses IBM customers as they go to cheaper alternatives).
IBM is beating on these vendors by competing in their arena to drive TCO down, and is also trying to get them to meter their actual usage under z/OS. So this grid thing is just a logical extension of what they are trying to do to not get run over by Moore's Law and the cost of running The Big Box.
Multics? (Score:4, Interesting)
Fixed costs vs. Variable Costs (Score:3, Interesting)
From a system administrator's point of view:
I work in the data processing industry, and we have a 12-way NUMA box as our mainframe. Earlier this year we moved from a 16-way SE-70 that we'd had for seven years, and our software has already expanded to max out the capacity of the new NUMA unit - to the point where we've upgraded it several times.
We'd continue to expand if the perception was that we have unlimited resources.
From a business point of view:
Even if we could do our dp activities on someone else's mainframe, we would still have our system administration costs for systems that can't be moved out of the building, so our costs don't go down. We would also have to maintain in-house development machines, because we wouldn't want to pay someone else for the endless compiles that we would need while developing new software.
Additionally, we already have a huge, unamortized investment in fixed dp assets.
Currently, our systems process for 24 hours per day to meet our needs. If we were to do these same activities on a metered system, we would probably not have to process as long, but if metered costs run over $5,000 per month, it's not worth it, especially since there are no cost savings beyond the amortization of our main hardware.
Corporations buy unmetered data lines because they don't want to have to deal with variable (and, in the case of a slashdotting, extremely high and extremely unstable) costs. Selling a service with a variable cost structure is good for a company, but buying one is bad. The only time buying becomes good is when the company can't provide the service for itself, as with electrical power and telecom. But it's easy to buy/build your own mainframe-class computer for less than $10,000.
Resell your excess capacity back to the grid (Score:4, Insightful)
This is not just about paying the meter. It is about utilizing all the wasted CPU cycles.
Grid Computing != Timeshare (although related) (Score:4, Insightful)
IBM is working on the commercialization of Distributed Computing (henceforth, DC). This effort has been around for a while (in a related area called Grid Computing, which some people use interchangeably with DC) in the form of the Globus [globus.org] project, amongst others.
The concept behind DC is essentially a next-gen timeshare -- a distributed timeshare with an abstraction layer, if you will. Unlike traditional timeshare, you don't specify where your processing will occur. Unlike existing projects (like folding@home and distributed.net), DC doesn't require that you have a parallel, segmentable computing problem.
Let's say (in your best Police Squad [imdb.com] voice) I'm a mechanical engineer who's designing a car engine with a few thousand parts. I want to run some simulations on my model to inspect heat flows, vibration, whatever. Car companies (or the little guy with a copy of CATIA and a great idea) don't necessarily have dedicated computing resources to run my simulation. So, until now, I had to band together with a bunch of other mechanical engineers with jobs similar to mine and try to justify a giant simulation node. Or, I might convince management to outsource the computation, requiring a bunch of red tape, NDAs, contracts, negotiation, etc.
Now consider IBM, one of the largest commercial web hosts. IBM maintains giant server farms to support these services. Consider the amount of excess processing capacity sitting in those farms, because (a) a lot of servers are just spitting out static pages and (b) extra capacity has to be provisioned to cover peak loading for special events.
Expand this idea to include thousands of people who need computation power for discrete, isolated projects and thousands of companies with excess computational capacity. The consumers don't care precisely where or when their computations get completed, they only care that they get done in a "reasonable" amount of time. An intermediary, which it looks like IBM wants to be, can accept jobs from them, break them into as many pieces as they can, farm them out to whichever of their suppliers has excess capacity at any particular moment, combine the results, and return them to the customer.
Even more, IBM can charge more if you want a high priority on your computation or if your job is not symmetric and must be run on fewer nodes.
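A toy sketch of the broker role described above (entirely hypothetical names and numbers; nothing here is IBM's actual design, just the shape of the idea):

    # Jobs enter a priority queue; the broker farms each one out to the
    # first supplier with enough spare capacity. Hypothetical throughout.
    import heapq

    spare = {"farm-a": 400, "farm-b": 150}        # idle CPU-hours on offer

    jobs = []                                     # (priority, name, cpu_hours)
    heapq.heappush(jobs, (1, "crash-sim", 300))   # priority 1 = paid extra
    heapq.heappush(jobs, (2, "protein-fold", 200))
    heapq.heappush(jobs, (2, "render-run", 120))

    while jobs:
        prio, name, need = heapq.heappop(jobs)    # lowest number pops first
        for farm in spare:
            if spare[farm] >= need:
                spare[farm] -= need
                print("%-12s (prio %d) -> %s" % (name, prio, farm))
                break
        else:
            print("%-12s deferred: nobody has %d CPU-hours spare" % (name, need))

Scale the supplier dict up to thousands of farms and the queue to thousands of jobs, and that's the intermediary business in miniature.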
Actually, if you think about it, IBM is hurting their server sales by advancing this project. Right now, they sell a lot of excess capacity to companies to cover their peak loading. If companies can dynamically purchase exactly the amount of processing they need, that's money IBM's leaving on the table. Now, companies with high-availability requirements will still purchase their own systems with enough extra capacity to cover their own needs. But when they're not using that capacity, they'll sell it.
I think IBM saw that the train was leaving the station. They know this technology is coming. And they see that the chance to be the intermediary in this market is worth more than the money they'll lose in hardware sales. And, they know if they don't, someone else will.
Re:Microsoft's Plan for Palladium? (Score:3, Funny)
Soko
Re:Microsoft's Plan for Palladium? (Score:2)
> set up a "trusted" system and then use your
> Passport account to charge you by the bit.
You mean like this "trusted" system?
http://research.microsoft.com/research/sn/Mille
Yep, Microsoft already has their Millennium planned out. Of course, Godzilla already has their destruction planned out.
"At this moment, it has control of systems all over the world.
And...we can't do a damn thing to stop it."
Miyasaka, "Godzilla 2000 Millennium" (Japanese version)
Godzilla's 48th Birthday will be this Sunday.
Re:IBM: Waah! People don't buy Timesharing anymore (Score:3, Insightful)
Re:IBM: Waah! People don't buy Timesharing anymore (Score:3, Informative)
Our article today sounds like batch:
"computing power of a supercomputer for a short period" although they do go on to say "Other services could be delivered in much the same way".