Grid Computing and IBM

cozimek writes: "I just read this article from the NY Times that discusses a plan by IBM to leverage its support of the Linux platform to build grid computing. IBM has already won grid projects for supercomputing in England and the Netherlands, and now seems ready to take on the Internet. Of course, the article says it could be many years before we see any fruits of this bounty." This has been submitted many times, so we're posting it. But somehow I resent the fact that it's just a vaporous press release generating this hype, trading on a well-known idea that many are already working on and that was forecast many, many years ago.
  • by Anonymous Coward
    ibm vs. the internet. yeah right. i'll take the internet, exactly 6 seconds into the first round, by tko.

    the internet is like that one fat kid in elementary school who could win in a 'vs. all' wrestling match.
  • by Anonymous Coward
    Does this mean that I can plug into the grid and charge IBM for computing power similar to people who generate power and charge the electric company for the surplus?
  • I thought the exact same thoughts.

    I miss my old Grid. Man, that was a sexy machine. Red plasma screen, locked down tight. Nice, nice box.

    I'd love to have one of those now. Wish I'd never junked my old Gridpad.
  • Sadly, that's the first thing I thought of... I was thinking, "Gee, did IBM have something to do with the demise of GRID? Were the GRIDs in competition with the Convertible for 'behemoth of the year' when IBM stifled the competition?"

    Alright... admit it... who here pines for the days of plasma screens? :)

  • What on Earth makes you think IBM is in any way, shape, or form less evil or powerful than Microsoft?

    It's only the still-standing anti-trust penalties holding them back.

    If you're looking for a White Knight to save you from the dreaded Microsoft, you'd better look elsewhere.

  • Grid computing was hyped back in the 1960s (Multics was targeted at that, IIRC). It never happened because of Moore's Law: cheap processors have made it more cost-effective in most cases to buy your own CPU than to lease it from a grid.

    A science fiction novel I read recently (Permutation City by Greg Egan), however, reminded me that this may eventually change, if and when Moore's Law stops working.

    If compute power hits a stable plateau in 10, 20, 100 years, whatever, then the cost of compute power will also roughly become a constant number of dollars per clock cycle (or peta-clock cycle).

    In that case, as Egan presents it, compute power from a global grid may indeed be the only way to get larger amounts of compute power than your local processor can give you, and therefore, as a commodity, it may go to the highest bidder at any given moment.

    (Hopefully not so badly as with California's power grid bidding, but we'll see.)

    P.S. The advent of nanotechnology computers, quantum computers, purely optical computing, etc., wouldn't dispel the above scenario; it would just delay it. It's not clear that even Vinge's Singularity would literally prevent Moore's law from going away. (I don't believe that the Singularity will do away with the laws of physics.)

  • I assume you mean 'megaFLOPS-hour', in other words a million floating-point operations per second, sustained for one hour. That's a silly unit (as is the 'kilowatt-hour' used for electricity). Rather than dividing by seconds and then multiplying by hours, it would be better to measure per gigaFLOP (3.6 gigaFLOP = 1 megaFLOPS-hour).
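    A quick sanity check of that conversion, sketched in Python (the unit handling is purely illustrative):

        mflops = 1e6              # 1 megaFLOPS = 10^6 floating-point ops per second
        seconds_per_hour = 3600
        total_ops = mflops * seconds_per_hour
        print(total_ops / 1e9)    # 3.6 -- so 1 megaFLOPS-hour = 3.6 gigaFLOP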
  • by rho ( 6063 ) on Thursday August 02, 2001 @06:54AM (#2176710) Journal

    I'm not sure I understand -- who provides this "grid"? Are they built and maintained by IBM around the world? I don't think IBM would be thrilled to discover that Compaq is using the IBM grid to advance Compaq's bottom line. I like IBM, don't get me wrong -- but I doubt they're such humanitarians.

    Is the "grid" made up of PCs on the Internet? First, most of those PCs are on dial-up connections, making things very complicated (and the PCs themselves not very useful). Second, who compensates the people who own the PCs? Is it strictly voluntary, like SETI@home? If so, how will anti-nuke activists prevent Los Alamos from running simulation calculations on their PowerMac?

    I think the idea is fantastic, but I'd like a few more details...

  • I never used one of the pen computers, but I used one of their laptops. They had the neatest pointing device I've seen -- a sliding rod mounted just behind the space bar. Move it L/R, roll it F/B for up/down, just the most intuitive one I've ever used. I guess somebody (Tandy?) must own the patent on it, because I've never seen another laptop with one. Too bad, it was really great.
  • by FreeUser ( 11483 ) on Thursday August 02, 2001 @08:50AM (#2176712)
    The intro posted is not correct. The article says that Grid software infrastructure is being developed on the "open source model"; it does not say that it incorporates Linux (although I'm sure Linux will be a major OS used with it). MS has also contributed $1 million to the effort, and hopes to tie in .NET

    The intro is absolutely correct, which if you'd done any digging whatsoever *cough*google*cough* you would have found for yourself:


    It will be based on Globus and Linux software which uses the internet as an underlying communication system. [Atkinson, 2001 [bbc.co.uk]]


    It really can't be stated much more clearly than that.
    --
  • "Give me a marketing budget large enough, and I will hype to the world!"

    Or something like that.
  • For the average user who just wants to keep track of Christmas lists, play some Solitaire and surf the web, local processors are sufficient; even for most scientific calculations, a desktop system is adequate. The folks who would be interested in massive computing power, though, are people with Big Computational Challenges. The difference between now and the 1960s (or '70s or '80s or even '90s) is that many more people are starting to dream really big, coming up with potential calculations to run that they wouldn't have even considered in the past. Simulations of turbulence, climate modelling, risk analyses, astrophysics, genomics (and its horrendously bigger and more complex brother, proteomics)... where people used to say, "Forget it, that would take years to calculate, don't bother to even fully define the problem as a computation to be solved," now they say, "Don't worry about how complex the calculation is; the computing power will be available sooner or later." Of course, nobody wants to wait 10 years for a Pentium XVI when a cluster of Pentium IIIs could do the job now.

    It seems to me that for desktop use, the current crop of processors/systems is adequate for 90% of the user base (the e-mail/Solitaire/web browsers). The power users probably won't ever be satisfied - I'd expect to see a lot more work going into developing methods of reformulating computations to best take advantage of parallel processing clusters, rather than building UberHyperPetaFLOP single processors.
  • MegaFLOP-hours. It would work almost exactly like the power grid.

    -B
  • The article mentions that IBM is setting up a grid of nine research centers across Britain. I immediately thought of another network that traces its origins to a handful of research centers. The internet grew beyond the bounds of what its founders ever imagined. Maybe this will too. As for grid-connected PCs, I would be willing to bet that by the time any grid is ready to accept PCs, dial-up connections will be a fuzzy memory.

    -B
  • Did anyone else out there think for a moment, "Oh, no! IBM's going to be making those stupid pen computers that GRID tried to market!" ?
  • I resent the fact that Michael thinks his stupid opinions matter. A research project cannot be vaporware, because there is no targeted product. It is a big step when major forces start actively investing in something as long-term as this. There is a huge difference between cooking up blue-sky ideas and actually trying to see if the ideas can be made to work. And it is very fortunate that a firm like IBM, with anti-MS tendencies, is spearheading some of the research. The computer science at MS is too primitive to tackle anything this complex, but they are big enough to steal it from smaller developers and claim that it is theirs. With IBM and the involvement of various governments, I don't see MS as quite being able to do this. If they did do it, MS would more or less rule the world, and Linux, FreeBSD, Plan9, OS/2, etc. would be dead.
  • Reading the BBC article, I get a slight impression that this will be used by the big corporations to their advantage. Imagine: computer hardware sales drop and everyone uses dumb terminals connected to the 'grid'. Once all the leftover computers have become obsolete or broken, you will have no choice but to use the grid (unless you can build your own chip-fabricating plant). Now they can sell you DVDs and music, and you can't copy something if you don't have a computer (they will change all sorts of specs and standards so you can't use your old system without a lot of workarounds).
  • The intro posted is not correct. The article says that Grid software infrastructure is being developed on the "open source model"; it does not say that it incorporates Linux (although I'm sure Linux will be a major OS used with it). MS has also contributed $1 million to the effort, and hopes to tie in .NET, and Sun already has a type of Grid deployed under Solaris for corporate computer networks.

  • Don't put a burnt CD in there from Europe. It could disrupt the entire Grid! Then the flux capacitor could blow!
  • DP! ^_^

    [Happosai]
  • I've heard that RAID does it by storing the same data on two different disks and verifying the results.

    Depends which type of RAID (Redundant Array of Independent Disks) [finitesystems.com] you opt for.

    Doing something like this just cuts your disk space in half...

    Wrong! See the above link for RAID 5.
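    The trade-off is easy to see in miniature. A toy Python sketch (not how a real controller works): mirroring stores every block twice, while RAID 5 keeps one XOR parity block per stripe, so any single lost disk is recoverable at a capacity cost of 1/N rather than 1/2:

        data = [0b1011, 0b0110, 0b1100]        # blocks on three data disks
        parity = 0
        for block in data:
            parity ^= block                    # parity disk holds the XOR of all blocks

        lost = data[1]                         # pretend disk 1 died
        recovered = parity ^ data[0] ^ data[2] # XOR the survivors with the parity
        assert recovered == lost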

    ------------------------
  • by jon_c ( 100593 )
    So how is this different from any distributed processing network? The problem with distributed processing is that it's only good for tasks that:

    A. Are large/complicated enough to warrant such a network: nuclear simulation, weather prediction, chess, cryptography. Most of these hold little to no interest for the common user.

    B. Can be distributed. SETI and cryptographic searches work because a central server can farm out sections of work to different clients (see the sketch below). For most tasks this is not practical, especially real-time problems.

    C. Don't need a big pipe. A rendering farm, for example, or maybe MP3 compression would be nice; unfortunately the data is too large to make it worthwhile.
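    Point B in outline (a minimal sketch; Pool stands in for a farm of grid clients, and the work itself is a placeholder):

        from multiprocessing import Pool

        def process_unit(unit):
            # placeholder for the real computation (FFTs for SETI, key search, ...)
            return sum(x * x for x in unit)

        if __name__ == "__main__":
            units = [range(i * 1000, (i + 1) * 1000) for i in range(100)]
            with Pool() as pool:                          # the "clients"
                results = pool.map(process_unit, units)   # farm out, gather back
            print(sum(results))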

    -Jon

  • Some problems are intrinsically checkable. For instance, if you do a distributed factoring search and report factors, it is easy to check whether they really are a factorization of the number in question.

    Other problems need redundancy. This isn't just to guard against malice, but to protect against hardware failure.
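    For the factoring case the check is nearly free - verifying a reported answer is vastly cheaper than finding it. A minimal sketch:

        def verify_factorization(n, factors):
            # Cheap server-side check of a client's reported factorization.
            product = 1
            for f in factors:
                product *= f
            return product == n and all(f > 1 for f in factors)

        assert verify_factorization(15, [3, 5])
        assert not verify_factorization(15, [2, 7])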
  • If so, how will anti-nuke activists prevent Los Alamos from running simulation calculations on their PowerMac?

    Shhhh, don't tell anyone! I've been using my wormed version of the SETI screen saver for particle transport calculations for years.

  • Sure it's old. Pervasive computing? IBM might have had this in mind when it partnered with MS and its (Q)DOS. After all, the Quick and Dirty Operating System and 8-bit computers were supposed to provide smarter terminals. Combine these smarter terminals with the then-growing DARPA net and you can see that someone must have had this in mind twenty years ago.

    Since then, MS has revealed a far more oppressive vision for "grid" computing. Where did you want to go ten years ago?

  • Computing as a utility doesn't necessarily mean raw computing power a la SETI@home. Rather, it can mean hosted applications offered in the same pricing model as a utility company, i.e., pay for what you use. Check out this alphaWorks site as an example [ibm.com].
  • What's the 51st state?

    Besides, everyone knows that Britain, England and Scotland are all parts of London. Whilst Wales and Northern Ireland are far too confusing to be worth mentioning. :)
  • Haven't we already seen enough of the dangers of grid computing [tronmovie.com]?

    I think people should leave this stuff well enough alone.

    --SC

  • Funny ... all the anti-nuke activists I know have PowerMacs, too.
  • "taking advantage of a well-known idea that many are already working on and was forecast many, many years ago."

    Perhaps it should be under the `patents` section then? :)
  • and Denmark is the capital of Sweden!
  • by dpilot ( 134227 ) on Thursday August 02, 2001 @07:41AM (#2176734) Homepage Journal
    > If compute power hits a stable plateau in 10, 20, 100 years, whatever, then the cost of compute power will also roughly become a constant number
    >of dollars per clock cycle (or peta-clock cycle).

    IMHO, we're very close to this point, if not there already. But in a different way. Consider this an economic limit, not a technological one. We can keep shrinking chips, but it keeps getting more and more expensive to do so.

    The first hint came with the sub-$1000 computer. Prior to that, a top-end PC was about $2000-$3000, with a lower priced PC about $1500-$2000. We kept buying all the power we could afford. But with the sub-$1000 computer a class of users began buying all the power they NEEDED, and let the cost ride down. More expensive PCs became the tools of gamers and technical use, and Microsoft was the only force pushing basic compute power upward.

    I'd like to upgrade to a 1.5 GHz Palomino this fall, about my normal schedule, but times are tight, so I'm probably going to pass for another year. (Maybe a Hammer, then!) And looking at it seriously, my K6-3 does just about everything I ask of it. Star Trek: Elite Force runs great; RealMyst was lackluster, though.
  • Sun's Grid Engine doesn't seem nearly as powerful as the Globus toolkit [globus.org] used by the Grid.
  • by Maran ( 151221 ) on Thursday August 02, 2001 @06:37AM (#2176736)
    Here's a link to the BBC Article [bbc.co.uk]. Maran
  • It's not clear that even Vinge's Singularity would literally prevent Moore's law from going away. (I don't believe that the Singularity will do away with the laws of physics.)

    Are you implying that Moore's Law is dictated by the laws of physics? Moore's "law" is not a law--not in the same sense as the "laws" of physics (e.g., gravity). Moore merely predicted the exponential growth of transistor density [intel.com] in ICs. This prediction is more of a sociological observation--a technology-industry truism--and a self-fulfilling prophecy than it is a physical law.

    Microprocessors with tens of millions of transistors were no less physically possible in 1965 (according to the unchanging laws of nature) than they are today. Man simply had not developed the expertise, tools, and vision to make them.

    Moore's law may cease to hold true someday because of some physical limitation (though this may depend, in part, on how you choose to define a transistor--is it a silicon FET, an organic structure, or anything which functions conceptually as a switch?). However, there's no reason it couldn't end today if we simply chose to stop developing denser ICs.

  • DAMN them for... for... for... talking about doing something!!

    It's so... so... misleading!

  • Moore's law (actually it is more of a prediction) dealt with semiconductor densities doubling every 18 months. This implies a drop in cost and an increase in performance. Of course, you have to wait for it to happen.

    For those who have an IMMEDIATE need for high performance computing, parallel systems are the answer. Simply put, you can have access to high performance computing now rather than wait for a single machine to become cheap enough.

    But don't forget that Moore's law made this all possible. Smaller components also made cheap, high-speed communications commonplace. This can only mean that the cost of setting up such a grid will fall over time, not rise.

    Look at AOL, for example. They provide a distributed service to millions of people. This is made possible because communications and computing power are cheaper than they have ever been. The longer AOL is in existence, the more services they offer. This happens without an appreciable increase in price.

    The same thing will happen with grid computing. It may be a specialty item NOW, but in the future, it will become a CHEAP commodity, not an expensive one.
  • The gist that I got was that the internet served as a backbone for a VPN. In order to connect to the grid you would probably need to log on (via the internet again), at which point IBM starts metering your disk usage and CPU cycles. That's just my impression, tho; I got no facts.


  • Is this slashdot's first DataGrid related posting?

    More info about the DataGrid...

  • It's at http://www.msnbc.com/news/608152.asp?0dm=C12MT [msnbc.com]. This appears to be just an extension of the secondary purpose of the internet - distributed research. But instead of being able to connect to a supercomputer across the country, it allows a researcher to connect to ALL the supercomputers across the country . . .

    Even tastier, though, how many PCs in university labs are wasting cycles (or using them on SETI@home or dnet)? Wonder how likely it would be to get a client on those and use it like another big computer?

  • Thanks for the re-post!

    This way I don't have to waste precious moments of my life on the irritating task of coming up with fake names and info with which to sign on to the NY Times reader's list.

    (Screw them and their opt-in database. They can data-mine like everybody else.)

    -Fantastic Lad; The most irritating Lad of them all!

  • Wouldn't that be a hoot? If there is a grid and other people use your processing power, however it's distributed, then sure, charge as much as possible for it. Maybe I can charge per use of GCC, or have a sliding scale for apps and scripts that take less processing power. Of course, we in California could get a grid barter going, exchanging processing power for the oh-so-expensive electricity.

    Do we need to pass measures to get a local grid?

  • From what I understand, it is not like everybody can tap into "the grid"; rather, certain organizations can link their computers to form their own grid.

    You can't hurt me with the things that you do,
    I pick up dandelions and I give them to you.
  • I guess I am kind of confused. Wouldn't this require everyone on the grid to essentially have fiber to the curb? A 56k modem is slow enough without also being used to support Taco's Diablo binges... which brings me to my second question. Sure, this would be a wonderful thing for SETI crunching and other scientific endeavours, but what about:
    1) Somebody eating up a whole bunch of processing time to brute-force cryptographic codes, and
    2) Somebody eating up a whole bunch of processing time trying desperately to frag that very last guy in Quake VII?
    Will the processing power be so immense that it won't fill up?
  • It's time to stop playing those mind games. Every time I hear or read the word "leverage," I have to think "hype."

  • So will this create a clock-cycle futures market? Seems like an interesting idea: betting on the future price of computation in a direct way. Of course, there is the issue of measurement. Perhaps it would be sold as 100 CPU-seconds on an IBM model FOO 6000 with given specs....
  • For the animation industry. It's called an on-demand rendering farm, and we're a bit cheaper than Kinko's for CPU time. We charge for CPU resources used; that's it. http://www.netrendered.com [netrendered.com]

    Oh, to answer the economic argument, unless you're doing a LOT of computing, it's cheaper to lease the time than to buy. I can give you ~30 hours/month of system time for less than you can buy _one_ equivalent machine. And my way (a) it's tax-deductible, and (b) you only pay as you go - not all up front.
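    The break-even arithmetic, with purely illustrative numbers (not our actual rates):

        machine_cost = 3000.0    # dollars for one equivalent render node (assumed)
        lease_rate = 2.0         # dollars per CPU-hour (assumed)
        hours_per_month = 30

        monthly_lease = lease_rate * hours_per_month    # $60/month
        breakeven_months = machine_cost / monthly_lease
        print(breakeven_months)  # 50 months before buying would have paid off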

  • by Arethan ( 223197 ) on Thursday August 02, 2001 @07:01AM (#2176750) Journal
    Enter Sun Grid Engine [sun.com]

    And yep, it's free!

  • I don't know a lot about what is going on in computer science these days, but my sense is that all these "new" concepts (distributed computing, parallel computing, OOP, etc.) have been around since the '70s or so. The reason we haven't ever seen anything like this before, though, is that the hardware is finally powerful enough to put these theories into practice.
  • it's been more cost effective in most cases to buy your own cpu than to lease it from a grid

    It seems like there are cases where this is not true. The one that comes to mind immediately is the researcher who needs to tackle a very large problem but only needs those resources for a short time. The grid would be ideal for that.

    It also seems that another reason earlier distributed computing ideas didn't pan out was communications bottlenecks; e.g., the cost of farming out my compiles to 100 machines rather than running the compiler 100 times on one machine has a lot to do with the time spent sending the bits back and forth. If everyone on the grid (in a few years' time) has a fiber connection and a guaranteed set of resources, then this problem is minimized.
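    A crude way to put numbers on that bottleneck (illustrative figures only): farming a job out only wins when the compute time saved exceeds the time spent moving the data.

        def worth_farming_out(cpu_seconds_saved, job_bytes, bytes_per_second):
            # Remote execution wins only if the compute saved beats transfer time.
            return cpu_seconds_saved > job_bytes / bytes_per_second

        dialup = 56e3 / 8            # ~7 KB/s
        fiber = 100e6 / 8            # ~12.5 MB/s
        job = 10e6                   # 10 MB of data to ship (assumed)

        print(worth_farming_out(60, job, dialup))  # False: ~24 min transfer to save 1 min
        print(worth_farming_out(60, job, fiber))   # True: under a second of transfer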

  • The new grid bugs will be running around IBM now.

    You are standing in an open field west of a white house, with a boarded front door.

  • I agree it's a vanilla corporate release, but it's good news. A lot of people don't even know what grid computing is. This can help spread the word about yet another excellent OSS project [globus.org].

    I had heard of grid computing before, but hadn't read much about it. Google turned up lots of resources this morning - worth the read. The article was right - the software to manage a grid will be super complex, and the security implications are daunting.

  • There's also a BBC News article [bbc.co.uk] on this, and it has links to the Grid Forum and Globus.

  • Somebody eating up a whole bunch of processing time to brute force cyptographic codes

    ...

    Will the processing power be so immense that it won't fill up?

    Yes. Even the most difficult of tasks won't use up *all* the capacity. And I'm sure the system will have some type of safeguards against extreme usage, such as attempts to harm it. We're talking about 500-1000 MHz * thousands, and lots of memory too.
  • by hyrdra ( 260687 ) on Thursday August 02, 2001 @07:23AM (#2176757) Homepage Journal
    I'm not sure I understand -- who provides this "grid"? Are they built and maintained by IBM around the world? I don't think IBM would be thrilled to discover that Compaq is using the IBM grid to advance Compaq's bottom line. I like IBM, don't get me wrong -- but I doubt they're such humanitarians.

    Think Internet. Right now, we're paying for bandwidth, because the Internet is largely an information-only medium. However, in the future, we will also be able to have a certain amount of processing power, shared by everyone, used by everyone. IBM is just providing the structure (and at first the systems for the demo) to access mass computational resources. Soon, you will be able to access network wide applications which are processed on many machines across the network in a distributed way.

    Right now we have an enormous processing surplus. Most machines sit unused for hours. Check your load averages if you don't believe me. Even a personal desktop used 8+ hours a day will barely break a few percent. Now imagine if we had some infrastructure - which is what IBM is aiming to provide - to harness and unite all this power for general use. We would have an enormous amount of processing power available.

    Is the "grid" made up of PCs on the Internet? First, most of those PCs are on dial-up connections, making things very complicated (and the PCs themselves not very useful). Second, who compensates the people who own the PCs? Is it strictly voluntary, like SETI@home? If so, how will anti-nuke activists prevent Los Alamos from running simulation calculations on their PowerMac?

    Bandwidth will come in time. Even so, imagine having all of AOL's dialup connections available for processing. 56k isn't much, but imagine millions of connections at once. As soon as lots of bandwidth and always-on connections are widespread, this will be much easier. It's an upgrade path, too: we can start now, and as people get faster connections and faster machines, the overall system power will increase.

    As far as compensation goes, this is a public thing. We all use each other's resources, and we all contribute to the available processing resources. The sum of the parts is greater than any one part working alone. It's similar to how Gnutella users each contribute and take, and why it works so well; just translate information into processing power. You can take as much or as little as you want, with most people falling somewhere in between (as is the regular pattern).

    I'm sure there are going to be leeches. But many people will want to share because they realize how the system works. Distributed systems like Gnutella do work (albeit with a few leeches here and there), and this is proof that a processing system will also work.
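    You can put a rough number on that surplus yourself on any Unix box (a sketch; os.getloadavg is available on Linux and the BSDs):

        import os

        load1, load5, load15 = os.getloadavg()   # runnable processes, averaged
        cores = os.cpu_count() or 1
        idle = max(0.0, 1 - load15 / cores)
        print("~{:.0%} of this machine's cycles are going unused".format(idle))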
  • Beowulf clusters are easier to imagine, at least amongst the /. faithful.
  • I see your point that, with all these tremendous increases in the computational power of personal computers, there may generally be less incentive to engage in a cooperative global computing grid. However, such a grid (and even the very existence and prevalence of the Internet somewhat attests to this) has its advantages. No matter how powerful a single computer can be, networked computers are very often more powerful than solitary ones. Considering that there will always be demand for greater and greater computing power, I doubt that Moore's Law will deter us from such a global computing grid.

    (Let me just mention also that the so-called "Moore's Law" isn't really a law, but just an observation for the present times and technology.)
  • Oh. So, basically, it's just like Mosix [mosix.org]? Doesn't sound too different to me.

    --
  • As far as compensation, this is a public thing.

    Here's an idea that I've been tossing around. Maybe it will be feasible with a grid installed. What about paying for internet access with clock cycles? I could certainly use perpetual 'free' net access more than I could use a supercomputer on my desk. I wouldn't know what to do with it.

  • If I had a [unit of currency] for every time someone said to me, "Kenya. Isn't that in South Africa?" I'd be reading Slashdot from a Sony Vaio. :)
  • What's to prevent one disgruntled employee at one of the facilities from screwing up the results? I've heard that SETI@home handles it by giving the same work unit to two different users and verifying the results.

    Doing something like this just cuts the "Grid's" speed in half...
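    That scheme in miniature (a toy sketch: issue each unit to two independent clients and accept only agreeing answers; it costs a factor of two, and colluding cheaters could still defeat it):

        import random

        CLIENTS = ["alice", "bob", "carol", "mallory"]

        def compute(unit, client):
            # stand-in for real client-side work; "mallory" returns garbage
            return -1 if client == "mallory" else sum(unit)

        def trusted_result(unit):
            while True:                            # re-issue until two clients agree
                a, b = random.sample(CLIENTS, 2)
                ra, rb = compute(unit, a), compute(unit, b)
                if ra == rb:
                    return ra

        print(trusted_result(range(100)))          # 4950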

  • At last something useful... Link here! [nytimes.com]
  • If you truly resent that this is "vaporous," then why give in? I'm sure you get a lot of crap submitted, much of it repeatedly. Why did you cave in this instance? So you could make a 'witty' observation about how it really wasn't news, let alone something that mattered?

    Honestly, if you feel that strongly about it, don't post it. Have some balls, for cryin' out loud. You're just as bad as the Karma Whores. Anything for attention.


    Carl G. Jung
    --

  • By what would you charge? Gigahertz-hours? :)
  • Yes, no, and kinda. No, because individuals and most researchers can't access the grid at the moment (and in fairness, the ones who can are already the ones with access to the big-iron machines). It's going to be about 3 years before access becomes widespread at universities, probably 5 before government and industry start using it, and 10+ for home users and small businesses.

    How the system works can best be described as virtual organisations, for which the best description is: an organisation created for a specific project, which can last from days to years, with facilities, people and data distributed throughout the world, all of which need access to data that is protected from unauthorised access, and with the ability to request processing from a variety of sources according to their needs by the use of software agents. E.g., tell your agent "I want to do a galaxy simulation requiring X Gflops of processing"; the agent goes away, finds out whose supercomputer is available, agrees a price to process, and then runs, returning the results to the user.

    The difficult part at the moment (and I get to write my PhD thesis on how we solve it 8( ) is how we can authenticate and track millions upon millions of systems with different resources, and then have a billing structure in place so people get compensated accordingly. Of course, the fun part is having different operating systems, different data formats, time zones, certificate authorities, and a host of other problems to deal with, especially billing. This isn't the free flow of information which the internet is *supposed* to be; these are physical assets which cost money, and there is an opportunity cost in using processors in terms of power, support, initial costs, etc.
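    In outline, the agent/broker step described above might look like this (hypothetical names and fields; real brokering also has to handle the authentication and billing problems mentioned):

        providers = [
            {"name": "nikhef-cluster", "gflops": 50,  "price_per_gflop_hr": 0.40},
            {"name": "ibm-grid-node",  "gflops": 200, "price_per_gflop_hr": 0.55},
            {"name": "uni-beowulf",    "gflops": 20,  "price_per_gflop_hr": 0.25},
        ]

        def broker(required_gflops):
            # cheapest provider that can satisfy the request, or None
            fits = [p for p in providers if p["gflops"] >= required_gflops]
            return min(fits, key=lambda p: p["price_per_gflop_hr"], default=None)

        print(broker(40)["name"])   # nikhef-cluster: cheapest node with >= 40 Gflops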
  • Yes, MOSIX [mosix.org] does this, and appropriately calls the machine a process was created on its ``home-node''.

"Money is the root of all money." -- the moving finger

Working...