
To Grid Or Not To Grid? 68

dbgimp writes "In my job at a (large) investment bank I am constantly being pushed to use grid technology. I have many problems with this (not least that our data center network is at best 100 Mb/s and our software is more data-heavy than computation-heavy). A typical batch job consists of around 10,000 trades and takes 10-30 minutes. I would far rather spend the time and money on multi-core machines and on optimizing the software than on the latest fad technology. I am interested to hear from other people in a similar position and, in particular, why they did or did not choose grid software over improving the existing code to leverage better processor technology, and which grid software they chose to use and why."
This discussion has been archived. No new comments can be posted.

  • Clustered Benefit (Score:5, Interesting)

    by eldavojohn ( 898314 ) * <eldavojohn@g[ ] ['mai' in gap]> on Wednesday October 04, 2006 @07:27AM (#16303013) Journal
    Well, I'm not sure what your particular job is, but my current job is developing web services. There are two servers that I use, a clustered one and an unclustered one. I deploy the same projects to both, and occasionally find myself rectifying strange resource allocation problems on the clustered server. There are only two machines in that cluster, so right now it's more a symbol to the consumer that our software is scalable.

    That's right: it seems to me that upper management likes the idea of a clustered system because, if a customer ever asked whether our software would work for 1,000 people, my manager could say, "Sure, just buy more machines for the cluster." And everyone likes that idea. The system might not be able to handle everyone right away, but wait a year or two and CPU cycles will be so cheap we can just buy 30 low-end machines and cluster them to get the job done. Thanks to the standard access interfaces that databases provide, this is a real option.

    I offer only the suggestion that maybe your bosses like the idea of just being able to throw more machines at it. Look at it from a financial perspective, if you tailored the code for multi-core CPUs--something I'm not even sure how to do--you would have to rebuild and maybe recode everything for future generations of machines. I can see why grid computing might sound so enticing to your employer. Look at Google's distributed scheme, hundreds of thousands of cheap machines running a stripped down form of Red Hat--I don't know if that's 'grid' computing but I imagine it's along the same lines.

    It isn't clear to me whether your bank offers a service for trading or you do trades in batches; it seems the latter is true. Now, you mentioned you work at an investment bank, so money probably isn't that big of an issue. Just go to your superior and say, "Look, I need the following," and if he balks, ask him how important these 10,000 transactions a day are to him.

    So, to me, it would seem more intelligent to do the following. Buy new network hardware that handles gigabit Ethernet: the cards, the router, whatever you have; just upgrade it so that your internal network can really throw data around. Maybe look at laying fibre if you don't have it. Then take what money is left over and buy a few more machines. Get a low-end server to act as a proxy that dishes out trade requests to a cluster of machines. Write the software independent of the hardware so that you can always just buy more machines and install your client application on them. At some point your choke point is going to be your database, but if you make it that far you've kind of hit a wall, in my opinion, and the only solution is to juice up the box (with database-specific hardware) that's serving your database.
    • At some point your choke point is going to be your database, but if you make it that far you've kind of hit a wall, in my opinion, and the only solution is to juice up the box (with database-specific hardware) that's serving your database.

      Allow me to correct myself. If you fear this occurring further down the line, you do have another option. Buy multiple database machines and, in your client app, select the connection information for an account based on a lookup table. Then split your datab
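The shard-by-lookup idea in this comment can be sketched in a few lines. This is only an illustration: the shard map, host names, and the `shard_for_account` helper are invented for the example, not taken from any real system.

```python
# Sketch of account-based database sharding via a lookup, as the comment
# suggests. The shard map and connection strings are hypothetical.
SHARD_MAP = {
    0: "db-host-a:5432/trades",   # accounts hashing to bucket 0
    1: "db-host-b:5432/trades",   # accounts hashing to bucket 1
}

def shard_for_account(account_id: int) -> str:
    """Pick a database connection string from a lookup keyed on the account."""
    bucket = account_id % len(SHARD_MAP)
    return SHARD_MAP[bucket]
```

The client app would call `shard_for_account` before opening a connection, so adding capacity later is mostly a matter of growing the map and rebalancing data.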

    • by cofaboy ( 718205 )
      Cluster/grid it. If you use multi-core CPUs you have fewer machines, and if one of those machines goes down you lose major processing power.
      Remember to ask yourself, "How much is MY job worth if those trades DON'T get processed because of something I recommended?"
      Plan your response accordingly.
      Your boss is already looking for a potential scapegoat; make sure you're not it.
      • Re: (Score:3, Informative)

        by dirtyhippie ( 259852 )
        Wow, that stuff about scapegoating is a pretty jaded take on things. Can't say I haven't been there, but still...

        If we're talking about an application that can truly benefit from clustering, and that is built so node failure can be detected and worked around relatively gracefully, this isn't much of a consideration. If you have 10 machines and 1 goes down, you lose 10% of your throughput. If you look at it in terms of cores, twenty 1-core machines are equivalent to ten 2-core machines, so your downtime per core essentially
  • I would far rather spend the time and money on multi-core machines and optimizing the software than on the latest fad technology
    I get the impression from the poster that it's not up to him to choose how he spends his time.

    Yet, he says to us: my boss asks me, but I don't feel like it. Well, tough shit, brother.
    • by kjart ( 941720 )

      I get the impression from the poster that it's not up to him to choose how he spends his time.

      Seems more likely to me that he is looking for a case with which to justify selling a non-grid solution to his boss. He may not be making the final choice, but it seems quite likely that he would have at least some influence over it. Having the right justifications and facts to back yourself up would obviously be of great help.

      • Seems more likely to me that he is looking for a case with which to justify selling a non-grid solution to his boss
        Exactly. But that's only interesting to him. What's interesting to us is in which cases a grid vs. non-grid solution would be a good choice. However, the poster has such an obvious slant against a grid solution that the discussion doesn't get a fresh start.
  • Well compare costs (Score:3, Insightful)

    by jellomizer ( 103300 ) * on Wednesday October 04, 2006 @07:37AM (#16303067)
    I would say give them the full price to get it done right and let management decide. Be sure to give them the full quote, including the cost of building a 10 Gb/s or faster network for these systems. Then you need to work out how much of the code can be parallelized, estimate the time it will take you to make the changes, and add in proper debugging time. Next, find alternative solutions that will increase performance. For example, besides grids there is clustering, which works better for some applications that have fast calculations but a lot of users making them slow. Grid computing works well when you have large segments of code with minimal communication between systems. If you need a lot of CPU-to-CPU communication then you will need a real supercomputer where the processors communicate across the bus.

    Or You just need to index your tables.

    The rule of thumb is: go the safe route unless you are told by higher-ups to do otherwise, have the higher-ups sign off on the riskier method (to save your butt), then get the method working, focus on getting the job done right, and stop complaining about how bad a decision it was.
    • Re: (Score:3, Insightful)

      by DaveV1.0 ( 203135 )
      Well, that is half the work. For this instance, he should do a cost benefit analysis.

      Just providing cost comparisons boils down to "Your way costs X, my way costs Y." But that may not matter to someone who wants to be buzzword-compliant. When an executive gets it in his head that "this" is better than "that", the best way to handle it is to show that "this" will give a crappy ROI while "that" will give a great ROI.

      Unfortunately, sometimes even that does not work and you end up doing it the boss' wa
      • Well, of course in the end you will need to do what the manager says. But giving them the facts first will guide them toward the correct decision. If they choose not to follow them and problems occur or things get too expensive, you have a printed and dated "I Told You So" on your desk or in your email logs. If you feel your boss is making a mistake, it is important to do things to keep the backlash from landing on you. If you were wrong and the boss was right and you did his work well, then you look go
  • Maybe it's just me, but you sound like you're just resisting going to a different technology where you have to learn a new set of skills. Grid/multi-processor computing is definitely not simple, but depending on how many spare CPU cycles you have, you'll get a much faster and larger speedup in your runs than if you tinker with the code to make it run faster single-threaded. Also, won't you need to do some multi-threaded on a multi-core machine as well? (I'm not particularly familiar with multi-cores, so
    • I took a parallel processing course in university, and can very much attest that it's a lot more difficult than writing single-threaded programs. Because of that, it's also more interesting, and therefore a lot more fun. I mean, you could spend your time doing the same tired old thing, but isn't it every geek's dream to experiment with new and different technologies?
    • You must be a student who hasn't been jaded yet. My guess is that this guy's boss's boss went to a conference and heard some consultant go on about how amazing grid technology is. Then he read Oracle magazine on the flight home and read about how Oracle uses the grid to save civilization as we know it.

      So the boss's boss told the boss "we're falling behind, use Grid technology"! And this poor bastard is stuck sticking square pegs in round holes.
  • by phoebe ( 196531 ) on Wednesday October 04, 2006 @07:47AM (#16303115)
    ... our software is actually more data than computation heavy ... I would far rather spend the time and money on multi-core machines and optimizing the software than on the latest fad technology. ...

    If the process is more data- than computation-intensive, then throwing more machines at the problem is the most cost-efficient way forward. You have already countered your own argument for multi-core machines. Especially in finance, it is highly unlikely that optimizing the software will produce anything remotely practical in a short time or at low cost. Software optimization can also introduce bugs and lock you into an implementation that cannot be easily updated.

    Take search engine technology as an example: Google has hundreds of thousands of machines running advanced software on non-ultra-optimized platforms, Java and Python. The alternative is a couple of hundred big-iron machines running hand-tweaked C / assembly. As a business you should be seeking to reduce the overhead of operations. By increasing the number of machines, lowering the cost of each machine, and reducing the time spent optimizing software (higher-level languages are easier to use and maintain), you can actually get better performance, reliability, and flexibility.

    • I agree with this personally ... but let's play devil's advocate.

      Dealing with large quantities of data has always been the sales pitch for mainframes. The question could therefore maybe be broadened to "can grids/clusters/multi-core/... really replace the mainframe?"

    • by jsailor ( 255868 )
      While I don't necessarily disagree with you, it should be pointed out that this mentality has a much larger operational impact: the cost of facilities. Power consumption from these massive deployments of PC servers is responsible for an explosion in operating expense budgets, especially for financial firms. Power (and cooling) requirements are exhausting existing capacity and forcing the construction of massively powered data centers at a cost of $300+ million each. Paying the power bill is the other prob
  • Cus then you get a load of iron on which to run ! We went up like 3000 places when we installed our cluster! It rocks!
  • ...of processing on the trades are you actually doing?
    30 minutes for 10,000 trades seems an awfully long time - I work in the same industry (specifically developing position management systems) and the only thing we do as a batch job is our daily rollover/mark-to-market, which finishes in less time than yours with a hell of a lot more trades than that.
  • A Cynics reply... (Score:5, Interesting)

    by Anonymous Coward on Wednesday October 04, 2006 @08:26AM (#16303411)
    The real reason folks like High Energy Physics experiments and university groups are using/developing GRID software is to get grant money.


    In fact, GRID software is constantly in flux, because there is no grant money to run a GRID, only to develop one, so they keep throwing stuff out and developing new parts -- to get grant money.

    And yes, I am posting this anonymously because I work for such a place, and mostly like my job.

  • Grid vs cluster (Score:5, Informative)

    by TeknoHog ( 164938 ) on Wednesday October 04, 2006 @08:31AM (#16303451) Homepage Journal

    Make sure you know the difference between grid technology [] and clustering. Basically, grid is much more complicated but more flexible; the name means you can connect to a grid to get computing power, just as you can connect to the power grid to get electricity. It looks like you're thinking of clustering instead, which is easier to deploy and in many ways closer to a multiprocessor machine.

  • Grid != Parallel (Score:3, Informative)

    by prefect42 ( 141309 ) on Wednesday October 04, 2006 @08:40AM (#16303527)
    I can't help but feel that people are missing the point of grid computing. Grid is not HPC. It's not supercomputers. You can build grids using HPC, but they don't have to go hand in hand. As such, all this talk about parallel whatnot is actually missing the point. I assume there exists code. I assume the code is serial, since most is. I also assume that there are many of these jobs, rather than one job that currently takes a day and a half. Typically there's no need to start getting exotic with MPI/OpenMP or whatever. Simply submit more serial jobs to do what needs to be done. Look at it from a batch-scheduling point of view and see what can be done. If you want to parallelise it as well, feel free.

    Grid within a company typically just means decent remote access to a shared cluster. A web service that submits jobs to Sun Grid Engine (which has nothing to do with 'grid', by the way) would probably tick all the buzzword-bingo requirements of a grid project without being anything of the sort. For sadists, look into OMII and GT4, but don't feel compelled...
    • Er, you seem to be saying that if I run a large number of independent ("serial") tasks at the same time that this is not parallelism. I'm trying to think of what parallelism would be, if it is not this. :-)
      • ;) What I mean is: take advantage of any parallelism possible due to the nature of the task, without recoding the application. An individual job won't be finished any quicker that way, but all the jobs will be finished sooner. Parallel programming is different, since it involves multiple threads or processes communicating either through shared memory or with some form of message passing.
      • by tolldog ( 1571 )
        I call those tasks "embarrassingly parallel".
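For the "many independent serial jobs" case these comments describe, an embarrassingly parallel run can be sketched with the Python standard library alone. The `value_trade` function here is a stand-in for whatever each job actually does, not anyone's real pricing code:

```python
from multiprocessing import Pool

def value_trade(trade_id: int) -> float:
    """Stand-in for a real per-trade valuation; each call is independent."""
    return trade_id * 1.5  # placeholder computation

if __name__ == "__main__":
    trade_ids = range(10_000)
    # One worker process per CPU core by default; because the jobs share
    # nothing, no MPI/OpenMP-style communication is needed.
    with Pool() as pool:
        results = pool.map(value_trade, trade_ids)
    print(len(results))
```

Because each task is independent, the same shape works whether the workers are local cores or nodes handed out by a batch queueing system.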
  • by lesinator ( 459276 ) on Wednesday October 04, 2006 @09:07AM (#16303875)
    I work for a large bank, doing very much what you describe.

    Our processes tend to be more computation (than data) heavy compared to what you describe, but we are using lots of clustered computers. Take your 10,000 trades and split them into chunks of 100 trades and have separate machines value each chunk and reassemble the results. Depending on the nature of what your software does this may or may not make sense. If you can split your workload into small chunks that can be analyzed independently you can achieve much better throughput.

    The newer cluster/grid software can be really shiny, but you don't always need it. Plain old PVM can still work wonders. Also, a lot of the commercial cluster software out there isn't well suited to this kind of high performance computation clustering.
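The chunk-and-reassemble pattern this comment describes can be sketched as follows; the chunk size of 100 comes from the comment, while `price_chunk` and the doubling it does are purely illustrative stand-ins for real valuation code:

```python
from concurrent.futures import ProcessPoolExecutor

def price_chunk(chunk):
    """Stand-in valuation for one chunk of trades; replace with real pricing."""
    return [t * 2 for t in chunk]

def run_batch(trades, chunk_size=100):
    # Split the batch into independent chunks, farm them out to workers,
    # then reassemble the results; map() preserves the submission order.
    chunks = [trades[i:i + chunk_size] for i in range(0, len(trades), chunk_size)]
    with ProcessPoolExecutor() as executor:
        priced = executor.map(price_chunk, chunks)
    return [p for chunk in priced for p in chunk]
```

Whether this helps depends, as the comment says, on the chunks really being independent; any cross-trade netting or shared state breaks the easy split.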

  • I'm not a typical Grid user. I have tested and reviewed DataSynapse to use inside our organisation.

    We want to offload processing cycles from z/OS onto a cheaper platform. As we process highly secure data we do not want any of it to land on insecure Windows boxes, so our grid engines will typically be tightly controlled Solaris or Linux boxes.

    What I like best is the re-division of the application. The application submits a request for processing to a broker/manager. The broker/manager dispatches reque
  • by kpharmer ( 452893 ) on Wednesday October 04, 2006 @09:38AM (#16304443)
    My management is similarly obsessed with virtualization: they want to lower admin costs, lower lab costs, etc through this simple technology.

    So, rather than move everything over to lpars I took a simple step - purchased a large virtualization-oriented server highly touted as perfect for this, and moved over a single app, with the goal of putting two apps on this server. Along the way I learned:
        - io virtualization sucks for io-heavy applications
        - the tools to determine how much of the cpu your app is getting at a given moment stink
        - memory virtualization in which you resize application memory is primitive and almost useless
        - there were no guidelines for optimization of the server - just recommendations to try it
            hundreds of different ways and leave it on the best settings
        - basic setup of the machine required wading through tons of jargon that even the os engineers didn't seem to know well
        - out of the box - a single app on the new virtualization server performed more slowly than it did on a free seven year-old server
        - some of the most heavily-advertised virtualization features of the product just don't work
        - virtualization of multiple busy apps onto the same server is mostly a waste of money
        - virtualization of multiple mostly idle apps (failover servers, test servers, demo servers, etc.) should work very well
        - we spent at least $25k on labor just to create something that was supposed to be a slam dunk
        - I'm glad that we started with a small prototype - and didn't waste a ton of cash moving everything over immediately the way some management hoped
        - I think in the end we'll get multiple apps working on this box just fine. BUT - we will have spent more money on this scenario than by simply purchasing separate systems. We may recoup a savings if we move enough idle systems onto virtual boxes.

    As a result of this experience my team now knows more about virtualization than any other people in the division, we now have a production server supporting it, my management is now cool on this technology, and there is no risk of being forced to migrate critical servers over quickly to the virtual world. I'd call that a success.

    I think that you're right - that grid is in a hype cycle right now. So - there are quite a few disappointments to be had along the way to its implementation. For example - if your workload is heavily transactional - you're really not going to get much benefit. In this example oracle supports grids - but it is really more about failover than performance. If you roll your own or use a more sophisticated product you can be safe in assuming that you'll hit unexpected issues, a gap between vendor marketecture & what you really need, and possibly the pain of having a vendor talking directly to your management.

    You might want to consider having management fund a small prototype to prove out the benefits. Then let them see that through this approach they can achieve perhaps better availability, but worse performance, at a very high cost.

    good luck
  • It's a trade-off (Score:4, Informative)

    by davecb ( 6526 ) * <> on Wednesday October 04, 2006 @09:44AM (#16304527) Homepage Journal

    Sounds like a trade-auditing project I was once on.

    If the 10,000 trades are easily broken into small groups, such as by the initial letter of the ticker symbol, and if all the data for the analysis is fetched in the first step, you can in fact spread the processing over 26-odd machines, cutting the runtime to roughly (fixed part + (per-ticker-symbol part / 26)).

    I have an article on doing the load-balancing part of this kind of processing, albeit on a large multiprocessor, at [][In PDF]. As you've already guessed, sometimes the problem doesn't decompose nicely into parts that can be distributed to machines far from the database.

    The rule of thumb is that grid does distributed computation, where you ship small amounts of data to many CPUs. If you have large amounts of data, you need previously distributed data stores, and then you ship the processing to reside with the data, instead of the other way around. Alas, some folks call the latter grid, when it should be called something like "data grid" (;-))
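The split-by-ticker-letter idea above can be sketched as a simple partition step; the trade records and the 26-way split are illustrative, and the runtime model from the comment appears in the docstring:

```python
from collections import defaultdict

def partition_by_ticker(trades):
    """Group trades by the first letter of their ticker symbol, so each
    group can be shipped to a different machine. With all data fetched
    up front, total runtime then looks roughly like:
        fixed_part + per_ticker_part / number_of_groups
    """
    groups = defaultdict(list)
    for trade in trades:
        groups[trade["ticker"][0].upper()].append(trade)
    return dict(groups)
```

As the comment warns, this only works when the per-letter groups really can be processed far from the database; otherwise the data shipping dominates.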


  • Quantian (Score:4, Informative)

    by lesinator ( 459276 ) on Wednesday October 04, 2006 @09:53AM (#16304713)
    Something you might want to experiment with is Quantian []. It is a bootable Linux distro (a Knoppix descendant) with clustering (openMosix) and a huge variety of cluster-capable scientific and financial open source tools built in. It is a very quick and easy way to set up a cluster to experiment with, and to see how your application could be altered to work well in a massively parallel environment. I've never seen a quicker or easier way of building a cluster. With Quantian and a pile of networked PCs, you can literally have an openMosix cluster in minutes.
    • by M1FCJ ( 586251 )
      I used to use openMosix on ClusterKnoppix, but there hasn't been a new release since 2004, and openMosix still runs on the 2.4 kernel. It would be quite nice if openMosix managed to move to more recent kernels; it's still quite useful.
  • Our bank has experimented, and is running some production systems on grid infrastructure (Sun to be specific).

    Some lessons learned:
    1. Software licensing is your biggest enemy. Oracle in particular is evil in this regard, but every vendor fears grid computing since it doesn't conform to their pricing models and gives you more bang for the buck. Investigate the consequences of grid at the earliest opportunity.
    2. By linking numerous apps to a pool of servers, you've just complicated your software currency lifec
    • You stop thinking of computers as individual machines and start thinking of the network as the machine. This has implications for how you manage the systems. If you know what you're doing it actually simplifies almost everything, though this is really a function of the attitude and isn't specific to grids, any network would benefit from the same way of working. A grid fits right into an N tier architecture, it's just one of the layers.
  • hmm. (Score:3, Insightful)

    by pizza_milkshake ( 580452 ) on Wednesday October 04, 2006 @10:41AM (#16305471)
    I would far rather spend the time and money on multi-core machines and optimizing the software than on the latest fad technology.
    think about that for a second.
  • It seems to me that you do not know, or have not stated, why these transactions take twenty to thirty minutes. Identify the bottleneck (e.g. network, memory, CPU, I/O) and then you'll know how to improve the performance.
  • I've been developing grid-based applications for my company (we work in the financial industry) for several years. We chose a grid solution because the nature of our applications lent themselves well to parallel processing. We were able to reduce the processing time of our runs from hours (or even days) to seconds.

    Not every application is an ideal candidate for the grid: the problems that scale best are composed of many discrete, independent calculations (think Monte Carlo simulations). Strictly linear p
  • low hanging fruit (which, in your case, appears to be abundant) is what you really should be pursuing first. it seems like an upgrade of your networking equipment to gigE or 10gigE would be a good start, since regardless of what forward path you choose (clustering, grid based, etc.) you're going to have to make the investment in fat enough pipes to get the data to where it needs to be.

    the next thing to think about is how to educate the powers that be on their options in terms of parallel processing. this me
  • (not least that our data center is at best 100 Mb/s and our software is actually more data than computation heavy)

    First: I assume that you are talking about clusters, not grids (grid=>cluster as road=>car).

    Second: The computation nodes *do not* sit on your regular datacenter network. A computation node only ever talks to its master and its peers, so they sit on their own, dedicated, high-speed network (usually no less than 1 gbps).

    Third: Some tasks are better suited to SMP, others to clusters. Find out whic

  • 1. Ensure the database layer is a parallelizable RDBMS that has a concurrent access mechanism and is running on a multi-core/multi-CPU box.

    2. Grid/Parallelize the application layer -- i.e., ensure you can run parallel jobs with discrete data.

    3. If that doesn't help, then grid the database layer.

    If your application isn't built to scale today -- see the second point -- all the grid in the world won't help you.

    I agree with you that it sounds like the code needs some optimization -- 10-30 minutes to process i
    1. You can have a grid system in place as well as multi core machines.
    2. You can implement a grid in a couple of days worth of sysadmin time. A few wrapper scripts should be able to simplify and manage job submission.
    3. A grid is probably not what you think it is. Current grid technologies are essentially updated network batch queueing systems. You kick a job off, the grid determines the least loaded and fastest machine to run it on. Distributing parts of jobs over the grid essentially requires rewriting the softwar
    • I forgot (Score:3, Insightful)

      by Colin Smith ( 2679 )
      7. It's not a fad. The technology has been around since the 80s, IIRC, possibly earlier. The word "grid" is a fad, but the technology is not. These systems started as network batch queueing systems. The word "grid" is like the word "middleware": it isn't well defined and means different things to different people.
      8. Off the top of my head, freebies include Torque, GridEngine, Condor.
      9. Yes it would be a Beowulf of those. Mwhahaha!

    • Modern grids do a lot more than just kick off batch processes--they can be integrated into the programming model, and distribute objects (think .NET or Java) and interact with those objects. The idea is the same, but the interfaces are a lot more convenient than the old command-line tools.
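The "grid determines the least loaded machine" behaviour described in this thread can be sketched as a toy scheduler. The host names and load numbers are invented; real systems such as Torque, GridEngine, or Condor do this server-side with far richer policies:

```python
def least_loaded(hosts):
    """Pick the host with the lowest reported load, as a network batch
    queueing system would. `hosts` maps host name -> current load."""
    return min(hosts, key=hosts.get)

def submit(job, hosts, queue):
    # Record which host each job would land on, then bump that host's
    # load; we crudely assume each job adds about one unit of load.
    host = least_loaded(hosts)
    queue.append((job, host))
    hosts[host] += 1.0
    return host
```

Even this toy version shows why recoding the application isn't required: the scheduler only needs whole serial jobs to hand out, not a parallelized program.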
  • in particular, why or why not they chose grid software over improving the existing code to leverage better processor technology

    To a point, I have to wonder this as well. I'm really annoyed at not having the ability to use 32-bit drivers in a 64-bit OS when it's still going to end up addressing the same fucking registers. (I'm talking about a webcam, here, BTW) Considering that current 64-bit processors are based off of and share the same registers/opcodes (in the AMD/Intel market, that is, I can't speak
  • You shouldn't confuse GRID with HPC, supercomputers, or clustering.
    GRID is complex; its main advantage is the way it can handle user, data, and computational resource administration in a very heterogeneous environment. GRID is all about administering groups of users all across the globe, using all kinds of different hardware, to process data that's too big to be stored in one location and is therefore also very distributed. It has all kinds of tools to distribute the management of access to resources, users, etc.
    • Most grids in place today are not linking thousands of machines across dozens of sites--most are deployed within enterprises. Some are into the thousands of nodes, but far more are in the hundreds or scores of nodes.
  • I have been using Ice [] for about a year now and can strongly recommend it as a middleware framework. They now support IceGrid which I have not tried but it appears much more elegant than Globus.
  • A large investment bank running a datacenter on "at best 100 megabit" ??? For data-intensive workloads? I don't know if you have a SAN behind that or not, but... you don't need a grid, you need a gig switch and an architect internally who has a clue and can bring management around.

    I've been at two investment banks, one midsized and one gargantuan. Gargantuan one has a grid, along with piles of Linux servers, piles of Sun servers, large medium and small databases of both transactional and data warehousing
  • The key here is that the task you need to parallelize is DATA heavy, not computation heavy. In other words, what would be the benefit of multi-core processors? Practically none. It makes me curious why you haven't mentioned traditional multiprocessor machines. A NUMA multiprocessor machine would show a benefit, because of the separation of memory. Having separate caches will help also. Having a multi-core processor with a shared cache will just cause more cores to sit idle while the cache is thrashing. Havi
  • I am interested to hear from other people in a similar position and, in particular, why or why not they chose grid software over improving the existing code to leverage better processor technology,

    Not sure how "comparable" my situation is to yours (aerospace industry) but in a similar "many machines versus optimizing to exploit smaller, faster, better machines" situation we came down soundly on the side of the former. The reasoning went roughly like this:

    We know today's budget. We can use it to upgrade

  • ...than anything else, of a lot of things said previously, except the first line: Determine what you need, to get what you have to do done. A three-tiered design will likely give you the most flexibility to solve the problem in any number of ways, plus give you the ability to upgrade layers independently if bottlenecks are found. As someone mentioned, you obviously need a big infrastructure update. At the very least, get into gigabit ethernet. Put your database on, at least, a MP machine, such as a Core
  • How many of these 30 minute jobs do you have? Do you have 10 or do you have 10,000.
    How long are people willing to wait on the 30 minute jobs?

    A lot of tasks like this tend to batch easy, and if you can batch it, then you can throw it on a batch queueing system (like LSF, the one I have my experience with).

    At the end of the day, its a lot easier to run multiple jobs on multiple machines than it is to optimize a single job. It all depends on where you want to spend your time and what return you want and expe
  • Cluster and grid aren't mutually exclusive either... personally, I tend to think of grid computing as "this generation's MPAR" ;)
