Folding@Home Releases GPU Client

SB_SamuraiSam writes, "Today the Folding@Home Group at Stanford University released a client (download here) that allows participants to fold on their ATI 19xx series R580-core graphics cards. AnandTech reports, 'With help from ATI, the Folding@Home team has created a version of their client that can utilize ATI's X19xx GPUs with very impressive results. While we do not have the client in our hands quite yet, as it will not be released until Monday, the Folding@Home team is saying that the GPU-accelerated client is 20 to 40 times faster than their clients just using the CPU.'"

  • Power usage? (Score:4, Interesting)

    by Anonymous Coward on Monday October 02, 2006 @06:04PM (#16284611)
    Anybody got an idea of how much power constant full-speed GPU calculations are likely to burn?
    • Re:Power usage? (Score:5, Informative)

      by NerveGas ( 168686 ) on Monday October 02, 2006 @06:13PM (#16284757)
      I don't have specifics for that chip, but I would guess 100-150 watts. In both performance-per-cycle and performance-per-watt, it far outstrips using a general-purpose CPU.

      20x-40x the performance at 1x-3x the power usage is pretty good.

      steve
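
      A quick back-of-the-envelope in Python, taking the 20x-40x speedup and the 1x-3x power ratio above at face value (both are rough estimates, not measured figures):

      speedup_low, speedup_high = 20, 40   # claimed GPU speedup over a CPU-only client
      power_low, power_high = 1.0, 3.0     # assumed GPU-to-CPU power-draw ratio
      worst = speedup_low / power_high     # least speedup at the highest relative power
      best = speedup_high / power_low      # most speedup at the same power draw
      print(f"performance per watt improves roughly {worst:.1f}x to {best:.1f}x")
      # prints roughly 6.7x to 40.0x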
    • Re: (Score:2, Insightful)

      by jeffs72 ( 711141 )
      Or heat, for that matter. My GeForce 7900 raises my box temp by 4 degrees C just doing 2D Windows XP desktop work. I can't imagine running a GPU at 100% and a CPU at 100% for hours on end. Better have good cooling. (Granted, mine does suck, but still.)
      • by merreborn ( 853723 ) on Monday October 02, 2006 @06:37PM (#16285049) Journal
        I can't imagine running a gpu at 100% and cpu at 100% for hours on end.

        Clearly, you're not one of the millions with an active WoW subscription.
        • ...WoW using 100% gpu? ROTFLOL!!1!11!one!!eleventy!1 (Completely off-topic, but...) Seriously, WoW is not very intense on the graphics card.
          • Re: (Score:3, Informative)

            by packeteer ( 566398 )
            It still uses 100%. The more GPU you have, the more FPS you get.
    • From Vijay Pande: Keep in mind too that, at least right now, FAH draws only 80W from each GPU, so it's a surprisingly energy efficient folding farm too.
    • Re:Power usage? (Score:5, Informative)

      by piquadratCH ( 749309 ) on Monday October 02, 2006 @06:49PM (#16285205)
      The German news ticker heise.de [heise.de] cites 80 watts for an X1900 card while folding.
    • The power drain of Folding@home on a GPU is not worth worrying about compared to that of the millions of people who play high-end graphics games on their GPUs. Even if it is 80 watts for running Folding@home, at the very least it is doing some good, which is more than can be said for those high-end graphics games. (ducks)
  • drawback (Score:3, Funny)

    by User 956 ( 568564 ) on Monday October 02, 2006 @06:05PM (#16284635) Homepage
    the Folding@Home team is saying that the GPU-accelerated client is 20 to 40 times faster than their clients just using the CPU.

    Yeah, but what kind of results do you get if you combine the GPU-accelerated client with a KillerNIC network card? It must at least triple the speed. At least.
  • good, I think... (Score:3, Insightful)

    by joe 155 ( 937621 ) on Monday October 02, 2006 @06:06PM (#16284653) Journal
    I like the idea of F@H, but I do worry about 1) opening up my computer to security risks, and 2) damaging my computer because the processor (or now the GPU) is getting hammered by always being in use.

    Are either of my worries valid? Can it damage it (or speed up its death), and what's the probability of a security threat?
    • Re: (Score:2, Interesting)

      by rrhal ( 88665 )
      The capacitors in the power section of your motherboard have a finite life. If you are handy with a soldering iron, you can replace these in an afternoon for about $15. I wonder how well the new (to motherboards, at least) solid-core capacitors will do.
    • Usually, if your GPU runs too hot, your machine will just bluescreen, or reboot, or something along those lines.
    • Re:good, I think... (Score:5, Informative)

      by ThePeices ( 635180 ) on Monday October 02, 2006 @06:21PM (#16284869)
      You won't damage your card. The GPU's cooling system is rated to keep the GPU within its thermal design spec at full load; how long you run it doesn't matter as long as there is adequate ventilation. That applies to gaming too, so it's not a problem. As for speeding up its death, your card will become obsolete by the time that happens.
      • ``your card will become obsolete by the time that happens.''

        I don't like that kind of reasoning. If my computer is good enough today, it should be good enough 10 years from now. About the only thing I am willing to concede is that computers aren't always "good enough", but I do think they are now.
        • Re: (Score:3, Insightful)

          by Aladrin ( 926209 )
          Yes, it'll be 'good enough' 10 years from now, as long as you don't plan to do any more then than you do today. Don't buy any more hardware or software and hope to hell you have no problems.

          Face it, computers are one of the fastest-changing technologies. Intel plans to have some ridiculous hardware in only 5 years. 80-core CPUs? Crazy. If you think your current dual dual-core setup (I'm assuming you have the best PC possible to back up that 10 yr statement) will be able to handle what an 80-core doesn'
          • by Alioth ( 221270 )
            Drifting off topic a bit, but I know a lot of people who don't do more on their computers than they did a few years ago. Many of them are running quite happily on pre-year 2000 machines of around 500MHz, with just some extra memory. Running a web browser and an email client and occasionally a word processor just doesn't require masses of computing horsepower. And that is basically what most people do on a desktop PC.

            I used to change my computer about every 18 months. My current one, though, will be 4 years
        • Re: (Score:2, Insightful)

          If my computer is good enough today, it should be good enough 10 years from now.

          I hope I just missed your <sarcasm> ... </sarcasm> tags.

          Ever hear of Moore's Law?

          wikipedia: Moore's Law [wikipedia.org]

          Transistor density has been doubling every 24 months (I recall it being quoted as 18 months, but we would be arguing semantics) for as long as I can remember. In 10 years, that's 2**5, or 32 times denser than it is right now. And you think the computer you have now will run anything remotely close to what

          • Next let's quote Bill Gates, "640K should be enough for anyone."

            You aren't quoting Bill Gates. You are quoting an urban legend.

            Frankly, while I think expecting a computer to have a ten-year useful life is a big stretch, I don't think it is unreasonable to expect a computer to have at least five years of useful life. My dad's computer is a cast-off dual Xeon 500MHz, made in 1998. Granted, it has a 10k RPM SCSI drive (which it was designed for) and 1GB of RAM in a dual-channel setup; the system was spec'
          • Hey look... it's Mr. Obvious Man!!!!

            Seriously, have you seen the dust and grime a computer accumulates after 10 years??? I'd want to replace the thing just because it's really ugly and disgusting by that point.
        • by Sark666 ( 756464 )
          And will things like Xgl hasten the demise of cards even before they might otherwise be deemed 'obsolete'?
        • You make a very good point.

          A computer that does some task today, should -- assuming it wasn't designed to be flawed or have a fixed life expectancy from the very beginning -- still be capable of doing that task in ten years. And for the most part I think this is true; it will.

          Most computers that are 10 years old still run fine today (ones that were well-made in the first place); the problem is more one of finding a purpose for them, and then finding software to run on them, than of getting them to start. Actually, I would wager that lots of computers that are 20+ years old would still run fine today, depending on how they've been stored and taken care of in the interim.

          The problem isn't that machines really "wear out" all that quickly; with some exceptions few do. It's more the relentless drive of increasing expectations that puts working equipment in the landfill. At least for home users; commercial users have their support contracts to worry about, so it's slightly more complicated.

          Case in point: I have an Apple IIc in my closet right now, which I know for a fact works fine. I could take it out tomorrow, set it on my desk, put in Apple Write, fire it up and start typing away. Somewhere around I even have a dot-matrix serial printer that I could use to output from it. Everything that Apple advertised that computer as capable of doing, it is just as capable of doing today as it was twenty-one years ago. So why am I not using it? Why am I sitting here with a computer that's only four years old, when I have a perfectly functional computer from 1984 in my closet? It's not because I like spending money. It's because I want to do things that I can't do on an old computer. There are a lot of things that I consider necessities, or at least things that are nice enough to have that I'm willing to pay for them, that weren't possible or even considered more than a few years ago.

          If you honestly think that what you can do with a computer today is all you're ever going to want to do -- that you won't see some neat feature on your friend's box in 2014 and decide that you need to have it -- then you're absolutely correct; the computer you have now is the last one you ought to ever have to buy. Realistically though, most people aren't like this; they know that the computer they have today isn't going to be something they're going to want in five or ten years, and they're not willing to pay for a machine that's built to last longer than that.

          The things that people use home computers for have changed, and will continue to change, and the tasks that people want to use their computers for will drive the upgrade cycle far faster than the breakdown rate of the components does.
        • by Eivind ( 15695 )
          Sure it can be. In 10 years your current computer, assuming it ain't broken, will work just like it does today.

          Which is to say it'll be completely useless next to a current model. In fact, it's quite likely that just the extra power budget for keeping your current machine running will outstrip the cost of changing to a more modern machine with more power.

          If your current computer uses 300W, then that is roughly 2600 kWh for a year (assuming it runs 24/7, which most Folding@home machines do).

          In 10 years its a given that y
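
          As a rough check of that figure in Python (the electricity price is an assumed placeholder; real rates vary a lot by region):

          watts = 300
          hours_per_year = 24 * 365                      # 8760 hours, running 24/7
          kwh_per_year = watts * hours_per_year / 1000   # about 2628 kWh
          price_per_kwh = 0.10                           # assumed $/kWh, illustrative only
          print(f"{kwh_per_year:.0f} kWh/year, roughly ${kwh_per_year * price_per_kwh:.0f}/year")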

          • ``Sure it can be. In 10 years your current computer, assuming it ain't broken, will work just like it does today.
            Which is to say it'll be completely useless next to a current model.''

            I guess it all depends on your expectations. I had a Sun Ultra 5 (from 1998), until I sold it a few months ago. If it hadn't had problems with the video card, I might still have used it for another two years, at least. It did everything I needed. Now I'm running a VIA EPIA at 533 MHz, which has a better video card and m
      • I ran SETI@Home (before switching to Folding@Home) continuously on my laptop, which was mostly a desktop replacement in college, and has been a picture frame for the past 4 years. It's a Pentium 200 MHz MMX, and is slow as mud, but it still runs just fine. It has years of processor time running at 100% capacity, and it still hasn't died. It's a Gateway 2000, pre-name change. But as long as it continues to chug along, I'm not throwing it out. But non-stop processing for years on end doesn't seem to have bo
      • True, but if he built his own machine and has not ventilated his case properly for running the GPU and CPU 24/7, then he will probably overheat, with a slight chance of damaging his system. I imagine most laptops would do this also.
    • by billstewart ( 78916 ) on Monday October 02, 2006 @07:48PM (#16285869) Journal
      Folding@Home and similar projects aren't a security risk, as long as they're from trustworthy sources. They're certainly far safer than the closed-source game software that was the reason you bought a high-end 3D-accelerated video card in the first place. I'd prefer to see projects like that be open source (at least in the sense of "you can read the source and do anything you want with it", as opposed to the stricter "accepts changes back from the community" part of the model).


      Most of the distributed-computation projects have a very simple communication model - use HTTP to download a chunk of numbers that need crunching, crunch on them for a long time, and use HTTP (PUT or equivalent) to upload the results for that chunk, etc. Works fine through a corporate firewall, and the only significant tracking it's doing is to keep track of the chunks you've worked on for speed/reliability predictions and for the social-network team karma that helps attract participants.
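
      A minimal sketch of that fetch/crunch/upload loop in Python; the server URL, endpoint names, and the trivial "crunch" step are hypothetical stand-ins, not the actual Folding@Home protocol:

      import urllib.request

      SERVER = "http://work-server.example.org"   # hypothetical work server

      def fetch_work_unit() -> bytes:
          # Download a chunk of numbers that needs crunching.
          with urllib.request.urlopen(f"{SERVER}/get_work") as resp:
              return resp.read()

      def crunch(unit: bytes) -> bytes:
          # Stand-in for the long-running number crunching.
          return unit[::-1]

      def upload_result(result: bytes) -> None:
          # Upload the finished chunk with an HTTP PUT (or equivalent).
          req = urllib.request.Request(f"{SERVER}/put_result", data=result, method="PUT")
          urllib.request.urlopen(req)

      for _ in range(3):                           # a real client loops indefinitely
          upload_result(crunch(fetch_work_unit()))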


      Online games normally have a much more complex communications model - you've got real-time issues, they often want their own holes punched in firewalls, there's user-to-user communication, some of which may involve arbitrary file transfer, and many of the games are effectively a peer-to-peer application server as opposed to the simple client-server model that distributed-computation runs. Fortunately, gamers would never use third-party add-on software to hack their game performance, or share audited-for-malware-safety programs with their buddies, or "share" malware with their rivals, or run DOS or DDOS attacks against other gamers that pissed them off for some reason.....


      As far as the effects of running a CPU or GPU at high utilization go, most big problems will show up as temperature, though there may be some subtle effects like RAM-hogging number-crunchers causing your system to page out to disk more often. Not usually a big worry if you're running a temperature monitor to make sure your machine doesn't overheat. Laptop batteries are an entirely separate problem - you really really don't want to be running this sort of application on a laptop on battery power. I used to run the Great Internet Mersenne Prime Search when I was commuting by train, and not only did it suck down battery, the extra discharge/recharge cycles really beat up a couple of rounds of NiMH battery packs. Oh - you're also contributing to Global Warming and to the Heat Death of the Universe. But finding cures for major diseases is certainly a reasonable tradeoff, and we'll do that faster if you're using your GPU as opposed to 10 people using general-purpose CPUs.

  • by CaptCanuk ( 245649 ) on Monday October 02, 2006 @06:07PM (#16284669) Journal
    Looks like a good use of my ATI card when I'm not gaming or Google Earthing under Linux. Sweeeet!

  • I doubt the GPU can do IEEE double precision floating point.

    Is 32-bit precision enough for a scientific application
    like protein folding?

    Is the entire algorithm of folding a big approximation anyway?
    • I highly doubt that they use floating point operations, but I could be wrong. Floating point numbers are inherently inaccurate. If I were the FAH team, I would probably be using fixed point, as it's fairly precise.

      I might also think that GPUs can handle doubles as well as floats. But again, could be pure nonsense. I am not familiar enough with the low level operations of a video card.
      • by qbwiz ( 87077 ) *
        That might be a bit challenging, considering that I don't think that GPUs work very well with fixed-point (or any non-floating point) operations.
      • Re: (Score:3, Informative)

        by jfengel ( 409917 )
        Since we're dealing with measurements (or at least simulated measurements) of the real world, the numbers are always going to be inaccurate. Even in fixed point, errors accumulate. They just accumulate in different ways.

        One problem with floating point is that it risks being unrepeatable. If you don't carefully define the terms of rounding, you'll have two different machines arrive at different results on the same calculation. But as long as you pick a standard (e.g. IEEE 754), your results are repeatable. N
      • Re: (Score:1, Insightful)

        by Anonymous Coward
        I would probably be using fixed point, as it's fairly precise.

        "Fairly"?
        • Numerical Analysis is a somewhat complex art, and many people aren't good at it.
        • Floating point numbers are usually much more accurate than fixed-point, depending on the problem. They're certainly much less work to use - if you're dealing with fixed-point calculations where different numbers have different precisions, then you've got to convert them all by hand, and preventing round-off accumulation when doing fixed-point conversion requires significant care.
        • On the other hand, sometimes floating-point isn
    • I think newer cards with HDR and stuff like that can handle a bit more than 32-bit floats.
    • You really don't need that many significant digits for most problems. With floating point numbers, 0.00000000005 (about the width of a hydrogen atom in meters) can be expressed as a float or a double, just like 0.5 can. Also consider that all widths and distances are approximate, since the particles are constantly moving in unpredictable ways. Using 64-bit precision would be as ridiculous as saying that the moon is 14295433070.866 inches from Earth.
      • by Nixusg ( 1008583 )
        How about using metric, you insensitive clod! No wonder the Mars lander crashed ;)
      • That's not quite right. Yes, one double-precision float can measure a hydrogen atom's width in meters. But funny things start happening when you start subtracting and dividing limited-precision numbers. You can get numerical instability and errors increasing exponentially. Most any numerical algorithm that uses floating-point has to be written very carefully to avoid these sorts of problems. Check out a textbook on numerical methods.
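        A small Python demonstration of that cancellation effect, emulating 32-bit floats by round-tripping through struct (the specific numbers are just illustrative):

        import struct

        def f32(x: float) -> float:
            # Round a 64-bit Python float to the nearest 32-bit float and back.
            return struct.unpack('f', struct.pack('f', x))[0]

        a, b = 1.0002345, 1.0002323
        diff_single = f32(f32(a) - f32(b))   # what 32-bit math would give
        diff_double = a - b                  # ordinary 64-bit math

        print(diff_single)   # about 2.2e-06, but only a digit or two is trustworthy
        print(diff_double)   # about 2.2e-06, with many more correct digits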
  • Utilizing the power of modern GPUs is certainly impressive; however, there is a serious limitation at this time.

    While an SSE vector unit can process 8 double-precision 64-/80-bit numbers at a time, a GPU can process vectors of hundreds of numbers, but is limited to single 32-bit precision.

    Most of the established CPU-demanding scientific applications will need double precision. Only a few problems are well suited to lower precision.

    • by baadger ( 764884 )
      As an engineering student being forced, as part of my degree, to do a boatload of math, I would hazard a guess that the crazy, fucked-up world of mathematics has a way to carry out double-precision FP ops by transforming the problem into a vector of hundreds of numbers.
    • Re: (Score:3, Informative)

      by dsouth ( 241949 )
      FYI --
      1. SSE vectors are 128 bits -- that's two doubles, not eight. [There may be 8 SSE registers, but that doesn't mean you can do 8 simultaneous SSE operations.]
      2. It's possible to extend precision using single-single "native pair" arithmetic. There's a paper by Dietz et al on GPGPU.org that discusses this.

      This doesn't make GPUs capable of double-precision arithmetic, and doesn't mean they will replace CPUs. But it can be used to expand the number of algorithms where the vast "arithmetic density advantag
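
      A sketch of the single-single ("native pair") idea from point 2, again emulating 32-bit floats by round-tripping through struct; this is the classic TwoSum trick (per Knuth/Dekker), not code from the cited paper:

      import struct

      def f32(x: float) -> float:
          # Round to the nearest 32-bit float (emulating single precision).
          return struct.unpack('f', struct.pack('f', x))[0]

      def two_sum(a: float, b: float):
          # Returns (s, e) with s = the float32 sum and e = its rounding error,
          # so that a + b == s + e exactly.
          s = f32(a + b)
          v = f32(s - a)
          e = f32(f32(a - f32(s - v)) + f32(b - v))
          return s, e

      a, b = f32(1.0), f32(1e-8)
      print(f32(a + b))       # plain float32 add: the 1e-8 term is lost entirely
      print(two_sum(a, b))    # (1.0, ~1e-8): the lost part is kept in the low word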

  • by Anonymous Coward
    IMHO, the work of the Oxford University/grid.org cancer project is more important than understanding folding. It seems that Folding@home is not directly working on producing a cure; they are focusing on understanding "how" something happens.

    Check out http://www.chem.ox.ac.uk/curecancer.html [ox.ac.uk] and decide for yourself. Personally, I don't see direct value/benefit to the folding@home project. I understand that knowing about misfolding is important for certain diseases and maybe even cancers ..but I see the oxfo
  • by J.R. Random ( 801334 ) on Monday October 02, 2006 @06:24PM (#16284907)

    "With help from ATI, the Folding@Home team has created a version of their client that can utilize ATI's X19xx GPUs with very impressive results."

    And therein lies the rub. While GPUs are getting more and more like general-purpose vector floating point units, they remain closed architectures, unlike CPUs. Only those that can get help from ATI (or Nvidia) need apply to this game.

    • by flithm ( 756019 ) on Monday October 02, 2006 @06:48PM (#16285179) Homepage
      That's not necessarily true. It is a relatively new field of computer science, and thus there's not all that much info out there yet. But anyone who understands the basic concepts of general-purpose GPU programming can do it.

      What's most likely is that the guys at Stanford started pushing the hardware to the limit, and in ways the driver developers might not have anticipated. Probably what they ran up against was bugs in the driver, and the help came from ATI in terms of ways to work around the bugs. Evidence from Folding@Home's GPU FAQ backs this up:

      [You must use] Catalyst driver version 6.5 or version 6.10, but not any other versions: 6.6 and 6.7 will work, but at a major performance hit; 6.8 and 6.9 will not work at all.

      Your next question might be: if that's true, then why use ATI (who are known for poor driver quality)? It might simply be that that's the hardware they had to test with, so that's what they needed to use.

      At any rate, it's definitely possible to get started doing GPU programming without vendor support.

      There are even some APIs out there to help... The Brook C API (for doing multiprocessor programming) has a GPU version out called BrookGPU: http://graphics.stanford.edu/projects/brookgpu/index.html [stanford.edu]

      There's even a fairly large community of people using Nvidia's own Cg library for doing general purpose stuff.

      There's also GPUSort (source code available to look at), which is a high performance sorting example that uses the GPU to do the sorting, and it trounces the fastest CPUs: http://gamma.cs.unc.edu/GPUSORT/results.html [unc.edu]

      And last but not least there's the GPGPU site, which is a great resource for all sorts of general-purpose computing on GPUs: http://www.gpgpu.org/ [gpgpu.org]
      • by Anonymous Coward
        The point is that you can get documentation to program CPUs at a really low level (instruction sets, register maps, glitches and workarounds, etc.) without much fuss and do whatever you want; just visit the Intel, AMD, etc. sites and get the PDFs. With GPUs, on the other hand, you only matter if you are big; otherwise the vendors keep the information secret and you go with the provided code.
    • by RonnyJ ( 651856 )
      If GPU-assisted code ever gets turned into a 'selling point' for graphics cards, you can be sure it'll be opened up more.
    • I'm not a gamer - I'm somewhat happily running my home PC on the built-in motherboard graphics, and if I upgrade, it'll be to get more pixels on a newer display, not for acceleration. New graphics cards come out fairly often, either high-end cards to grab the gamers or low-end cards to grab the cheapskates, and Tom's Hardware always talks about how the new high-end card is really really cool and the new mid-range card does what last year's card did for a much lower price.

      So how common are the ATI x1900

    • by pyat ( 303115 )
      Not necessarily. You really don't have to have the lowest level understanding/knowledge of the GPU to do interesting work.

      I saw a nice presentation at the IABEM conference in Graz this Summer from a researcher writing BEM-based Laplace-equation solvers on GPU units.

      GPGPU for BEM -- By T. TAKAHASHI
      http://www.igte.tugraz.at/guest/iabem2006/printpdf.php?paperid=17697&type=fullpaper&preview=1 [tugraz.at]

      There are some links, and even some code in that paper (PDF)

      Essentially all he had to do was map his mathematic
  • People running SETI@home have asked and asked about versions with various processor optimizations, or versions that use GPUs, which are very much suited to lots of parallel operations. The SETI@home team's answer is that they won't release versions that use specific optimizations for specific hardware because they're worried about the integrity of the results: they want people to be running as nearly the same client as possible. Given that it's very easy to double-check a given piece of data if there's any
    • by SETIGuy ( 33768 )
      The SETI@home team answer is that they won't release versions that use specific optimizations for specific hardware because they're worried about the integrity of the results

      Time to tune into the new century. SETI@home has been available under the GPL for several years now. Nothing prevents you from modifying it and using the modified version.

      I keep asking for people to send me processor specific optimizations and so far only a Mac/PPC version has shown up. I'm ending up writing the SSE version mysel

  • Damn, and I've got a 9600XT just sitting on a shelf.
  • Sadly, Mac support is still lacking. I've got a Mac Pro with x1900xt, and I'd be happy to donate, but it runs in OS X 99% of the time, so I have to run it emulated, and I can't do the graphics card thing. Any idea when a Universal version (and/or a GPU version) for Mac will be out?
  • Anyone know where I can find good starting places for GPU coding? Our Vectorspace engine would really benefit from that kind of power... I'd love to learn more about it.
  • ...but apparently finishing the friggin OSX/Intel port they've been working on since January isn't.

    It's ok, I didn't want to help cure cancer anyway.
    • The problem isn't the Pande group or their effort to get an OSX/Intel client, it's the compilers that are required that are not yet available, or may not become available. It's not as if they can throw it into XCode and just make a Universal Binary. -Sam
    • They've been working on the GPU port since June 05 or earlier. Wait your turn.
    • Obviously scientists don't even trust Macs :)
  • OK, please, someone tell ATI to _write drivers_ for their hardware before starting to write innovative software on it. I have a 2-year-old ATI card that does not get 3D support with the fglrx drivers. The funniest thing is, I get 3D with the *open source* "ati" driver!

    go figure...

    • by tcc3 ( 958644 )
      That would be the problem with ATI. Even when they bother to write drivers, they aren't very good. I've never had a beef with their hardware, but they could learn a lot about software support from Nvidia.
    • No. Someone tell ATI to stop releasing drivers and start releasing specs instead. I've got an X1600 sitting on a shelf because there was no way to get it to work properly with ATI's fscked-up driver. My solution to the problem was to spend my last money to upgrade to a GeForce 6200TC... because having the power of a GeForce 4 is better than having any graphics card that needs drivers from ATI, at least if you want any kind of usable hardware acceleration outside of Windows. The ATI cards with community-writ
  • It's a win/win situation. Folding@Home crunches more data and the electric company makes more money.
  • Has anyone harnessed these folding algorithms for de/compression? Because 20-40x more power that can be stuffed into several PCI slots for cheap parallel de/compression would be worth waiting through all these exotic @Home projects to get better Net streaming.
  • by naoursla ( 99850 ) on Monday October 02, 2006 @09:50PM (#16286981) Homepage Journal
    Is there any way I can use this to make my next graphics card purchase tax deductible?
  • Does anyone know the answer to the following?
    • Is the ATI 19xx available in PCI, AGP or PCIe x1?
    • Is a PCI or AGP card an option, or does it need PCIe's throughput?
    • When the FAQ says not to run multiple video cards, is that a system limitation due to DirectX, or can you run an nVidia as your primary card and have an ATI 19xx series in, say, the second PCIe x16/8 slot?
    • (And for personal interest, is there a 19xx series card that's passively cooled?)

    Basically, can I pickup a cheap X1900 and whack it into my PC jus

  • Good for ATI (Score:4, Interesting)

    by mollog ( 841386 ) on Tuesday October 03, 2006 @02:12AM (#16288313)
    I predict that this new client that runs on ATI hardware will cause a spike in sales of their products. I, for one, will be trying to get this card for my computer so that I can improve the rate at which Folding@home runs on my system. And I'm certain that others have the same intention.

    If you think about that, it says something about us that I think is important; people want to help and they're willing to spend their money to be helpful.

    The concept of voluntary grid computing is a curious one. Why do people do this? Surely one more little CPU grinding away at a huge problem won't make a difference. Yet even though we all know this, we do it anyway. The result of this collective hopefulness and helpfulness is tangible. But what else is strange is that so little notice is given to grid computing. I don't recall hearing about it on CNN or any other news television program. SETI gets air time because it's so, well, 'out there', but the folding, AIDS, and cancer/find-a-drug stuff is operating in obscurity.

    BTW, kudos to Slashdot for helping get the word out. I first heard about grid computing here.

  • How reliable will the results from the GPU client be? I've got a video card that's on the verge of overheating, so it often exhibits stuff like a few flashing polygons when playing games. It doesn't crash, though.

    Will things like this affect the outcome of the calculations, and give bad results? While an overheated CPU usually crashes and burns long before it can submit bad data, I am worried that overheating GPUs might give bad data which aren't obviously bad.
  • Problem: Given a 1D chain of amino acids, predict the 3D structure it folds into.
    The solution involves resolving two major problems.
    1) For any two given folds, identify which one is the "real" one, or closer to the "real" one. This is done by assessing the quality of the fold using a scoring function, for example Gibbs energy (a more physical approach) or how well the fold resembles one of the naturally occurring folds (an empirical approach).
    2) One has to be able to quickly review myriads of different folds, preferably bypass
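
    A toy Python illustration of those two ingredients, a scoring function plus a search over candidate folds; real folding codes use physical energy models and far smarter sampling than this random search:

    import random

    def score(fold):
        # Hypothetical scoring function standing in for, e.g., a Gibbs energy
        # estimate: lower is "better".
        return sum((x - 0.5) ** 2 for x in fold)

    def random_fold(n_residues):
        # Stand-in for one candidate 3D conformation of an n-residue chain.
        return [random.random() for _ in range(n_residues)]

    best = min((random_fold(20) for _ in range(10000)), key=score)
    print("best score found:", round(score(best), 4))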
