Science

A Primer On DNA Computing And Software Breeding

There have been some interesting articles published lately in the realm of DNA computing and software-breeding models - kinda the land where 1s and 0s and Darwin meet. Ars Technica has a primer on DNA computing that hits the high points and is accessible to anyone who remembers high school bio. Feed Magazine has an article that examines breeding software and what that means.
  • Given a set of cities and routes/distances between them, is there a path that visits every city exactly once? If so (and there is more than one), which one is the shortest?

    It's called the Travelling Salesman Problem because back when salesmen did lots of travelling by car (may be done now too, I don't know), they wanted to minimize their time and their costs. Solving this problem does that.

    You can easily model the problem as a graph or network of nodes/vertices (cities) and edges (routes) with weights (distances).

    This problem is NP-complete, which means there is no known efficient algorithm to solve it. As soon as the number of vertices gets high, the problem gets really nasty to solve (exponential) and you have to start using heuristics, etc. to find a solution. The highest number of vertices in a solved problem of this nature is something around 13,000. It was done with massively parallel computation and heuristics using conventional computation techniques (i.e., not DNA).
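
    To make the blow-up concrete, here's a minimal brute-force sketch in Python (the 4-city distance matrix is invented for the example, not taken from the article): even after fixing city 0 as the start, there are (n-1)! orderings to try, which is exactly why brute force dies as n grows.

        from itertools import permutations

        def shortest_tour(dist):
            # dist[i][j] = distance from city i to city j
            n = len(dist)
            best_tour, best_len = None, float("inf")
            # Fix city 0 as the start so rotations of a tour aren't recounted.
            for rest in permutations(range(1, n)):
                tour = (0,) + rest
                length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
                if length < best_len:
                    best_tour, best_len = tour, length
            return best_tour, best_len

        # Toy 4-city example: 4 cities -> 3! = 6 tours; 20 cities -> ~1.2e17 tours.
        dist = [[0, 2, 9, 10],
                [1, 0, 6, 4],
                [15, 7, 0, 8],
                [6, 3, 12, 0]]
        print(shortest_tour(dist))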

    Woz
  • For you Netherlanders, or anyone else in the area:

    http://www.lcnc.nl/dna6/ [www.lcnc.nl]

    I went to DNA3 in Philly and it was very intense. Biochemists, Computationalists, and Computer Scientists.

  • There's been a huge recent upsurge in the number of "wonderful new computers" (just look at the groundbreaking stuff mentioned on slashdot... quantum computer, DNA computer, etc etc)

    But exactly how are we going to be able to use this technology in the future? This is nothing but vaporware, a first step towards something that isn't a sure thing. Scientists may be able to encode and store data on DNA strands, but how are they going to do anything meaningful with them?

    Until researchers find some way to *quickly* read and write from these DNA computers, don't expect any new stuff to change our lives dramatically.
  • I can see where fairness might imply this, but you are talking about biases learned before the people in question learned to talk, or pretty nearly, so they don't see that there is any rightness on the other side.
    Personally, I consider such religious blindness to be extremely dangerous; unfortunately, it's so common that politicians court it for the votes, and care not for the consequences.
  • The potential for DNA computing has long been realized. As noted in the article, the problem was trivial and easily solvable by inspection; by the human brain, a highly complex arrangement of DNA.

    Storage of OSes, data, and apps that adapt is a function of life from amoeboids up. What is important here is molecular-level switching, not DNA in and of itself.

    Art's rant on

    1. Linus Torvalds (and others) store entire OSes in their heads.
    2. (pick your favourite musician) stores their whole repertoire in their head, plus uncountable variations on those songs.
    3. My two kids grow and adapt better than any app or box I have seen or am likely to see any time soon.
    Art's rant off
  • Let me step into devil's advocate mode.
    The same could've been said of calculators. When electronic calculators came out, people already had slide rules, which did the same job faster and easier and had more uses. Calculators were slow, cumbersome and ugly. Why should anyone have invested in calculators? They didn't accomplish anything new or better.
    Yes, they seem kinda stuck, and somewhat limited in their current state. But so were computers when they only used machine code, or only used vacuum tubes. Yet the technology evolved and overcame these limitations, and eventually we have what we have today.
    Most inventions are just better ways of doing things we already know how to do. We already knew how to get from point A to point B - walk or horse - before cars were invented. And when cars were invented, they couldn't move any faster than horses.
    Yes it may be hype, but it may also eventually lead to some useful things. Why block that path?
    -cpd
  • This is nothing new. More precisely, the idea of evolutionary computation isn't new:
    1. Abstract your data.

    2. Make a population of abstractions with somewhat random values.

    3. Evaluate each member of the population for its "fitness," or how good a solution it is.

    4. Merge together good solutions, mutate a few, and kill off the ones that suck.

    5. while (not done)
    go_back_to_step_3();

    6. Output your solutions.

    "DNA" abstractions are nothing but binary (10110110) or quaternary (GATTACACCTTG). These suck compared to representing the possible solutions with different data structures. If you have a linked list of structs, why should you take the time to recode it in binary when you can just tailor the mutation, combination, and evaluation functions to fit your data structure?

    Genetic algorithms are old hat. Try Evolutionary Programming. Basically the same idea, but be smart about it and use data structures that are easier to code, instead of taking all the time to recode everything in binary or GTACCGACTA.

    This is more efficient, too. You will have to do a lot less bounds checking because you are dealing with familiar data structures. Array[3][3] is a lot easier to deal with than "GATTACGAA". A rough sketch of the whole loop follows below.
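
    Here is a bare-bones sketch of that loop in Python, using a plain list of floats as the genome instead of a bit string (the toy fitness function and all the constants are made up for the example):

        import random

        TARGET = [3.0, -1.0, 7.0]          # toy goal: evolve a vector close to this

        def fitness(ind):
            # Lower is better: squared distance to the target (step 3).
            return sum((a - b) ** 2 for a, b in zip(ind, TARGET))

        def mutate(ind):
            # Small Gaussian tweak to every gene (step 4).
            return [g + random.gauss(0, 0.5) for g in ind]

        # Step 2: a population of somewhat random candidates.
        pop = [[random.uniform(-10, 10) for _ in TARGET] for _ in range(50)]

        for generation in range(100):       # step 5: while (not done)
            pop.sort(key=fitness)           # step 3: evaluate
            survivors = pop[:25]            # step 4: kill off the ones that suck
            pop = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

        best = min(pop, key=fitness)        # step 6: output your solution
        print(best, fitness(best))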
  • Just think -- Now the computer programmer's creations can be breeding more than the programmer. 8)>-
  • Grrrrrr, grumble, grumble.

    What high school did you go to, Hemos? Christ, I didn't learn anything resembling that in high school. But then again, I majored in smoking pot and being oppressed by jocks.

    Seriously though, it's gonna cheese me off in a major way if all my l33t hardware skills get obliterated by some weird-ass bio freaks. In all, it was an amazing article... the parts I could understand, at least.

    Must resist!

  • As of right now, this is just another example of a neat-o hack; kinda like using an oscilloscope to play video games. You have some lab tools and a bored mind and you have a cute parlor trick!

    Regarding some of the issues it brings up:
    Stochastic processes as better? Maybe I'm out of the loop, but once I read this article I said "the factor that limits the usefulness of this is that it's stochastic! I don't want probabilities that my "computer" MIGHT compute, I want to compute!"

    Regarding it being massively parallel: yes it is, but only up to a point, because there is no inter-processor communication. And not all problems can be broken down into 27,000 linear problems that are completely independent. However, in the case where a problem can be broken down into 27,000 problems that are all independent, that's where this takes off! Once you can encode your data in DNA format, you have these special-purpose "processors" (enzymes) whose whole job is to find data to manipulate, and do so. However, given the current setup, it's like computing in the old days: get your program together, submit it to the computer people, they process it over a couple of days and hand you the results back.
    So my score: Useful? No. Interesting? Yes.
  • I was thinking the other day (a dangerous pursuit), and a peculiar thought popped into my head. People are coded in binary. DNA is just a bunch of nitrogenous bases in order. There are four possible bases, each of which could be seen as a different state (much like the high/low voltage 1 and 0). Now, this would appear to be a quaternary system, but just like hexadecimal, quaternary can be converted into a binary state. Each quaternary digit (A, G, T, C) can be translated into two binary digits (00, 01, 10, 11). This 'code' drives the 'hardware' of amino acids. I'm sure it's not an original thought, but it seemed pretty cool. Now, if I could just figure out how to make a hack for my DNA that would let me think faster...
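
    A two-line sketch of that encoding in Python (which base gets which bit pair is an arbitrary choice):

        # Map each base to two bits; the particular assignment is arbitrary.
        BITS = {"A": "00", "G": "01", "T": "10", "C": "11"}

        def dna_to_bits(seq):
            return "".join(BITS[base] for base in seq.upper())

        print(dna_to_bits("GATTACA"))  # -> 01001010001100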

    -The Tardis
  • Think for a minute about how this would be bad. You have to grow a computer, right? Well then, you would most likely have to do all sorts of very precise measurements and set up exact conditions. These are going to require a massive lab to get to work properly. Also, there is a high possibility of failure. If people are having a hard time getting organs created via cloning and the like, how likely is it that a computer is going to be created?


    The point you're missing is the self-organization of these systems. The goal is NOT to have to spend months in a lab just to set up conditions to create one unit. These systems will grow, perhaps very quickly, into the desired functionality, like a seed grows into a tree. I recall a story last fall about researchers growing LCDs in an organic process that didn't require the high temperatures currently used in the manufacturing process. Those high temperatures force the use of a substrate like glass, which can withstand them without melting or burning. Using the organic process, LCDs can be 'grown' on thin plastic film, creating FLEXIBLE LCD displays, electronic paper, etc. We're only BEGINNING to tap the potential of organic technology.

    Also AI for the most part is still a plaything and something that one really can't easily study or actually get a job in. Sure you might learn something but getting money is top priority for survival.

    What could EVER lead you to say this? Sorry to go off-topic for a bit, but do you have any idea how ignorant that sounds? AI itself is an extremely quickly expanding field. Shall I give examples? AI is responsible for PDAs [palm.com] being able to perform hand-writing recognition. Via-Voice [ibm.com] and other recognition technologies use AI. GIS systems [mapquest.com] use AI to generate routes. Played any video games [idsoftware.com] lately? AI now produces better, faster, tougher monsters. AI is being used to detect insurance fraud, see Infoglide [infoglide.com] for example. Search engines [yahoo.com] use AI to produce better results. I could go on and on, but I'll finish with Slash [slashdot.org], which uses a kind of AI in the form of moderation. Please moderate me up! ha-ha Anyway, go do some real research before spouting off like that. AI is not just about the Turing test.
  • by Nagash ( 6945 ) on Wednesday April 19, 2000 @05:57AM (#1124048)
    There's more to it than doing actual computation. These are arguments that have been had a thousand times over in the past: "I don't see the benefit right now, so what's the point?" It may be enormously useful, it may not. The point is, we have to find out.

    Even if DNA computing proves to be too cumbersome to implement, we can still gain lots from it - for example, perhaps hidden deep within it is another model of computation. Maybe we can find something that's better than a Turing machine (e.g., one that can check another program for infinite loops and find them). Hell, it may prove to be a useful storage device.

    DNA does operations on data. It does remarkable things with them, and it can do a lot of things at once. Studying this is not a bad idea, but putting all our eggs in one basket is. If we concentrate totally on shrinking die/chip sizes, we'll probably regret it at some point. Push the limits. It's fun! :)

    Woz
  • IIRC, from the papers in the ALife conference proceedings, no evolved sorting network was better than the best known solution. I believe he had to use parasites to even evolve one as good.

    --
  • I actually find that a white background hurts my eyes when I stare at it for a long time; my IRC client is black background with white text because I can't stand the brightness for extended periods. Oh yeah, and didn't you ever use DOS? :P
  • Because biological computing can do things that "conventional" computing hasn't come close to doing and might not be able to do.

    Your arguments seem to be a rephrasing of prior arguments against mechanical computing.

  • Also AI for the most part is still a plaything and something that one really can't easily study or actually get a job in.
    You probably use AI every day (at least I do): every time you use a search engine (Google- or Inktomi-based, and probably most others), you're using AI. Just because it doesn't sound like HAL 9000 doesn't mean it's not AI-based. And I don't know what makes it hard to study (relative to computing in general), other than some kind of inflexible religious bias against it.

    It is true that knowledge of "AI" is not especially marketable right now (but the same could be said for knowledge of "algorithms").

  • With the amount of design effort required just to come up with the appropriate DNA molecules for processing an NP-complete problem such as the Hamiltonian circuit problem, any hardware designer worth his salt could have created a piece of dedicated hardware to solve the corresponding problem in parallel in the same amount of time.

    DNA computing is interesting, but outdated before it came into existence. Sure, things are tiny, and theoretically that lets you do large amounts of work in parallel, but it's not paradigm-shifting. You can build electrical circuitry that runs in parallel just as easily; it would just be slightly larger than the corresponding DNA molecules, assuming someday they find an effective means of lining up large quantities of them.

    However, the most important reason DNA computing is already obsolete is quantum computing. In the past two years there have been some incredible, and to most people unexpectedly early, advancements in quantum computing. Quantum computing, unlike DNA computing, is paradigm-shifting. Instead of attacking hard problems like NP-complete problems by doing huge amounts of parallel computation, it does them the right way (in the sense of nondeterministic polynomial theory) and processes the first phase nondeterministically.

    Nondeterministic computation is, from the simplest description, "magically" getting the right result by applying a process to an uncertain piece of data, whereby the actual processing reduces the data down to the correct answer, from which the correct input is then known. In this manner, problems such as the Hamiltonian circuit are no longer difficult, and can be solved in quite reasonable periods of time.

    The important difference between nondeterministic computational power and conventional deterministic computing is that when you take a problem like the Hamiltonian circuit problem and scale it up to a huge number of vertices, DNA computing falls apart, because you would have to create an enormous quantity of DNA to correspond to the enormous number of vertices. Quantum computers will scale just fine in this area.
  • Can't you people tell when someone is just being funny? You people seem to be a bunch of sourpusses. I got a good laugh out of his comment.
  • The article about how a genetic algorithm managed to evolve a very fast sorting algorithm is very interesting. But is this necessarily good? Computers and machines have often been used to replace humans because machines are cheaper. Will they end up replacing programmers some day? Will these developments affect the career opportunities of programmers in the near or distant future? I was under the impression that a computer career was a safe bet. =)
  • I thought it said he had better sorting networks?? But it's been 5 minutes since I read it, so it's very possible that I've already forgotten... but if Hillis' method does work well, why couldn't it be applied to the 'traveling salesman problem' (or any other mathematical problem, for that matter) mentioned in the Feedmag link? Even though current methods don't scale well, wouldn't they be able to evolve into methods that do, just like in Hillis' experiment? I really have no idea what I'm talking about, so I'll just go back to my hole now...

  • No one is claiming that DNA computing can solve a problem better than silicon computing. Yet. That is viewed as the holy grail of the field. DNA computing is simply one way that we may end up doing molecular-level computing. It has plenty of drawbacks, and the jury is still out on whether or not it will ever be useful, but the excitement is over the fact that you can do such massively parallel computations (optimistically, 10^20 operations). My guess is that it won't replace silicon as a general-purpose medium for computation, but rather that we will find some specialized uses for it. Laura Landweber [princeton.edu] (big shot in the field and cool person) at Princeton has some good ideas along these lines. It is a hot new field, and a lot of people are now working on it. There is a lot of work to be done just in quantifying and then controlling errors in the processes. It might also be noted that we have only begun to mine biology for useful enzymes to be used in computation.

    If you want to read more about DNA computing, the best source is the set of 5 DIMACS proceedings (DNA Computers, DNA Computers II, etc.). If you would like a more in-depth review of the field, I published one in Evolutionary Computation 6:201-230. You can find a postscript version here [unm.edu]. It was designed to be readable by computer types and to bring you up to speed such that you could start contributing to the field. I don't know how well it fared in this department. Unfortunately, it is now a bit out of date. A more up-to-date version will come out soon in a collection on Molecular Computing edited by Tanya Sienko, from MIT Press.

    Cheers, Carlo Maley

  • Anyone have any thoughts as to what sort of potential uses there might be for DNA and computers? While most people are looking in one direction -- using computers to analyze DNA -- there's another area that deserves some investigation: computers that *use* DNA.

    Probably the densest storage format ever created, DNA offers some amazing benefits that could be put to use in computers at some point in the future. Imagine entire operating systems stored in synthetic DNA; your whole mp3 library taking up the space of a human hair; software that grows and adapts along genetic guidelines....

    Anyone have any other ideas?


  • What a pity you didn't even read the article.

    and we simply don't have star trek level technology to support them

    Had you actually read the article, you would have discovered that some biologists suspect that mother nature may have already solved half these problems for us, as cells (esp in DNA replication) appear to use some pretty advanced quantum computing techniques already, and they do it at body temperature.

    Read it :) Interesting stuff, not the usual "well, if we somehow figure out how to break every existent law of physics, I can search databases in nlogn/x tries" quantum computing gibberish.

    Anthony
  • So is there someone at the FSF working on a truly viral implementation of the GPL for use with DNA computers?
  • go on, click the link [newscientist.com], click it [newscientist.com], you know you want to :)

    the article [newscientist.com] (go ahead, click [newscientist.com] on it [newscientist.com]) agrees with you largely, to quote it:

    Physicists generally concede that the task is so formidable that a practical quantum computer won't exist for decades.

    The forces of evolution, he claims, may have solved the problem of quantum computing several billion years ago. It's a startling idea--but if true, it could explain a puzzle at the core of biology.

    Essentially, they're trying to figure out why information in DNA is encoded using four bases, when binary is more efficient and therefore should have won out in an evolutionary context. Apparently, if quantum computing is used at a couple of points in DNA replication, four bases become more efficient than two -- which isn't to say that DNA *does* this, only that it might...

    Anthony
  • Could we see the successful pairing of these technologies in our lifetime?

    How long will humans be relevant after such a union takes place?

  • It's called GENETIC computing, not DNA. Foo.

  • that's odd, i think my cookie expired while that was submitting or something, i didn't check the 'post anonymously box'. either way, that's me, not meaning to be an ac ;)

    anthony
  • by LaoK ( 124816 ) on Wednesday April 19, 2000 @06:08AM (#1124065)

    I've done a little bit of work in DNA computing, and my impression of the state of the art is that we're only at the point where we're wiring the vacuum tubes together in order to program (if that).

    It's kind of a "Nanotech-Complete" problem, in many respects. The repertoire of available enzymes is limited to those that are otherwise useful in molecular biology (endonucleases, ligases, methylases, etc.), but for some applications, "designer enzymes" are needed, and the technology to produce arbitrary enzymes is just not here yet (though a solution to the "protein folding" problem might be possible, given sufficient conventional computational power, e.g. something like distributed.net, or seti@home).

    The biggest problem with DNA computers is I/O, primarily input, or what's called the "encoding problem". Representing arbitrary information as DNA sequences is a computationally hard problem in itself, since one must ensure that the encodings are distinct enough to interact with each other only in the desired ways, which contribute to the solution of the problem and, most importantly, produce a true solution (i.e., do not corrupt the data).
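
    To give a feel for it, here is a toy illustration in Python (the codeword length and distance threshold are invented for the example): greedily keep random 8-mers that differ from every codeword already chosen in at least four positions. A real encoding would also have to account for reverse complements, melting temperatures, secondary structure, and so on.

        import random

        def hamming(a, b):
            # Number of positions where the two strands differ.
            return sum(x != y for x, y in zip(a, b))

        def pick_codewords(count, length=8, min_dist=4):
            words = []
            while len(words) < count:
                cand = "".join(random.choice("ACGT") for _ in range(length))
                if all(hamming(cand, w) >= min_dist for w in words):
                    words.append(cand)
            return words

        print(pick_codewords(5))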

    The output can be handled in a variety of ways, perhaps the most promising of which involves DNA microarrays (a.k.a. "gene chips"), which could potentially serve as an interface between DNA-based computers and conventional silicon-based computers.

    But I'm afraid we're a long way from having a general-purpose DNA (or more broadly) molecular biology-based computer. And by the time we have the technology to build one, I suspect the other applications of nanotechnology may have rendered the point moot.

    LaoK

  • 1) White text on black background is unreadable.
    Oh? Do you find your console messages during bootup unreadable?

    Some people actually prefer light text on a dark background. Usually, I don't much care; but when I have to deal with a low refresh rate, that's the only way I can avoid a headache.

    On the site in question, I found the small font much harder to deal with than the color scheme. The main body of text should generally be in the font the user has selected for their browser's default; if you're going to specify a different font for the body, make it larger, not smaller!

  • I think you are severely underestimating the abilities of those studying this. Considering that genetic research has done "miracles" in the last 20 years with regard to how much it has learned about DNA, how long do you seriously think it will be before lots of practical applications come out of it?

    Both computational study and genetic study are new fields. Electricity was a new field 200 years ago. Look what we have now. I know this is a tired argument, but it's very true and must be respected.

    Woz
  • For a most intriguing experiment on evolutionary computing, have a look at Tom Ray's Tierra Homepage [atr.co.jp]. From the project website:
    The Tierra C source code creates a virtual computer and its Darwinian operating system, whose architecture has been designed in such a way that the executable machine codes are evolvable. This means that the machine code can be mutated (by flipping bits at random) or recombined (by swapping segments of code between algorithms), and the resulting code remains functional enough of the time for natural (or presumably artificial) selection to be able to improve the code over time.

    There's also an ongoing network experiment where several "islands" of evolution are linked via the Internet.

    Very interesting stuff.
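
    The two operators the quote mentions are simple enough to sketch. This is not Tierra's code, just an illustration in Python of bit-flip mutation and segment recombination on a program stored as raw bytes:

        import random

        def mutate(code, rate=0.01):
            # Flip one random bit in roughly `rate` of the bytes.
            out = bytearray(code)
            for i in range(len(out)):
                if random.random() < rate:
                    out[i] ^= 1 << random.randrange(8)
            return bytes(out)

        def recombine(a, b):
            # Swap the tails of two "programs" at a random cut point.
            cut = random.randrange(min(len(a), len(b)) + 1)
            return a[:cut] + b[cut:], b[:cut] + a[cut:]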

  • Since the NP class, at its heart, still relies on Turing machines, you have to wonder how artificial the P=NP problem actually is. I mean, sure, it seems that the DNA approach is still going to scale exponentially, but that's just with the naive algorithm that was given.

    It seems that a firm theoretical grounding using DNA or Quantum machines would be needed before we can predict the ability of these machines to solve problems.

    And another thing... it really bugs me when people start claiming "my machine is alive" or "what about cancer," etc. This is similar to the logic of "A dog has 4 legs, a cat has 4 legs, therefore a cat is a dog." Don't be stupid. Just because some futuristic machine might have a bit of slimy goop in it doesn't mean it's alive.

    "I got a 98% in my high-school C++ class and I can't read the kernel, therefore nobody can read the kernel." -- ever enlightened slashdot respondent.

  • Hmmm. Recoculous, huh? I don't really know what that means, but it sounds like you have a case of acute keytyposis.

    I would not want your DNA in my computer, that's for sure. It'd probably start stuttering on my d's and i's.
  • 3. The technology just forces "programmers" to learn more and more stuff to become competitive in any foreseeable way. Forcing people to learn massive amounts of data and forcing them to accept some lousy paradigm is rather akin to a criminal act.

    That's kind of a bad attitude to take, IMHO. As technology improves, we always have to learn how it works in order to use it. Years ago, C++ came out, and everyone moved to that. Now nearly everyone uses C++, with the "object-oriented programming paradigm". Who knows what the next one will be. No one is forcing people to learn new things. It's just that people who know new (and hopefully useful) things will be in demand, and hired by people seeking programmers with that certain set of skills.

    On the topic of biological computers, however, there is an interesting point worth noting: there's been quite a bit of media hype recently about quantum and biological computers and other massively parallel devices, and what untold wonders they can accomplish. Consider, however: there are very few problems that can really take advantage of that much parallelism... One of the few that can is factoring large numbers to crack many of today's encryption methods. And as soon as someone mentioned that potential application to the government.... BAM, instant funding. Hmm....
  • While what they are talking about in this article is, contrary to many of the beliefs of people here, in fact something specific that is referred to as DNA computing... DNA computing as described here is nothing more than a special type of genetic algorithm.

    So, "kinda the land where 1s and 0s and Darwin meet"... not quite. Genetic Algorithms and Genetic Programming have been around for a long time. DNA computing is just a special version of genetic algorithms... so nothing new. What's the big fuss?

  • Stochastic processes as better? Maybe I'm out of the loop, but once I read this article I said "the factor that limits the usefulness of this is that it's stochastic! I don't want probabilities that my "computer" MIGHT compute, I want to compute!"

    Damn right stochastic processes aren't better than hard answers. But what if:

    a) The hard answer is really, really hard to get; and

    b) The limit or probability of error is so incredibly small that it may as well be zero (i.e. it's not a hard answer but it's really, really FIRM :-) )

    - then maybe you might like a stochastic process. I don't have a good example handy fulfilling these conditions but someone reading this is probably doing a PhD on just such a problem, right?
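
    On second thought, one example that might fit both conditions is randomized primality testing: each Miller-Rabin round is stochastic, but after k rounds a composite slips through with probability at most 4^-k, which for k = 64 may as well be zero. A sketch in Python:

        import random

        def probably_prime(n, rounds=64):
            if n < 4:
                return n in (2, 3)
            if n % 2 == 0:
                return False
            # Write n - 1 as d * 2^s with d odd.
            d, s = n - 1, 0
            while d % 2 == 0:
                d, s = d // 2, s + 1
            for _ in range(rounds):
                a = random.randrange(2, n - 1)
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = pow(x, 2, n)
                    if x == n - 1:
                        break
                else:
                    return False      # witness found: definitely composite
            return True               # no witness found: really, really FIRM

        print(probably_prime(2**61 - 1))   # a Mersenne prime -> True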

  • Convergence of biological technology and digital technology is the whole theme of Sundman's "Acts of the Apostles" (wetmachine.com), which in addition to being a spooky novel is also a pie in the face of Silicon Valley infocrats and biotechnocrats in the pharmaceutical industry.
  • Let's clone the smartest people's DNA and use it for computer brains! Einstein in my box running Quake, what's the hack value of that?
  • 1) White text on black background is unreadable.
    --
  • What do we get when we cross Linux with Windows...
    A blue penguin that crashes all the time? Oh wait, penguins can't fly.....

    Or better, BSD and Linux... A demonic penguin.

    Cross that with HURD and you get a demonic penguin with bull horns...

    I think I am going to get nightmares from reading this.....

    Grtz, Jeroen

  • Americans amaze me time after time....

    Would this mean that a similar text would need to be placed in every religious book?

    This book holds the unproven theory that there is an almighty immortal entity, from now on to be called 'GOD' or 'Lord', that created the universe. Use at your own risk.

    Buddhism is a controversial belief that a person's soul, a non-corporeal entity whose existence is not confirmed, will be reborn in another living thing.

  • Odd timing for this one, I just finished reading this [newscientist.com] article over at New Scientist [newscientist.com] on how DNA may use quantum computing techniques...

    Anthony
  • by Pfhreakaz0id ( 82141 ) on Wednesday April 19, 2000 @04:46AM (#1124080)
    All of 'em would have to have a disclaimer:

    "Software evolution is a controversial theory holding the unproven belief that random, undirected forces produced a world of better software. Use at your own risk."

    Yes, they were really gonna put this in the books here. Fortunately, it was thrown out. [detnews.com]
    ---
  • Cross that with HURD and you get a demonic pinguin with bull horns...

    Cross that with HURD and you get ten years of demonic penguin sightings, but no actual demonic penguin.

  • Biological computing has the potential to revolutionize the way we do things. Using DNA, computers could be *grown* rather than manufactured. They would no longer require power, but rather nutrients. They could accomplish in a one-step process computations that take billions of CPU cycles today. I think it's certainly an interesting field to watch. However, DNA is not just being used at the hardware level. There is a growing field of artificial intelligence research concentrating on neuro-evolution: using DNA to encode the neurons of a neural network, selecting the best-performing set of neurons, and recombining the DNA in a form of 'breeding' to hopefully get better neurons. Check http://www.cs.utexas.edu/users/nn/pages/research/neuroevolution.html [utexas.edu] for some interesting links. Look for the section on Eugenic Evolution.
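
    To give a flavor of the weight-evolution idea, here is a toy sketch in Python (not the UT Austin code; the network size, population size, and mutation rate are all invented for the example): evolve the nine weights of a tiny 2-2-1 network until it learns XOR.

        import math, random

        CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

        def net(w, x):
            # w holds 9 weights: 2x2 weights + 2 biases for the hidden layer,
            # then 2 weights + 1 bias for the output neuron.
            h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
            h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
            return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

        def error(w):
            return sum((net(w, x) - y) ** 2 for x, y in CASES)

        pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(100)]
        for _ in range(300):
            pop.sort(key=error)                   # select the best performers
            parents = pop[:50]
            pop = parents + [[g + random.gauss(0, 0.3) for g in p]   # "breed"
                             for p in (random.choice(parents) for _ in range(50))]

        best = min(pop, key=error)
        print([round(net(best, x)) for x, _ in CASES], error(best))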
  • Software breeding and DNA computing are very different! Why are they in the same article?

    The former (the most interesting, IMO) is about using evolutionary principles to breed better software - and yes, I mean software as we know it, not (necessarily) software for these DNA computers.

    The latter is making a calculator from DNA - still nice but quantum computing is where it's at.

    PS: I agree strongly with another poster who said DNA computing should be called genetic computing.

    -------------------------------------------------
    "If I can shoot rabbits then I can shoot fascists" -

  • This may sound like flamebait, but it's not... Lately, it seems that there is a lot of hype about biological systems and how they "exploit evolution" or "use the solution Mother Nature has been using all this time", or some such nonsense.

    If you actually read till the end of the article, you'll realize that what they have done is merely to re-encode the TSP (travelling salesman problem) in terms of DNA reactions. As far as I can see, there is no gain whatsoever from doing this -- a computer doing a brute-force search on the TSP problem would have yielded the solution much quicker than the DNA method. The only potential advantage of the DNA method appears to be the "compactness" of data and the "stochastic, massively parallel" nature of it. But at the end of the article, it specifically says that this stochastic nature of DNA reactions (or any chemical reaction, that is) itself is the barrier -- it doesn't scale well. WTF??? That means that you have a method of solving TSP which is "massively parallel" (and therefore, by some strange fuzzy reasoning, it is "better" than silicon-based methods) but which doesn't scale well. Big deal, back to square one.

    IMNSHO a lot of this hype is just riding on the unfounded assumption that "stochastic" is "better" because "that's what Nature uses". BS, I say. Just because something is stochastic doesn't make it any better (except by superstitious association with "stochastic" processes like evolution or some-such.) You can get something useful like strong cryptography from randomness. But you can also get white noise from randomness. Until it's proven that a particular process actually has its merits (and not merely duplicating what can already be done by an existing computer or other method), it's all just hype.

    Granted, the "massively parallel" claim seems more credible; but in this case, they've just shot themselves in the foot -- the supposedly good "massively parallel" nature of the DNA method turned out to be a limiting factor, as you would need impractical amounts of DNA to conduct any non-trivial computation. Except perhaps for some hype value associated with the phrase "massively parallel" (boy, don't we love that term: Beowulf clusters, "massive" multi-CPU systems, etc.), I see no value whatsoever in this whole thing. As far as I'm concerned, somebody just hit upon something in the lab. Big deal, scientists have been doing that for centuries. Let's see something real produced before it's hyped as the Next Revolution.

  • by Hrunting ( 2191 ) on Wednesday April 19, 2000 @05:03AM (#1124085) Homepage
    So, if you repeatedly use legacy code in your new code in an attempt to ensure that your new code will support the same sorts of programs and environments as your old code, do you call that software inbreeding?

    And if so, does that make Microsoft a bunch of software rednecks?
