Education

Will Billions Of Nodes Need Biologic Networking?

Stephen Bamattre writes: "There is an interesting research project at the University of California, Irvine, that attempts to create a new concept of networking and distributed architecture using biological concepts. Interestingly, it may be a more viable network design for rapid growth. Maybe, some day, we can really kill processes. Check out the paper here." From their overview: "We believe that large scale biological systems, such as the bee or ant colony, have already developed many of the mechanisms needed to satisfy these requirements. We have identified several key principles and mechanisms in these biological systems, and we are now applying them to the design of network services and applications." Surely not a new idea, but a little more concrete as described here than the usual "network is alive" metaphors. The paper is available in compressed PostScript and as a Word file.
This discussion has been archived. No new comments can be posted.

Biologic Networking?

  • by Anonymous Coward

    Am I the only one who can see the possible dangers inherent in this kind of large-scale attempt to mimic biological behavioural mechanisms? Is the hubris of these scientists so great that they have ignored the possible consequences, or have they just forgotten in their rush to experiment with a new toy?

    Given any system of sufficient complexity and flexibility we have the possibility of real intelligence arising as part of an emergent phenomenon. And since this system is designed to mimic known biological mechanisms involved in consciousness and thought, it is even more likely that it will become intelligent once a critical threshold of complexity has been reached. And given the explosive growth we see in networked appliances of all kinds, this threshold cannot be too far into the future.

    What would be the consequences of a large scale network such as the internet suddenly developing intelligence? We cannot be sure, but at the very least it would cause the collapse of economies, which are tied more and more to the unimpeded and guaranteed flow of information across the globe. And given that it would be able to access the sum of human evil and hate as stored in many places on the net, would it look benevolently upon its creators? I doubt it, and I worry about what would follow.

    For a good book detailing the dangers of this kind of biological computer, read Distress by Greg Egan. This subject is worth a lot more thought before it can be allowed to proceed.

  • Is it me, or does he kind of look like the guy who played the Frankenstein monster in Young Frankenstein [imdb.com] in that picture?

    :wq!

  • As has been documented in recent books (notably Where Wizards Stay Up Late), the original ARPANET was not designed to survive nuclear war.

    However, packet-switching, as used in the original ARPANET and today's Internet, was invented by Paul Baran at the RAND Corporation, and one of the rationales in his original paper was indeed survivability in the event of nuclear war.

    (Unfortunately, I posted this previously as Anonymous Coward, due to a longstanding Slashdot bug I'd forgotten to work around.)

    The Internet's routing protocols don't seem particularly hive-like to me, but I haven't been able to read the paper yet to see what the researchers mean --- it's apparently been slashdotted.

  • by Seumas ( 6865 )
    All I saw was 'Nodes' and thought Cool! A post about Everything2 [everything2.com] !

    Bah. It sounded cool, but the server is pegged out with too many users. So my comment will end up being off topic, even though I intended to make the smart-ass remark and then read the article, then add something relevant and interesting to my post.
    ---
    icq:2057699
    seumas.com

  • In the "new research" portion of last year's SIGCOMM, there was a talk called "Let Fireflies Light Your Way". They talked about building a distributed braking system for trains (and proving its convergence to a 'correct' solution, grin) and mapping similar ideas to flow control and routing for faster convergence to stable behavior. Pretty cool stuff, and the talk is available via the mbone - go here [caida.org] and look at session 5, talk "5-1,2-ali,barford".
  • Well, some of us aren't in the USA, and over here it's past midday.

    Not all Slashdotters are living in the same time-zone ya know.
  • Intelligence not created by God must be by definition, of the Devil, and therefore evil
    Hold on there, Tex. What definition? Are you saying that all creations of man are, by definition, of the devil? Since your post is, presumably, a creation of man, it must follow, by your logic, that it is of the devil. Unless, of course, you're an avatar of god. (Do you claim such?) That being the case, you cannot contribute to any discussion or field of endeavor without advancing the work of the devil. Think about it.
  • If NNs are more powerful than TMs, does that mean Church's thesis is wrong? Or am I just totally confused?
  • IMHO this will happen eventually with or without a directed attempt. In fact, very primitive machine intelligence is already here.

    Keep in mind that the first "real" intelligence that arises (or is created) is very, very unlikely to be something more sentient than your most basic of insects. And in the even more unlikely event that it is perceived as a threat, there is always a power switch on the machines hosting it.

    One puzzle I have pondered is that machine intelligence will likely have no emotions/feelings. They won't have the built-in biological circuits for pain or irritability. Does that mean they probably won't develop without our help, or will they develop a completely different goal-reward system to spur them on?
  • Yeah, neural networks are extremely interesting stuff.

    Several of them are great at learning patterns (radial basis, Kohonen nets) while others are terrific at learning non-linear mappings and formulae (back-propagation, probabilistic nets, cascade correlation nets).

    The major problems are slow training speed and the "plasticity" dilemma. Slow training times can be dealt with by several different methods, like partial pre-training and optimized learning algorithms. But the plasticity dilemma is tougher. The problem is that networks that learn easily and quickly also "forget" quickly (i.e. they are too plastic), but networks that don't are "fragile" and don't adapt well to new patterns that weren't in the original training set.

    Unfortunately the entire neural network field, which was on its way to blossoming in the early 1970's, was largely ignored after a book by Marvin Minsky ("Perceptrons" by Minsky and Papert) mathematically proved that a limited form of early neural network (the perceptron) couldn't learn things that were not linearly separable (like XOR). This pretty much killed the field for 15 years and we are just now beginning to catch up.
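
The perceptron limitation mentioned above is easy to demonstrate. Below is a minimal sketch (plain Python; function names and toy data are invented for illustration): the classic perceptron learning rule converges on a linearly separable function like OR, but never settles on XOR.

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Single-layer perceptron with a step activation."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            delta = target - out
            if delta != 0:
                errors += 1
                w[0] += lr * delta * x1
                w[1] += lr * delta * x2
                b += lr * delta
        if errors == 0:          # converged: a separating line exists
            return w, b, True
    return w, b, False           # never converged: not linearly separable

OR_DATA  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

_, _, or_ok  = train_perceptron(OR_DATA)
_, _, xor_ok = train_perceptron(XOR_DATA)
print(or_ok, xor_ok)  # OR is learnable, XOR is not
```

Adding a hidden layer (as back-propagation nets do) removes the limitation, which is exactly why the field recovered.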
  • The slashdot team could just add categories for countries other than the United States. Users could then filter out categories the normal way (preferences) to block irrelevant info.

    Wouldn't it be nice to see all the news there is already _and_ some local stuff like events or local law issues?
  • "Hmmm, I don't see that in his post."

    Well, I was taking the comment re: efficiency to be about Turing-equivalence.

    "...programming languages for NN..."

    I'd be interested in some links to information on this if you have them.

    I've done a lot of semi-serious reading about AI over a number of years. I've also done some messing around with various programming bits. I'm (fairly) well aware of the newer, biology/complex-systems based approaches. But I was totally unaware that anyone had tried using NNs for anything but memory-type uses.
    --
    Have Exchange users? Want to run Linux? Can't afford OpenMail?
  • I was thinking about putting this in an Ask Slashdot, but since we are already on the topic of "biologic networking", I think I'll slide over by a half-topic and ask here.

    I understand that in 1943 McCulloch and Pitts proved that neural networks are Turing Machine equivalent. But I have (at least) two questions:

    1) Where can I find details on how to transform a given TM to an NN? Did they just construct AND, NOT and OR gates (or maybe just NAND) and make enormous non-efficient networks? Which leads me to:

    2) Did they (M & P [or for that matter anyone since]) exploit the power of NNs? That is, did their TM NNs incorporate the recognitive and associative properties that people use NNs for today?

    The reason I ask is: I'm interested in AI. I'm trying to create a Turing-complete computer constructed from an NN (or NNs) that I can then create a programming language for. My hope is that this language/computer combo will have some natural features that make some aspects of AI easier.
    --
    Have Exchange users? Want to run Linux? Can't afford OpenMail?
  • "Prove to the world that security through increased regulation is never the answer."

    Normally I don't respond to trolls and offtopic posters, BUT....

    The above quote is EXACTLY what I said when logins became a requirement. I said it again when moderation showed up. The situation has gotten worse and worse.

    Now even the TROLLS have picked up on the idea. It will be interesting to see how long it takes before Slashdot finally understands OR fades away to ZDNet-like unimportance.
    --
    Have Exchange users? Want to run Linux? Can't afford OpenMail?
  • Wouldn't a sufficient number of NAND NNs be "Turing equivalent in a practical sense"?

    Do you have any pointers to info (for the non-mathematician, please!) on the stuff in your third paragraph ("...purely theoretical sense neural networks...")? In particular I'm interested in your statement about continuous activation functions and Turing equivalence.
    --
    Have Exchange users? Want to run Linux? Can't afford OpenMail?
  • Actually, according to the other poster, they are LESS efficient.

    But in any case, that doesn't make my project useless. All programming languages (worthy of the name) are of the same power too--but we have many of those. Why? Because while everything is possible in all languages, it isn't always easy.

    I'm firmly of the opinion that AI is possible. But I'm not convinced that the current methods are the best path.
    --
    Have Exchange users? Want to run Linux? Can't afford OpenMail?
  • Technology seems to evolve toward doing the same things as nature. Maybe tomorrow's computers will act like brains, with multidimensional computations and data, and maybe the human network will be just what animals or insects evolved to millions of years ago... will they still be here when we get there? (Read Neal Stephenson's Zodiac!)
  • We already have such a thing. It is called the hampsterdance [hampsterdance.com].
    It is performed by geeks whenever they are scared or hurt. It alerts other geeks, who will run away screaming. It is especially powerful when performed in groups.
    As with many behavioural patterns, a symbolized incarnation of this dance is part of the geek's mating dance. At special events (e.g. a major kernel release) young geeks gather in a terminal room and dance the dance from 0:00 to 6:00 AM. Regional variants include waving PDAs, drinking coke and connecting ethernet cables.
  • look at the beautiful "biological machines" god created (humans).

    Don't you think we could learn from that to improve computer science?

    When I look at that [microsoft.com], I'm not sure if it's beautiful, and nor am I sure that computer science will improve if we make it look more like him.

    Go get your free Palm V (25 referrals needed only!)

  • "i'm sorry, but this fantasy of so many people today that somehow, mysteriously, "intelligence" will "emerge" from "sufficient complexity" is a bunch of speculative wishful thinking. i don't know how so many people can buy into this superstition."

    I'm not going to address intelligence until someone defines it. Emergence, OTOH, is both well defined and well described mathematically, thanks to people like Hermann Haken. That this fact is virtually unknown even among supporters of the idea is surely no evidence that it doesn't exist.

    "Materialism can never offer a satisfactory explanation of the world."

    Of course not. Luckily we have other forms of monism to fall back on, so this isn't really a problem.

    "He ascribes the power of thinking to matter instead of to himself."

    I'm sorry, but this is so obviously wrong I don't know where to begin. First, he is assuming that he is material, so even by ascribing thought to himself, he would be ascribing it to matter. Steiner is assuming thought is immaterial, not arguing for it.

    Second, he isn't ascribing thought to matter, but to interactions between material objects. This is the same mistake as believing that 'driving' must be an inherent part of a car's physical makeup -- that cars can't drive unless all of their quarks can drive as well.

    "The materialist has turned his attention away from the definite subject, his own I, and has arrived at an image of something quite vague and indefinite."

    Oh please. You know, philosophy has advanced a bit in the last century, it would be nice if everyone would come and join us. If you were going to pick something in which to ground a criticism of materialism, you could at least choose experience or intentionality. There's every indication that the self will be explained psychologically.

    -jcl

  • As it happens, I have several of Siegelmann's papers, though I'd never read any until you mentioned her. Just quickly skimming, it would seem that she also proved that a recurrent NN with rational weights can simulate a multitape Turing Machine, and that a similar stochastic network is super-Turing. Most interestingly, one paper states that the real-weighted NN is robust enough to withstand noise and implementation error, "[including] changes in the precise form of the activation function, in the weights of the network, and even an error in the update" (Siegelmann, Analog Computation via Neural Networks). There is an implication that it uses a finite number of neurons... kinda makes me wish I understood the math well enough to figure out how it works ;-)

    -jcl

  • "if God had wanted sentient beings made from sand he would have done so when he created the Earth."

    You've obviously never read Genesis ;-)

    " it follows that intelligence that is not created in God's form does not have a soul."

    Not necessarily. God could grant souls to human creations, if He liked. In fact, since we have no idea what a soul is, it would seem that God would have to intervene if our AI were ever to have one.

    "it thus follows that they cannot help but act against it instead."

    Again, not necessarily. You can act in accordance with the wishes of an agent you don't know exists. You could condition the AI to mimic the thoughts and behavior of the Pope (not any specific Pope, of course), resulting in a being that is both following God's will and utterly unaware of that fact. (Yes, I know this doesn't count.)

    "Think about it."

    I'd be much more interested in thinking about why you believe theology has any place in a discussion of AI at this point. Regardless of what else it may be, AI (the term is deprecated, if anyone cares) is an empirical science. If you create something that acts like a human, you have very likely found an insight about mankind. In the same vein, creating an AI would have a profound impact (always wanted to say that ;-) on theology. What if, by some miracle, the AI wasn't evil? What if it was utterly saintly, personally blessed by God, and sent to teach us about His ways? That would be empirical evidence that God approved of its existence, and that He has a somewhat more flexible view of the universe than His misguided children.

    All of this is academic, though, since I don't believe in God, evil, or that AI's are going to jump out of our networks.

    -jcl

  • "One puzzle I have pondered is that machine intelligence will likely have no emotions/feelings."

    This is an intuition about machines, not AI. It's been more than sufficiently proven (IMO, and that of many others) that emotions are required for intelligence. Human emotions are an evolutionary adaptation, after all, not a property of our biological nature, as some would have it. We don't feel pain (which probably isn't an emotion, BTW) because biological organisms are inherently pain-bearing entities, but because damage to organisms is a Bad Thing. Presumably AIs would be equally in need of some way to detect damage, and the associated emotion -- suffering -- to encourage them to avoid damage. This wouldn't necessarily be physical damage, either: the logical 'body' of the AI would need some form of protection as well.

    The primary emotions -- anger, sadness, happiness, loneliness, boredom, fear, etc. -- all have important cognitive roles that AIs would very likely need to function. There would probably be variation in some of the details, but we see as much every day with humans as it is. Once the lingering dualism (another product of Descartes) between Reason and Emotion is discarded, believing we can have the former without the latter will probably appear as silly as believing in immaterial minds.

    -jcl

  • I wonder, has someone around here been reading Ghost in the Shell? ;-)

    I am more than the sum of my parts? Cool!

    *shrug* It's not that big a deal. Emergence has been extensively studied this century, in both natural and artificial settings.

    Consciousness in a psychological sense, which seems to be what is being discussed here, is so poorly defined that using it is just inviting misunderstanding. You can't call it a 'level of complexity', because some people associate introspection with being conscious, and it's hard to see how a level of anything could introspect, or how introspection could lead to massive qualitative changes in complexity.

    There's also a problem with equating consciousness with unpredictability. In general, conscious behavior is no more unpredictable than any other human quality, and far more predictable than, say, the weather. At the same time, the behavior of presumably less complex elements of the brain can be virtually impossible to predict -- thus the great mystery of the unconscious mind. The related 'willful behavior' theory falls apart when you consider how many things in the world appear to act out of desires similar to those of animals. Surface tension can be readily anthropomorphized into a desire on the part of a substance to stay whole, for example.

    At any rate, the sort of intuitive notions of consciousness that most people seem to have should apply to most animals higher up the evolutionary ladder than lizards, and possibly quite a bit lower. Dogs seem to be self-aware (if not capable of introspection), certainly possess emotions, exhibit complex behavior, socialize -- in short, they seem to meet all of the various criteria for being conscious.

    -jcl

  • One minor point: MP neural networks are no more inefficient than the computer you're using right now. In fact, they're exactly as efficient: take 7 million neurons with fixed connections and weights, wire them up as a Boolean network, and you get a Pentium III.

    -jcl
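
The point above can be sketched concretely. Below is a hypothetical illustration (function names invented, not from any paper): a McCulloch-Pitts neuron is just a threshold unit, and with fixed weights it behaves as a logic gate. NAND is universal, so a fixed network of such neurons can compute any Boolean function -- XOR included, the very function a single trainable perceptron cannot learn.

```python
def mp_neuron(inputs, weights, threshold):
    """Fires (returns 1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def nand(a, b):
    # weights (-1, -1) and threshold -1: fires unless both inputs are 1
    return mp_neuron((a, b), (-1, -1), -1)

def xor(a, b):
    # the standard four-NAND construction of XOR
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

print([xor(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))])  # [0, 1, 1, 0]
```

The catch, of course, is that the weights here are fixed by hand -- nothing is learned -- which is exactly the sense in which such a network is "a Pentium III" rather than an adaptive system.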

  • "Actually, according to the other poster, they are LESS efficient."

    Hmmm, I don't see that in his post. He pointed out that they're actually finite state machines because they lack infinite memory, but the same is true of every other computer that can actually exist. My point was merely that von Neumann's model for computing was an application of the MP model for neural networks. Each MP neuron is a single logic element from which Boolean functions are built -- in other words, a transistor.

    Back to what you're proposing: programming languages for NNs don't have a wildly successful history. There are some here and there that appear workable (e.g., schema theory), but we're still a ways off. Plan to spend many years on the problem.

    "But I'm not convinced that the current methods are the best path."

    NNs have been used in AI for several decades. They show up a lot in robotics and applications that need pattern matching, and occasionally in reasoning systems (e.g., ACT*). In addition, Bayesian networks were proven equivalent to a large class of NNs, so there is some significant crossover that isn't immediately apparent. Hofstadter's current work (Metacat and IIRC Letter Spirit) has some features of NNs, but operates at a much higher level, forming a nice interface layer between NNs and neo-classical AI.

    Basically what I'm saying is that AI is an extremely diverse field, and that there's much more to it than stuffy predicate logic systems and chess machines. Many criticisms of AI's methods are the result of a lack of publicity for the more original architectures, more than any lack of creativity on the part of the researchers.

    -jcl


  • if man is so smart, how come he has to look to "dumb nature" for ideas that were already implemented and perfected for endless ages already? i guess we can always learn more -- but where did the original knowledge and intuition come from, then?

    johnrpenner.


  • | Given any system of sufficient complexity and flexibility we
    | have the possibility of real intelligence arising as part of
    | an emergent phenomenon. And since this system is designed to
    | mimic known biological mechanisms involved in consciousness
    | and thought, it is even more likely that it will become
    | intelligent once a critical threshold of complexity has been
    | reached.

    i'm sorry, but this fantasy of so many people today that somehow,
    mysteriously, "intelligence" will "emerge" from "sufficient
    complexity" is a bunch of speculative wishful thinking. i don't
    know how so many people can buy into this superstition.

    consider this:

    Materialism can never offer a satisfactory explanation of
    the world. For every attempt at an explanation must begin
    with the formation of thoughts about the phenomena of the
    world. Materialism thus begins with the thought of matter
    or material processes. But, in doing so, it is already
    confronted by two different sets of facts: the material world,
    and the thoughts about it. The materialist seeks to make
    these latter intelligible by regarding them as purely material
    processes. He believes that thinking takes place in the brain,
    much in the same way that digestion takes place in the
    animal organs. Just as he attributes mechanical and organic
    effects to matter, so he credits matter in certain circumstances
    with the capacity to think. He overlooks that, in doing so, he
    is merely shifting the problem from one place to another. He
    ascribes the power of thinking to matter instead of to himself.
    And thus he is back again at his starting point. How
    does matter come to think about its own nature? Why is it
    not simply satisfied with itself and content just to exist? The
    materialist has turned his attention away from the definite
    subject, his own I, and has arrived at an image of something
    quite vague and indefinite. Here the old riddle meets him
    again. The materialistic conception cannot solve the problem;
    it can only shift it from one place to another.

    (Rudolf Steiner, Chapter II, The Philosophy of Freedom)

  • I'm sorry but I must say that I don't understand why some people considered this previous post as "Insightful"!

    It is as insightful as saying that because each of our individual neurons has no intelligence, their "collective intelligence" is limited by that, and therefore that the brain they form cannot be intelligent at all!

    I'm sorry but that's plain wrong, because we are dealing with a very complex system here, where the total can be much bigger than the sum of the parts.

    The collective intelligence of the bees (or ants) will be limited by the number of bees and by the number and speed of the interactions they can have with each other, which is clearly smaller and slower than the interactions of the individual neurons of our brain.

    Angel
  • I'm not sure I agree with this. I've been working with cooperative systems lately, and I have been surprised by the speed with which these systems solve problems. It seems that small adjustments to a proposed solution, made by entities with only local information, can rapidly improve the solution, provided that many (maybe only 20 or 30) entities suggest different changes and the effect of the changes can be measured.

    The really big surprise is that the quality of the changes does not have to be all that good. The system has much better behaviour than the individual entities would be able to manage alone.

    Even if most of the actions (up to 80%) turn out to be useless or even damaging, the overall system state continues to improve. You may not call this intelligence, but the result is still really useful.
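
The dynamic described above can be sketched with a toy example (all names, the objective, and the numbers are invented for illustration): agents propose random one-bit changes to a shared solution, most proposals are useless, but keeping only the measured improvements still drives the system to the optimum.

```python
import random

random.seed(42)

TARGET = [1] * 32                      # the ideal solution, unknown to agents
solution = [0] * 32                    # shared state all agents adjust

def fitness(s):
    """Global quality measure: how many positions match the target."""
    return sum(1 for a, b in zip(s, TARGET) if a == b)

def agent_proposal(s):
    """Flip one random bit -- local knowledge only, usually unhelpful."""
    s = list(s)
    s[random.randrange(len(s))] ^= 1
    return s

for step in range(2000):               # many rounds of many cheap proposals
    candidate = agent_proposal(solution)
    if fitness(candidate) > fitness(solution):   # keep only measured gains
        solution = candidate

print(fitness(solution))               # reaches the maximum (32) long before 2000 proposals run out
```

Most proposals flip an already-correct bit and are discarded, yet the shared solution still climbs steadily -- the selection step, not the quality of individual actions, does the work.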
  • for off-topicism..

    The entire british legal system fiercely protects privacy.. think you'll find the RIP bill and recent Demon ruling to be passing blips..

    And as for the rest of it.. tchah.. just a reminder of why the rest of Europe considers America to be a nation of undereducated children.. no style, no class and thankfully short lives.. ('strue, you know.. we live longer over here)..
  • by tobe ( 62758 )
    We also get (Europe) seeing as we invented/discovered them all..

    Tv, Radio, DNA, vaccines, gravity, calculus, quantum physics, the computer, faxes (first one was installed in France in 1865).. I could go on..

    Oh.. and any concepts of taste, style, listenable music and an education worth anything..
  • That to another form of intelligence, the emergent behaviour of a large network of associated neurons (you call it 'thinking') and that of a large number of associated bees might not seem to be so very far apart..

    Bees haven't become more powerful than us simply because they have no great evolutionary pressures to drive that progression...
  • Old topics aren't new on Slashdot, but this is the first time I've encountered one I personally know. A big article in one of the last issues of Scientific American mentions this, and apparently timothy didn't care to mention that the name of this technology is 'Swarm Intelligence', based on the idea of forming a large intelligent colony out of small, dumb individuals. This sort of mechanism is found not only in bee and ant colonies, but also in the brain, where a 'neuron colony' forms an intelligent entity. Such networks were ALREADY developed, and employed in factories, scheduling algorithms, and network routing.
  • One puzzle I have pondered is that machine intelligence will likely have no emotions/feelings. They won't have the built biological circuits for pain or irritability. Does that mean they probably won't develop without our help or will they develop a completely different goal-reward system to spur them on?

    It seems to me that if AI were to somehow develop its own goal-reward system, it would end up being a significantly alien intelligence. Hence, being able to communicate with it meaningfully would be hindered. Of course, having its infrastructure on a medium created by humans would make it that much easier to dissect and understand.
  • Perhaps agent software could do a funny dance too?

    You mean like a dancing paperclip?

    Hmm, sounds familiar.....

    /L
  • I am more than the sum of my parts? Cool!

    what if a technological system given the same level of complexity could demonstrate similar properties ?

    What if this "conciousness/soul/whatever" you are talking about were just a certain level of complexity? Maybe we should just call 'consciousness' the level of complexity where we can't predict anymore what the response to certain input will be. We might want to exclude windows from that, though :-).

    How would you downscale 'consciousness'? E.g. is my dog conscious?

    Could you maybe give (an) example(s) of what you would describe as a property of 'conciousness/soul/whatever' to clarify things a little?
  • Just a quick reply to show I check up on my posts.

    I am not a native English speaker, so I'd like to ask what you mean by 'emergence'. Of course it's in my dictionary, but that doesn't help me much in this case. Do you mean the emerging of a property in a system that isn't in any of the parts separately? If so, then *shrug* indeed. If not, enlighten me.

    ...it's hard to see how a level of anything could introspect,...

    Introspection's just meta-thinking: thinking about thinking. Thinking is the processing of external input. Just feed the results into the machine again and we have introspection as I see it (I am open to different points of view, though).

    There's also a problem with equating consciousness with unpredictability.

    Hmmm, you're right. I wasn't making sense there.

    intuitive notions of consciousness

    That's just because these 'notions of consciousness' are expressed in body language similar to (read: understandable by) our own.
  • (From Netcraft)
    bolero.ics.uci.edu

    bolero.ics.uci.edu is running Microsoft-PWS/3.0 on NT4 or Windows 98

    NT4 or Windows 98 users include Gillette, Burger King, and Ford.

    Something about knowing they use the same web server software as burger king makes me all fuzzy inside.
  • I recently took a class covering some topics like this.

    Ants use pheromone trails to find fairly optimized paths to food. Some researchers have adapted their strategy to route IP packets. They use 'pheromone bits' attached to the packet headers to gather information about the network, then use that information to route packets more efficiently. As the network properties change, the routing algorithm adapts. Supposedly, it works much better than the current method (which finds the shortest path, I think?).

    Some links: Computational Beauty of Nature
    Mobile Software Agents for Dynamic Routing [mit.edu]
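    The pheromone idea is easy to sketch in code. Below is a toy illustration of my own (not the researchers' actual algorithm -- the names and parameters are all made up): each node keeps a pheromone level per outgoing link, picks a next hop with probability proportional to pheromone, and reinforces links that worked while every trail slowly evaporates.

```python
import random

def choose_next_hop(pheromone):
    """Pick a neighbor with probability proportional to its pheromone."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    for neighbor, level in pheromone.items():
        r -= level
        if r <= 0:
            return neighbor
    return neighbor  # fallback for floating-point rounding

def reinforce(pheromone, used, reward, evaporation=0.1):
    """Evaporate every trail a little, then strengthen the link that worked."""
    for n in pheromone:
        pheromone[n] *= (1 - evaporation)
    pheromone[used] += reward

# One node with three outgoing links, initially equally attractive.
table = {"A": 1.0, "B": 1.0, "C": 1.0}
for _ in range(100):
    reinforce(table, "B", reward=0.5)  # pretend paths via B keep winning
# B now dominates the (probabilistic) routing decision.
```

    Evaporation is what makes the scheme adaptive: if a link stops being reinforced, its pheromone decays and traffic drifts elsewhere.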

  • The first link should be: Computational Beauty of Nature [mit.edu]
  • I think they're talking about hive type intelligence, where many simple behaviours add up to more complex behaviour. Like when you simulate bird flocking by giving each bird simple rules, which when lots of birds follow them turn into more complex behaviour than you could directly predict from the rules. The birds flocking on the screen (or the network protocols or whatever) are not capable of conscious thought any more than those little bleeping things that ran round your screen while you shot at them many years ago.
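    For the curious, the flocking effect is easy to reproduce. Here's a toy 1-D sketch in the spirit of the classic boids rules (only two of the three rules -- cohesion and alignment -- and the coefficients are made up); no bird has any global knowledge, yet the group pulls together:

```python
import random

def step(birds, cohesion=0.01, alignment=0.05):
    """One tick: each bird applies purely local steering rules."""
    centre_x = sum(b["x"] for b in birds) / len(birds)
    centre_v = sum(b["v"] for b in birds) / len(birds)
    for b in birds:
        b["v"] += cohesion * (centre_x - b["x"])   # steer toward the group
        b["v"] += alignment * (centre_v - b["v"])  # match the group's heading
    for b in birds:
        b["x"] += b["v"]

random.seed(1)
birds = [{"x": random.uniform(0, 100), "v": 0.0} for _ in range(10)]
spread_before = max(b["x"] for b in birds) - min(b["x"] for b in birds)
for _ in range(500):
    step(birds)
spread_after = max(b["x"] for b in birds) - min(b["x"] for b in birds)
# The flock clusters even though no rule mentions "the flock".
```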
  • If you bung a random lot of neurons in a brain together I bet they will not "evolve" any kind of consciousness. The consciousness does not come just from the emergent behaviour of the neurons but from the evolution that caused the whole group of neurons to evolve together in ways with useful emergent behaviours. Unless you give the network the ability to evolve its behaviour, all as one bit (difficult if it's distributed) then I don't think it will evolve very much. Clearly if even the individual nodes have no ability to evolve, the emergent intelligence will be even more limited.

    Case in point: Humans. We evolve separately from each other. Yep, society has emergent behaviour that could not be predicted from the individuals within it. But it does _not_ display intelligence on a different level from those individuals. Clearly we (like network programs) can gain advantages from working as a society. But those advantages do not include some sort of higher consciousness than the individuals within it.

    Quite clearly my thoughts are based on hypothesising from very limited data (there aren't many examples). I am not talking from a science book but from a very hypothetical standpoint. I could therefore easily be completely wrong. I wasn't trying to say "You're talking rot" but just "I don't think your hypothesis holds in this case". Sorry if that was not clear.

    I thought of another example: bees and ants. They have collectives that are more than the individuals. But you don't see a hive of bees gaining a super-consciousness and evolving in any more exciting way than the individual bees.

  • I am loath to do this, because I hate to see posts like this. That post is not "Insightful" in any possible way. It has nothing to do with the article, hence it is "Offtopic", or possibly "Troll". Yes, I am also aware of the irony of replying to an Offtopic post.
    Once, while staying with a girlfriend's relatives, we returned from a trip to find something of an ant infestation. Thing is, the little beggars were crawling in through a gap in the door, along the kitchen floor, up the side of the refrigerator and in through the ice dispenser. Upon opening the door, there was a two-inch-high pile of little dead ant bodies on the floor of the icebox.

    Sure we want to build computer systems that emulate this kind of behaviour.

    Maybe if you're Microsoft. Then again, this kind of behaviour is more reminiscent of their customers than their software (For which the analogy would be more like hearing a muffled bang as the ants' nest explodes and bursts into flames)

    Rich

  • ...there is always a power switch on the machines hosting it.

    You haven't seen some of the movies that I've watched. Every time the evil computer takes over, and someone goes to "just" throw the switch, very very bad things happen to them!

    But if you want to volunteer, be my guest :)

  • You missed a fundamental pun, radja. Think:

    Nest of mice
    Home of mice
    Mouse pad?

    -Owen
  • So exactly what happens when one of these gets a virus? Do we rest it and give it some chicken soup with lots of fluids? -- There is no sig.
  • Well, while it is technically true that any computer is a finite state machine, you do not have to envision a Turing machine as actually having an infinite tape. The important thing is that the tape can be extended indefinitely, and that is true of modern computers (i.e. you can add disks, etc. -- not indefinitely, of course, but nearly so). Neural nets, being finite state machines, have this limitation built into their structure. There is no obvious way to grow an NN.
  • It is not surprising that rational coefficients should be sufficient. However, (potentially) unlimited precision is still needed. Roughly speaking, the binary expansion serves as a tape, and we only need finitely many bits at any given time. I am more curious about noise tolerance. It is hard to imagine how that could possibly be true. Or does performance deteriorate for longer inputs? Could you give me a pointer to the paper you were referring to? Thanks a lot.
  • It is really a question of storage. NNs can only store so much without an external storage system. That's why I think the idea of an NN coupled with a memory is interesting (maybe that's what the brain does?). Of course, you can say a very large neural network is _almost_ a Turing machine. But as the size grows, NNs become more and more unwieldy.

    I believe the work on Turing equivalence (actually they are not merely equivalent -- they are strictly more powerful, if I remember correctly) was done by Siegelmann and others. I think I saw it in the Neural Computation journal. However, it is all very technical and probably not accessible to a non-mathematician. For a simple example of how to implement a counter using a two-neuron NN, see a paper by Elman from 1995 (I think). I forget the journal, but it should not be difficult to find it on the internet. The problem is very clear: the counter uses the binary expansion of a real number in a clever way. The further you count, the more digits you need.
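    To make the trick concrete, here's a toy version in plain Python (my own sketch, not Elman's actual network): a single real-valued "activation" stores the count in its binary expansion, and each increment consumes one more bit of precision.

```python
def increment(x):
    """Prepend a 1-bit: 0.b1b2... -> 0.1b1b2... in binary."""
    return x / 2 + 0.5

def decrement(x):
    """Drop the leading 1-bit again."""
    return (x - 0.5) * 2

def count(x):
    """Read the count back out as the number of leading 1-bits."""
    n = 0
    while x >= 0.5:
        x = (x - 0.5) * 2
        n += 1
    return n

x = 0.0
for _ in range(10):
    x = increment(x)
assert count(x) == 10

# The catch: an IEEE double carries only ~53 bits of mantissa, so
# somewhere past 50 increments the expansion saturates and the count
# is lost -- the further you count, the more digits you need.
```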

  • Aah! no.

    I mean like "mobile agents". What you've got are these bumblebees. They fly off looking for flowers and stuff. When one of them finds a particularly good bunch of flowers, he flies back to the hive. Now, because bees can't talk, they need some way of communicating their findings to their fellow workers; so they do a special dance, which is actually a form of data communication, and passes on directions for finding the cool flower bed.

    I'd like some cool agent software that could behave in a similar way. I certainly don't mean a stupid paper clip, but something that can relate back to me in some kind of efficient way. Either by talking, or prancing around. Even better, it could simply show me what it's found.

    Hmmm, this Stilton cheese is *really* nice...
  • I like that funny little dance that bees do to show their comrades where the best flowers are.

    That's pretty far out.

    Perhaps agent software could do a funny dance too?
  • Case in point: Humans. We evolve separately from each other. Yep, society has emergent behaviour that could not be predicted from the individuals within it. But it does _not_ display intelligence on a different level from those individuals. Clearly we (like network programs) can gain advantages from working as a society. But those advantages do not include some sort of higher consciousness than the individuals within it.

    Um... how do you know that? If society has a higher level of consciousness, who says you'd notice it? Do you think that an individual neuron in your brain realizes that your entire brain is conscious?

  • Fabulous, more dcti [distributed.net] stats!
  • Did ARPA ever actually destroy part of their network just to test it out (or just to let the researchers play with explosives), or were they sure that the charts and theories would hold?
  • I wasn't thinking they went for their own facility; I was talking more along the lines of AT&T's... that'd be a rush
  • "Slashdot effect"

    You would think that they would learn by now to setup multiple mirror sites and redirect to those when something like this happens.


    --
    Star Trek vs Star Wars. [furryconflict.com]
  • What geeks are up this early?

    Foreign ones for whom it's 11:20....

  • Is there no way that academic papers like this could be mirrored on the /. server?

    Surely they wouldn't take up too much space....

  • The lack of a goal-reward system is exactly what will prevent intelligence spontaneously appearing from nowhere.

    You seem to imply that a goal-reward system is developed in response to intelligence - I would assert that the exact opposite is the case.

  • What a load of rubbish! Are we overdosing on The Outer Limits or what? Sorry, but your use of the word "hubris" gave it away.

    No-one understands intelligence; the idea that intelligence springs unexpectedly out of any sufficiently complex system is not even a theory, it's pure conjecture backed up with nothing.

    And it never ceases to amaze me that people think technological progress has to be "allowed to proceed". How exactly are you going to stop it?

  • They are not talking about hooking bees and ants to the internet! They are trying to implement their communication protocols on electronic networks.
  • Collective "intelligence" isn't really intelligence at all.

    wow, that's a pretty false statement. you seem to be saying that the behavior of a group of bees is necessarily a linear extrapolation of the behavior of a single bee. in looking at this sort of thing, it's been found that significant non-linearities exist.

    'collective' intelligence is the only intelligence. whether it's the aggregate behavior of the neurons in your head or the bees in a colony or whatever, it's all part of emergent behavior. complex adaptive systems exhibit behaviors as a result of aggregating many simpler agents. a network that displays cas properties would be capable of making decisions based on what has worked well in the past and would be likely to work well in the future.

    check out any book written by john holland for more information about cas and why a colony of ants is analogous to systems ranging from intracellular processes to human intelligence.

  • I don't mean to change the subject, but this story was posted at 5:56 am and according to the news its 6:18, and this story already has 8 comments!
    What geeks are up this early in the morning?
  • that you're repetitive
  • Is that a nest of furry mice,
    or a nest of input device mouses?
  • Hrmm.. I wonder how they would manage to integrate this with electronic circuitry? And being rather organic, what would its life span be?
  • IMHO, the issue is not biology-like networks but neural network applications.

    Did you try neural nets?

    It is pretty impressive how they can be used to solve problems for which no mathematical model is available.

    I think biology can provide computer science with many useful models and theories; look at the beautiful "biological machines" god created (humans).

    Don't you think we could learn from that to improve computer science?
  • Actually, an article appeared on this topic a long time ago. Talk about a rip off...
  • just a reminder of why the rest of Europe considers America to be a nation of undereducated children


    Is this the gratitude we get for defending you from communism?


    Sorry, I also was compelled




  • Um, given the choice between blowing up a communications facility and unplugging a cable, I'm willing to bet the testers went for the cable every time. (Despite the visceral thrill of blowing up your workplace.)


    ...phil
  • One of the goals of the original ARPANET was to continue functioning even after large parts were destroyed - as might be the case in a nuclear conflict.

    I think the Internet's routing protocols already contain a lot of the "hive" behavior these scientists are looking for.
  • The current method is really... there is no method. The shortest path is desirable, but not always what you get.
    An individual router or group of routers will use various protocols to determine the 'best' interface for a given packet to take (which is not always the shortest).

    There is no real discovery of which path is currently the fastest.
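    To make "best is not always shortest" concrete, here's a minimal Dijkstra sketch over made-up link costs -- roughly the computation a link-state protocol performs, though real routing protocols add far more (metrics, policy, convergence):

```python
import heapq

def shortest_path_cost(graph, src, dst):
    """Return the minimum total link cost from src to dst (Dijkstra)."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")

# Link costs need not reflect hop count: A->B->C (2 hops, cost 3)
# beats the direct A->C link (1 hop, cost 10).
net = {
    "A": {"B": 1, "C": 10},
    "B": {"C": 2},
    "C": {},
}
assert shortest_path_cost(net, "A", "C") == 3
```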
  • Check out "Swarm Smarts" in the March 2000 issue of Scientific American (unfortunately not on the Web). Check your local library.

    IIRC, it's about *exactly* what these guys are doing -- using simulated biologic agents (ants) to solve complex mathematical problems such as the Traveling Salesman problem and Internet network routing.

  • The furry kind :)

    //rdj
  • okok.. sorry about that.. english isn't my first language :)

    //rdj
  • and it is the computer used by Unseen University, Ankh-Morpork, Discworld. Just don't take away the nest of mice...

    //rdj
  • Aren't scientists already working on biological computers? That, to me, is much more groundbreaking and radical than conventional computers based on biological designs.

    They have been doing so for ages, as well as DNA computers and other such things. The main problem with a biological computer is that there is simply so much we don't understand about the biological computer found inside everybody's head that we can't even begin to understand how to construct one from scratch.

    Also, there could be a danger in modelling existing computer technology on biological systems to the point where the tech reaches biological complexity. It sounds like a far-out science fiction concept, but as a biological system we humans are more than the sum of our parts; we show an emergent property known as consciousness/soul/whatever -- what if a technological system given the same level of complexity could demonstrate similar properties?

    be well

    J

  • don't feel pain (which probably isn't an emotion, BTW)...

    No, I think the whole pleasure/pain thing is a far more instinctive reward/punishment system, related to the fight/flight/fuck reactions of an organism. You certainly don't need to be anywhere close to sapient to experience them.

    The primary emotions -- anger, sadness, happiness, loneliness, boredom, fear, etc. -- all have important cognitive roles that AIs would very likely need in order to function.

    It's not often I hear this view, but it's one I agree with. I have a sneaking suspicion that an artificial intelligence will have a lot more human characteristics than we'd expect from a "mere machine". There are valid reasons for our emotions and we would be a lot less functional without them. I think that their analogues in AI will arise spontaneously with sapience, and separating one from the other will be impossible.

  • You are mistaken. M-P neural nets are not Turing equivalent. What M&P proved was that any binary function can be computed by a neural network or, to put it in a different way, that a finite state machine is representable by a neural net. The difference between a finite state machine and a Turing machine is that the latter has an infinitely long tape to store data, while the storage capacity of the former is limited by the number of states.

    So the answer to your first question is no, you cannot transform a TM into an NN. You are right, however, that essentially what they did (I have to be careful here, I have never read the original paper) was to construct a neural net for AND, OR, etc and show how to connect them.

    I have to say that in a purely theoretical sense, neural networks with continuous (say, sigmoid) activation functions (not M-P!) have been shown to be at least Turing equivalent (and in fact more powerful). However, to encode a TM into an NN you have to use infinite-precision real numbers, and any such encoding is inherently unstable. The basic problem is that an NN has limited capacity to store information. So you have to use the decimal (or binary) expansions of real numbers to store the state data for you in an artificial fashion.

    So the final answer is that you cannot build a TM out of pure NNs. However (and I believe there was work on it) you can hope that a suitable combination of memory storage and a neural net can be Turing equivalent in a practical sense.

    And who is to say that our brain is Turing equivalent!
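    The M&P-style construction is easy to sketch: a "neuron" is just a threshold unit, and particular weight/threshold choices give the basic logic gates; wiring those together yields any finite binary function -- i.e. a finite state machine, not a Turing machine. A toy version (my own, not the original paper's notation):

```python
def neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b): return neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return neuron([a], [-1], threshold=0)

def XOR(a, b):  # composed purely from the threshold-unit primitives
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        assert XOR(a, b) == (a ^ b)
```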

  • Collective "intelligence" isn't really intelligence at all. Just because bees communicate with each other doesn't mean they have the ability to reason. That's the key factor here - the ability to reason. Without the individual bees (or ants or whatever) being able to think and make inferences, collective intelligence is rather benign because it's limited by the bees' intelligence. A collective is only as intelligent as its individual members. If, however, bees were capable of reason, they might have become more powerful than we are, for there is no question that a collective intelligence is superior *in design* to what we have in humans today.
