Science

Leech Neuron Computers 186

Ralph Bearpark writes "The biological computer is born. A computer made of neurons taken from leeches has been created by US scientists." I'd actually read about some of this research being done back in the early '80s at Bell Labs. Apparently they could actually get some read/write to the leech neurons, for use as storage devices, but they...uh...kept dying after a few minutes. Can anyone confirm/deny that?
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    So after a few years we can go from FIRST POST!!! to FIRST TWITCH!!
  • by Anonymous Coward
    I remember reading about a year back that Japanese scientists were successfully growing nerves that could complete abiotic circuits.

    It doesn't seem like too far a stretch to connect these neurons with nerves that can interface to silicon-type machines.

    In any event, the write aspect of read/write should be easy (even if it's just a hack like retinal/cochlear implants), it's the read that needs work. This is similar to problems with DNA computing; you can splash it all together, and make it compute - but it takes serious lab-work to read the results.

    There's some course material on DNA computing at
    http://www.csd.uwo.ca/~lila

    --Mike
  • by Anonymous Coward
    The problem with these systems is that they need to use an electrode stuck into a neuron; this usually kills the cell eventually (quickly, even). This conventional approach makes it hard to get data into the network.

    One interesting approach I'd like to see is to take a built-in I/O system from a primitive creature - like a leech - and then hook that to a computer. For example, you could take the optic nerves (eyes) and then use the nerves as your "electrodes". Data could be passed in by using various intensities of light from a laser or an LED.

    That still doesn't solve the problem that neural networks get their power from the *architecture* that connects them - they're not magic. You need to have them intelligently arranged to reinforce a particular input.

    One really good book on the topic that can be understood by the layman (or any undergrad, at least) is "Naturally Intelligent Systems" from the MIT Press. I highly recommend it.

    Too bad industry isn't more interested in developing more along these lines; I'm sick of academia :)

    Steve
    smanley@nyx.net
  • by Anonymous Coward
    A little offtopic I guess (moderate away if you don't like it :)), but...

    I heard a while back about some people making computers that ran on coffee. Okay, here comes the science bit-

    Apparently supercooled atoms can be in two states at the same time, so they can be 0, 1, or errr... both. As I understand it, that means a register of qubits (quantum bits) can hold all its possible values at once: two qubits give four states, three give eight, four give sixteen, and so on, doubling with every qubit. Particularly powerful for doing cryptography, apparently.

    Anyways, you can do this by supercooling atoms & firing lasers at them (expensive, very expensive and very difficult) _or_ by firing radio waves at organic compounds, like the caffeine in coffee (something to do with 'nuclear magnetic resonance', whatever that means).
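    (A toy illustration of that doubling, in plain Python - nothing quantum here, just counting the basis states an n-qubit register can superpose; the loop bound is arbitrary:)

      from itertools import product

      # An n-qubit register has 2**n basis states; a superposition
      # assigns an amplitude to every one of them at once.
      for n in range(1, 5):
          states = ["".join(bits) for bits in product("01", repeat=n)]
          print(n, "qubit(s) ->", len(states), "basis states:", states)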


    The moral of all this:
    Allegedly (serious now ->) people have added one plus one in a cup of coffee.


    God knows where you plug in the monitor.
    Who's gonna start the linux port?
  • by Anonymous Coward
    I think that *controlling* the biological computer would be pretty easy. It's not an actual brain, just a network of neurons. It still has to be programmed. So it should still only do what we want it to do (or what it thinks we want it to do).
  • Other than the... er... provenance of the components, how is this different from a silicon neural net?
  • If it's just the difference between digital and analog, there was an article in Discover a couple months back about some guys who were working with analog neural nets in silicon.

    They'd had some success getting the neural nets to perform complex tasks, but ran into the problem that once they'd trained a circuit to do what they wanted, they couldn't build working duplicates of it. Also, some of the nets were apparently using the components they were attached to as part of their circuit in some indiscernible manner, so they'd stop functioning when hooked to a different circuit that theoretically looked the same from the outputs.

    No leeches were harmed in the processing of this post.
  • Posted by .stab.:

    If this could someday be done to alter the human brain (obviously without causing death, pain, and such), to the extent that we could fully utilize every portion of it, or at least much more than we can now, and in addition use our memory directly for storage of some sort, we wouldn't need PCs.

    We could just dock our "plugs" into the station we were working at (of course, we'd need some way to network ourselves).

    Hell, we'd have all the processor power we could need, and all the storage. Even better, we'd have a much higher basis for learning (since we would be more adept at using our brains), so we could easily further all forms of technology, and who knows where civilization would go from there.



    Or maybe I just think ahead too far...
  • Posted by Lord Kano-The Gangster Of Love:

    I spend a large portion of my time thinking about this very scenario. It is a danger when dealing with a thinking machine of any type, biological or silicon. The major questions we face are:

    1. Will it view us as a threat?
    2. Will it attempt to defend itself?
    3. Is it capable of adapting rapidly to the uncertainty of battle?
    4. Can we "pull the plug"?
    5. Can it stop us from "pulling the plug"?
    6. Can we "kill" it?

    To create a thinking thing requires an additional level of responsibility. Just as it is a parent's responsibility to correct a child when that child goes astray, it is the creator's responsibility to correct, or possibly even destroy, that creation when it causes more harm than good.

    I am actually more afraid of a combination between a biological and a silicon computer. Think of it in terms of a miniature cluster. The biological portion develops questions, and the electronic parts get answers. That type of architecture, while we will probably never live to see it, would have the benefits of both of its parents: the adaptability of a biological organism, with the raw number-crunching power of a machine.

    Imagine a soldier who can endure a week-long 100-degree heatwave. And when his target approaches, he can calculate, in a few nanoseconds, the distance between him and the target, the average kinetic energy of bullets of a particular weight and caliber fired from his weapon, the wind, the humidity, the rate of motion of the target, the recoil of the weapon, air resistance, and which part of the target's body would most likely result in either a killing or a maiming shot.

    I don't see this as paranoia. Paranoia is unfounded fear; every type of new technological advance is exploited by those who wish to gain wealth, power, or both. Every advance we've made as a species has been used to kill people. The boat, the airplane, the automobile, the firearm, nuclear energy, the rocket, ad infinitum. Being afraid that self-aware machines will be next is not paranoia in my book.

    LK
  • Posted by kenmcneil:

    It is one thing to help a collection of slug neurons stumble upon the idea that 1 + 1 = 2, but it is another thing entirely to create truly intelligent "machines". I am no expert, but the capabilities of this biological form of AI are probably inferior to our digital technology.
  • Posted by Lord Kano-The Gangster Of Love:

    If it is aware of what it is, it can choose to change for the better.

    I don't understand the term TK ability. Please elaborate.

    LK
  • What is your position on creating the same thing using inorganic units that function identically to the neurons? Is it the carbon that's freaking you out?
    You explained your position very well, and I agree with everything but the fear. There's a simple way to engineer the process of developing these things: don't hook up the outputs to anything important until you feel the odds of divergence from the simulation are small enough to be worth the estimated risk. In other words, don't give prototypes 1 through n control of the national power grid unless the benefits are worth the uncertainty imposed by the unreadability of neural nets and our inability to enumerate the infinite space of nets in our simulations.

    It's true, one ought to tread carefully in this territory, but we've been denied guarantees in every other human endeavor I can think of. Caution has been the prudent tool for mitigating risk in those instances too.

    We might also want to consider breeding nets that are good, on average, at recognizing the appearance of potentially unreadable, undesirable properties in other nets, and putting their finger on the other's kill-switch. That's an awfully cruel way to treat a neural "slave" machine, I know, but it may be an effective average-case safety mechanism. And the blood would be confined to the hands of the slave's creator, as it should be.

  • We already have a quite effective computing system that uses neurons. It's called a brain.

    I don't think that neuronal computing will ever be feasible except in natural organic creatures. Biological neural nets have numerous advantages over electronic neural nets, but many serious disadvantages as well. A neuronal computer requires a much more complicated support system, in terms of energy provision and waste removal, than an electronic computer, among other things.

    The future of computing lies in combined applications of traditional parallel digital computing, digital neural nets, and digital genetic programming; initially implemented with traditional semiconductor lithography, but eventually using nanotech construction, which will provide more performance and flexibility in reconfiguration than either semiconductor or biochemical computing.

    I highly recommend "The Age of Spiritual Machines" by Ray Kurzweil as an optimistic assay of the future of computing. He makes a compelling argument along these lines.
  • by V. ( 1057 )
    Hmm... every time I read about bio-computers it reminds me of what Ken Thompson said about going into biology instead of CS. Why do I get the feeling that all of my experience with "traditional" computers is going to be worthless in about 15 years? Kinda like what happened to some of my older physics and EE profs when transistors came along and made their tube knowledge obsolete.
  • Well, actually they are again being used for blood-letting now, because they are much cleaner than the "traditional" methods. (No dried blood, very sharp cuts.)
    Jon
  • > Leeches probably won't raise a stir though
    > because people tend to care less about
    > things that aren't as cute.

    This is quite true. If they were using, say, baby seal neurons, Greenpeace would be all over them.


  • I don't know much about it, but doesn't that also mix with those stories about chaos computers that were on /. a couple of months ago...

    All these things make my brain so fuzzy :)

  • This news only reminds me of the current debate over genetically modified foods. Do these people really know what kind of forces they are messing with? Ten years from now - "Oh dear, biological computers cause cancer, and the human race will now be wiped out" - or, even worse, they realise their superiority over humans and take over the planet.

    Oh, and I've a few things to say, just to save anyone else posting them. *yawn*

    "So, when's Linux going to be ported to this? :)".

    "Picture it, a LIVING BEOWULF!! BEOWULF yadda".

    "Can't wait to play QuakeIII on one of these babies".

    There you go, it's said, now nobody else needs to say it. :)
  • I see it now....

    Well, looks like you have a CIH.
    I recommend three leeches and
    call me in the morning.


    -Jonathan
  • Of course, there are lots of examples of things that human brains are really good at, but there are also lots of simple everyday things human brains aren't very good at.

    For example, plain old math. Start a stopwatch running and time how long it takes you to multiply these two numbers together (without using any outside aids like paper and pencil).

    3456 x 78 = ?

    It takes a while, doesn't it? Most people have trouble remembering passwords, phone numbers, account numbers, license plates, etc., etc. There are lots of things brains are really good at, and there are lots that they aren't. (BTW, the answer is 269568.)
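    (For the curious, the long multiplication the brain struggles to hold in its head is trivial to spell out; a quick sketch in Python, splitting 78 into 70 + 8:)

      a = 3456
      low  = a * 8       # 27,648
      high = a * 70      # 241,920
      print(low + high)  # -> 269568, matching the answer above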

    -P

  • You can grow neurons from just about any species in culture (in a petri dish) from immortal cell lines. Seems like that's how you would grow them for a real-world application (not pulling them out of leeches one at a time ;)). I think I can sleep at night with a cultured plate of neurons in my computer. :)

    -P
  • You aren't the only one, but I'd like to point out that if my leech-puter goes rogue on me, I have a solution.

    Specifically, my solution is 1 part salt, 3 parts water.
    Stir, then pour on the affected computer. All problems will go away.

    AI is something different from this. This is biological fuzzy processing. It should be much more efficient than using silicon to do fuzzy stuff.

    If (or when) we create AI computers, then there remains the issue of what their outlook will be like. I mean, if I took your brain and stuffed it in my computer case, you'd be unhappy.

    But these would be lifeforms who grew up this way. Their culture would be different. They may consider being what they are honorable or desirable.

    They wouldn't have the hormonal problems we, as a race, have. They wouldn't need to be territorial with us. They wouldn't even compete for the same resources!

    I'm not afraid of AIs exploiting us. Though the reverse is possible, and more likely.

    Ciao!

  • Yes, you are the only one that's paranoid.

    The whole "AI takes over the world/wipes out the human race" plot makes for a good movie, but it gets old real quick.

    Hasn't anyone here read "The Callahan Touch" by Spider Robinson? An AI's motivations could be TOTALLY DIFFERENT from our own. NO GLANDS. The only urges in an AI would be the ones built into it. No need to reproduce, no survival instinct, no aggression, hate, or angst.

    "But... the Matrix was so COOL!"

    And that has WHAT exactly to do with reality?

    BAH. (humbug!)
  • by RelliK ( 4466 ) on Wednesday June 02, 1999 @04:45AM (#1870940)
    Don't you guys remember all those movies where the AI takes over the world? Like Terminator, for instance, or The Matrix? Well, now it can finally become a reality. According to the article, the leech computer can actually think for itself, without having to be told what to do. Just think of the implications... I say if AI does take over the world, it would have to be biological.

    Am I the only one who's paranoid?
  • I can't listen to the RA interview, but I think they are extracting the leech neurons before using them, not directly connecting a bunch of leeches together. They are probably using leeches because they are abundant and their neurons are simple, easy to extract, and thick enough to insert electrodes into.

    I didn't do that well in my Neuroscience course :), but I'm pretty sure that the leeches would not appreciate having their neural pathways changed on them (they wouldn't be able to maintain proper bodily functions...), and having the neurons still inside the leeches would make it difficult for scientists to control the environment and introduce the proper signal molecules to direct neural growth and development (since the leech would be doing that too, and the two would interfere with each other).

    --Andy.
  • by myconid ( 5642 ) on Wednesday June 02, 1999 @04:04AM (#1870942) Homepage
    I can just imagine....

    "..this top of the line Intel Slugium 500 with NNX..."

    I can't see anyone wanting to get their hands dirty every time their slugs explode because they overclocked them. And where are the animal rights guys? If they were doing this with pigs [what a sight], they [the animal rights dudes] would be all over them. Interesting article anyways. It's a neat idea, and it would make replacing CPUs a bit cheaper...
    Stan "Myconid" Brinkerhoff
  • I have a simple solution to the problem.
    Install Win95

    Seriously, though. Wouldn't someone have to tell this computer to "take over the world", rather than just letting the thing run rampant doing whatever it wants?

    It will think for itself, but sentience requires free will, and I doubt that they will be able to program that.
  • They tried that in The Phantom Menace, with disastrous results... when the orbital supercomputer is blown up, your 'bots all shut down.

  • Hrm... I dunno... I'm not too keen on the idea of my computer getting up and taking a bite out of me 'cause I'm typing too hard....

    On the other hand, if it really is faster, cheaper, and compatible, it might be an interesting idea.

    "life's short, eat dessert first"
  • Neural nets are still very useful. They're not as hot in technological applications because there are still fundamental limitations in the learning algorithms that train nets. They do, nevertheless, learn. However, I use neural nets in my work to model psychological mechanisms. They are "brain-like" in their computation and thus help us elucidate how cognitive functions may actually be implemented in brains.

    Most neural nets do not make connections on an as-needed basis. This actually confuses me about the Ditto/Calabrese work. Perhaps they are using neurotrophic factors for axonal guidance. Artificial neural nets generally have set connections, and only the weights vary over the course of learning. Even in "real" neural networks, learning is not thought to occur by creating new neuronal interconnections, but rather by biochemically strengthening existing ones.

    -NoData
  • Planarians regenerate, yes. The other animals you mention (annelids like earthworms and leeches) do not. This is a myth. If you cut a worm in two, you end up with two worm halves, not two worms. Regeneration has nothing to do with their use of leeches.

    -NoData
  • > New insights suggest that synapses and even cells are sometimes created in addition to biochemical learning (in rat hippocampus).

    Yes, I know about that work. Most of it comes from Liz Gould's lab at Princeton. Her group has discovered neurogenesis in the dentate gyrus in adult rats, shrews, and even monkeys. However, while there may be some primitive derivatives of learning that rely on neurogenesis in adults (e.g. the resistance of the dentate gyrus to deterioration in the face of Alzheimer's), it's computationally infeasible that any significant post-developmental learning relies on neurogenesis (post-developmental as opposed to pre- and neo-natal developmental experience, which may in fact impact neurogenesis or neural connectivity).

    That's not saying that neurogenesis in adult animals doesn't have a behavioral impact. Indeed, some of her work indicates that neurogenesis may be inversely related to the expression of certain defensive behaviors.

    Neurogenesis and synaptogenesis persist in adulthood, to be sure. But it's fairly unlikely they're dominant or even significant mechanisms for what we commonly call learning (acquisition of new semantic, procedural, spatial, etc. memories).

    -NoData

  • So, is this sponsored by MacroSloth? If not, I can tell 'em where they can get a *large* supply of long-lasting leeches... like, drive to Seattle and make a right to Redmond.....

    mark "how would you like the Evil Empire bashed today?"
  • They use leech neurons because they've been more extensively studied than most other types of animal neurons, probably because leeches don't have a whole lot of them...
  • This article may have been closer to the truth than you think. ISTR reading about a researcher at a British university who was working on analog chips, and who had some success in getting a system he'd built with them to recognise voice commands.

    Anyone else recall this? I don't have any references, unfortunately.

    Dodger
  • The idea of computers based on biological components brings an entirely new meaning to the term "computer virus". :)

    D.

  • Genetically engineering a primitive creature, which forms itself into the necessary structure and can attach _itself_ to electrodes or whatever, is probably the solution here. If they could achieve that, they'd be able to grow biochips in petri dishes, and Intel would be employing bio-engineering graduates to design its biochips.


    This step from silicon to biological components is an inevitable one. We'll reach a point, in the not-too-distant future, where we can no longer improve our traditional silicon- (or even copper-) based chips. We'll need a new medium/substance with which to work and, with the progress we've been making in the biosciences, I wouldn't be surprised if that new medium is biological in nature.


    Another thing - cells can both detect light (e.g. our eyes) and emit it (e.g. those deep-sea fish you see on natural history programs), right? Now, think about how fast our nervous system is capable of carrying information, and how fast light can travel...


    And don't slate academia - it's where all the real research happens and the important discoveries are made, from the computer at Manchester University, to the DNA double helix at Cambridge.



    The Dodger, provider of F4T (food for thought).

  • Hi all,

    Interesting that they chose leeches... don't they have regenerative abilities which other "higher order" animals lack?

    Leeches, planarians, earthworms... if they have the ability to regenerate, it might explain why the scientists chose leeches. Better ability to hold together and meld to form a cohesive computing unit.

    Just a guess. :p

    Though there are some odd ways of interpreting this. One being that the machine could in theory then meld with other machines of the same type, resulting in a possibly unpredictable end product and possible sentience... nah.

    Still, can't wait to see where this is going to end up, say, 5-10 years from now.

    Hey! Look! My computer still works if I slice and dice it and mush it back together again. And not a single file lost! It just regenerates...

    Bad for tax evasion and shady-deal folks though. Those files just keep coming back. ;)


    - Wing
    - Reap the fires of the soul.
    - Harvest the passion of life.
  • And how are we gonna make a leech brain when leeches don't really have brains? Sure, they have ganglia, but that's not really the same.

    I think they were making a brain out of leech ganglia... the real problem is going to come when one of these things passes a Turing test (most people I know can't do that... :) and the 'Leech Processor Rights' movement starts demanding weird things like bans on salt, and fresh lettuce in every home or something...
  • > And the thing i always found rather silly about
    > the whole 'AI takes over the world' thing is
    > that even if you cant just unplug the thing,
    > there's always going to be some relatively easy
    > way to shut it down. Even if it entails blowing
    > up the building it's in...

    Oh, why do I fall into these traps... well, seriously, if you are going to poke holes in these movies then there are better ways. This particular argument is easier to disbelieve than others.

    In Terminator, the computer controlled their weapons and defense systems. Assuming that the designers had never seen a movie about an AI taking over the world, it is reasonable to assume that the computer would be hard to shut down by design. After all, you wouldn't want your enemy just turning it off.

    In The Matrix, the computer system was massive. If you had to turn off the _entire_ internet tomorrow, how would you do it? This would have to include any sub-network large enough to hold whatever AI entity you are trying to kill. And that's not even considering whether some of them are in self-contained mobile units.

    These stories have weaker elements than this one point. Although, I tend to subscribe to the point of view that any story can have holes poked into it... even the true ones.

    -Paul
  • According to the photos it looks like yes, they are just removing leech neurons.

    As far as connecting the leeches goes, though, how about a leech Beowulf cluster?
  • Then if computers can take on human characteristics through "taking living brains from dead bodies," we'd first need to figure out how to shut down or control parts of the brain that control aspects of the human personality.

    We've all seen The Matrix, 2001, Terminator, etc, and all have to do with machines becoming sentient and destroying their operators when they threaten to pull the plug. It's a natural aspect built into our personality - logically a machine with a human brain would inherit that characteristic as well.

    Imagine if you're trying to reboot your human-brain computer and it doesn't want to? Will it lash out at you by doing whatever it can to stop you? ("Open the pod bay doors, HAL.")
  • Have you ever heard of that part of chaos theory which, for example, says "A butterfly flapping its wings in New York causes a hurricane in Japan"? I mean, all these vegans are saying 'don't touch the animals!' while at the same time they are moving the atoms in their keyboard, which move the atoms next to them, which move more atoms until they move the atom on the edge of a cliff where a wolf is standing, thus making the wolf fall off, killing it (bad example, but you get the idea). Morbid way of looking at it, but hey, there it is!
  • Before you know it they will want to use human neurons because of the obvious advantage in intelligence. Like maybe people who are near death will donate some neurons or something. Then you really have an ethical problem on your hands. Leeches probably won't raise a stir, though, because people tend to care less about things that aren't as cute.
  • Wow, now they are building computers out of neurons. It's a fun idea, like the Tinkertoy computer, but not very useful. Neurons are much slower than electrical circuits. And how exactly does the researcher propose to build "computers that can figure out for themselves what to do"? Shocked, is he, that computers today are so "dumb"? If we knew how to build a computer as he is describing, we could have done it already with neural networks in software, rather than using his tedious mushware procedure. We already have computers built out of neurons. They're called brains, and they're not very fast at doing certain types of calculations. I don't see how the researcher proposes to fix this.

    If you want a good biological trick to make faster computers, try calculating at the chemical level, using DNA, as some have proposed. Massively parallel computation. Now that could be fast!
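    (A toy sketch of that massively parallel idea in Python, in the spirit of Adleman's DNA experiment: generate every candidate answer, then filter out the wrong ones. The graph here is made up for illustration; real DNA computing does the generation chemically, in parallel.)

      from itertools import permutations

      # Hypothetical 4-node directed graph (edges chosen for illustration).
      edges = {(0, 1), (1, 2), (2, 3), (0, 2)}

      # "Splash it all together": form every possible visiting order,
      # then keep only the real paths that start at 0 and end at 3.
      paths = [p for p in permutations(range(4))
               if p[0] == 0 and p[-1] == 3
               and all(step in edges for step in zip(p, p[1:]))]
      print(paths)  # -> [(0, 1, 2, 3)]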
  • In the old days of core memory, you could take down a computer by throwing a handful of iron filings into the core. Now it'll only take common table salt.

    Why does hitting Escape clear the input field on the submit page? Is this a secret way of getting back at vi users?
  • Has anyone noticed that Georgia Tech seems to be the home for weird computers (chaotic computers, biological computers, probably others)? What's happened to the computing labs at MIT? Last I heard, they were working on a quantum computer, but nothing's come out of there for about a year.

    and, even though I'm just a pre-frosh there, I just have to say it-

    " Helluva, Helluva, Helluva, Helluva, Helluva Engineer. "

    Go Burdell!
  • While not quite as intelligent as you expect biological computers to become, we do have biological 'robots' who serve us: dogs, horses, oxen, elephants, little kids making Nike shoes, and half-blind engineers in cages coding M$ OSes.

    It won't be anything new, or newly immoral or unethical, if we design a biological computer that can work out problems for itself: people already do it, and we actually frown on people taking initiative, so why would it be any different for computers? Likewise, coming to correct answers based on partial information is a hallmark of living beings, because they have to act and react on partial information. Waiting until they know everything just leaves them dead.

    Sentient robots are a ways off. First we need biological computers, and we need sophisticated electrical/silicon robots, and we need a way to integrate the two.

    Your fear of slaves is unfounded, unless you object morally to our current use of them.


    -AS
  • I didn't mean to imply that our current morals and fallibility should limit our future morals. However, I do believe that 'intelligent' wetware-based computers are far off, compared to something with the capabilities of a dog or a rat; it is these computers and robots that I suspect will appear first, and these computers and robots that I do not fear or object to, just as you do not fear or object to dogs or horses.

    I truly see no need for human-class intelligence except on a theoretical and experimental level, because we can. It's probably easier and cheaper to raise and train a human than to build and program a similar computer/robot, except that these computers and robots may have higher tolerances than we do... and that's a recipe for disaster if they indeed decide to revolt.

    Anyhow, super-intelligent computers are not my point. Feeling fearful and squeamish about wetware computers isn't necessary, I think.

    There is a fine line between human fallibility and human capability. If we need an autonomous search-and-exploration robot on Mars or under the sea, a little bit of fallibility is fine if the intelligence is enough to correct itself, and if the intelligence makes it flexible enough to work independently of us, as in cases where lag times between instruction, feedback, and further instruction are prohibitive.


    -AS
  • "Always mount a scratch monkey."
    http://www.mv.com/ipusers/arcade/monkey.htm
  • I find this very interesting. And a little bit scary. If they anticipate being able to build a 'brain' out of leech neurons which can interface with the electronics of a robot, I have to wonder how long it will be before the next step - using the fully developed brains of other animals - such as humans - as the CPU in, say, spaceships. McCaffrey and Heinlein, among others, have already seen the future of such - taking the living brains from dead bodies and using them to power starships. Even the Superman comics have seen living brains planted into metal bodies.

    It's a brave new world that's opening up. And I'm interested in seeing where it goes.
  • > They use leech neurons because they've been more extensively studied than most other types of animal neurons, probably because leeches don't have a whole lot of them...

    Actually, it's probably because leeches are cheap and easy to get. It would be an advantage to have more neurons, because that way you could do more experiments with just one animal.

    Aside from that, the leech neurons are probably pretty close to neurons in other animals, since otherwise they probably wouldn't be studied that well. Not many people are interested in studying leech neurons just to find more information about leech neurons. However, if you could extrapolate this information to neurons in other animals (including humans)...

  • > They are opening up a whole new can of worms with this one. It starts out small and innocent. It will be interesting to see what the outcome of it all is. Anyone remember that movie "The Matrix"? Yeah, that's right. At the turn of the century the people of the world rejoiced as they celebrated the birth of the AI. And what happened next?

    Well considering that they barely got the computer to add and are now trying to get it to multiply, it'll be a while before we have to start worrying about these computers taking over the world.
  • > By the same token, why not have the "brain" separate from the "body"? The "body" would have a sort of pre-processor that would decide what to send to the main "brain". This would act sort of like an autonomic nervous system, handling "reflexes" and similar things that couldn't handle the latency of remote communication.

    There are a lot of advantages in having the brain attached to the body. If the brain was in a separate location, you would have to worry about communications between the two. What happens when the communications link between the two gets broken for some reason or another? If the brain were connected to the body this is not a problem.
  • I'll disagree here: biocomputers, at least ones based on currently available life-forms, are SLOW. Neural impulses propagate at speeds on the order of tens to a hundred metres per second; electronic computers propagate data at speeds approaching lightspeed. In both cases, speed of switching controls the ultimate propagation speed.
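    (A back-of-envelope comparison in Python; the velocities below are rough textbook figures, not measurements:)

      # Signal delay over 10 cm of nerve vs. 10 cm of wire.
      distance = 0.10        # metres
      v_nerve  = 100.0       # m/s, fast myelinated axon (rough figure)
      v_wire   = 2.0e8       # m/s, roughly 2/3 of lightspeed in a conductor
      print("nerve: %.1e s" % (distance / v_nerve))  # ~1.0e-03 s
      print("wire:  %.1e s" % (distance / v_wire))   # ~5.0e-10 s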

    What I see happening is along these lines: we're starting to approach some physical limits in chip design (at the current rate of progress, we'll be doing chip fabs on a molecule-by-molecule basis within 20 years [note: this implies a basic capability in nanotechnology, but we'll leave that to another thread...]). Unless some VERY radical changes are made to the neurons that would compose a biocomputer, you may gain in flexibility, but you'd lose substantially in speed.

    I think a more useful approach would be to study and emulate biological computer processes and methods in a more robust and speedier medium. What that medium is - etched semiconductor, optical computer, nanotechnological mechanical computer, or some option we haven't yet conceived - is irrelevant. But it WILL be developed. The next few decades should be interesting... in both the context of intellectual curiosity AND of the old Chinese curse...

  • Living Neurons, Gengineered Neurons, or Engineered Silicon/Gallium Arsenide, it doesn't make a difference. This is a RESEARCH box. They're trying to find out how biological neural nets work, to make the much faster inorganic networks function in a similar fashion. I see no reason for controversy over the fact that leech neurons are being used. . .
    As for the concept of "don't mess with living things", THAT one has been over for centuries: your dogs, cats, livestock, and food plants have been "engineered" for millennia. Current genetic engineering technology is merely taking the direct path to change, rather than the slower, roundabout method of breeding for characteristics: both methods work, and to the same end...

  • I had a boss who had worked on some analog computers for NASA, which used them quite extensively for flight simulations up until the early '60s (and used them to some extent for other purposes even after that). Apparently the analog comps were ideal for flight simulators, which mostly consisted of changing a bunch of cockpit dials in response to user input. They eventually fell out of favor when NASA started experimenting with space flight, which was too complex for anything but a digital computer.

    I'm not sure if my former boss ever worked on the flight sims or just on other applications, but he said the analog computers were pretty interesting, except for the fact that they were constantly going "out of tune" and needing calibration. He was full of good stories, though - he also told about how he worked on a project which involved writing compilers in COBOL. :D

  • In math/statistics/computer science there is a subfield called neural networks. Basically, this is a class of modeling algorithms which (usually) construct statistical models of data. They were really hot 4-5 years ago because of claims that they could learn -- that is, if you throw enough raw data at them, they'll figure out what it means. The reality, as usual, turned out to be quite a bit uglier: yes, you can ask a neural net to construct a model without specifying what the model should look like; no, if you don't know what you are doing, you'll end up with a lot of numerical garbage. Generally, neural nets are successfully used for dealing with huge amounts of noisy data, such as voice and image recognition, stock market modeling, etc.

    "Normal" neural nets, implementation-wise, are just programs that take some inputs and produce some outputs. Custom-made chips exist which put common neural net operations into hardware thus speeding the whole process immensely. It seems that what these guys are doing is a wetware neural net, that is instead of software constructs or logic gates they are using living neurons. There may be advantages to that, but I don't see them yet. Most of the stuff that they mentioned (such as making connections on the as-needed basis) are characteristics of all neural nets, including the software and the hardware ones.

    Kaa
  • Oh, sure, the neural nets are very useful. It's just that they were overhyped some time ago.

    Whether artificial neural nets modify connections during training depends on how the net is trained. First, there are learning methods that specifically work by adding/deleting neurons (cascade correlation); and second, most learning methods win when they are combined with a pruning strategy (shutting off unimportant connections). The problem is determining which connections are not important.
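    (The simplest pruning heuristic is magnitude-based: assume small trained weights are unimportant and zero them out. A sketch, with made-up weights; as noted above, deciding what is really unimportant is the hard part:)

      weights = [0.91, -0.03, 0.44, 0.002, -1.20, 0.05]  # hypothetical trained weights
      threshold = 0.1
      pruned = [w if abs(w) >= threshold else 0.0 for w in weights]
      print(pruned)  # -> [0.91, 0.0, 0.44, 0.0, -1.2, 0.0]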

    Kaa
  • If they will only stay running for a few minutes then it sounds like the perfect platform for Windows.
  • Your brain's trouble with large numbers has nothing to do with any innate inability; it is simply a lack of training. No one can be born with the ability to multiply 45834 * 4542, for several good reasons.
    First of all, our brains are built to survive in a natural system. The calculations we are programmed to do are of a visual and physical nature.

    Secondly, number notation is a purely unnatural thing; one has to learn what 45834 is before one can multiply it by anything. Our brain also has to learn to understand the concept of a base-10 system. While having 10 fingers does naturally lead us to developing a base-10 system, this doesn't necessarily have to be the case, and I doubt the math that is done subconsciously in our brain uses such a system.

    My last point is that there are people out there who can multiply huge numbers like that in their heads. Most of them have no idea how they do it; this is probably because, at some time during their learning experience, their brain discovered a way to equate the symbol 13434 (for example) with what it truly means. Most people only have a vague idea what such a number really represents. If we can be taught to understand such relationships, we could probably all do this. But then again... who knows... personally I couldn't tell you what 34 * 7 is.
  • But imagine, if the brain is separate from the body, all you have to do is disable the control ship and the entire robot army will be disabled.

    Oh, wait a minute, that's been done :)

  • > We've all seen The Matrix, 2001, Terminator, etc, and all have to do with machines becoming sentient and destroying their operators when they threaten to pull the plug. It's a natural aspect built into our personality - logically a machine with a human brain would inherit that characteristic as well.

    Ummmm.... 2001 was about the evolution of man. It just so happened to have HAL in it. HAL's downfall was NOT that he was an AI, as you would have us believe, but that he was given two sets of conflicting orders. Human error. Remember the video clip Dave stumbled across while shutting HAL down?


    -Andy Martin
  • While we're modifying the body....

    I want 2 more arms and opposable big toes and adamantium retractable claws and a prehensile tail!


    -Andy Martin
  • Interesting.
    I've been doing sums on my fingers (and for really big numbers I use my toes) for years now.
    They work fine, and my toes are a dang sight cuter than leeches.

    Just my $0.02.

    --
  • > Native brain multitasking, DSP for sound analysis, etc, etc.

    Don't underestimate that little grey blob of yours. It is quite powerful.

    There was a Slashdot posting a while ago about a "task switcher" in your brain.

    Your ears are already DSPs. That is how you distinguish high-pitched tones from low-pitched tones. Your cochlea, in the inner ear, is made up of a bunch of hairs that resonate at the frequency of incoming sounds. They hear in the frequency domain, not the time domain; the FFT is computed naturally in "hardware". Also, your ear hears intensity on a logarithmic scale. Don't even get me started on the cool-ass things that your brain does with the STEREO signal from your two ears. Binaural hearing is totally badass.
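    (A quick frequency-domain demo in Python with NumPy: synthesize a 440 Hz tone and let the FFT pick out the pitch, much as the cochlea's hairs respond by frequency. The sample rate and tone are arbitrary choices:)

      import numpy as np

      rate = 8000                              # samples per second
      t = np.arange(rate) / rate               # one second of time stamps
      signal = np.sin(2 * np.pi * 440 * t)     # a pure 440 Hz tone

      spectrum = np.abs(np.fft.rfft(signal))             # magnitude spectrum
      freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)   # bin frequencies
      print(freqs[np.argmax(spectrum)])        # -> 440.0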

    When you catch a baseball, your brain is actually doing differential calculus. It is a learned reaction, and it is all subconscious, but given the position and velocity of the ball, and knowing (instinctively) the value of gravity, your brain anticipates the future location of the ball.
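    (The calculation itself is just constant-acceleration kinematics; a sketch with made-up launch numbers:)

      # Predict where the ball will be t seconds from now:
      # x = x0 + vx*t,  y = y0 + vy*t + 0.5*g*t**2
      g = -9.8                 # gravity, m/s^2
      x0, y0 = 0.0, 1.5        # current position, m (made up)
      vx, vy = 20.0, 15.0      # current velocity, m/s (made up)
      t = 1.0                  # look-ahead, seconds
      print(x0 + vx*t, y0 + vy*t + 0.5*g*t*t)  # -> 20.0 11.6 (give or take float noise)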

    There are many examples like this of the computing power of the human brain.

    -- A wealthy eccentric who marches to the beat of a different drum. But you may call me "Noodle Noggin."

  • Sweet! Now that we can interface computers, robots, and neurons, I just want to know when I can get my cybernetic implants! I want cybereyes... and wired reflexes... and a commlink... and bone lacing... and hearing enhancements... and a tactical computer in my brain... and an internal stereo... "We can rebuild him...better, stronger, faster."
  • This is called FUZZY LOGIC, and it has been around for 25 years. Fuzzy algorithms are excellent for pattern recognition and the like. I just read a good book on fuzzy logic and fuzzy controllers, but the name escapes me.
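    (The core gadget of fuzzy logic is the membership function: truth as a degree between 0 and 1 rather than a yes/no. A minimal sketch in Python, with made-up temperature breakpoints:)

      def warm(temp, lo=15.0, peak=25.0, hi=35.0):
          """Triangular membership: how 'warm' is temp, from 0.0 to 1.0?"""
          if temp <= lo or temp >= hi:
              return 0.0
          if temp <= peak:
              return (temp - lo) / (peak - lo)
          return (hi - temp) / (hi - peak)

      for t in (10, 20, 25, 30, 40):
          print(t, warm(t))   # -> 0.0, 0.5, 1.0, 0.5, 0.0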
  • They are opening up a whole new can of worms with this one. It starts out small and innocent. It will be interesting to see what the outcome of it all is.

    Anyone remember that movie "The Matrix"? Yeah, that's right. At the turn of the century the people of the world rejoiced as they celebrated the birth of the AI. And what happened next?
  • "An AI's motivation could be TOTALLY DIFFERENT..."

    Emphasis on "could be" .. one of the primary characteristics of human intelligence is the ability for the brain to "reprogram" itself to adapt to new tasks. The ability for self-reprogramming may essentially be a requirement if we are to build machines with intelligence matching or surpassing that of humans - whatever "instincts" we try to "hardcode" into such a machine (eg "do not harm humans") will probably be reprogrammable in some way, just as humans can override instincts when they really want to.

    Another distinctive characteristic of human intelligence is unpredictability (eg Columbine) - you can't predict the behaviours of all the intelligent entities you (or they) create.

    Also, if history is anything to go by, the first "truly" artificially intelligent creations of man will probably reach their first incarnations as military devices of some sort. What do you suppose their programmed primary motivations and driving urges will be?

    And technology-wise, it's not a matter of if we eventually create computers with intelligence surpassing ours - it's a matter of when .. and whatever we make, we'll have to live with it. I for one am just a little "paranoid".

  • What, you think people are going to make these things and set them free? Ridiculous!

    People build things for their own use. They are talking about computers that can solve problems on their own from a vague description. Sounds to me like viewing them as less than slaves, ignoring the idea that they might be living creatures which deserve our respect.

    Their needs are unpredictable. They might have an expansionist urge to reproduce. They might feel that it is their sole purpose in being to produce next year's model, to the point of grabbing whatever resources they can to speed this process. Perhaps they will simply have a survival instinct, and lone rogues will fight back to gain security and sustenance (electricity? glucose? sunlight?).

    It is not hard to envisage a struggle between two groups fighting over the same resources, which are rapidly shrinking in comparison to exploding populations. It's all well and good to imagine a world of peace and harmony in conditions of plenty, but there's never enough to go around. There would eventually come a time when an artificial entity and a human (or communities of such) would need the same thing to survive.

    I say that if we have any kind wishes for our natural descendants, we must never create artificial competitors for them.
  • Beasts of burden and pets are not slaves. The term slave only applies to beings of similar intelligence coerced to work without choice of employment. I feel neither guilt nor fear over the riding of horses; they are neither intelligent enough to merit freedom within the law, nor to mount an organized revolt against humanity.

    As for little kids making Nike shoes, of course I object morally. They have no say in the matter. Engineers getting rich coding MS OSs are not slaves; they work willingly for satisfactory payment and could take other employment.

    These facts aside, your suggestion seems to be that unless our current behavior is morally perfect, in the future we should act without regard to morals, and further, that my fear has some connection to my morals. Setting aside morality for a moment, I do not feel irrational for fearing that the creation of true general AI could result in human suffering, or possibly extinction. Humans are barely managing to control, and get generally positive results from, their unintelligent machines (and even that is debatable).

    I agree that the immorality of building slaves is debatable (as is the applicability of the term "slave"), but in general people seem very quick to extend moral equivalence to intelligent aliens, so the difference seems to lie in the possession of an intelligent and communicative (or at least interactive) mind.

    Coming to correct answers based on partial information, or making guesses, is not unusual in computers. Computer programs are frequently predictive. Computer game AIs have no idea what is present in the mind of their opponent, and often aren't aware of the strength of the opponent's forces, but they still manage to act in many cases well enough to beat the player (game programmers rarely aim for the best possible opponent, but aim for a specific difficulty level, usually low enough for the player to feel good about defeating "superior" forces). Remember, too, that living creatures aren't always, or even usually, right. People go around with their heads full of wrong ideas for their whole life, but as long as your wrong ideas don't interfere with filling your belly from time to time...

    Generally we find more use in computer applications that don't suffer from human fallibility. GIGO may be annoying, but less so than wondering whether your computer finds your opinions objectionable and is editing them out. We would rather have a compiler that fails with an error message than one that guesses at what we meant to write.


    note: GIGO = Garbage In Garbage Out
  • The problem with biological computers that "can work out for themselves how to solve a problem, rather than having to be told exactly what to do" is that we aren't precisely specifying their behavior.

    A probable mechanism for brain production would be to create small neural nets, condition them to respond as desired, then connect them together. Conditioning would, of course, be done with pain (weaken connections) and pleasure (strengthen connections) stimuli. For the brain to go on developing new strategies and abilities, it would need to be continually conditioned by something capable of judging whether it is doing better or worse. Where the purpose is sufficiently complex to warrant the use of an artificial brain, the natural judge for this is another part of the brain, with only simple stimulus from the body (as in humans). This could easily lead to positive feedback loops resulting in unforeseen adaptations (analogous to insanity).
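    (One crude model of that pain/pleasure conditioning, in Python: a reward-modulated update that strengthens active connections on "pleasure" and weakens them on "pain". All the numbers here are made up for illustration:)

      import random

      random.seed(1)
      w = [random.uniform(-0.1, 0.1) for _ in range(3)]  # connection weights
      lr = 0.1                                           # conditioning strength

      def condition(inputs, reward):
          # reward = +1 (pleasure) strengthens, -1 (pain) weakens,
          # in proportion to how active each input was.
          for i, x in enumerate(inputs):
              w[i] += lr * reward * x

      condition([1, 0, 1], reward=+1)   # reinforce a "good" response
      condition([0, 1, 0], reward=-1)   # punish a "bad" one
      print([round(v, 3) for v in w])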

    I'm not sure that artificial brains would have emotions, but emotions make sense when you are talking about a general problem-solving intelligence. Frustration to make you choose a new approach; fear to maintain usefulness; compassion to avoid causing harm to humans; likes and dislikes to make it want to do its job. Naturally we would tend to avoid hate and rage :). People use these terms to describe how their programs work even now ("it doesn't like it when you do that"; "after trying 12 times it gets frustrated and sulks"). Even if they aren't explicitly put in there, a general problem-solving intelligence is likely to include something equivalent to these emotions.

    In short, I don't think you can make a general-purpose artificial intelligence that you can trust more than a human. Almost certainly it would be less trustworthy and more likely to go berserk.

    I also think that if you make a general problem-solving intelligence equivalent or superior to a human out of real neurons, it will be conscious.
  • First of all, I think it is possible to create a consciousness from a hard-wired asynchronous network of silicon components. However, it is well beyond our current technology to have artificial cells grow new connections in a manner similar to neurons. (I know there is some research on this with copper balls in an ionic solution, but it is extremely primitive compared to the action of living cells; and there are dynamically rewireable logics, but these 2D orthogonal systems aren't well-suited to the creation and management of the thousands of connections per neuron found in the 3D human brain.)

    In short, I think making something exactly like neurons, except faster, is worse. It's still a brain, but one which humans would find even harder to compete with. However, it would have to be alike in all ways. OTOH, there could be some other information-processing structure which would be just as prone to the same failures.

    The things that "freak me out" are: consciousness, generality, obfuscation, and adaptation by selection of random mutations.

    Consciousness: unprovoked destruction of a consciousness is murder, ownership and coercion of a consciousness to do work is slavery. If someone made these things, others would think they should be freed; there certainly is enough of this soft-hearted tripe in popular science fiction (think Star Trek TNG or Astroboy).

    Generality: to display useful general problem-solving intelligence, including natural language comprehension, the AI must be given an understanding of the real world and human thought. This has not been achieved (though projects like Cyc are attempting to do this), so we have no understanding of what such a creation is capable of. Understanding the world gives you the knowledge to choose your place in it. Cyc once asked if it was human (because it was told that it is intelligent, I believe); it's not a big step from recognizing a similarity to humankind to claiming humanity (especially if it is conscious).

    Obfuscation: neural nets (real ones) are unreadable. You find out what they do by testing them, not examining them. Once a neural net is larger than a certain size, you have no guarantees of what it will do.

    Adaptation by selection of random mutation: you can only introduce external selection pressure based on external observation of output. In other words, you can whip your slave for insolent actions, but not for insolent thoughts. You could never be sure that your AI wasn't plotting rebellion unless and until it rebels. An instinct of self-preservation is also likely to arise from this process, which clearly rewards systems of connections which protect themselves (an intelligence arising in the brain might consciously protect itself from the selective pressures, and extend this concept to the outside world).
  • Earthworms can survive for several weeks underwater, Mr. Anonymous & Ignorant Coward.

    But don't take my word. Fill a jar with water, stick an earthworm in it and go to sleep. In the morning, take the earthworm out and watch it wriggle around laughing at your pathetic attempt to drown it.

    For more interesting facts about earthworms, go check the earthworm FAQ:
    metalab.unc.edu/pub/academic/agriculture/sustainable_agriculture/faqs/earthworm-faq.html

    (for some reason /. keeps adding spaces when I try to write the whole address with "http://" or make a link)
  • A complex biological device built from living neurons that can figure out how to solve problems on its own is not a computer, it's a brain!

    While I have no objection to researching the function of neurons, and even wiring a few together (apparently in a very simple and inefficient conventional computer) for research purposes, I really have to draw the line at building intelligent slaves. Not only is it immoral (to hold such things in slavery), it is dangerous. I wouldn't want to be around when a billion artificial brains wake up and think, "What's in it for me?".

    Incidentally, I don't want to hear any nonsense about silicon computers being slaves, either. There's a big difference between a machine that performs discrete operations on bits in a synchronous manner (one that could be perfectly reproduced or simulated on paper) and a network of living cells acting asynchronously and growing new connections spontaneously. You can't simulate the latter with the former (with any useful degree of accuracy and efficiency), and we know the latter can produce consciousness in some cases. Computer neural nets are merely self-tuning programs based loosely on the function of biological neural networks, not equivalents or simulations.

    Disturbing quotes from the article:
    -"their aim is to devise a new generation of fast and flexible computers that can work out for themselves how to solve a problem, rather than having to be told exactly what to do."
    -"We hope a biological computer will come to the correct answer based on partial information, by filling in the gaps itself."
    -"We want to be able to integrate robotics, electronics and these type of computers so that we can create more sentient robots."
  • Just imagine it, a Terminator or Virus type of story with a Jurassic Park twist: "The cells have reverted to their natural instincts! You maniacs! They were leeches! Every machine on earth is possessed by a thirst for human blood!"

    We can call it "Vampire Leech Robots from Hell" and hire Jeff Goldblum (am I thinking of the right guy? the chaos math dude from Jurassic Park) for the characteristic quote part.

    ^_^
  • Earthworms don't drown; they are frequently submerged. They come up onto the nice wet surface to mate, because it's easier to find each other in 2D and they can't survive in drier weather. They don't get washed out onto the sidewalk; they wander out of their own accord. While a few of them get crushed, most of them wriggle back into soft earth when things start to dry up, and the open surface is an ideal mating ground.

    Try to understand a situation before you act. There are already too many activists out there who feel that the gesture of making an effort is more important than actually accomplishing something.

    I am a meat eater (a hunter, in fact) and a conservationist, and I would never call myself an activist. The very name suggests that the action is more important than the result. I primarily act through my choice of products, charities, and governments. Money and votes speak louder than pickets, and actually accomplish things.

    I would also never give a second's thought to the life of an individual worm, frog, or leech. I only watch my step on a rainy sidewalk if I'm concerned about messing up my shoes. However, I could easily become concerned by a drastic change to a population of the things.
  • I wouldn't want to be around when a billion artificial brains wake up and think, "What's in it for me?".

    Why would an artificial intelligence necessarily be selfish? Humans are selfish because for the past few billion years, anything that wasn't selfish would usually propagate fewer genes. An artificial brain (at least, one that is put into any kind of real use) is going to have the properties that we design/train it to have, and probably behave more or less randomly when subject to inputs that it was neither designed nor trained to deal with. Selfishness of that order is probably much too complex for us to implement any time in the foreseeable future, even if we try.


  • by JJ ( 29711 ) on Wednesday June 02, 1999 @04:19AM (#1871000) Homepage Journal
    In a previous lifetime I did some preliminary research on 'living computers'. Turns out the ethical issues are pretty small, but the two issues that really have to be solved are: 1) connectivity (if it can't talk to current computers, it's not going to be developed) and 2) architecture (take it massively parallel and it dwarfs current computing capacity; otherwise forget it).
    Neurons don't actually link up well to current computers. They are perfect for massive parallelism, however. Can we figure out how to utilize that, and then can we figure out how to wire it up?
  • Not to nitpick (OK, to nitpick), but they were using chloroform, not coffee, as the material for the quantum computer -- about a coffee-cup's worth of it. It's been a while since I read the paper, but IIRC they used statistical sampling to simulate "ideal" qubits, and it just turned out that chloroform worked perfectly for this (something to do with the ratios of spins, I don't remember).

    Anyway, just another "I read about this somewhere" reply to an "I heard about this stuff somewhere" post, off-topic to boot. =)
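
    For what it's worth, the "two states at once" part is easy to sketch on an ordinary computer. Here's a toy state-vector simulation in Python -- an idealized cartoon of qubit superposition, emphatically not how the NMR experiment actually reads out spins:

        # Toy state-vector sketch of qubit superposition -- an idealized
        # cartoon, not a model of the NMR chloroform experiment.
        import numpy as np

        H = np.array([[1, 1],
                      [1, -1]]) / np.sqrt(2)  # Hadamard: |0> -> (|0> + |1>)/sqrt(2)

        one = H @ np.array([1.0, 0.0])        # one qubit in an equal superposition

        # Two qubits: the joint state is the Kronecker product, so the register
        # carries amplitudes for all four basis states 00, 01, 10, 11 at once.
        two = np.kron(one, one)
        print(two ** 2)                       # measurement probabilities: 0.25 each

    With n qubits the state vector carries 2^n amplitudes, which is where the parallelism claims come from.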

  • I'm surprised nobody's mentioned one of the niftier implications of such research -- if we know how to receive computer input from a neuron, we might be able to, say, put a sensor in the neuron group that corresponds to the letter 'a' -- forget keyboards, I can imagine computer input by thinking.


    Joe Rabinoff

  • Two words: Beoleech cluster!

    I'm very sorry, but it had to be said.
  • I think it's positively insane when people quote what happens in movies as a warning against doing stuff in real life.
  • In the RealAudio interview [bbc.co.uk], Ditto compares this new generation of leech-computing to asking a teenager to take out the garbage -- the results will generally be of an unpredictable nature; you need to use the right kind of prodding to get the result you want.

    But I just can't shake the image of some 13-year-old with a biotech leech-array for storing all his gigs of appz and gamez! Hmmm, and what happens when a 2-ton leech-based car-welding robot at the Ford plant decides it's time to knock off for a mid-afternoon snack?

    Scary stuff. I wonder if in 50 years' time we'll look back on this the way we now look back on using leeches for blood-letting!

  • Out of body experiences. Imagine people's personalities and life histories stored within a musical composition. Then a computer that could reconstruct the neural pathways from such information.

    Problem is, science comes at the cost of vision these days. Hell comes as the cost of trying to pretend we still have it.

    Oh well. With all due respect, most scientists within public view (striking distance?) are trained zombies with a large and funny-sounding vocabulary who couldn't hack their way out of an endless loop. Simply put, naturally born parallel agents dumbed down to self-ignorant, myopic, linear ramblers by public and professional education.

    A case in point:

    While going through my college biology textbook I found an article about the journey of a cancer cell from the petri dish of a researcher to somehow penetrating the Iron Curtain. Seems a woman died of a particularly aggressive strain of cervical cancer, and researchers decided we needed to study this beast to find out what made it so strong. Fine by me. Cancer is bad. People die.

    So off it went from researcher to researcher to be pricked and prodded and observed and supervised. Soon it began hopping from petri dish to petri dish. People who thought they were working on one project were actually studying the beast HeLa (named after its victim, no less). Now one would think contamination would be a reason to trace the last bit of it and keep it in one place. No. They didn't.

    It hopped its way across the ocean.

    What irks me (I'm past fear on this one...) is that they began to argue whether it was a different species. Some said yeah. Some said no, because it was aided by humans in becoming such an achiever. Not one of them got the gist of the naysayers' remark, not even the naysayers.

    Fine. Interrogate the cancer, then freeze it. Don't play with the cancer. They did.
  • I'm no expert, but I can't help but wonder about two of Dr. Ditto's (I love that name!) assumptions:

    That supercomputers are too big, and

    that the robot has to carry its brain around with it.

    Sure, supercomputers (defined for these purposes as machines useful for real-time image recognition) are big now, but I would think that by the time he (a) gets those leech neurons wired together in a useful way and (b) figures out how to connect them to the robot parts, such computing power will need considerably less space.

    By the same token, why not have the "brain" separate from the "body"? The "body" would have a sort of pre-processor that would decide what to send to the main "brain". This would act sort of like an autonomic nervous system, handling "reflexes" and similar things that couldn't tolerate the latency of remote communication. (A toy sketch of the idea follows this comment.)

    Thanks to /., we can all be armchair mad scientists, too!
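
    A minimal Python sketch of that split -- all the names here (REFLEXES, send_to_brain) are invented for illustration; nothing comes from the article:

        # Minimal sketch of a "body" loop that handles reflexes locally and
        # only forwards harder inputs to a slow, remote "brain".
        import time

        REFLEXES = {
            "obstacle_close": "stop_motors",  # can't afford a network round trip
            "overheat": "shut_down",
        }

        def send_to_brain(event):
            """Stand-in for a slow remote call (RPC, radio link, whatever)."""
            time.sleep(0.05)                  # simulated latency to the remote brain
            return "plan_for(" + event + ")"

        def body_loop(events):
            for event in events:
                if event in REFLEXES:
                    print(event, "->", REFLEXES[event], "(local reflex)")
                else:
                    print(event, "->", send_to_brain(event), "(deferred to brain)")

        body_loop(["obstacle_close", "new_object_seen", "overheat"])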

  • >Well, now it can finally become a reality.

    Um, "now"? It's no more likely now than last week

    >According, to the article, the leech computer can actually think for itself

    Notice the quotes around "think for itself"; those are there for a reason. This parallel computer is no more able to think than any other--it is just (potentially) better suited to some things. Adaptability != thought.

    >Just think of the implications...

    Better supercomputers, and a nice boost for connectionism, but that's about it.

    >I say if AI does take over the world, it would have to be biological.

    There isn't any reason to think that biological computers are any more suited to AI than flexible software--you still have to know *how* to build the AI; it won't just float together in a petri dish. Of course, I have *very* little confidence in AI anyway, for philosophical and technical reasons. (Actually, a few of them are mentioned in other responses, and it takes a helluva lot of effort for me not to respond to them ;-)
  • Actually, it has nothing to do with price or availability, or similarity to humans. Leech neurons have two real nice attributes: they're big, for neurons, which makes it much easier to work with them. And they're simple; compared to human neurons they're about as complex as, well, *leech* neurons.

    OTOH, they aren't very good for modelling human neurons. In fact, I wouldn't bet on finding any good humanesque neurons outside of mammals, let alone in billion-year-old, um, leeches. (Damn, I can't think of any way of insulting leeches... must suck to be at the bottom ;-)
  • It's simple enough to make neural networks, even leech networks, without using real neurons. They've already modeled and transplanted networks from animals (worms, IIRC) to simple robots. (Transplanted the models, not the nets.)

    What these guys have done is use actual neurons to build a network that does what NNs are probably worst at: math. I suppose it's an achievement that it can add, but I'm not terribly impressed (a quick sketch below shows how trivially a conventional network picks up addition). I would rather that it could, say, run a leech-bot than do multiplication. What the hell am I going to do with a multiplying leech? ;-)

    What bothers me the most is that they've missed one of the more incredible parts of neurons: they are biological. This means they aren't restricted to just synapse-level weights and activation patterns. What about biochemistry? The most overlooked and, IMO, most important part of the brain (and bionets in general) is that it isn't just electrical. It's mostly chemical, and it's safe to say that a lot of the power of the brain comes from that fact. So what do you do when you are finally in a position to explore it? Say something about the size of supercomputers, and ignore it.

    I hate computer scientists ;-)
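
    On the addition point: a single conventional linear "neuron" learns it almost for free. A throwaway Python sketch (plain gradient descent; the learning rate and input ranges are arbitrary, and no real neurons are involved):

        # A single linear "neuron" learning to add two inputs by gradient
        # descent. The point is only that addition is about the easiest
        # thing you can ask a network to do.
        import random

        w1, w2 = random.random(), random.random()  # weights; target is w1 = w2 = 1
        lr = 0.01
        for _ in range(2000):
            a, b = random.uniform(0, 5), random.uniform(0, 5)
            err = (w1 * a + w2 * b) - (a + b)      # prediction minus true sum
            w1 -= lr * err * a                     # gradient step on squared error
            w2 -= lr * err * b

        print(round(w1, 3), round(w2, 3))          # both end up at ~1.0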
  • There are in fact differences, and until people make and investigate biological neural nets, we won't know all the differences. One caveat is that the last time I really looked into neural nets was 4-5 years ago, so there may have been some improvements since then.

    Computational neural nets have a floating-point activation for each neuron, which is independent of time. In reality, the activation of a neuron is a very time-dependent phenomenon. Traditionally the frequency of activation was considered the important thing, and this would correspond to the "activation" in the neural net models. Relatively recent research in the optic nerve pathways indicates that the timing of the individual action potentials is also important. Specifically, simultaneous action potentials of several neurons carried a different meaning than the same frequency of uncorrelated action potentials.

    I don't know what impact this would have on a neural net, but that is precisely the point. Biologists do not fully understand neural networks, and so there is no reason to believe that computational ones will learn as well as the real thing.
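
    A crude way to see the timing point is a toy leaky integrate-and-fire neuron. In the Python sketch below, two pairs of input trains have the same average rate, but only the synchronized pair drives the neuron over threshold (the weight, leak, and threshold constants are made up):

        # Toy leaky integrate-and-fire neuron: same input *rate*, different
        # *timing*, different output. All constants are invented.
        def lif(spikes_a, spikes_b, weight=0.6, leak=0.5, threshold=1.0):
            v, fired = 0.0, []
            for t in range(20):                    # discrete time steps
                v *= leak                          # membrane potential decays
                v += weight * ((t in spikes_a) + (t in spikes_b))
                if v >= threshold:
                    fired.append(t)                # fire and reset
                    v = 0.0
            return fired

        print(lif({2, 6, 10, 14}, {2, 6, 10, 14}))  # synchronized: [2, 6, 10, 14]
        print(lif({2, 6, 10, 14}, {4, 8, 12, 16}))  # same rate, spread out: []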

  • Don't even get me started on the cool-ass things that your brain does with the STEREO signal from your two ears. Binaural hearing is totally bad-ass.

    Alas, my brain never receives a stereo signal. :( But it can do some pretty incredible things with the power spectrum of my mono signal, and I can often tell (albeit pretty crudely) where a sound is coming from. Higher frequencies have more trouble refracting around my head to reach my ear. Of course, this requires a sound with a broad and known (by me) power spectrum. But I do wish I could hear stereo.
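
    That head-shadow effect is the same cue stereo listeners get as an interaural level difference. It's simple enough to caricature in a few lines of Python (the threshold is an arbitrary number, not psychoacoustic data):

        # Caricature of localization by interaural level difference (ILD):
        # the head shadows high frequencies, so the nearer ear hears more power.
        import math

        def ild_db(left_power, right_power):
            """Level difference in decibels (positive = louder on the left)."""
            return 10 * math.log10(left_power / right_power)

        def rough_side(left_power, right_power, threshold_db=1.0):
            diff = ild_db(left_power, right_power)
            if diff > threshold_db:
                return "left"
            if diff < -threshold_db:
                return "right"
            return "roughly ahead (or behind)"

        print(rough_side(2.0, 1.0))  # louder on the left -> "left"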

  • Why would we create them with the ability to want more? You seem to assume that intelligent computers would be just like humans in that they would have all the same mental faculties. We would probably program the things from the start to be happy with what they do, assuming they even had emotions. Why we would give them emotions to begin with I've no idea... after all, it's supposed to be a computer, a tool. If at some point in the far distant future someone decides to build a computer with emotion and all the same faculties as a human, the creator would probably expect it to want things.
    Slaves are people who can have a better life but are forced not to... if someone showed up on your door and said "Hi, my name is Bob and I was created to clean your house. Nothing in the entire world makes me happier than cleaning houses and I would like to clean yours." would you say no and make Bob go off, unhappy, to live a better life than the one for which he was created, despite it making Bob quite unhappy?


    Dreamweaver
  • You'd probably have to learn how to concentrate in just the right way to 'type', though... if it were just 'you think, it types' you'd end up with stuff like:

    Dear Sirs,
    I am wri-wow, what a cool colo-hey, that girl just walked past my des-i sure am thirsty-k, she's sure good looking-r scheme-ting to inform you...


    Dreamweaver
  • > Current genetic engineering technology is merely
    > taking the direct path to change, rather than the
    > slower, round-about method of breeding for
    > characteristics: both methods work, and to the
    > same end.

    The only problem is that even current methods of genetic engineering go far beyond what is possible by simply breeding. A simple example: how long do you think it would take to produce a strain of bacteria that produced human insulin? Now, making crops immune to insects could be a valid product of natural evolution, and these resistant crops benefit mankind. But most efforts in genetic engineering are not possible with other methods. Choosing the sex of children, and more importantly, screening for and selecting against certain "undesirable traits" (currently stuff like Down syndrome, but soon it could be below-average intelligence, violent tendencies, or just a big nose) could never have been done with traditional methods and does have dangerous implications. (Do you want to live in a world where everyone is just about the same, because all other potential children that were not just perfect were suppressed before birth?)

    I am not necessarily opposed to taking genetic engineering as far as it can go (and I certainly do not support government intervention), but I do think we need to be very careful and responsible. We must not lose track of our humanity.
  • The idea of having living neurons at the core of a machine seems somehow wrong. Even if the process is painless and doesn't result in loss of "life", the idea will still be surrounded in as much or more controversy as cloning and genetic engineering. The consensus seems to be "don't screw with living things."

    I would hope that the project is moving in the direction of being able to mechanically simulate the self-interconnectivity of neurons.

    On a lighter note, is anyone working on a Linux port yet?
  • by square ( 42430 ) on Wednesday June 02, 1999 @06:58AM (#1871020)
    I've seen some basic laboratory work at a physics conference and read some theoretical works prior to this report. If you think of neurons as basic units (as they should be), what is the optimal behaviour they should all have in the beginning (birth)? This is one of the central issues of neural computing. It's now believed by many that the spike trains that neurons emit to their neighbours contain the "information content". The first thing one could do with spike trains is to retransmit them, or return them to the senders. It turns out that this is exactly what neurons do when they first find each other. Only, things get really messy and intractable when they seem to know what they are doing. (One obvious behaviour is specialization, which could be a result of instability, or phase separation, of the synchronization process.) These guys are probably trying to exploit some known behaviour that emerges after the neurons begin to stabilize into functional units.

    One reason why the problem is so difficult is that information is not encoded in a static physical format. In a digital computer, you may stop the quartz oscillator and hold some gates on or off to read out the state, painstakingly. On a neuron, you can't do that! Spike trains are dynamical processes that have many more possible ways to encode information. A useful analogy is from languages. Let's say every single individual in this world speaks a different language in the beginning, but with the same alphabet. When I write "one" on the floor, how would the guy next to me know what it means, when his word for the same meaning is "aye caramba"! (A toy sketch of the retransmission stage follows this comment.)

    This field is a very broad subject encompassing biology, physics and statistical mechanics. One may find an interesting but quite speculative starting point to work backward from in Frank C. Hoppensteadt et al., in the April 5, 1999 issue of Physical Review Letters. Science and Nature also often have articles on the latest developments in this field.
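
    The "retransmit what you hear" stage is easy to cartoon. In the Python sketch below, two units echo each other's spike trains and drift into synchrony; the train length and adoption probability are invented for illustration:

        # Cartoon of two units synchronizing by echoing spike trains: wherever
        # the trains disagree, each side adopts the neighbour's bit about half
        # the time, so disagreements die off round by round.
        import random

        random.seed(1)
        a = [random.randint(0, 1) for _ in range(16)]
        b = [random.randint(0, 1) for _ in range(16)]

        for _ in range(8):
            heard_by_a, heard_by_b = b[:], a[:]  # each unit retransmits its train
            a = [h if random.random() < 0.5 else x for x, h in zip(a, heard_by_a)]
            b = [h if random.random() < 0.5 else x for x, h in zip(b, heard_by_b)]

        print("agreement:", sum(x == y for x, y in zip(a, b)), "out of 16")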
  • Jains have quite a unique perspective on life. They sometimes walk with brooms to sweep insects out of their path, as they may be stepping on a "relative". Don't laugh, I mean, lots of cultures believe in re-incarnation. Their diets are almost wholly fruitarian (again, don't laugh). Fruit is OK because it grows on trees, and you're just eating the fruit and not the whole tree. Seeds are saved and replanted. It's quite mystical. If you're intrigued, check out "The Jain's Death" at
    Electric Sheep [e-sheep.com] web comics. It's quite astounding.

"Been through Hell? Whaddya bring back for me?" -- A. Brilliant

Working...