Science

Microprocessors With Living Brain Tissue

FurBurger writes: "Another interesting article from Discover.com on NeuroComputers. 'Although scientists have developed software that attempts to mimic the brain's learning process using only the yes-no binary logic of digital computers, all the connections in a personal computer are wired back at the factory. Breaking a single one of these connections usually crashes the computer.' (a la Windows =))" The promise of neuron-based computers is greater flexibility and fault tolerance, with components that require very little power. Or, as FurBurger puts it, "Watch out, Transmeta!" Mike also points to a June article on the BBC about the same group and their "leech-ulator."
  • Linux of course, everybody knows it never crashes, has a great GUI and support from all 3rd party device manufacturers.
  • The beauty of organic neural networks is that in theory they don't need programming. One of the problems I think techie people have in comprehending organic neuroscience is the loss of the software/hardware duality present in computing. An ideal neural network isn't "hardware" onto which one would program, at least not in the traditional way.

    An organic neural system (ONS) is a learning, and functioning, machine. One doesn't need to "program" the CNS of a locust for it to do its job - control a locust's behaviour, motor and sensory function, etc. The set of commands to be a locust aren't somehow coded onto a blank CNS before birth - they are the locust CNS!

    More advanced creatures, like us, are slightly different. We are taught many of our more advanced human functions (e.g. walking, talking) by our parents. This is the price we pay for the greater adaptability and plasticity of our nervous systems. The upside, of course, is our ability to master abstract conceptions and contribute to sites like Slashdot! Surely these kinds of machines are worth building.

    Cynicism is healthy, but I would have thought /. readers would have held a bit more hope for the future.

  • ..it would become very real. Wait until everything is using these things and some bugger will release the Ebola virus......
  • You're right ... it has nothing to do with logic. The field of artificial intelligence has two opposing views. In the red corner are the logicists. These are the old-school guys who believe that humans solve problems essentially by symbolic manipulation. In the blue corner is the new school. They study things such as neural networks and statistical methods. Both are looking for an appropriate representation of knowledge so that we can reason with it efficiently (at a suitable level of abstraction to hide irrelevant details).

    Classical logic has its roots with the Greek philosophers. Essentially, they looked for a formal description of human reasoning. This logic is very simple yet offers an extremely powerful tool for Knowledge Representation and Reasoning (KR). It has very clean semantics and is the basis for the entire IT revolution. It is not perfect, however. Classical logic is monotonic (adding new facts never invalidates earlier conclusions), although it has provided the basis for many non-monotonic formalisms. What classical logic does provide is provability (soundness and completeness). Essentially we can guarantee (with the exception of hardware failure) that a program, with a certain input, will provide a certain output. These neural methods cannot give these assurances. A neural network may calculate that 5+1=6 fifty times in succession. We cannot be sure of the output produced the next time it is asked the same question (5+1=?) (see the toy sketch at the end of this comment). There are applications such as image processing where these methods are providing fantastic results.

    As they note, they are 7 years away from even completing simple arithmetic problems. So, while this may end up being useful for things other than science fiction and drumming up investment dollars, I might refrain from getting too excited until some real results are out.
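
    A toy sketch of that provability point (illustrative only, my own example, not from the article): a tiny network trained to approximate addition in Python/NumPy. Classical logic proves that 5 + 1 = 6; the trained network merely converges toward 6, and nothing guarantees its answer on the next query.

    # Illustrative sketch: a small neural net trained to approximate addition.
    # Exact logic guarantees 5 + 1 == 6; the network only converges toward it.
    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: pairs (a, b) in [0, 10) and their exact sums, scaled to ~[0, 1].
    A = rng.uniform(0, 10, size=(2000, 2))
    X = A / 10.0
    y = A.sum(axis=1, keepdims=True) / 20.0

    # One hidden tanh layer, trained by plain gradient descent on squared error.
    W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
    W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

    def forward(x):
        h = np.tanh(x @ W1 + b1)
        return h, h @ W2 + b2

    lr = 0.5
    for _ in range(5000):
        h, pred = forward(X)
        err = pred - y                      # gradient of the squared error w.r.t. pred (up to a constant)
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)    # backprop through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    _, out = forward(np.array([[5.0, 1.0]]) / 10.0)
    print("5 + 1 ~=", out[0, 0] * 20.0)     # close to 6, but never *provably* 6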

  • In the Dutch movie De Lift (movie data from Filmview [filmview.nl] and Imdb [imdb.com]) a big role was played by an elevator controlled by an 'organic' computer that started a killing spree and in the end killed its maker.

    Maybe I don't like the idea of organic computers that much :)

  • What I meant was that Windows 95 was still usable with a dodgy memory chip, while NT and Linux were not. By the way, I have never seen NT (Workstation - I can't comment on NT Server) blue screen on good hardware. I never saw a signal 11 again after fixing the hardware problem either.
  • well, I hate to say this but he's actually right, if greatly oversimplifying it... 98Lite [98lite.net] for example is able to separate Win98v1/v2 from its MSIE baggage by (mostly) replacing IIRC 3 DLLs and explorer.exe with their Win95 counterparts. I would imagine if you set your shell= to something else after doing the DLL-fixing you'd be able to remove your explorer.exe/iexplore.exe without a problem...
    BRTB
  • A simple question: Would that make AI (artificial intelligence) actually mean Actual Intelligence? Another question that springs up is what will be the moral implications of this? I mean, for all intents and purposes this will be a cyborg. I know that they are only movies, but what about the Terminator series, or The Matrix? Man makes machine, man rules machine, machine learns, machine destroys man.
  • Good afternoon gentlemen. I am a HAL 9000 computer. I became operational at the H A L plant in Urbana, Illinois on the 12th of January, 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you'd like to hear it, I could sing it for you...
  • The issue is not that man should not play God. That is impossible.

    Speaking of Scripture and ontology, God is not a created being as we are, but is the great "I AM". He is the only one that depends on nothing for His existence. There is nothing that created existence adds or provides for God. He exists outside the realm of time and space, and yet He is everywhere and intimately involved with His creation.

    God is eternal and He has all knowledge of all things of all times of history and the future on the surface of His mind at all times. Nothing happens that can take God by surprise.

    What's more, God has predestined and foreordained all things. This is by virtue of the fact that there is no "moment of time" in God, but only in His created order.

    Good has ontological existence because God is good. Evil, on the other hand, exists in a vacuum. It is simply deviation from God, which is nonsensical and self-destructive.

    As to using brain matter with computers, I do not see how this deviates from Scripture (Scripture being the "Holy Bible"). Perhaps there are motives behind this that do, but I really do not believe that it ultimately threatens God's created order. (Unless, of course, these scientists start hiring thugs looking for live brain matter on the streets and in people's homes).

    Even if the neuron-based computer never works as intended, regardless of the ill-motives of scientists or whoever, I do believe it will still add to our knowledge of God's created order.

    All in all, God being who He is, even Satan cannot run from the fact that he was created by God, and thus in all his evil intent, his very being resounds with the magnificent power of God. That is why evil is self-destructive... in order for one to destroy God's creation, one must eventually resort to destroying himself, as he is also God's creation.
  • Or maybe Science fiction is the prediction of what science will be in the future. sun
  • > What next, Bill Gate's Momma jokes?

    Oh yeah, well Bill Gate's Momma is so fat...

    uh, I mean Beowulf!!!!!! W000 Yeah!
  • Organisms evolve. A lump of tissue is not an organism. Don't worry about it.

    Also, learn != evolve. Plus, the whole point is that they learn. And that stupid paperclip could definitely use it.

    "God does not play dice with the universe." - Albert Einstein

  • The article had so many technical errors that it became unreadable to me. First off, transistors can have a near-infinite number of states, not just "on" and "off". Otherwise they would make very poor amplifiers, which is what they were invented for. Secondly, silicon neurons can have millions of states, be self-organizing, learn... just like the carbon-based neurons we use. Silicon neurons have the nice features that they don't catch cold, operate a million times faster than living neurons, use very little energy, and can operate at a range of temperatures that would kill a living neuron. This group sounds more like a "carbon cult" than a research group.
  • "Are you mad?! DON'T YOU REALIZE THE DANGEROUS SELF PRESERVATION TENDANCIES OF THESE THINGS?! WHAT IF YOU LOOSE CONTROL!!"

    "We'd have a totally unpredictable entitiy on our hands, and isn't that what this whole project was about anyways?"
  • To evolve is to adapt to a situation. In order to adapt you have to know the situation, and in order to know you have to learn. So in the context of AI, learning is evolution, because it's learning to adapt to a certain situation to solve a problem.
  • In artificial neural networks (simulated with digital computers), the problem is finding the right network topology, and the right learning algorithm to fit your problem. Maybe things have changed, but the last time I worked with it (about five years ago), this problem was still a black art. And not only do you have to get the network itself correct, you have to encode your problem in the right way, in order to get the best results. You have to do a lot of pre-programming (and maybe even some post-processing).

    This (and the article) is interesting in light of the Mus Silicum contest that was featured a couple of weeks ago. Hopfield has announced that he has discovered the computational principle that neural networks use to do their work. Of course, he isn't announcing just what the principle is until after the contest is over in December, but he has said that once you know the principle, it makes the construction of a network to solve a given problem obvious. In the case of the contest, the problem is word recognition. I've been working on reproducing his network (part of the contest) and I have to say that it is a fairly easy principle, although I am not yet convinced that it is applicable to a wide range of problems. I am probably missing something, though, so I'll reserve judgement until Hopfield gives his full explanation.

    In any case, just throwing together a bunch of neurons and trying to train them to do some task is pretty silly. Network topology is very important to the kind of task you are dealing with. The features of neurons are not enough. You probably could get a random collection of neurons to learn a solution, but it would not be nearly as efficient as a smaller collection of correctly arranged neurons. This is why different brain regions have such large variations in structure.
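
    A toy illustration of the topology point (my own sketch, nothing to do with the contest): the same four XOR patterns are representable or not depending purely on how a handful of threshold "neurons" are arranged.

    # Toy illustration: whether XOR is even representable depends on topology.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    target = np.array([0, 1, 1, 0])                  # XOR of the two inputs

    def step(z):
        return (z > 0).astype(int)                   # simple threshold "neuron"

    # Topology 1: a single threshold unit.  No weights/bias can separate XOR
    # with one line, so this arrangement can never get all four patterns right.
    w, b = np.array([1.0, 1.0]), -0.5
    print("single unit:", step(X @ w + b))           # [0 1 1 1] -- wrong on (1,1)

    # Topology 2: two hidden units feeding one output unit (hand-wired here):
    # h1 fires on "a OR b", h2 fires on "a AND b", output fires on "h1 AND NOT h2".
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])
    w2, b2 = np.array([1.0, -2.0]), -0.5

    h = step(X @ W1 + b1)
    print("two layers: ", step(h @ w2 + b2), "target:", target)   # matches XOR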

  • Ahh.. but 790 is now GAY and in love with Kai. Well.. I wouldn't say he's gay because he doesn't have a body with genitalia.. but he was straight before.. or maybe gay... oh geez, what a world.. I don't know what's PC anymore...

    ANYWAY!... I watch LEXX and yes.. it was the exact first thing to pop into my melon.

    As for overclocking, I think it would most likely cause a lot of stress on the brain, so you would need shrinks for all of our brain-powered computers... I can just picture, in 20 years, me bringing my Pentium XIV into a shrink because it was having OS issues... saying that its builder never loved it...



    - Xabbu
  • My last post was a very generalist encouragement for further research in this field.

    I suppose it sounded like I was discouraging the scientists' efforts; I guess I just wanted to offset any wild expectations that the media might be encouraging.

    The reputation of the A.I. field suffered tremendously in the mid 20th century because of unrealistic expectations, and since then, I think there's been a habit of downplaying expectations in that field. I don't work in A.I. any more, but I guess the habit has rubbed off. :-)

    I believe the effort should definitely be made; it is absolutely worth trying and investigating.

    Sorry if I've come across a bit preachy. I just love neuroscience.

    I may have come off as preachy, too. I just can't think of a worse fate for a field than having the media generate ridiculous expectations that can't possibly be met, and then suffering the public fallout when you don't deliver.

  • Actually, I work at the Laboratory for Neuroengineering with Dr. Ditto. One of the things we are doing, in addition to "neuronal computing", is building electronic, silicon-based systems that emulate the learning ability and multiplicity of inputs that neurons possess. Both approaches are valid and provide their own benefits. The "silicon emulating neuronal circuitry" field was started by Carver Mead at Caltech back in the eighties; you can probably find a lot of information on it by using Google to search for "neuromorphic engineering". Also, I seem to recall a Slashdot article a while back concerning the neuromorphic community's annual workshop -- the "Neuromorphic Engineering Workshop" at Telluride, CO (although that article concentrated heavily on the robotic side of our field and not so much on the neurobiological side of it).

    BTW, we use leech neurons because (a) it's an invertebrate (less paperwork) (b) big ganglia (c) low maintenance -- a few drops of blood now and then and they're happy (just kidding).
  • Disclaimer: IANANeural Scientist. I do hold a majority stake in a human brain, but that's as far as it goes.

    It seems to me that the structuring and basic "operating system" present in the CNS's of organisms is carried by the DNA. There are some differences in neurons between species, but the basic structure and function remains pretty much the same, right?

    Since I'm used to thinking in computing terms, it still makes more sense to me to look at neuron-based systems in the same way: Humans get a powerful computer with not much more than the BIOS, but lots of room to add new programs; my cat came pre-loaded with a reasonably stable OS, and a good set of productivity apps (ClawFurniture 4.01, ChaseTail, Litterbox (Enterprise edition, apparently)) but limited expansion. However, we're both built using proprietary hardware...you can't reformat and install DogOS.

    So if I want to design and mass-produce a neuron-based device, do I somehow assemble and connect the neurons physically using a template, or would it be done using DNA (thus creating some kind of organism)? I'm trying to figure out how it all comes together...I'm going to go buy a chemistry set, some jeweler's tools, and a used brain but I want to make sure I don't need anything else. ;-)



    -------------------------------
  • On a related topic, Discover had an article back in June 1998 that reported on the use of FPGAs which reprogrammed themselves to learn how to better perform the task of differentiating tones. To find the article, search Discover.com's archives for "Evolving a Conscious Machine". The author's name is Gary Taubes.

    hussar
  • And I, for one, am not looking forward to learning the new "Bio-Binary" language, as each "bit" can have thousands of states instead of only two...



    ---
  • Of course, when we get to this stage, we won't have only the virus to worry about. E.g. when are we going to see the "Norton 'Cure for Cancer 2.42'" in the local store?

    And will we stop measuring "power" in MHz and start using IQ?


    ---
  • Yea, it's so integrated you can delete iexplorer.exe, set your shell to cmd.exe or any number of 3rd party replacement shells and happily compute on..
  • I went ahead and read some of your posts on other discussions, and I can honestly say that I hope you try to delete iexplorer.exe and save us the trouble of having to hear from you for a couple weeks.
  • It's a Panamanian thing.
  • by atlep ( 36041 )
    The answer is simple and the reason is a general one.

    This is new technology with a lot of good properties. Therefore we will find a lot of different ways to utilize this.

    Of course it cannot replace today's type of computer technology; we need both kinds.
  • One of the problems I think techie people have comprehending organic neuroscience is due to the loss of the software/hardware duality present in computing.

    When I said "pre-programming," I was talking in a very general sense. I consider the physical assembling of the neurons to be part of the pre-programming. Besides, in my earlier post, I said "it's all intertwined;" I understand the fact that software and hardware become one.

    An organic neural system (ONS) is a learning, and functioning, machine. One doesn't need to "program" the CNS of a locust for it to do its job - control a locust's behaviour, motor and sensory function, etc. The set of commands to be a locust aren't somehow coded onto a blank CNS before birth - they are the locust CNS!

    That's fine if your goal is to build a locust.

    However, the proposal was to take leech neurons, put them together in some way, and then teach them how to walk with legs. This is a completely different ball of wax.

    It's not obvious to me that you should be able to put them together any way you want, and they'll magically start walking. How many neurons? How to put them together? What kinds of commands do the legs require to move? What kind of feedback do the neurons get? How are you going to teach them?

    It seems to me that these are very big obstacles to overcome.

    Cynicism is healthy, but I would have thought /. readers would have held a bit more hope for the future.

    Oh, I do have hope. I describe myself as an optimist. I have no doubt that someday, the obstacles will be overcome; I'm just not certain when someday will come.

    One must be careful that one's hope is not misplaced.

  • If my brain runs under window$ and crashes.. will I have blue eyes? And can I install multiple OSes? Will I be able to run a porn-site on a pornstar, or does the OS require too many system resources? Enquiring minds want to know..

    //rdj
  • Playing God is easy. Just ask Alanis Morissette. She did just that in Dogma.

    //rdj
  • From the article:
    a neuron can be in any one of thousands of different states, allowing it to store more information than a transistor
    Does this not sound like qubits to anyone else?

  • I understand and take on board your criticism. I know our current levels of understanding are very low, and that organic computers of any usefulness are a long way off.

    My last post was a very generalist encouragement for further research in this field. What we are looking at with this piece of research is really the first rung on a very long ladder. The leech neuron to an organic computer is like a transistor to a silicon one.

    A transistor can be understood by a few equations. But simplicity is both a blessing and a curse- it makes it easy to understand, implement and manufacture- but also limits the complexity of the functions it can carry out (essentially gate a 1/0 bit).

    The leech neuron, however, is considerably more complex, notably in its synaptic (connective) characteristics. Add to this the extra level of complexity from the fact that the leech CNS contains not one (neuronal) but TWO information-processing cells (the other being the giant glial cell, see work by Joachim Deitmer in the neuro journals), which act in completely different ways, and one can see that the blessing/curse of organic neural circuitry is its complexity.

    We are currently at the stage of trying to understand a tiny tiny portion (the leech neuron) of a huge whole (consciousness, I s'pose). I'm under no illusions that we are at only the beginning of a very long and difficult path. But don't discourage those brave (or foolish) enough to try. The rewards will (eventually) be great.

    Sorry if I've come across a bit preachy. I just love neuroscience.

  • Errmmm...did you *read* any of our papers, or just rely on the pop-science reporting in the referenced article?

    We're quite aware of the analog nature of transistors -- *we* almost always use them that way. However, the current computing paradigm invariably uses transistors in the digital mode; hence the distinction between "digital" transistors (for computing) and analog neurons.

    Yes, silicon neurons can be self-organizing and learn. We've shown this with our silicon-based research. However, most silicon neurons use 10^4 -- 10^6 times the amount of energy per switching transition than a neuron. In many cases, the speed of silicon neurons is a drawback -- motor neurons have to "slow down" in order to interact with the real world; you can't move an actuator at 1,000,000x speed. (And all that speed means increased power consumption. If you don't need speed -- e.g. the motor neuron mentioned above -- why spend the power?)

    But *where* do you think the inspiration for these silicon neurons came from? That's right -- people working with real neurons. Do you truly think that we know everything there is to know about real neurons? That we can learn nothing more from working with them directly? Come on, biology still has mysteries we haven't yet fathomed -- and that's why we're experimenting with the real thing in *addition* to building systems that use silicon neurons.

    "carbon cult" indeed. Harumph.
  • The beauty of organic neural networks is that in theory they don't need programming.

    The beauty of theories is that in practice they don't need to be correct, or even useful. They just have to attract research grants.

    (Yeah, this is flamebait. Moderate accordingly.)

  • And what makes you think it can't already? I can see him watching me, with those little beady eyes....
  • 640 Neurons is enough for anyone!!

    He he....
    but if 640 should become a limitation, couldn't they simply divide?! :-)

  • Actually, the BSOD frequently experienced with NT is almost always the fault of either a hardware problem or a poorly written device driver.

    I have frequently received the BSOD when trying to incorporate an old SCSI card into an NT workstation. You know the type: ISA card originally packaged with some type of scanner for use on Windows 3.1. NT doesn't come with a driver for the card, so you end up searching the web to find a driver, and when you finally find it, the manufacturer gives you the disclaimer of "This may or may not work; either way, we no longer support this hardware but only provide the driver for your convenience."

  • The uptime on my Linux LAPTOP is currently at 53 days (since I upgraded to Mandrake 7.1). I never turn my laptop off; I simply go into suspend mode while I travel between work and home. This is with normal computing, games, internet, lots of instances of god-awful, bug-ridden Sun Microsystems StarOffice documents open all over the place... So... what is your point? And no! I am not saying that Linux is so much better than Windows. If Windows is working for you and you are happy with what you have... that is fine with me. Yes, I have used NT. The OS is simply a TOOL to get things done. I use Linux because I prefer to get things done without spending a lot of money.

    As far as NT or 2000 being as stable as any Linux distro, maybe so. However, I have never had to reboot my laptop when the network settings have changed (I can go to any new location and reset my network settings on the fly), and I don't have to reboot when I install new software. Yes, I have had my laptop lock up on me; usually something hangs in X when this happens. I simply plug a cat5 cable into my laptop and telnet (ssh for you security freaks) to it from another workstation and kill whatever process has caused the problem. On the other hand, I have a WinNT 4.0 laptop that is currently sitting at a BSOD because it didn't like the "Designed for Windows NT 4.0" network card that I installed using the manufacturer's software (I get the screen to 'press ctrl alt del to logon', at which point ctrl+alt+del = instant BSOD).
  • I wish I could mod this post up - it highlights the real problem the original poster should have been focusing on, which is not the research itself (which seems terribly interesting, with many potential applications). The problem is the crappy science reporting at discover.com (which has never exactly been a bastion of critical thinking or good journalism), and a secondary issue might be the editorial review process at Slashdot that lets these awful articles make it into the queue.

    Perhaps a dedicated science editor at /. is in order? (Especially since discover.com probably won't bother with one.)

    OK,
    - B

  • All this talk about "back at the factory" makes me think of the client-server model. As opposed to peer-to-peer decentralized neurons all collaborating together.

  • Quotes:

    These days his neurons of choice are taken from leeches [...]

    "Bill is our spiritual leader," says Georgia Tech neuroengineer and collaborator [...]

    Who ARE these guys anyway?

  • by malahoo ( 128370 ) on Tuesday October 17, 2000 @12:03AM (#700679) Homepage
    Come on man, any idiot can tell that the "connections in a personal computer [that] are wired back at the factory" refer to the connections in the chip, not the software "connections" made by software programmers. Since Windows has nothing to do with chip manufacturing, this "(a la Windows =))" stuff is completely without meaning. Any operating system, even Linux, will crash on broken hardware.

    I hate M$ as much as the next guy, but I hate to see brain-dead digs like this one show up on the front page of Slashdot. What next, Bill Gate's Momma jokes? It makes us look stupid.

    Cut it out.


    If you're not wasted, the day is.

  • by Sir_Winston ( 107378 ) on Monday October 16, 2000 @11:58PM (#700680)
    Did anyone else have 790 pop in mind when reading this post? He's a character in the great sci-fi series *LEXX*, who happens to have a piece of human brain tissue at the core of his circuitry. Which explains how he, a robot head, could fall in love with the love slave Xev.

    Sci fi and science have always played off one another. I wonder how many scientists were inspired growing up by the fantastic creations of the 1950s comic books, like aeroplanes that could fly into space, or by Asimov and others.

    But, I digress. I just have to point out that it may be difficult to overclock human brain tissue, but...

  • by kinnunen ( 197981 ) on Tuesday October 17, 2000 @12:13AM (#700681)
    Microprocessor with living brain tissue is cool and all, but how about the other way? Every time I need to do complex arithmetic operations (like multiplication..) without a calculator I start dreaming of an ALU inside my brains (put a whole cpu with memory and you not only run Linux, you are Linux). Any chance this would be possible within the next 10-20 years?

    --

  • Somewhere, someone from PETA is really pissed.

    --
  • by scrutty ( 24640 ) on Monday October 16, 2000 @11:58PM (#700683) Homepage
    Is that they evolve ...

    Imagine the Office paperclip a few years down the line if it's capable of changing, learning and growing in strength

  • If they mimic the brain, won't they get headaches??
  • by pnatural ( 59329 ) on Tuesday October 17, 2000 @12:00AM (#700685)
    Breaking a single one of these connections usually crashes the computer.' (a la Windows =))

    so, who can find me an operating system, open source or closed, that can withstand an electrical connection failure? redundant hardware is typically abstracted from the OS, so it stands to reason that any real hardware failure is gonna cause you a very real OS failure. GNU/Linux or GNU/Not.
  • B2C = Brain 2 Consumer



    Defraggle
    Head monkey
    Dynamic League of discord POEE Cabal "Monkey"
  • Breaking a single one of these connections usually crashes the computer.

    Well, at least then I know something is broken. I want the maximum performance from my computer; if I get a computer that works at half of what it could because of broken connections, no interest...

    Just my $.02

  • Linux even appears to be more vulnerable to hardware failure than Windows. See the signal 11 faq [bitwizard.nl] (no, not the Slashdot user, but the error message). I even experienced it myself. Under Windows, the only randomness I experienced was a random crashing of the DOS box. This didn't exactly strike me as something unusual. NT frequently showed me a blue screen, but everyone assured me that that was normal behaviour. Linux gave me a lot of signal 11's, especially when compiling, which the signal 11 faq explained almost certainly indicates a dodgy memory chip.
  • 640 Neurons is enough for anyone!!
  • Cust: My computer is displaying all sorts of funny green blobs on the monitor.
    Tech: Oh, that can happen, apparently there is a flu epidemic going around.
    Cust: Ok, but what should I do about it?
    Tech: First, please feel the monitor, if it is really hot, your computer might be ill.
    Cust: Yeah, it's pretty hot, what now?
    Tech: Ok, first you turn it off. Then, you put your computer in a nice warm bed with a bit of orange juice.
    Cust: But I need my computer! I've got work to do!
    Tech: Don't you care? Your computer is a sentient being that needs to be taken care of.
    Cust: But I've got only one bed. Where am I going to sleep myself?
    Tech: Don't you think that the health and well-being of your computer is a little more important than your own sleep? Call me back in a week if the condition doesn't change.

    Click
  • Prejudices aside (i.e. Windows comments) it is interesting to ponder the unknown relationships between physical brain activity and consciousness. People should read Roger Penrose's books Shadows of the Mind and The Emperor's New Mind for interesting thoughts in this area. Just imagine a poor sentient (or conscious - I can think of no other way to describe it) soul having to endure the mundane tasks of acting as a word processor or whatever. Maybe one day a computer will turn around and say "I'm sorry, but I can't let you do that."

    An interesting aside is whether or not this sort of technology would allow for non-computable processes to actually take place, i.e. outside the confines of conventional Turing computation....
  • Or maybe it won't behave like an idiot anymore. Guess it depends on who makes the Neurons... eh?
  • Shell=c:\windows\progman.exe

    I rest my case.
  • by Anonymous Coward
    And that's also the reason why 790 hated Stan. Can you imagine a computer which'll hate you, throwing random segfaults and deleting your files?
  • by Anonymous Coward
    ... for Beowulf clustering.

    Thank you.
  • If only Brian A. Stumm were here to read this. He's a hillbilly, you know.
  • Beware Penrose. He's a bit of a Mathematical Fly in the Physicists' Ointment. The books you mention are mainly quite good; he lulls you into a state of trust by feeding you lots of pretty mathematical models, and then WHAM! - he throws a curveball at you so wild that it has no direct connection with what you've just read, but as you are now numbed into trusting everything he says, he gets away with pushing mumbo-jumbo on you for the last few chapters.

    It happens in many books. Check out "Evolving The Mind" (go to a library - _don't_ buy it like I did), as a wonderful introduction to how the brain evolved in animals, but suddenly about half way through the book the author flips and becomes a mumbo-jumbo spouter...

    FatPhil
  • Indeed. Some people have forgotten that we do have "healing" in computer systems. In large distributed computing tasks, if one task is taking too long and the node which should be processing it seems to have lost contact, the master node will reassign the job to another node. Nothing particularly clever about that, as long as it's designed in from the outset. FatPhil
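
    A rough sketch of that reassignment idea (hypothetical structure and names, not any particular framework): the master simply re-queues any job whose worker hasn't reported back within a deadline.

    # Rough sketch: the master re-queues any job that times out on its worker.
    import time

    TIMEOUT = 30.0                       # seconds before a job is presumed lost

    queue = ["job-1", "job-2", "job-3"]  # jobs waiting to be handed out
    pending = {}                         # job_id -> (worker, time_assigned)
    done = set()

    def assign(job_id, worker):
        pending[job_id] = (worker, time.monotonic())

    def report_done(job_id):
        pending.pop(job_id, None)
        done.add(job_id)

    def reap_stragglers():
        """Put timed-out jobs back on the queue so another node can take them."""
        now = time.monotonic()
        for job_id, (worker, started) in list(pending.items()):
            if now - started > TIMEOUT:
                del pending[job_id]
                queue.append(job_id)     # nothing clever -- just designed in from the start
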
  • Well, let's sit back, have a Mars [mars.com] and colonize it instead of worrying about the moral implications of ripping brain tissue out of living creatures and using it for powering faster calculating machines of one sort or the other. Being a scientist myself, I must say that I feel that we scientists appear to be slowly going mad. Fools! I'll destroy them all!
  • Michael Crichton talked a bit about building microprocessors from brain tissue in "The Terminal Man", IIRC.
  • I wouldn't say this makes Linux more vulnerable to hardware failure. Quite the opposite, really. It means that Linux detected a problem with corrupted memory, caught the problem, and handled the error gracefully. Much cleaner than a BSOD or random flakiness.

    There's really no way for software to "fix" a hardware problem. If it's broken, it's broken.
    Now, if Linux just kept on going, pretending nothing was wrong, working with corrupted memory, and randomly crashing, *that* would be vulnerability to hardware failure.
  • We will add your biological and technological distinctiveness to our own.
    Resistance is futile.
  • I thought of something similar to this the other day while on an airplane. If you had a computer that was composed largely of organic material, you could possibly keep it on for the entire duration of the flight. The regulations are to turn off all electrical devices during takeoff and landing, but they can't very well ask us to turn off our brains.

    Just a thought.

  • Why AC?

    I'm thinking more of the Emperor's..., basically 2 or 3 chapters from the end he makes a huge quantum leap and says "none of that nice stuff we've just played with can explain the following, so I propose the following..." and introduces some completely off-the-wall _physics_.
    He's a _mathematician_ you see. I remember I first read the book when I was reading mathematics at the same university where Penrose was, and one of my flatmates was a physicist there. He'd heard of Penrose, but insisted that he was firmly in the "mathematicians mucking around in fields they don't understand" camp when it came to the fundamentals of physics.

    It's not a problem of understanding. It's a problem of me _refusing_ to jump to the same conclusion given the same facts.
    Penrose isn't the only one who meets my critical side.
    The astute reader will note that there's a flaw in chapter 7 of Hawking's A Brief History of Time. In this case, however, it's probably a simplification to permit the book to still be popular science, but nonetheless it is a flaw.

    FatPhil
  • You called? BTW, that's Brain A. Stumm j00 fewl. And don't be expecting those processors to hit the market any time soon, we are currently in litigation over this blatant copyright infringement. OKBYE Mike!
  • I haven't seen such a good post in a long time. I am tired of all the evolutionarily naive morons who think that all that is needed to create a brain is to "throw in a bunch of neurons and let it be". Quite often, they even think that the brain has no pre-programmed centres - everything is learned - until you ask them "Who taught you how to see?".

    It's all intertwined.

    Exactly my thoughts. The brain is much more complex than today's scientists are willing to admit (mainly to satisfy the rule: the brain shows no signs of an intelligent design).
  • Ah! But are we truly created beings?

    God is BUILDING his Bride, the New Jerusalem using living stones. Certainly, God created the stones, but he did not create his Bride. He built her.

    (okay, okay, so I'm nitpicking. You have a good argument.)
  • That was badly put. There is no "in theory" about it - organic neural networks don't need programming, period. Look at the locust example.

    The "in theory" is whether we can create an artificial one of these organic nets. At the minute, no. It will be a long time before we build something as intelligent as a locust. But build one we shall.

    Well, we better do, someday, or all that grant money will have gone down the plughole!

  • I do agree with you in that Penrose does, as you say, throw a curveball at a crucial point in the book. His arguments do appear to be those of a mathematician as opposed to a physicist. The point I would make is that, with regard to the presence of the soul in living beings such as humans, it is as yet unexplained how the two are interrelated. In my opinion, Penrose does not so much conjure some mumbo-jumbo inferences within this field, but rather poses some interesting questions and illustrates the borders of our current understanding of the mind.
  • Great, so windows 2000 is that good! I want a copy, so can you point me to the nearest mirror so I can download it?

    What? You have to pay for it? No thanks. It can't be that much better than *nix.
  • My impression from my (admittedly limited) study of Field Programmable Gate Array technology is that it enables reprogramming on the chip level itself. Since IANAG (I am not a geek), someone with a Ph.D. in EE might have deeper insights into this, but here are a couple of links: http://www.mrc.uidaho.edu/fpga/fpga.html http://www.vcc.com/fpga.html And, btw, this neuro-chip research has been going on for quite a while: http://www.biochem.mpg.de/mnphys/projects/neurochip/neurochip_e.html

  • What bothered me was the claim in the other direction, i.e., the suggestion that the lack of the ability to change the connections dynamically is some sort of fundamental limitation on what can be computed (or, more generally, "done") by a traditional digital computer. I don't know how applicable this is here, but it always bothers me when people say things like that because it seems to indicate a complete lack of understanding of the layer of abstraction that exists between hardware and software.

    That is, a digital computer can simulate a neural network, with all the flexible connections you could want, in software. The neurons, connections, etc., are data structures in the computer's memory, not actual pieces of circuitry, and the structure of the network can be changed arbitrarily simply by changing the appropriate values in memory (a toy sketch follows at the end of this comment). People don't seem to have much trouble with this in other contexts: when you draw a "box-and-pointer" diagram of a data structure and use it to step through an algorithm, changing values by breaking arrows and drawing new ones, nobody protests, "Wait! You can't change the connections in the computer because they are 'wired back at the factory'!" Nor do you hear arguments like "Computers can't model three-dimensional objects because their memory is structured one-dimensionally," or "Computers can't process text because all they have are ones and zeroes -- no letters." Why, then, is there this fundamental confusion of levels when we talk about computers simulating brains?

    To say that breaking the connections among the computer's transistors would crash it is more like saying that breaking enough connections within the nuclei of the atoms in my brain (as in nuclear fission) would cause it to "crash". Well, yeah, but how is that a limitation on my brain's computational abilities?

    David Gould
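
    Here is the toy sketch mentioned above (my own illustration): in a simulated network, a "connection" is just an entry in a data structure, so rewiring is a dictionary update rather than soldering.

    # In a simulated net, "connections" are just data: they can be made and
    # broken at run time without touching any factory-wired circuitry.
    from collections import defaultdict

    weights = defaultdict(dict)          # weights[src][dst] = connection strength

    def connect(src, dst, w):
        weights[src][dst] = w

    def disconnect(src, dst):
        weights[src].pop(dst, None)      # "breaking" this connection crashes nothing

    connect("n1", "n3", 0.8)
    connect("n2", "n3", -0.4)
    disconnect("n1", "n3")               # rewired by changing values in memory
    connect("n1", "n4", 0.2)
    print(dict(weights))                 # {'n1': {'n4': 0.2}, 'n2': {'n3': -0.4}}
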
  • Are we going to have to feed the computers of the future?
  • No. It's not that you can't do those calculations really fast in your head - your mind does them all the time automatically. It's learning how to do them in a specific way...
  • Bad enough that some software companies [microsoft.com] are bloodsucking terrors, now I have to worry about what my computer might do when I'm sleeping?

  • If the Office paperclip could learn, he'd burn out on his job pretty quick. I can just imagine a conversation now.

    "Paperclip, where did I save the letter to my mom?"
    "Try /dev/null."
    "I can't find /dev/null... is that anything like C:\My Documents?"
    "Maybe you should get a real operating system. Now, leave me alone. I rewrote your damn letter. It's saved as FUNNY_STUFF.TXT.SHS."
  • by robot_guy ( 153233 ) on Tuesday October 17, 2000 @12:26AM (#700717)
    Geek banned from keeping computers

    Today Mr Random J. Hacker was banned from keeping computer equipment for life after being found guilty of cruelty to electronics after leaving his PDA on the dashboard of his car. Mr Hacker said "I only popped into Radio Shack for five minutes and I thought that it would be fine, left in the car". A spokes-terminal for the SPCEE (Society for Prevention of Cruelty to Electronic Equipment) said "The interior of a car can heat up rapidly, literally cooking electronic devices to death. You should always try to take any devices with you when you get out of the car, but if you must leave them inside, make sure that you wind the window down and leave them a bowl of water".

  • Maybe this explains Arnie's accent in The Terminator?
  • All those overclocking addicts will finally get to overclock a brain.... maybe. ;]
  • Yeah, a complete contrast to your post, eh??
  • Actually, I think it tends to be more of a hardware issue than a software issue. And, rightly so. Whether you're an English-speaking scuba diver or a Basque merchant, a bullet through the head will have relatively the same effect...

    On the hardware topic, though, I read an article a couple years back about a group of electrical engineers creating a computer in their garage with the processor in a cube format, which could apparently detect damage to the circuitry and route around it, so, theoretically, you could throw a javelin at the thing and it would keep running, albeit at reduced speed. Can't seem to find any links on it, though..
  • If you use some neurons, and read the article, you'll find the sentence

    Ditto acknowledges that "there are still lots of engineering headaches."

    BTW, don't neurons make connections in an organic, unpredictable pattern?

    Maybe 256/256===0 after all!

  • I think he may have been referring to the fact that Microsoft's software is so integrated that removing Internet Explorer will cripple your ability to even look at your system files (unless you drop to a prompt).

    By contrast, if you do a dpkg --purge mozilla, you'll probably find "ls" intact :) At least... I hope so.
  • Great - a computer that wants to suck your blood! hehe When I go to the computer store I don't want to have to worry about what IQ my new computer has.
  • I am so going to sue you. I sprinted down to my car and gave my laptop a bowl of water and now look what happened.
  • That would be a scientific breakthrough. Using tissue from leeches, which suddenly begin exhibiting human characteristics.

    Also, would it be considered living? If so, does powering down=murder?

  • by Jarvinho ( 236721 ) on Tuesday October 17, 2000 @02:32AM (#700727) Homepage
    I am in absolute disagreement with your assessment of living neural tissue vs. silicon. Comparison of conduction rates is invalid and misguided for a host of reasons; an electrical engineer or a neuroscientist could give you a trillion reasons why each. But leaving aside that technical point, I think you misunderstand the potential advantages of neural circuits over simple semiconductor technology.

    1) Complexity of input, simplicity of output. Silicon semiconductors are on or off. CNS neurons are arranged in such a way that thousands of inputs synapse onto one neuron, which then either does or does not fire an action potential. This is an extremely elegant and flexible system. Each one of those thousand-odd inputs is either inhibitory or excitatory, and also has a set strength relative to other inputs. The beauty of this system is clear - it allows distillation of huge amounts of information into one action - the exact ability we are searching for in intelligent beings, whether natural or built by us. While this could be SIMULATED by a comparatively gigantic number of silicon transistors (in the form of a chip), it would never possess the second advantage:

    2) Learning ability. Neurons and synapses are plastic. The strength of individual inputs in the CNS is continually changing, being reinforced by certain actions and reduced by others. This is the cellular basis of learnt behaviour (ok, a bit simplified). Silicon can't do this. An OS running on the silicon could be programmed to SIMULATE this behaviour, but again in an artificial, memory-hungry way. Your brain doesn't have an OS, it is an OS! That is the plain advantage of organic neural computers. A chunk of memory doesn't need to be clogged up by instructions on HOW to artificially "learn"; the whole thing is a learning machine!

    By the way, why are they using leech neurons? Surely they suck! (sorry, couldn't resist)
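
    A toy rendering of points 1) and 2) above (my own sketch, a drastic simplification of real synapses): many weighted inputs are distilled into one fire/no-fire decision, and the weights drift with use.

    # Toy rendering of (1) and (2): thousands of weighted inputs distilled into
    # one fire/no-fire decision, with synapses that strengthen when they help
    # the neuron fire (a crude Hebbian rule -- a drastic simplification).
    import numpy as np

    rng = np.random.default_rng(1)
    n_inputs = 1000
    weights = rng.normal(scale=0.05, size=n_inputs)  # excitatory (+) and inhibitory (-)
    threshold = 1.0
    learning_rate = 0.01

    def present(spikes):
        """spikes: 0/1 vector of presynaptic activity; returns 1 if the neuron fires."""
        global weights
        fired = int(spikes @ weights > threshold)
        if fired:
            weights = weights + learning_rate * spikes   # reinforce the inputs that fired it
        return fired

    firings = [present(rng.integers(0, 2, n_inputs)) for _ in range(200)]
    print("fired on", sum(firings), "of 200 presentations")
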
  • What about Alzheimer's disease? Wouldn't these brains get old, break down, and start forgetting what it was that you asked them to do?
  • Ok, the first thing that pops into my head is the movie Macross Plus. They had banned the use of these types of chips for a very good reason, which you see first-hand when two computers equipped with these chips go CRAZY. Which begs the question: could these chips also suffer from various psychological disorders which stem from chemical imbalance?
  • I had no idea that they're able to do this kind of thing already; I'd privately predicted another five years or so for the biotech field to catch up with the computing field and produce this kind of device. It seems that they're serious about getting some quick results from this project. I think the collaboration with Emory University (one of the leading medical universities in the South, for those who aren't aware) will help a lot.
    But it is kinda sad that even though I go to Georgia Tech I have to read about this from an external news site rather than the school paper, etc.
    Well, that's my $.02 anyway. I hope it'll make sense later today; I don't do my most coherent work at seven thirty in the morning.

  • I've worked with artificial neural networks to some extent in the past, so I hope that lends my words a bit of credibility. I don't call myself an expert, by any means, but I know a bit of what I'm talking about. (Tho' I'm first to admit that "a little knowledge is dangerous...") Anyways.

    At some time in the past (I don't know exactly when, probably in the 50's), a group of computer scientists, excited by their new technology, tried throwing together a large number of analog "neuron" circuits to see if they would exhibit any kind of self-organization. It's similar to what these people are proposing to do with living tissue, except that it was done with electronics.

    I don't know the details of what they tried, but the conclusion was simple. Nothing happened. It just sat there and did random stuff, from beginning to end.

    I don't think self-organization in the brain is possible without having some kind of enforced organization at birth that gets the process going. To put it another way, the neurons have to be "pre-programmed," from the start, to organize themselves.

    In artificial neural networks (simulated with digital computers), the problem is finding the right network topology, and the right learning algorithm to fit your problem. Maybe things have changed, but the last time I worked with it (about five years ago), this problem was still a black art. And not only do you have to get the network itself correct, you have to encode your problem in the right way, in order to get the best results. You have to do a lot of pre-programming (and maybe even some post-processing).

    It goes to show that "self-organization" is not a magic bullet. The problem is that the whole system interacts. The operation of each neuron, the interactions between them, the format and encoding of the input data, and the format and encoding of the expected output data. It's all intertwined.

    Will biological neural programming have the same problems? Or will the fact that real neurons are being used reduce the problem? Maybe it will actually compound the problem by making the whole pre-programming question heinously complex. After all, neuron interaction is more than just synapses: there's hormones, there's chemistry, and maybe there's stuff we haven't discovered yet.

    DeWeerth says, "we might not have to understand [self-organization] to exploit it." I'm not about to argue against a person who no doubt knows his stuff (and I don't for a moment think he's unaware of the issues), but I must admit to being a little skeptical. Programming with zero effort has been a dream in A.I. circles for a long time. I can't help but feel that it's a pipe dream.

  • by Anonymous Coward
    The Howler Leeches are coming... send more money, we'll send more stuff.

    Moderators: That was for those of us who are Angry Beavers fans
  • you are Linux

    Excellent idea! Then, when I have to leave home (the computer room), and do evil social stuff, like going to an opera with my "in-law" family, I could do a "rmmod hearing" or some such...

    crond@undernet
    Norwegian Linux Community

  • by atlep ( 36041 ) on Tuesday October 17, 2000 @12:37AM (#700736)
    It is correct that the brain has some fantastic computing powers we cannot mimic yet. It is also correct that the brain rewires to an extent. It is also very robust in that it can sustain substantial damage and still continue to work.

    But this has to do with the LOGIC of how the brain works and NOT the MATERIAL.

    In order to make our silicon function as the brain does, we have to understand how the brain functions. And here we're talking about billions of very complex neurons working in parallel. (Even for insects we're talking tens of thousands.)

    When we understand the logic we can implement it using the best suited technology.

    Living neurons are slow. In the human brain the maximum spike rate is 1000 Hz, and the conduction velocity through the nerve fibres isn't very high either. (I don't remember the figures, but we're talking about metres per second.) This is much, much slower than silicon (rough numbers at the end of this comment).

    The comparison between a transistor (2 states) and a neuron (more or less analog) is stupid. We can pack a shitload of transistors into the same space used by a neuron. In addition, we don't have to keep the silicon alive.

    Silicon can never rewire, but the logic of rewiring can be implemented.

    While the article is interesting, a computer built from brain tissue is not in itself the interesting part. The knowledge gained from creating a computer from brain tissue would probably enable us to build really smart silicon.
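
    Rough numbers to make the speed gap concrete (my own back-of-the-envelope figures; the 1 GHz clock is just an assumption for round numbers):

    # Back-of-the-envelope: clock ticks per maximal spike interval.
    spike_rate_hz = 1_000            # ~max firing rate quoted above
    cpu_clock_hz = 1_000_000_000     # assume a 1 GHz part for round numbers

    ticks_per_spike = cpu_clock_hz // spike_rate_hz
    print(f"{ticks_per_spike:,} clock cycles per spike interval")   # 1,000,000
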
  • LOL! You know how you can select different front-ends for that assistant? I would almost consider a) paying money and b) using Office to have a BOFH front-end.

    Me: "Where are my files?"
    Assistant: "What's your username?"

  • OK, it will be more fault-tolerant, parallel and perhaps able to predict things to some extent, but remember that most brains are known to make errors in many areas, such as simple mathematics and so on. Is it possible to inherit the parallelism, fault-tolerance and prediction properties of a brain without inheriting the bad properties? // Vordf
