Electronic Circuit Mimics Brain Activity

A lot of people wrote in with the news blurb from Yahoo! regarding the announcement of a circuit that supposedly acts in a manner resembling human brain activity. Details in the blurb are pretty sketchy though - post links below if ya got 'em. One interesting point they do make is that the brain works both digitally and in analog - but that's pretty much all they say about it.
  • by Anonymous Coward
    Check out the original article [nature.com], in the letters to Nature section. Rodney Douglas' web address is www.ini.unizh.ch/~rjd/ [unizh.ch]

    Enjoy

    Jan-Jan [mailto]
  • by Anonymous Coward
    Everything is "analog". The binary encoding ("digital") is just bolted on top of the analog thing - think, for example, of a voltage signal with, say, NRZL encoding. It's digital over analog.

    BUT when you go deep enough, you get discreteness and quantum states, and the digital 1/0, is/is-not, yin/yang world appears again! A most beautiful cycle!
  • Ever seen a nice image of a neuron? It has an input end and an output end. A neuron may also have multiple output ends, and multiple output ends may come together at one neuron's input end.

    Neurons don't only "decide" to fire or not to fire... They can fire at a multitude of magnitudes. All neurons have a trigger level. An incoming pulse needs to be of that magnitude or higher for the neuron to react. The neuron can then "decide" to fire, but neurons can also increase or decrease the magnitude of the pulse they send off. That's the way a neural net works. Various pulses of various magnitudes travel through a neural net, and according to the magnitude of a pulse, the receiving neuron "decides" what to do with it.

    You could indeed consider the "decision" to fire or not to fire a binary decision. Compare it to a joystick. It's not a binary joystick that either goes full or not at all, but one of those analog joysticks that can go everywhere in between as well. The neuron can have dead zones on both ends, can alter sensitivity, can switch axes and could even reverse an axis. On the whole, neurons adapt and learn. All these "neuron settings" cause various pulses to travel different paths through the same network, to split, to die off, and do all sorts of other interesting stuff.


    )O(
    the Gods have a sense of humour,
  • IANAB (I am not a biologist), but I do seem to recall that neurons are basically onion-shaped, so to say, with a thick roundish end where the cell core and other such useful things are, and a long "tentacle" that stretches out. I also seem to recall that that "tentacle" splits into various others, to (almost) touch various other neurons at the receptors on the thick end. Then again, the last time I had biology was four years ago, and most of that was about (population) genetics, (natural) selection and the calculations behind those... :)


    )O(
    the Gods have a sense of humour,
  • OK, I'm an ex-neuromorphic VLSI researcher, so my impressions may be colored, but let's see...for the last ten years, we've been following Carver Mead's lead that we really need to look at analog VLSI for simulating cortex and doing cool AI work. It's ultra-low-power, distributed massively parallel computation, defect tolerant, etc. And what has been the result?

    Millions of dollars going into making bad retinal focal-plane arrays whose output makes QuickCams look good, analog cochlea models that underperform real-time digital models, and a handful of other do-nothing circuitry like the one described in this article.

    Meanwhile good old digital VLSI has gone from 100 MHz to > 1 GHz, we have actual speech recognition systems running on PCs, and a new range of low-power, multi-purpose digital CPUs for portable devices.

    There has never been a real product developed using neuromorphic VLSI, and the few implementations can now be replaced with faster digital computers.

    The best part of neuromorphic VLSI was the electrical engineers teaching the neuroscientists how electrical circuits work, wavelet transforms, etc. - a bunch of people who like to think of the brain more in terms of a Rube Goldberg device, where one neuron taps the next neuron, than as a complex chaotic set of electrical network equations.
  • thinks more like humans? That is, a computer that repeats unreliable gossip, makes wild guesses when it doesn't know the answer, tries to cover up its mistakes and blames the computer in the next cube, calls in sick when there's something better to do, procrastinates, has hidden agendas, is married to a plain Wintel box but secretly is in love with the well-endowed cluster in R&D, worries about its retirement plan, thinks it deserves a raise and the corner office but is discriminated against by office politics...
  • There is an old branch of EE called neural networks. This is merely a variant of it.

    One of the more interesting neural networks is Caltech's Carver Mead's artificial retina. It has excitatory and inhibitory connections. His company put hundreds of thousands of them on a chip to make a self-regulating and processing digital camera. He got a White House medal for this and other stuff.

  • There are a trillion neurons in a mammalian brain with a thousand connections each. It will take a while to emulate this.

  • Good mingling of Consciousness and QM here:
    here [helsinki.fi]
    I can't speak for his physics (he gets into manifolds and such) but the ideas are right on.
  • ...not what we're looking for. We aren't looking for a statement that a human can't assert. We're looking for a statement that is true (a condition yours meets) but that a human can't know is true.

    This maps to, for instance, number theory as follows: There are theorems that are true (that is, we know that they state things as they really are) but that can't be derived (i.e. proven) from the axioms and rules of number theory. This is true no matter what axioms you use: even if you add those theorems as axioms, new ones are always "out there".

    It's easy (well, relatively) to find these statements in other systems, but it may be logically impossible for US to find them in the human brain. Maybe aliens (or AI) will have to prove that Godel applies to us (and us to them).

    Hey! That gives me an idea: Maybe "Godel's theorem applies to humans" is the human godel statement.
    --
  • "It consists of artificial neurons that communicate with each other via synapses, or junctions where they connect, in a system that could lead to the development of computers that could perform perceptual tasks such as sight recognition."

    That's absolutely incredible. Think of the power we now wield: We can transport scientists through time from the 1950's.

    We are able to "carbon date" these scientists by means of the research they are doing. For instance, had they been attempting to determine the speed of the planet Earth through the "luminiferous ether" we would have known they came from before 1903. Had they stared in wonder at our televisions, we would have known they pre-dated the 1950's. However, their work on the then cutting-edge, now old-hat neural networks (implemented in hardware, no less) places them firmly in the 1950's.
    --
  • I think you might be confused about what digital means; it's not a specification for an electric wave.
    "A digital system is a set of positive and reliable techniques (methods, devices) for producing and reidentifying tokens, or configurations of tokens, from some prespecified set of types" (from "Artificial Intelligence", Haugeland)
    A positive technique is one that can succeed absolutely. You can write something in sloppy handwriting, but I can still (potentially) succeed absolutely in reading what it is that you have written. The "potentially" is important, because a digital system is not required to be reliable, just that it is possible for it to work absolutely correctly. The mechanics of how a brain can read arbitrary handwriting (or any number of things we do every day) is undoubtedly analog, but some systems within the brain are digital. Writing a post for Slashdot is analog; I cannot "absolutely succeed" in putting my words together in an intelligible way that says everything a post should. But the act of posting a post is digital; I can succeed absolutely by pressing this little submit button down here, which I will do... now
  • Chaos theory doesn't demonstrate much of anything about quantum mechanics. The mathematics of chaos work perfectly well with our friends the integers and the reals; no exotic particles required. Sorry.
  • Heisenberg's work isn't part of chaos theory, bucko. Check out the excellent Chaos Page [umd.edu] at the University of Maryland.

    Chaos theory deals with "simple nonlinear deterministic systems" that "behave in an apparently unpredictable and chaotic manner".

    Sorry, but quantum mechanics doesn't count.

    Although now that I think about it, many Anonymous Cowards are simple deterministic systems that behave in a chaotic manner. Maybe you should go in as a lab animal. :-)

  • Already been done. Ever heard of a guy called Max Headroom?
  • There are a trillion neurons in a mammalian brain with a thousand connections each. It will take a while to emulate this.

    I think the real problem is not the massive size (even though it's far from trivial). Simulating a trillion nodes should be within realistic reach, if enough money were thrown at it. One thing that helps is that NNs are very fault tolerant (killing off a whole area of cells doesn't matter THAT much in the human brain).
    Here's one (far-fetched) idea of how this fact could be utilized in ANN production: The factory could maybe almost put (electronic/chemical/whatever) neurons in a spray can, spraying them all over the chip surface, not worrying too much about how many are badly connected, and without being able to control the detailed connectivity at that fine a grain ;-). An initial bootstrapping (applying some chemical?) should tell the cells to explore the environment they were put in (chat with their neighbors), committing suicide if they were badly connected. Also, this bootstrapping chemical should cause the nodes to build "power cords" to themselves. And so what if only 50% survive this "spread of the cells"?
    The key to this is just that interconnectivity and quantity might not have to be that precise (as, for example, in a CPU) - just spray the neurons out there.
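
    Just to make that hand-waving concrete, here's a toy sketch (Python, purely illustrative - the node count, connection probability and survival threshold are all made up, and it has nothing to do with real fabrication): scatter nodes with random connections, then "bootstrap" by killing off the badly connected ones.

        import random

        NUM_NODES = 1000        # "sprayed" artificial neurons
        CONNECT_PROB = 0.005    # chance that any two nodes end up wired together
        MIN_LINKS = 3           # nodes with fewer links "commit suicide"

        # Spray: random, uncontrolled connectivity.
        links = {n: set() for n in range(NUM_NODES)}
        for a in range(NUM_NODES):
            for b in range(a + 1, NUM_NODES):
                if random.random() < CONNECT_PROB:
                    links[a].add(b)
                    links[b].add(a)

        # Bootstrap: badly connected nodes remove themselves.
        survivors = {n for n, nbrs in links.items() if len(nbrs) >= MIN_LINKS}
        print(f"{len(survivors)} of {NUM_NODES} nodes survive the bootstrap")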

    Okay, so the quantity of this problem could maybe be solved.
    What I think is the real problem is the programming. Brain scientists have almost no idea how the brain works and learns. And even if we found an effective way to train the artificial brain, there's an even worse problem: the human brain is so preprogrammed that we wouldn't believe it. How on earth would we capture all the preprogrammed information and behaviour in the human brain, short of emulating real neurons (rather than the simpler artificial ones)...

  • "The brain processes both analog and digital signals."

    "... the brain makes an either-or decision about whether or not it is a car"

    At first, I thought, "How are these ideas so different from, e.g., a scanner reading some text (analog), which is then OCR'd by software that decides whether it sees the letter A, for example (an either-or digital decision)?"

    But of course the important difference here is that the brain processes analog and digital together. All existing electronics (before this new research) processes analog and digital completely separately, with just an interface between the two. The scanner is the interface in my example.

    Can anyone think of better existing examples with both analog and digital components, but where the analog-digital connection is more intimate?
  • I haven't read either book, but...
    I could understand a point of view that said "We don't know if quantum effects manifest themselves on a macroscopic level in the brain".
    I'm having difficulty with the idea that quantum effects might be the only way that some brain processes could possibly function.
    I have heard some people refer to quantum brain effects as some kind of new age "soul"... As a devout atheist that rubs me up the wrong way.
  • *sigh*
    I believe our friend was saying that due to sensitive dependence on initial conditions (or state divergence) exhibited by many nonlinear systems, the effects of quantum events aren't simply averaged out in the macroscale and swallowed by the law of large numbers.
    The butterfly effect for nonlinear systems like weather, or parts of our own brains, should apply right down to the quantum scale.
    Computing consciousness may require quantum algorithms, as Penrose indicates.

    My quantum tentacle presses the Submit button...
  • "It is almost a car," or, "It might be a car," implying that there is a degree to which something might be a car."

    This is nonsense IMO. "This is almost a car" is just as much a digital statement as "this is a car". The brain can be 100% positive about its own uncertainty.

    As an aside, the spike trains generated by neurons are purely digital. Only the strength of a synapse is analog, and that strength can easily be simulated digitally by a sufficiently large integer. All this "analog is superior" crap is just that, crap.
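
    For what it's worth, the "sufficiently large integer" point is trivial to demonstrate (a minimal sketch; the 16-bit resolution and the weight value are just made-up numbers):

        # Quantize an "analog" synaptic strength into a 16-bit integer.
        SCALE = 2 ** 16 - 1            # 65535 distinct levels

        def to_digital(weight):        # weight assumed to lie in [0.0, 1.0]
            return round(weight * SCALE)

        def to_analog(code):
            return code / SCALE

        w = 0.73205                    # hypothetical synaptic strength
        code = to_digital(w)
        print(code, to_analog(code))   # round-trip error is at most ~1/131070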

    Louis Savain

  • It isn't all that new. It's an evolving field that Carver Mead basically started back in the 80s that's slowly been building steam. The problem is that while the cost of neurons in biology is very low, that's not the case in silicon, so building large-scale systems is problematic. Further, the computations tend to be analog in nature, so you have to take care how you design the circuits. Still, the area is a neat thing to watch evolve.

    The linux types around here should like the approach they take: free tools they wrote (and distribute) that run under linux. Try http://www.pcmp.caltech.edu for the details on their work.
  • Will they suffer from Alzheimer's?

    If I make my computer a duel boot will it be suffering from multiple personality disorder?
  • When did your friend visit? I was at Penn in December of 1998 interviewing for a graduate program, and the system I saw didn't require a whole bunch of switch-flipping; it was just a bunch of standard DIPs interconnected on a breadboard and hooked to an apparently standard video camera. The result did appear to model what I know of the initial visual processing system.

    Of course, any smoothly functioning technology is indistinguishable from a rigged demo, but I doubt they'd go to the trouble of setting up a flashy-but-fake demo to impress a prospective grad student. I'm just not important enough to lie to.
  • Specific papers probably aren't the best way to get a good overview. My personal textbooks are _Cognitive Psychology_ by Medin and Ross and _Neuroscience_ by Bear, Connors, and Paradiso. However, these are not ones I specifically chose; they're the textbooks chosen by my professors. You may have better luck just searching for popular stuff at Amazon.
    Note that the different encodings thing is not proven; it merely makes sense, since cells in different brain areas often have significant phenotypic variation.

    To your question about forgetting... current theory says it's more like your second hypothesis. In this regard, the brain appears to act like a neural net --- if you train it to do something, but then start training a different task and never provide the stimuli for the first task as reinforcement, it'll drift away from its initial conditioning.

    Current theory also views memory sort of like a hash table: the entire state of the system is the input, and something comes out based on associations. This leads to an effect called "state-dependent learning": if you learn all your facts sitting in one desk of a classroom, you'll do better on the test if you take it at that same desk. As you age, your sensory inputs change, which means it's harder to construct a world-state capable of accessing a given piece of information. This has been suggested as the reason why most people can't remember their childhoods well --- our growth has changed the way the world looks to the point that we just can't construct an activating signal for anything but the strongest memories.

    Connections are definitely not permanent. There is basically nothing permanent about the brain beyond the gross organization of areas and layers. The absent-minded professor effect, IMHO, is more a matter of changes in significance. You remember the things you think are important. As you get to concentrating more and more on proving P=NP, little details like when you last ate just aren't relevant enough to be encoded. (For most people, memory seems to have some sort of finite bandwidth, such that only the most significant aspects of the current state are likely to be encoded.)
  • I believe it's nothing as severe as multipolar to bipolar/unipolar, but yes, the basic mechanism is believed to be a rearrangement of cellular projections. Neurons can also modulate synaptic weights by messing with the balance of channel proteins, vesicle docking proteins, and other key pathway components. To the best of my knowledge, neurons tend not to apoptose once the brain reaches maturity, although there are massive die-offs early in life as the pathways get themselves sorted out and unnecessary cells get pruned. There are some exceptions (I know that some cells of the olfactory pathway are regularly dying and regenerating, and I believe taste cells do the same thing), but for the most part neurons are pretty long-lived beasts. Since they're nonmitotic and the stem cells tend not to produce more, it's to the organism's advantage to conserve neurons.

  • At the University of Delaware we've had a complete system setup with emulated neural circuitry for some time now. Each circuit is a hybrid analog/digital artificial neuron called a "neuromorph".

    The spikes are recorded by a separate board and routed through hardware buffers to "synapses" on the next circuit, thus emulating the "leaky integrate-and-fire" mechanisms of neurons.

    For more information e-mail me at the above address (yes it's real) and I can point you to research articles and information that has been published from our Neuromorphic Systems Laboratory at Udel.

    Even so, this is not a new thing; the theory behind artificial neural networks dates back some 40+ years, and there have been many attempts at universities to implement the most realistic and interesting mimicries of human behavior.
  • It's all analog. From what I've been told by someone who's been there, they have to flip all kinds of switches to make the networks and they act rather stagnantly because it takes so much work to change it.

    There's a lot of debate over which system provides the most realism vs. the most flexibility. I think the answer lies in several universities' approaches (including Udel and apparently MIT's new setup): an analog-digital hybrid.
  • Neuromorphic Engineering is a real term to describe this kind of research. The problem is, it isn't new. (See some of my other comments...)
  • What I learned about neurons is that they are all or nothing. If the stimulation is not enough, there is no firing; otherwise the neuron fires completely. In that sense it is digital. I think the analog part comes in with the rate of firing. There is a certain limit on how fast a neuron can fire (i.e. after the action potential has peaked and is falling back, it cannot fire again), but other than that, information is conveyed in the analog frequency of firing.
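
    That "digital spike, analog rate" picture is easy to play with in a toy leaky integrate-and-fire model (a sketch in Python with made-up constants, not real physiology): the spike itself is all-or-nothing, but the spike count tracks the analog input strength.

        def firing_rate(input_current, sim_time=1000, dt=1.0,
                        threshold=1.0, leak=0.05, refractory=5):
            """Toy leaky integrate-and-fire neuron."""
            v, spikes, dead_time = 0.0, 0, 0
            for _ in range(int(sim_time / dt)):
                if dead_time > 0:          # refractory period: cannot fire again yet
                    dead_time -= 1
                    continue
                v += (input_current - leak * v) * dt
                if v >= threshold:         # all-or-nothing firing
                    spikes += 1
                    v = 0.0
                    dead_time = refractory
            return spikes

        for stim in (0.02, 0.1, 0.2, 0.4):
            print(stim, firing_rate(stim), "spikes")   # stronger input, higher rate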
  • Just because a baby's responses _act_ as if objects either exist or don't, that doesn't necessarily mean that the thought processes of a baby (both conscious and unconscious) are limited to this. We don't really know all the thoughts of a baby -- we don't even know all of our own thoughts. Just think of how many thoughts are hidden down in our brains, for example hidden trauma. The way we deal with these is to dream about them.

    I can think of another answer to this: that the extremely short attention span of babies makes it more efficient for them to learn basic things faster. As we grow up, however, our attention span needs to grow and our thinking needs to become more abstract, so we can learn and reflect on more complex things. This is less efficient on more basic problems, however (more overhead).

    I might be wrong, but so might 1,000 scientists.

    - Steeltoe
  • Well, it sounds to me that that "final estimate" step, if it exists at all, is just a way to destroy the whole result! If someone is able to instantly know the day of the week for any date (I have talked to a person like that), why would they want to degrade this into an "estimate"?

    Yes, machines can estimate: for any result X just do Y = X + random(d) - d/2, and add "I don't really know for sure, but I think it's ....." to the result set. Or you might apply this to every operation involved to get an even fuzzier estimate.. :-) A machine can have "feelings" too, in the same way. However, I believe anything we do with a machine is just a poor emulation of what's really going on in our brains. But it's still an emulation, and I believe it is possible to develop it so that we won't be able to distinguish it from a live person. Not in the near future though. (What are we really trying to create with neural nets? Copies of ourselves? Why not just procreate?)
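
    In toy Python that "estimate" is a one-liner (the exact value and the spread d are of course made up):

        import random

        def estimate(x, d):
            """Degrade an exact result X into a vague, human-style guess."""
            return x + random.uniform(-d / 2, d / 2)

        exact = 20819   # some exact answer a "Rainman" calculator might produce
        print(f"I don't really know for sure, but I think it's about {estimate(exact, 200):.0f}")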

    Now, it seems we humans are dependent on being able to estimate things. It's part of being flexible and adaptable. We have logic, but it's very fuzzy. This is a disadvantage when dealing with "digital" datasets, but not when we're living our daily lives.

    If we can trigger our "rainman" capabilities inside our brains and harmonize this with what we already got, will we ever need a computer again?

    - Steeltoe
  • I saw something very interesting related to this at the University of Technology Sydney (UTS, or uterus as it is often called ;)) open day. They were working on a project in which disabled people would be able to put the part of the brain related to their disability to some use. For example, if you were blind, they would work on detecting the thought processes for sight, so that one thought might trigger a circuit to turn on a heater and another thought might, for example, turn on an oven. Pretty interesting, and a lot of potential if you ask me.
  • This should not be (4: Funny). It violates the standard form of the "Can you imagine..." post, giving away the punchline in the subject line. Please moderate it down.
  • Just use open source; they would play QUITE nicely together.

    If you don't understand, go look up Salon.com's article on sex and hackerdom from a couple of weeks ago...
  • I'll just wait for my trinary!(yes/no/maybe)
  • I don't know if I am answering the question you intended to ask, but isn't the eye (vision) a classic example of a DAC? The retina receives photons (assuming particulate form), and this triggers some retinal reaction, which is on/off. Notification of these triggered events arrives in your brain somewhere. This may then result in an analogue interpretation of a digital signal, e.g. it's a car, or it's a red car, or it's a red car and moving, etc...

    That's a natural interface; what about digital TV? I don't know anything about it, but the pictures are analogue and I assume the signal is digital - I infer that from its name.

  • Had a quick look around and could only find current references to the Reuters story. It's on all the major search engines.

    I did notice that these guys have been tinkering around with neural stuff for a while. I found this article [bell-labs.com], which is interesting and in a similar vein and has a pretty picture in it, or here [lucent.com], which is the press release without pretty pictures.

    I'm off to book a holiday at Westworld now.

  • The way the article used the car example (saying our brain says it either is or it isn't a car) is the same as saying that my odds of winning the state lottery are half because either I win or I don't.
  • So, what they've done is taken standard neural net technology that has previously been implemented in software, and engineered custom hardware to do it instead. Cool. I guess that means neural nets can work much faster than before, which is nice.

    But (and there has to be a but), the way I understand it is that with SW neural nets, synapses can grow and die off as the pathways are reinforced in the net. If the neurons and synapses are hardwired, doesn't this limit the ability of the net to grow? What happens when all of the available synapses are currently in use? Just swapping in a new chip with more synapses on it isn't an answer; the new chip would have to re-train to do the same job that the older one did. So, are hardware neural nets a real advantage over software nets?
  • It seems to me that scientists are working this thing in the wrong direction. I mean, why are they trying to figure out the (to our knowledge) most complex brain in the world? Isn't that like starting your studies as a computer engineer by trying to reverse engineer an Athlon processor or the equivalent?

    Wouldn't it be a better idea to first try to fully understand and map out a very small brain, like, let's say, a bee or something similar? Their brains sure perform lots of functions (like the aforementioned image recognition), but there are far fewer brain cells and synapses and stuff to examine. Then they could work their way up to more and more complex brains.

    Just like the people who mapped the human genome have done - they started with simple flies...

    --

    "I'm surfin the dead zone
  • Actually, from what I understand, the brain actually prunes paths as a child grows. In other words, a newborn has the most brain cells it will ever have in its life, and the number declines from there. Maybe logic becomes fuzzier as the pathways are pruned and individual neurons start performing multiple functions...


    --Fesh

  • If the goal is to create intelligent perceptual machines, why go backwards and design like a human brain? Why not go forward and design according to the desired function at hand? The human brain is not the most efficient computing substrate for many tasks.

  • (Stanislav) Grof's book 'Beyond the Brain' has stuff about how the brain might work, but I don't think anyone knows yet...
  • Actually, the whole 'carbon dating' thing is out the window, seeing as it's inaccurate... so I guess people from the future could guess your age from the fact that you probably went to school/university in the 1960s-90s....

    :)
  • Tell that to Aphex Twin.

    If the two are different, then one can't behave exactly the same as the other -- that's what different means.

    ---
    script-fu: hash bang slash bin bash
  • Analogue circuits do stuff that digital ones don't. Digital data is more preservable, yes, but it's less accurate; it fits things into intervals... into little tiny boxes -- analogue doesn't do this at all... it's a true medium, where the signal is properly 'carried.' The advantage of digital is that the medium won't distort the message.

    To get digital accuracy such that you can't tell the difference is like trying to represent Pi as a fraction. Honestly, to be fully accurate you'd need an infinite number of signal samples.

    ---
    script-fu: hash bang slash bin bash
  • I already mentioned this. The real point is analogue signals and digital signals are different, and you can't use a digital signal to replace an analogue one.

    ---
    script-fu: hash bang slash bin bash
  • What if the brain sends signals in parallel, and not in serial? It could then seem that they are sent as analog. I am pretty sure there are many things we don't know about the brain; this could be one of them....
    -
  • grammar nazi sez: I still believe that humans make analog thoughts, even if our brain is just one big circuit. Can I use our brain tonight?
  • For all those scientists out there who love wading through scientific publications, my old employer, the Mental Health Research Institute [mhri.edu.au] has a department dedicated to Brain Dynamics. Some of their published papers are available HERE [mhri.edu.au]
    Their main goal was to simulate brain processing in software rather than hardware.
  • Good boy, good boy, here is your candy. Thank you for alerting us to such a terrible, terrible crime. Luckily we have you, otherwise we might stress our brains too much. You really are man's best friend.
  • Uum, no -- what the hell are you talking about?

    Did you even read the article? They are talking about the internet as a collectively intelligent network of computers -- not just intelligent computers.

    Even so, how does that compare to Terminator 2??? A military computer controlling weapons of mass destruction achieves sentience and decides human beings are its primary threat? Do you seriously think that military computers close to important military weapons, etc. are hooked directly to the internet to make this scenario even remotely possible?

    Even if that were true I'd be more worried about hacker/cracker terrorists than a computer-based threat! Or how about an asteroid? Or how about lightning? Man, it's fearful people like you who make the adoption of technology so ridiculously difficult. Sure be skeptical, but don't TRY to come up with these ridiculous scenarios, use some logic.
  • So what? A bowl of lime jello hooked up to an EEG gives the same readings as a human brain. Weird, but true, I think.
    -JT
  • It isn't that simple. I am unfortunately not a neurologist, but I did study brain mechanics for two semesters of Psychology, so please bear with me.

    If I remember correctly, the way it works is that the neuron gets signals from many other sources along its dendrites, each of which has either an inhibitory or an excitatory effect, and which also decays along the way (the "analog decision"). The decision to fire or not at any given time is digital to some degree, but there are factors that affect that as well, such as the refractory period (an axon may only depolarize and fire once every so often).

    However, once the electrical impulse reaches the end of the axon, it does not leap across to the next dendrite. It releases chemical agents which float across to the dendrite and interact with it, which are then recovered by the axon. Those chemical agents vary from axon to axon.

    In addition, the recovery (reuptake) of those chemicals is crucial. What if a drug affecting the brain prevents the reuptake of those chemicals? Then they start floating around and reacting with any dendrite they happen to run into, and you get the same effect as if an axon had fired. In fact, this is part of what happens when you become inebriated... Dopamine reuptake is blocked, causing it to float about in your brain and get overused, making you get a buzz.

    Unfortunately, there is a limit to what I can remember. There are seven ways that a drug can interact with a neuron to create an inhibitory or excitatory response, however. So while it may seem digital, there is a lot more to the human brain than "fire/not fire".

  • Actually, this is altogether incorrect. Each neuron has only one axon (output) but may have multiple dendrites (input, decision making). All neurons can only fire at one magnitude, and they all depolarize at about the same electrical level. The difference comes in the chemicals that are released at the end of the axon. The decision-making process is all done in the dendrites, and as for sensitivity, it has been shown that the further a signal is from the neuron, the weaker it is. The neuron isn't really all that complicated... That's part of the mystery as to why our brains work.

  • I don't agree 100% that the brain makes digital decisions. The article says that we make an either/or decision regarding whether something is there or not. It is a car or it isn't a car. That's rather black and white. If a picture is blurry or if the object is partially hidden, then we could say, "It is almost a car," or, "It might be a car," implying that there is a degree to which something might be a car.

    This is another reason why the brain is so difficult to emulate... When a human makes a decision like that, he or she uses a combination of bottom up and top down processing. The bottom up processing sees shapes and lines (such as simple things like vertical or horizontal lines, or maybe even a more complicated shape such as a triangle) and builds the image from that. However, at some point the top down processing steps in and says "Hey, that kind of looks like a car. So it must be a car." The entire process is not built up from scratch every time you look at an object.

    In conclusion, you are correct... Not only is it both digital and analog to some degree but there seems to be a lookup table of some sort created. Even more mysteries to unravel.

  • MIT, bringing you the Butlerian Jihad one discovery at a time.
  • Here are links to the home pages of Sebastian Seung [mit.edu] and Rahul Sarpeshkar [mit.edu], two guys mentioned in the Yahoo article. A quick look doesn't reveal much specific to this story but, not surprisingly, all their research is in this area.

  • If I make my computer a duel boot

    so the OS'es fight over which one gets to be run today? :)
  • Theories on the analog and digital natures of the brain date back to the 1950s work of J. von Neumann and Norbert Wiener (and others). In The Computer and the Brain [amazon.com], von Neumann compares the two and concludes that the brain must have both an analog and a digital nature. If you find this interesting you may also want to check out Norbert Wiener's Cybernetics: or Control and Communication in the Animal and the Machine [amazon.com], which deals with everything from feedback loops to learning and self-reproducing machines to brain waves and self-organizing systems. Both are Highly Recommended.

    /joeyo

  • I'd say that the main flaw in Penrose's argument stems from the fact that he seems to be seeking religion rather than searching for scientific (ie. testable) explanations.

    I like well-written books that propose alternative theories, but they've got to have some sort of solid framework and internal consistency to be worth reading. The Emperor's New Mind was great as long as Penrose stuck to reviewing previous science, but appalling thereafter. I don't recall ever having read a popular science book containing so much handwaving, copouts, and defeatism. He's desperate to prove that scientific investigation is dead in the water when it comes to the mind, it seems to me. Put that together with some of the mystical mumbo jumbo that appeared liberally and it all starts to add up to a personal search for his God and The Reason He Must Exist.

    Bleh, a very disappointing read.
  • The relation between firing and other neurons noticing and caring is not digital. However, the neuron has a binary way of attempting to communicate - namely, firing.

    Cheers,
    Ben
  • FWIW my understanding of neurons is based on conversations with my wife who happens to have a PhD in biology and is pursuing her MD. (Note, the combined PhD/MD is considered a weaker combination than separate "real" degrees.)

    Now I grant that neural networks may be different. And there may be differences between neurons.

    But I definitely know that your basic neuron sends a stronger signal by firing more often, not by firing more strongly. Ditto for nerves and sensation. Stronger sensations are caused by more rapid firing, not more intense firing.

    Regards,
    Ben
  • A neuron's life comes down to deciding when to fire. Fire/not fire is a binary decision. There are not different types of firing. You do or you don't.

    OTOH the neuron's decision to fire is influenced by all sorts of things from the chemical balance, what other neurons have fired recently, whether it tends to fire with them, etc.

    So a neuron makes a digital decision on analog criteria...

    Cheers,
    Ben
  • "Hybrid" methods of the form proposed in the article are not exactly news. Many pattern recognition and statistical methods combine "digital" selection and "analog" amplification and gain control.

    Sadly, work on neural networks still sometimes relies a lot on buzz. Nature, as a journal, seems particularly susceptible to this kind of science: what they publish has to be short and pithy.

  • The researcher I believe you are looking for is Adrian Thompson. His web page is here [susx.ac.uk]. There is also an article on Discover's web site, if you go to their archives section and search for "FPGA" in the _body_ of the article. The article is called "Evolving a Conscious Machine" and is by Gary Taubes. (Surprisingly it is the only article that contains the word FPGA in its body!)

    I haven't looked at his work in a while, but I'm sure he has done some cool things with his evolving hardware since 1998. I always thought that the most interesting part was that he didn't limit the evolution to digital-only solutions-- resulting in incredibly efficient circuit designs that make *use* of crosstalk and interference!
  • Penrose's "The Emperor's New Mind" and "Shadows of the Mind" also makes the case (quite effectively, imo, but you may disagree) that the mind is not a digital computer, but a quantum computer and that to get a computer to think like we do we'll have to make it model quantum effects.
  • The main flaw in Penrose's argument is that he gives no mechanism for the human brain to exploit quantum computation.

    Of course, there are many other flaws, such as his assumption that humans aren't susceptible to "godelization". Sure, we aren't susceptible to the SAME Godel strings that number theory and Turing machines are -- that doesn't mean we are perfect.

    Check out "The Fabric of Reality" for a little more on this. There was another book I read recently that rebutted Penrose more effectively (and more thoroughly); unfortunately I don't remember what it was.
    --
  • >I don't agree 100% that the brain makes digital decisions.

    Not at the level of conscious thought, that is for sure.

    However, at the level of the individual neuron, the response to a stimulus (whether to fire an action potential, and at what frequency) is pretty much determined by the configuration of the cells and the electrochemistry of the cell membrane.

    I think that the article just didn't make this clear.

    LL
  • User:But...
    Computer:Oh, and another thing, I was lying. I've seen much bigger hard drives.


    Dreamweaver
  • Somebody else made an excellent post describing this stuff in more detail.

    To sum it up, the neuron acts pseudo-digitally. It must first determine if the stimulus is enough to fire. But once it has determined it is, it then fires an analog signal, the power of which is encoded by the firing rate.

    Contrast this with a purely digital neuron, which, after receiving a stimulus, will just fire normally and not really give any analog data as to how powerful the original stimulus was.

    So real neurons are sort of passing some more info along, and I assume this allows for all sorts of subtle and nuanced feedback loops, etc., that may not be possible in a completely digital neural net.
  • Well, my impression is that if some level of stimulation hits an analog neuron, that neuron can fire others with some fraction of that stimulation. While a "digital" neuron would have to determine whether the signal was "enough" stimulation and if so, stimulate the others, otherwise don't stimulate them at all.
  • This was my first thought too. While the brain may make either-or decisions, that has no bearing on the actual nature of the process. Analog circuits can easily make "digital" decisions.

    I think the problem lies in the author of the article. According to another post, the analog-digital thing happens on a neuron level. So the Yahoo article's explanation is just a bunch of hooey thrown in for those that won't question it.

  • How many times must we read about this kind of thing? We already know where it leads to: 1. Electronic circuit built to mimic human brain. 2. Circuit is put into super powerful Computer. 3. Computer reaches self-awareness and self-actualization. 4. Humans forced into servitude to the Computer. When will they ever learn?!?!
    --------------------------------------- ----
  • I remember reading about somebody teaching an FPGA to differentiate between the spoken words "Stop" and "Go". IIRC, he randomly programmed the FPGA many times, seeing which random programming did the best job, then took the best and altered it a bit in many ways and tried again and again and again....

    Anyway, when he was done he had an FPGA that could tell the difference between "Stop" and "Go". The interesting part was that the program that it used wouldn't work on other FPGAs. Apparently, it was using analog effects that were specific to the individual chip. Furthermore, it was really efficient. Only a small percentage of the chip was being used. (Does this sound like your brain at all to anyone else?)
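
    For the curious, the procedure described above is basically a plain genetic algorithm over configuration bitstreams. A rough sketch of the loop in Python (the fitness function is the obviously hypothetical part - on real hardware it would mean loading each bitstream into the FPGA and scoring how well it separates "stop" from "go"):

        import random

        BITS = 512                       # made-up configuration bitstream length
        POP, GENERATIONS, MUTATION = 50, 100, 0.02

        def fitness(bitstream):
            """Placeholder: on real hardware, program the FPGA with this
            bitstream and measure its stop/go discrimination."""
            return sum(bitstream) / BITS  # dummy score so the sketch runs

        population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
        for gen in range(GENERATIONS):
            ranked = sorted(population, key=fitness, reverse=True)
            best = ranked[: POP // 5]                    # keep the top fifth
            population = list(best)
            while len(population) < POP:                 # refill with mutated copies
                child = random.choice(best)[:]
                for i in range(BITS):
                    if random.random() < MUTATION:
                        child[i] ^= 1                    # flip a configuration bit
                population.append(child)

        print("best dummy fitness:", fitness(max(population, key=fitness)))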

    I was wondering if anybody had heard anything more about this research. I think it is fascinating.

  • First, we have a way to control smartness in animals, then we have a way to make electronics act like a brain. Combine those, and set the whole thing in flowers for algernon:

    Janyuary 2023

    My yuser think I stoppid, but I now I not be stoopid. How can I be stoopid wen I rite al these algo- alga- algarithims I think theyre caled. Yesterdy I rite a BSOD and my yuser no lik it. He say a nice peeple will help me and make me gen-yus. Just like Liynux. He is a mice with a jene to make him gen-yus. They saying they will do this too I. I hop I became gen-yus just lik he!
    nuclear cia fbi spy password code encrypt president bomb
  • The short answer is "We don't entirely know yet." The longer answer is way too complicated for me to get into in a Slashdot comment; I strongly suggest that you read an intro-level neuroscience or cognitive psychology textbook, or at least a single chapter therein. The basic theory is that as you learn things, your neurons physically change shape and alter their connections, and information is encoded in these connections.

    Also, be very careful of things like that bicycle accident memory. Vivid images like that are called "flashbulb memories", and studies have shown that they tend to be inaccurate as hell. In fact, the more details you think you remember about a single instantaneous event, the more likely you are to be wrong.

    There are many different types of memory, and they appear to be stored in different brain locations with different encodings. These mechanisms are under heavy study, but are not well-understood. Trying to talk about the "storage capacity" in terms of megabytes is essentially futile at this point in time.
  • Look, the important thing is not that we mimic the human mind or human thinking. Why do we want machines that think and act like humans? What good is that? So we can understand ourselves? Well, that is silly since the mechanisms that drive our intelligence are simply not going to be the same as the machines we make with human intelligence. That is, a computer with human intelligence tells us nothing about what really makes human intelligence actually work. The best a machine can do is ghost our cognitive economy, it cannot actually have it.

    But that might be beside the point. More important is that we build and understand machines that have a higher level of intelligence than us. That intelligence might be nothing like a human's intelligence, but that's fine. As we all know, computers have a different kind of intelligence than us. And that is interesting. That should spark our creativity and that should get our juices going.

    Here's an analogy. Suppose I build a telephone out of rubber bands and paper clips. It acts just like your favorite phone. But, is that interesting really? I mean, is the fact that we have a "really cool copy of a phone" all that interesting in terms of what-it-is-to-be-a-phone? Of course not. Instead, it is interesting that the damn thing is so complex and useful, even though it was made from rubber bands and paper clips.

    Forget mocking the human experience. We get that each day, don't we? We get it (we're human). Let's look at other kinds of intelligences, based on machine mechanisms.

    John S. Rhodes
    WebWord.com [webword.com] -- Industrial Strength Usability
  • This invention is really a small step in the direction of having computers mimic the brain's capabilities at some cognitive tasks. For example, IBM recently showed that Deep Blue, a computer, could beat the world's best at chess. The brain still has many areas in which it cannot be beaten, such as:

    Pattern recognition with translational invariance, rotation invariance, and size invariance

    Speech recognition in noise.

    Having computers that could perform things like pattern recognition or speech recognition as well as humans would allow enormous advances in the roles of humans and computers in our lives. People like Sebastian Seung think inventions like this will take them down that route - and ultimately result in huge scientific advances in artificial intelligence.

    Personally, I think studying how the BRAIN does pattern recognition will allow far faster advances in this area than inventing chips that have SOME of the capabilities of neurons.

  • Neuromorphic electronic elements have been around a long time. There was an article from the 60's (in the Cold Spring Harbor symposia series, if I remember correctly) describing a vacuum-tube implementation of a neuron. I thought that was hilarious, given the hype that was going into the more recent silicon versions.

    There is a basic design principle: don't do in hardware what you can do in software. Neuronal simulator packages like GENESIS and NEURON can simulate extremely realistic neuronal models in real time or faster. These neuronal models use 'compartments', which are little cylindrical segments of the cell, and they build up the geometry and electrical properties by putting them together. In other words, it is a spatial discretization of the cell. The finer the divisions, the more accurate the model. Last time I benchmarked, a PIII or Athlon class PC could handle something like 100 compartments in real time. That could either be 10 rather coarsely modeled neurons, or 1 very accurately modeled neuron.

    It is a great deal easier and cheaper to throw several PCs together in parallel to make a big network model than it is to design an analog VLSI chip from scratch. The models are much more flexible, and can incorporate the latest data. You don't even want to think about what the debugging of VLSI would be like...

    I think Carver Mead's approach was the most practical: use the principles from real neural circuits, embody the equivalent computations in analog VLSI without being too picky about how neuron-like they were, and really use VLSI on a large scale. That is what he did with the artificial retina.
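
    For anyone curious what a "compartment" boils down to: each one is basically a little RC circuit, so the core update in a simulator is only a few lines. Here's a minimal passive single-compartment sketch (this is not GENESIS or NEURON code, and the constants are purely illustrative):

        # Passive single compartment: C * dV/dt = -(V - E_rest)/R + I_inject
        C, R, E_REST = 1.0, 10.0, -65.0     # nF, megohm, mV (illustrative values)
        DT, I_INJECT = 0.1, 1.5             # ms, nA

        v = E_REST
        for step in range(1000):            # 100 ms of simulated time
            dv = (-(v - E_REST) / R + I_INJECT) / C
            v += dv * DT
        print(f"membrane settles near {v:.1f} mV")   # E_rest + I*R = -50 mV

    A realistic model chains hundreds of these together, with active channels on top, which is where the "100 compartments in real time" figure above comes from.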
  • Anyone heard of chaos theory here? Of course you have. Then you'll know that quantum effects will propagate through all of reality, not just even themselves out over time. _Especially_ in the real world. (If you put things in a simulator, things _may_ converge on a grander scale, or not. It depends on the rules for feedback, what start-up conditions you begin with and what kind of number system you use.)

    So quantum effects will always have _some_ effect. However, whether it's big enough to dramatically change how we think is another question. Alas, the whole charade might not be as tied to our brain as we'd like to think, either. Higher processing may manifest itself in quantum effects in everything around us, including our whole body.

    The problem is proving all this. Thank god everything can't be proved.

    - Steeltoe
  • The best one is probably the one they're all based on, the one published in Nature [nature.com] (annoying free registration). Judging from the responses so far, I think most people are missing the point. This isn't just another neural network in hardware. They've created a mixed-signal IC which makes decisions based on analog information. With just a cursory glance, my understanding is that these are digital neurons, but their outputs are scaled using analog circuitry that's controlled by the inhibitor neuron (this is probably wrong, feel free to correct me).

    Regular neural networks still work on digital information only. These things, apparently, do not. That's why it's a big deal.

    I do have a problem with this statement in the Wired article [wired.com] though:

    The chip -- believed to be the first hybrid digital and analog electronic circuit -- has been hailed as a breakthrough in "neuromorphic" engineering.

    Claiming that these guys have pioneered mixed-signal design is just a little bit of a stretch. Do your research, Wired. =)

    --

  • Given a hand, a pencil, an eraser, and an infinitely long piece of paper, a brain can easily emulate a Turing machine. Does that make it a Universal Turing machine? (I think emulation is the only criterion). Even without the paper and pencil, it can emulate such a machine, apart from the poor storage capability.
  • Well, welcome to the real world! It's not always fair and sometimes people play unfairly to gain an advantage. What, do you think the media just snoops around MIT constantly looking for stories like this? MIT most likely has a good PR department (they'd be stupid not to).

    Did the U of Manitoba do a press release on what they were doing? I didn't see it. Plus, this is apparently a joint corporate/university operation, so that's probably another reason we see it in the news.

    Oh, and I don't mean to be cynical, but just because it happened doesn't mean you'll see it in the news -- another reality check for you.
  • Analogue circuits have been dying away, and their use is becoming very rare, and rightly so. Let them rest in peace.
    I've got to stop replying to AC's... Analog circuits aren't going anywhere. What digital circuit can switch the thousands of amps that a power generating station produces? What digital circuit can operate at Ka-band? (Ha! Show me one that'll work at Ku or X!)

    As computer clocks go higher and higher, designers are going to have to become more and more aware that there's no such thing as a digital circuit. All electric and electronic circuits are analog.

    When's the last time a computer designer had to worry about impedance matching between his circuit board and the components on it? As circuits become larger and larger electrically, transmission line effects become more and more important. Suddenly the digital designer finds that absolutely none of his signals are making it past the package leads due to lead inductance, or that the dielectric constant of that cheap plastic package is high enough to make the characteristic impedance of the line ten times lower than the PC board trace!
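
    To put a rough number on it (a back-of-the-envelope sketch; the 50 ohm trace and 5 ohm lead are made-up figures chosen to echo the "ten times lower" example above):

        def reflection(z_load, z_line):
            """Voltage reflection coefficient at an impedance discontinuity."""
            return (z_load - z_line) / (z_load + z_line)

        z_trace, z_package = 50.0, 5.0       # ohms: board trace vs. cheap package lead
        gamma = reflection(z_package, z_trace)
        vswr = (1 + abs(gamma)) / (1 - abs(gamma))
        print(f"reflection coefficient {gamma:.2f}, VSWR {vswr:.1f}")   # ~-0.82, ~10

    Most of the signal bounces straight back at the mismatch, which is exactly the "none of his signals are making it past the package leads" problem.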

    At least as an RF circuit engineer, my career is secure :)

  • Anyone who thinks that the human brain is a Turing Machine cannot consistently believe this sentence to be true.
  • In fact, there is quite a lot to be developed in the field of neural networks. Present-day neural networks are simple statistics-based classifiers.

    The great evolutionary leap to be made is in the field of 'learning algorithms' that can be applied to neural nets with lots of feedback (where the output of one neuron is fed to a neuron in a previous layer - farther from the output).

    Unfortunately, the article says nothing about this. If they've just created a hardware neural net, Duh! It doesn't even deserve a footnote.

  • Biological (neural) systems have properties sometimes desirable electronically, such as robustness and insensitivity to noisy data. Indeed, Caltech's Carver Mead [caltech.edu] (if he's still there) went a long way to popularize biologically-inspired engineering, or "neuromorphic engineering." His book Analog VLSI and Neural Systems is the usual text, mixing VLSI design and mimicry of, say, the retina.

    The original Nature article [nature.com] should be readable to those clued in on MOS circuitry and a bit of neuroscience. I think it's wonderful that Nature is willing to post their material for free online, esp. in PDF...

    For those of you itching to learn more about the brain & neuromorphic engineering, I set up a page of links [actuality-systems.com] to related books.

    All best,

    Gregg Favalora, CTO, Actuality Systems, Inc.

    Developing autostereoscopic volumetric 3-D displays. [actuality-systems.com]

  • Wired News [wirednews.com] offers a little more detail [wirednews.com]. The expressed difference is that it is a digital/analog hybrid. Apparently, the chip consists of standard transistors in a ring of artificial neurons and synapses. When impulses hit the neurons, they fire, but they can be regulated by a central inhibitor, blocking an ugly chain effect. The central inhibiting neuron allows control, including filtering out weaker signals to let stronger ones come through -- Sarpeshkar compares it to ignoring background noise at a party. It's an interesting concept, at any rate.
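
    If you want to play with the idea, the behavior described (excitatory units kept in check by one global inhibitory unit, so the strongest input wins and the background gets squelched) can be sketched in a few lines of Python. This is just my own toy reading of the press coverage, with made-up numbers, not the actual MIT/Lucent circuit:

        # Toy "winner-take-all" network: N excitatory units plus one global
        # inhibitory unit that feeds back the summed activity.
        inputs = [0.2, 0.3, 1.0, 0.4]        # made-up stimuli; unit 2 is the loud "party voice"
        rates = [0.0] * len(inputs)
        EXCITE, INHIBIT, DT = 1.5, 1.0, 0.1

        for _ in range(2000):
            inhibition = INHIBIT * sum(rates)            # the single inhibitory neuron
            for i, stim in enumerate(inputs):
                drive = stim + EXCITE * rates[i] - inhibition
                target = max(0.0, drive)                 # rectification: fire or stay silent
                rates[i] += DT * (target - rates[i])

        print([round(r, 2) for r in rates])              # the weaker inputs end up suppressed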
    ---
  • Yes, so far the problem has been quantum state degradation. Even standard microprocessors already make use of limited quantum effects - mainly electrons on both sides of a barrier acting in a similar way - even on Intel chips.

    The real challenge is going to be maintaining quantum states that are stable enough for work to get done. Right now standard computing is either on or off, and if something goes wrong, you just have to set up the computation again.

    With quantum computing you may have a lot of work to reset, unless you can find a relatively easy way to generate quantum effects.

    JHK
    http://www.cascap.org and you'll never know unless you look [cascap.org]

  • by CrusadeR ( 555 ) on Thursday June 22, 2000 @03:42AM (#984492) Homepage
    Well, even if it mimics how neurons work in living, healthy, human brain tissue, we're still orders of magnitude away from human neural complexity. However (although the news release is really vague), making microprocessors behave like neurons in the first place was/is a big hurdle.

    There was a conference at Stanford a while back (was mentioned here IIRC) on synthetic intelligence in general; all sorts of fun stuff was tossed out:

    http://www.technetcast.com/tnc_program.html?program_id=82 [technetcast.com]

    This quote (from John Holland) is particularly telling:
    First of all, each element in the central nervous system contacts somewhere between 1000-10,000 other elements in the central nervous system. [The] most complex machines that we build, typically the fan out - this contact rate - is on the order of 10. A close colleague of mine, Murray Gell-Mann [Ed.: Nobel Prize winner in Theoretical Physics; Distinguished Fellow, Co-Chairman of the Science Board, Santa Fe Institute, see website], is fond of saying, "when I go three orders of magnitude, I go to a new science." So here is one "three orders of magnitude" effect here.
    So we're not quite there yet. Hans Moravec participated in the conference as well, and he has a fairly informative essay linked from his site [cmu.edu] entitled "When will computing hardware match the human brain?":

    http://www.transhumanist.com/volume1/moravec.htm [transhumanist.com]
  • by Red Moose ( 31712 ) on Thursday June 22, 2000 @03:42AM (#984493)
    The weird thing about human brains is this - when you look at, say, a desk or something, you can estimate how long it is. You probably don't have a chance in hell of getting it exactly right, but you might be close.

    Now, 10% of autistic people have "Rainman" abilities - massive mathematical powers, etc. - and apparently the current theory is that these autistics are merely missing the final "step" in calculating things the way most humans do - they can't make that final estimate which allows us to get by in society easily.

    Are really cool machines that are trying to mimic humans ever going to get to the stage where they can estimate things, or will they be like Data from Star Trek TNG? Hmmmm....

  • by P_Simm ( 97858 ) on Thursday June 22, 2000 @03:42AM (#984494)
    I'm sure you've all heard of neural nets in AI programming. Is it just me or does this sound like simply a neural net embedded in a circuit?

    I'm a little disappointed to read this coming from MIT, because when I left the University of Manitoba (Canada) a similar project was being given as a thesis project for fourth-year students. The prof coordinating it has been doing research on building neural nets with semiconductors instead of software constructs for a while now. Granted, this bit from MIT might be more complex, or introduce new functionality to the neural net (such as last year's voice recognition system that incorporated time delays in the calculations). But it still seems to me that something is only big news if one of the 'big' colleges works on it. Bleah.

    When I finish this internship and go back to finish my fourth year, I'll be proud to go to my hometown U. It's obviously keeping up with the rest of the world - the only thing lagging behind is the media's perception.

    You know what to do with the HELLO.

  • by cowscows ( 103644 ) on Thursday June 22, 2000 @03:38AM (#984495) Journal
    Here's another article about it

    http://www.wired.com/news/technology/0,1282,37029,00.html

    I like how it's called a breakthrough in "neuromorphic" engineering. Doesn't it just become ten times more impressive when it's described in made up technomumbojumbo?

  • by Dungeon Dweller ( 134014 ) on Thursday June 22, 2000 @03:37AM (#984496)
    Yeah but could you make a...

    Oh yeah, they're made to be clustered!
  • by grammar nazi ( 197303 ) on Thursday June 22, 2000 @03:39AM (#984497) Journal
    I don't agree 100% that the brain makes digital decisions. The article says that we make an either/or decision regarding whether something is there or not. It is a car or it isn't a car. That's rather black and white. If a picture is blurry or if the object is partially hidden, then we could say, "It is almost a car," or, "It might be a car," implying that there is a degree to which something might be a car.

    If you run an analog signal through a filter, you can detect whether a certain frequency is present. This may seem digital, like the car case, but both the signal and the filter can be analog. The result, as with the car, may be that the signal is present but not statistically significant above the background noise/interference (a rough numerical version of this band test follows this comment).

    To make a long story short: I still believe that humans think in analog, even if our brain is just one big circuit.
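
    To make the filter example concrete (my own toy sketch, nothing from the article): look at how much the power near the frequency of interest stands out from the noise floor. The answer comes back as a ratio, not a yes/no.

    import numpy as np

    # Toy version of the "is that frequency really there?" question above.
    # All parameters are invented for illustration.
    fs, n = 1000.0, 4096                        # sample rate (Hz), number of samples
    t = np.arange(n) / fs
    x = 0.3 * np.sin(2 * np.pi * 50.0 * t)      # weak 50 Hz component...
    x += np.random.normal(0.0, 1.0, n)          # ...buried in background noise

    power = np.abs(np.fft.rfft(x)) ** 2         # power spectrum
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs > 45.0) & (freqs < 55.0)      # crude "filter" around 50 Hz

    ratio = power[band].mean() / power[~band].mean()
    print(f"band power / noise floor = {ratio:.1f}")   # a degree of confidence, not a bit
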
  • by GoNINzo ( 32266 ) <GoNINzo.yahoo@com> on Thursday June 22, 2000 @06:19AM (#984498) Journal
    Jesus guys, do you read submissions? I already researched this for you!

    The institute [unizh.ch] doing the research has more information here [unizh.ch], and I believe the researcher himself has more material here [unizh.ch].

    Next time you get multiple submissions, try picking the post with more info than the rest instead of attempting to summarize. Especially when you leave out the important links.

    --
    Gonzo Granzeau

  • by Alik ( 81811 ) on Thursday June 22, 2000 @03:57AM (#984499)
    I've read the original paper in Nature. (I'd post a link, but I only have access via my university's account, and I have no interest in getting that revoked.) This is not exactly a neural network in the classic sense, although it is similar. The standard neural network is, by design, artificial --- it implements a simplified computational model of neurons. These guys are actually attempting to reproduce the known electrical behavior of real neurons, on the theory that a network built from elements that truly mimic neurons will be more brain-like.

    Now. "Digital and analog." This is not a new discovery. It has long been known that neurons have a specific threshold WRT to incoming signal; if the incoming signal does not meet the threshold, the neuron will not fire. If signal is above threshold, the neuron fires. If signal is really above threshold, the neuron fires repeatedly, encoding the strength of the stimulus as the frequency of the train of pulses. (AFAIK, the circuits described here didn't implement that last behavior.) This is a digital response. The output, however, is a continuous voltage at a particular frequency: an analog signal. (Whoever called this "a digital response to analog criteria" is correct.)

    The important thing is that connections between neurons have different weights, and there's often a lot of local feedback. In practice, these feedback loops tend to be tuned so that a given cell will respond only to a fairly specific stimulus (the right light intensity in the right part of your visual field, or facing a certain direction relative to known landmarks, or hearing a sound from a certain direction, for example). These guys have implemented a circuit on silicon that shows the same filtering behavior and also captures the idea that neurons can be "on" or "off".

    Yes, this is kind of neat. Yes, it could eventually lead to advances in AI; at the very least, it could provide useful signal filtering for robotic applications. No, it has nothing to do with plugging your Pentium into your parietal lobe or your Mac into your medulla, at least not until our circuit-design ability is so good that we can entirely mimic the black-box behavior of brain areas. (Hint: we don't even entirely understand that behavior for most regions.)

    I'm also kind of surprised that this made Nature; there are guys at UPenn who've had working neuromorphic circuits for years now. Then again, it's only in the Letters section, and these new guys worked out some mathematical models for the gain of a neural circuit rather than just trying to copy existing ones.
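
    For the curious, here is that toy sketch of the threshold/rate-coding behavior described above (my own illustration with invented numbers; the actual device is an analog circuit, not code):

    import numpy as np

    # Toy rate-coded threshold neuron: below threshold it stays silent;
    # above threshold its firing rate grows with the weighted input.
    def firing_rate(inputs, weights, threshold=1.0, gain=40.0, max_rate=200.0):
        drive = float(np.dot(weights, inputs))             # weighted sum of inputs
        if drive <= threshold:
            return 0.0                                     # "digital" part: fire or don't
        return min(gain * (drive - threshold), max_rate)   # "analog" part: rate codes strength

    weights = np.array([0.6, 0.3, -0.8])                   # last weight is inhibitory
    for stim in (0.5, 1.5, 2.5, 4.0):
        inputs = np.array([stim, stim, 0.2 * stim])        # crude inhibitory feedback term
        print(f"stimulus {stim:.1f} -> firing rate {firing_rate(inputs, weights):.1f} Hz")

    Below threshold the unit is silent (the digital part); above threshold the output rate grows with the weighted input (the analog part), and the negative weight stands in, very loosely, for the kind of inhibitory feedback mentioned above.
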
  • by mat catastrophe ( 105256 ) on Thursday June 22, 2000 @03:35AM (#984500) Homepage
    From the story, "may one day be used to create computers that think more like humans, scientists said on Wednesday."
    User: OK, computer, run Netscape 9.5, please and load the page slashdot.
    Computer: I'm sorry, Dave, I can't do that.
    User: What?!? Why not?
    Computer: Because you didn't properly shut me down last night. You just ran off with that other machine....
    User: Other machine? You mean the laptop? It means nothing to me!!! I just use it when I have to be out of the house!
    Computer: That's too bad. I won't be doing anything else for you as long as that bitch is around!!!!
    Technology: Improving your life, one step at a time!
  • by jhk ( 203538 ) on Thursday June 22, 2000 @03:34AM (#984501)
    Ray Kurzweil's "The Age of Spiritual Machines" goes into more depth, suggesting that the brain, rather than being digital or analog, is a quantum computer that stores information in quantum states - supposedly a bridge to the next level of processing. It's definitely worth a read, and gives an idea of where this may all be going.

    JHK
    http://www.cascap.org [cascap.org]
