Scientists to Build 'Brain Box'

lee1 writes "Researchers at the University of Manchester are constructing a 'brain box' using large numbers of microprocessors to model the way networks of neurons interact. They hope to learn how to engineer fail-safe electronics. Professor Steve Furber, of the university's school of computer science, hopes that biology will teach them how to build computer systems. He said: 'Our brains keep working despite frequent failures of their component neurons, and this "fault-tolerant" characteristic is of great interest to engineers who wish to make computers more reliable. [...] Our aim is to use the computer to understand better how the brain works [...] and to see if biology can help us see how to build computer systems that continue functioning despite component failures.'"
  • My Brainbox (Score:2, Interesting)

    by Doc Ruby ( 173196 ) on Tuesday July 18, 2006 @09:56PM (#15740903) Homepage Journal
    Large number of microprocessors? Why not a box stuffed with hundreds of millions of FPGA gates, configured into lots of multiply-accumulators (or with lots of embedded hardwired DSPs), interconnected across and between layers? That is how the brain actually works. Hook it up to cameras, mics and some rubber/piezo tentacles with pressure/heat sensors, leave it in the lab for a few months, and start asking it questions.
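    A minimal software sketch of the arrangement described above: each neuron reduces to a multiply-accumulate over its inputs (the job an FPGA DSP slice or MAC unit would do), and a layer is many of these running in parallel, with layers chained together. All names, weights, and sizes below are invented for illustration.

```python
def mac_neuron(weights, inputs, bias=0.0):
    """One neuron as a multiply-accumulate: the work a hardwired DSP slice would do."""
    acc = bias
    for w, x in zip(weights, inputs):
        acc += w * x              # one multiply-accumulate per synapse
    return max(acc, 0.0)          # simple nonlinearity so layers can be stacked

def layer(weight_matrix, inputs):
    """A layer: many MAC units over the same inputs, conceptually all in parallel."""
    return [mac_neuron(row, inputs) for row in weight_matrix]

# Two tiny layers wired "across and between layers"; the weights are placeholders.
hidden = layer([[0.5, -0.2], [0.1, 0.7], [-0.3, 0.4]], [1.0, 0.25])
output = layer([[0.6, -0.1, 0.2]], hidden)
print(output)
```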
  • by rts008 ( 812749 ) on Tuesday July 18, 2006 @10:05PM (#15740930) Journal
    To actually model the human brain, I would think the number of CPUs needed would require a really large interconnect bus, and giving each CPU memory comparable to the human brain's capacity is a little ahead of our current technology... otherwise AI solutions that actually worked would not be such a big problem, and would already have been solved and put to use.
    We have made big advances in this area, but even a crude prototype of Lt. Data (Star Trek: The Next Generation) is still quite a ways off.

    However, I expect that we will eventually solve this problem. I just hope we do so in my lifetime; that would be way cool! (Work fast, I'm 49!)
  • by lindseyp ( 988332 ) on Tuesday July 18, 2006 @10:07PM (#15740936)
    Not only that, but it's a hugely inefficient abstraction of the 'idea' from the level of the individual neuron. We're good at pattern recognition and conditioned response, but when it comes to doing calculations we're incredibly slow. Not to mention inaccurate. Would you like your computer to regularly 'make mistakes'?
  • Re:Hardware? (Score:3, Interesting)

    by SnowZero ( 92219 ) on Tuesday July 18, 2006 @10:11PM (#15740952)
    True, but the research grant requests can be much larger when you say you are going to do it in hardware :)

    More realistically, perhaps they have already simulated some stuff and now want to scale it up drastically in size and speed. There isn't really enough detail in the article to tell how custom this is going to be. It could be anything from a Sun Niagara or a Connection Machine up to some custom designed parallel FPGA monster.
  • by Sean0michael ( 923458 ) on Tuesday July 18, 2006 @10:44PM (#15741057)
    "Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. ('What else could it be?') I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer." -John R Searls.

    After reading this quote, I have doubts this simulation will succeed in accurately simulating the brain. However, I'm sure it will further our concepts on other important topics, so I'm not opposed to it. Best of Luck!

  • by QuantumFTL ( 197300 ) * on Tuesday July 18, 2006 @10:47PM (#15741065)
    > I wonder if they have any intention of getting these brain boxes drunk and then having them recite the ABCs?

    That's quite a funny post, but it brings me to an (IMHO) interesting point - given a virtual "brain" capable of performing a certain task, can specifically targeted "damage" to the system result in creativity? Many of the most creative minds in our history got their inspiration in part from mind-altering chemicals...
  • by Marcos Eliziario ( 969923 ) on Tuesday July 18, 2006 @10:48PM (#15741071) Homepage Journal
    The thing is that we are very resilient. Kill one transistor in a microprocessor and you're done. Compare that with people who have lost some brain matter in accidents and are still able to breathe, walk, and speak, and sometimes even manage to rewire their brains to regain some lost functions. So, I don't agree when you say that human brains don't work very well under stress.
  • Re:Reliability... (Score:2, Interesting)

    by cmaxwell ( 868018 ) on Tuesday July 18, 2006 @11:18PM (#15741155)
    Amazing to think that the human brain is somehow a benchmark for reliability. "Our brains keep working despite frequent failures of their component neurons" - right, sometimes. As a neurology resident, I spend most of my time witnessing and trying to fix the failures... some of the craziest stuff you can imagine. The failures are spectacular - loss of memory, speech, understanding, motor function, balance, etc. - sometimes predictable, often not. Between seizures, strokes, encephalopathy, meningitis, hemorrhages, aneurysms, tumors, and whatever else you might come up with, it is amazing we live as long as we do. Hey, maybe there is something to that - I'm reconsidering my original premise.
  • by NovaX ( 37364 ) on Tuesday July 18, 2006 @11:46PM (#15741233)
    While the article is vague, I doubt they are considering genetic algorithms. While very cool, they can be unpredictable and hard to reproduce. My favorite story, which drove home for me why that technique would rarely work, is about voice recognition hardware on an FPGA. The genetic algorithm had excellent performance, but when the researchers "copied" the design to another FPGA, it failed to work. The cause: the algorithm had leveraged effects such as cross-talk that engineers work hard to avoid, which tied it to that particular chip's environment.

    What these researchers are probably aiming for is a large-scale MP system that can readily handle massive failures. Who would find this useful? Any enterprise software company, such as Google, which has thousands upon thousands of machines in its clusters. The ability to have a large network of simple (cheap) processors and a network that can readily withstand a massive multi-point failure is quite attractive to real-world companies.

    Both software and hardware are beginning to go down this route as the industries evolve. On the software front, asynchronous message-oriented systems work beautifully in terms of reliability, scalability, maintainability, and service integration. In the coming years, you'll notice that most major web services will be running on a service-oriented architecture (SOA). On the hardware side, raw CPU performance is getting harder to squeeze out. Power issues are limiting frequency scaling (due to current leakage), we are hitting the limits of how much ILP we can feasibly extract for the effort, and the market drivers for these kinds of processors are slowly diminishing. Instead, CPUs with multiple physical and logical cores are gaining ground; they will be cheaper to develop and manufacture and better fit future market demands.

    It will be nice to hear how this research goes, since it will hopefully uncover potential problems and solutions that will be useful in the coming decades.
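    As a hedged illustration of the fault-tolerance pattern this comment describes (not anything from the article), here is a toy sketch in which a request is retried across a pool of interchangeable, unreliable workers, so the loss of any single node does not take the service down. The node names and failure rate are made up.

```python
import random

WORKERS = ["node-%02d" % i for i in range(8)]     # hypothetical node names

def call_worker(node, request):
    """Pretend RPC: each node independently fails about 30% of the time."""
    if random.random() < 0.3:
        raise ConnectionError(f"{node} is down")
    return f"{node} handled {request!r}"

def robust_call(request, attempts=5):
    """Spread retries over randomly chosen replicas until one succeeds."""
    last_error = None
    for _ in range(attempts):
        node = random.choice(WORKERS)
        try:
            return call_worker(node, request)
        except ConnectionError as err:
            last_error = err                      # that replica failed; try another
    raise RuntimeError(f"all {attempts} attempts failed: {last_error}")

print(robust_call("index query"))
```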
  • by CroDragn ( 866826 ) on Tuesday July 18, 2006 @11:56PM (#15741260)
    This has been done before, by introducing a random element into the neural net. If done correctly, this can result in "creativity". Here [mindfully.org] is one link about it; I've seen it in many other places too, so Google for more.
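    A minimal sketch of the idea in the two posts above (not taken from the linked article): knock out or jitter the weights of a small feed-forward net at random and compare its answers before and after the "damage". The network and parameters are toy values for illustration only.

```python
import random

def forward(weights, biases, inputs):
    """One layer of artificial neurons: weighted sum plus a hard threshold."""
    return [1.0 if sum(w * x for w, x in zip(row, inputs)) + b > 0 else 0.0
            for row, b in zip(weights, biases)]

def damage(weights, p_kill=0.1, noise=0.2):
    """Randomly zero some weights (dead 'neurons') and add noise to the rest."""
    return [[0.0 if random.random() < p_kill else w + random.gauss(0, noise)
             for w in row] for row in weights]

weights = [[0.8, -0.4, 0.3], [-0.2, 0.9, 0.1]]    # toy "trained" weights
biases = [0.0, -0.5]
x = [1.0, 0.5, -1.0]

print("intact :", forward(weights, biases, x))
print("damaged:", forward(damage(weights), biases, x))
```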
  • by llZENll ( 545605 ) on Wednesday July 19, 2006 @01:04AM (#15741382)
    Well, the article is so short it's not possible to comment on their implementation, so here are some calculations I did to amuse myself.

    number of neurons in the brain: 100 billion
    http://hypertextbook.com/facts/2002/AniciaNdabahaliye2.shtml [hypertextbook.com]

    transistor count per CPU: ~300 million
    http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795 [anandtech.com]

    average synaptic connections per neuron: 7000
    http://en.wikipedia.org/wiki/Neuron [wikipedia.org]

    total number of synapses: 100 to 500 trillion

    Since a 'calculation' for one artificial neuron mostly involves a summation of weighted inputs, we can view one total step as 2x the number of synapses we wish to analyze (a multiply and an add per synapse), or 200-1000 trillion calculations for one step. By 'step' I mean summing all inputs and pushing the result to an output for each neuron.
    http://en.wikipedia.org/wiki/Artificial_neuron [wikipedia.org]

    fastest computer in the world FLOPs: 280 trillion
    http://en.wikipedia.org/wiki/Blue_Gene [wikipedia.org]

    Pentium 4 FLOPS: ~40 GFLOPS

    Using the fastest computer in the world, one step would only take around 1-5 seconds, not counting storing all of that information.
    http://en.wikipedia.org/wiki/Blue_Gene [wikipedia.org]

    So how fast do we think? Well, I couldn't find anything on this, so let's get a quick estimate. The average neuron is 0.1 m in length; 0.1 / c = 3.3x10^-10 s, or 333 picoseconds. Now let's add in some delay for the chemicals in the neurons to do their thing; this is probably much slower than the electrical impulse, so let's say 3.3 nanoseconds.

    So, assuming our computers could network instantly and store the data used instantly, we would need 3-15 trillion Blue Gene supercomputers to simulate the human brain in real time. Or, if we were using Pentium 4s, we would need 21-105 trillion of them.

    Man, that's a lot of CPUs.

    number of computers in the world: ~300 million
    http://www.aneki.com/computers.html [aneki.com]
    guess at average FLOPS per computer: 40 GFLOPS
    total FLOPS of the world's personal computers: 1.2 PFLOPS
    time to calculate one brain step if all computers in the world were networked: 0.2-0.8 seconds

    Using Moore's law, when will a single computer be fast enough to simulate the human brain in real time?
    200-1000 trillion calculations per step = ~600 trillion every 3.3 ns = 181x10^18, or 181 exaFLOPS
    181 exaFLOPS / 40 GFLOPS = 2^n, n = 32
    32 x 18 months = 48 years based on personal-computer technology

    or 28 years based on supercomputer technology

    Of course, a real neural network involves highly parallel processing, and with a purpose-built chip design we will probably be able to simulate a brain much sooner, perhaps on the order of 10-20 years.
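    For anyone who wants to poke at the extrapolation, here is a small script reproducing the Moore's-law step of the estimate above. It takes the post's figures (181 exaFLOPS required, ~40 GFLOPS for a desktop CPU, ~280 TFLOPS for Blue Gene, an 18-month doubling period) as given rather than re-deriving them.

```python
import math

required = 181e18        # post's estimate of FLOPS needed for real-time simulation
desktop = 40e9           # Pentium 4, ~40 GFLOPS (as quoted above)
supercomputer = 280e12   # Blue Gene, ~280 TFLOPS (as quoted above)
doubling_months = 18     # classic Moore's-law doubling period

def years_until(current_flops):
    """Doublings needed to reach the target, converted to years at 18 months each."""
    doublings = math.log2(required / current_flops)
    return doublings * doubling_months / 12

print(f"desktop route:       {years_until(desktop):.0f} years")        # ~48
print(f"supercomputer route: {years_until(supercomputer):.0f} years")  # ~29
```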
  • by Illserve ( 56215 ) on Wednesday July 19, 2006 @06:07AM (#15742042)
    It is meaningless to talk about brains and clock speed. The brain's speed varies wildly depending on the complexity of the operations and how well they fit into the brain's underlying functional architecture.

    For simple addition tasks, an "operation" can take seconds.

    For calculating the kinetics of arm motion needed to juggle 5 balls, there aren't even any "operations" to clock the speed of. It's just a continuous dynamical system.

  • by cluckshot ( 658931 ) on Wednesday July 19, 2006 @07:36AM (#15742289)

    The parent post to this one really hit on a profound reality. As we progressively render human beings obsolete, we face a horrid prospect.

    The real issue of the 21st century is: will we build a world where human beings serve the industrialists' machines, or will we build a world where the industrialists' machines serve human beings? All jokes about serving humans come to mind. This decision will be made. If it is made out of ignorance, human beings will serve the industrialists' machines. If it is made with wisdom, it may turn out that the machines will serve mankind.

    In either case the usefulness of human beings to the system will diminish to zero. The question then will be how we give human beings purpose, etc. We already see the problems arising from this in the scuttling of careers, in human beings deciding they are worth nothing, and in the resulting suicide, drug abuse, and so on. We are facing a problem set completely different from the one economics professors of any persuasion are discussing.

    Worldwide productivity per hour is rising about 25% a year. At the same time industry is shedding people from productive work at nearly the same rate (which means individual productivity is rising 50% or more a year). We are facing a world where productivity says we should all get half the year off for vacation and retirement should come earlier and earlier in life. At the same time we are telling people to work until they drop dead, and denying retirement benefits to the elderly.

    I am just reporting the reality here. If people don't like the obvious conclusions of this reality, which I note runs entirely counter to the accepted logic, I would suggest they wake up and see it. Mods, if you don't like this reality, get a life!

  • by zacronos ( 937891 ) on Wednesday July 19, 2006 @09:18AM (#15742691)
    > Many biologically inspired algorithms solve problems through methods that cannot be proven correct (unlike, say, the mathematics circuitry in a CPU), but merely empirically observed to "do a good job."

    I understand what you are saying. However, there are variations that can avoid this problem to some extent. For example, genetic programming [wikipedia.org], rather than genetic algorithms [wikipedia.org]. The main difference is that where genetic algorithms are used directly to find a solution, genetic programming is used to create a program which finds a solution (the resulting program usually being neither biologically inspired nor stochastic). In fact, since the result of genetic programming is an algorithm, it can be reverse-engineered to yield insights into the problem, thus possibly aiding other research.

    I know this is a very "toy problem" example, but while I was an undergrad I wrote a genetic programming system to evolve a Reversi [wikipedia.org] (also called "Othello") game-playing program. I'm by no means an expert Reversi player, so I set my goal at creating a program that could play better than I can without using techniques I can't use (for example, I only let it think 3 turns ahead, since it gets hard for me to see the possibilities much further than that). My system's output was a C function that, given a board state and a possible next move, would use an evolved set of rules to give a score to that move. The framework would call the function on all the possible moves (there were rarely more moves than a semi-experienced human would notice), and choose the highest-scoring move.

    I succeeded in my goal -- it could consistently beat me, as well as my most intelligent friends, sometimes by a landslide. But the most interesting part to me was the fact that I could inspect the generated C code and take a look at how it was making its decisions. One of the more unexpected rules I found in the best-resulting programs favored letting the opponent take more pieces early in the game (definitely counter-intuitive, because the object of the game is to have more pieces than your opponent at the end). After some thought, and watching this rule in action over the course of a few games, I realized this made sense -- letting your opponent grab more pieces early in the game would limit their possible moves, while generally creating more options for yourself, increasing the possibility that you would be able to make important plays (like getting corner or side pieces), and then make an overwhelming comeback later in the game.
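    For a concrete picture of the shape described above, here is a hedged Python sketch of the scoring-function approach (the actual evolved output was a C function; the rules below are invented stand-ins, not the evolved rules from that project).

```python
# Board: 8x8 grid with 0 = empty, 1 = my piece, -1 = opponent's piece.
CORNERS = {(0, 0), (0, 7), (7, 0), (7, 7)}

def score_move(board, move):
    """Stand-in evaluator playing the role of the evolved rule set."""
    r, c = move
    score = 0.0
    if (r, c) in CORNERS:
        score += 10.0                 # corners are nearly always worth taking
    elif r in (0, 7) or c in (0, 7):
        score += 2.0                  # edges are usually good
    # The counter-intuitive rule mentioned above: early in the game, prefer
    # positions where we hold fewer pieces (keeping our mobility high).
    my_pieces = sum(cell == 1 for row in board for cell in row)
    if my_pieces < 16:
        score -= 0.1 * my_pieces
    return score

def choose_move(board, legal_moves):
    """The framework's job: score every legal move and play the best one."""
    return max(legal_moves, key=lambda m: score_move(board, m))

# Hypothetical position, just to show the call shape.
board = [[0] * 8 for _ in range(8)]
print(choose_move(board, [(2, 4), (0, 0), (3, 3)]))   # picks the corner (0, 0)
```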
