Scientists to Build 'Brain Box'
lee1 writes "Researchers at the University of Manchester are constructing
a 'brain box' using large numbers of microprocessors to model the way networks of neurons interact. They hope to learn how to engineer fail-safe electronics. Professor Steve Furber, of the university's School of Computer Science, hopes that biology will teach them how to build computer systems. He said: 'Our brains keep working despite frequent failures of their component neurons, and this "fault-tolerant" characteristic is of great interest to engineers who wish to make computers more reliable. [...] Our aim is to use the computer to understand better how the brain works [...] and to see if biology can help us see how to build computer systems that continue functioning despite component failures.'"
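For a feel of what that fault tolerance means in practice, here is a toy sketch in Python (my own illustration, not the Manchester design): many unreliable units redundantly estimate the same signal, and the answer degrades gradually as units fail rather than all at once.

```python
# Toy fault tolerance through redundancy (illustrative only, not the
# Manchester design): many noisy units estimate the same signal, and the
# averaged answer degrades gradually as units fail rather than all at once.
import random

def noisy_unit(signal):
    """One unreliable 'neuron': the true signal plus unit-specific noise."""
    return signal + random.gauss(0.0, 0.5)

def network_estimate(signal, n_units, failure_rate):
    """Average the surviving units; failed units simply drop out."""
    outputs = [noisy_unit(signal) for _ in range(n_units)
               if random.random() > failure_rate]
    return sum(outputs) / len(outputs) if outputs else float("nan")

for failure_rate in (0.0, 0.25, 0.5, 0.9):
    estimate = network_estimate(1.0, n_units=1000, failure_rate=failure_rate)
    print(f"{failure_rate:4.0%} of units dead -> estimate {estimate:+.3f}")
```

Even with 90% of the units dead, the averaged estimate is only somewhat noisier; there is no single point of failure.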
# of neurons needs to equal # of CPUs (Score:3, Interesting)
We have made big advances in this area, but even a crude prototype of Lt. Data (Star Trek: The Next Generation) is still quite a ways off.
However, I expect that we will eventually solve this problem. I just hope that we do it in my lifetime; that would be way cool! (Work fast, I'm 49!)
Re:Hardware? (Score:3, Interesting)
More realistically, perhaps they have already simulated some stuff and now want to scale it up drastically in size and speed. There isn't really enough detail in the article to tell how custom this is going to be. It could be anything from a Sun Niagara or a Connection Machine up to some custom-designed parallel FPGA monster.
"How The Brain Works" (Score:4, Interesting)
After reading this quote, I doubt this project will succeed in accurately simulating the brain. However, I'm sure it will advance our understanding of other important topics, so I'm not opposed to it. Best of luck!
Re:Testing for fault tolerance (Score:3, Interesting)
That's quite a funny post, but it brings me to an (IMHO) interesting point: given a virtual "brain" capable of performing a certain task, can specifically targeted "damage" to the system result in creativity? Many of the most creative minds in our history drew their inspiration in part from mind-altering chemicals...
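To make the speculation concrete, a toy sketch (entirely mine, with arbitrary numbers): zero out a random fraction of a tiny network's weights and watch its outputs change, rather than merely degrade. Whether such novelty counts as creativity is exactly the open question.

```python
# Crude targeted "damage": zero out a random fraction of the weights and
# compare responses to the same stimuli. All numbers here are arbitrary.
import random

def respond(weights, stimulus):
    return sum(w * s for w, s in zip(weights, stimulus))

def lesion(weights, fraction):
    """Zero out a random fraction of the weights."""
    damaged = list(weights)
    for i in random.sample(range(len(damaged)), int(fraction * len(damaged))):
        damaged[i] = 0.0
    return damaged

random.seed(1)
weights = [random.uniform(-1, 1) for _ in range(8)]
stimuli = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(4)]

for fraction in (0.0, 0.25, 0.5):
    damaged = lesion(weights, fraction)
    print(fraction, [round(respond(damaged, s), 2) for s in stimuli])
```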
Re:Downside of biological computing (Score:4, Interesting)
What these researchers are probably aiming for is a large-scale MP system that can readily handle massive failures. Who would find this useful? Any enterprise software company; Google, for example, has thousands upon thousands of machines in its clusters. A large network of simple (cheap) processors that can readily withstand a massive multi-point failure is quite attractive to real-world companies.
Both software and hardware are beginning to go down this route through the natural evolution of the industries. On the software front, asynchronous message-oriented systems work beautifully in terms of reliability, scalability, maintainability, and service integration. In the coming years, you'll notice that most major web services will be running on an SOA (service-oriented architecture). On the hardware side, raw CPU performance is getting harder to squeeze out: power issues are limiting frequency scaling (due to current leakage), we are hitting the limits of how much instruction-level parallelism we can feasibly extract, and the market drivers for these kinds of processors are slowly diminishing. Instead, CPUs with multiple physical and logical cores are gaining ground; they are cheaper to develop and manufacture and fit future market demands.
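As a minimal illustration of the message-queue idea (names and failure rates are mine, not from the article): a handler crash just means the message is re-enqueued and retried, possibly by another worker, so the system as a whole keeps going.

```python
# Minimal sketch of message-oriented fault tolerance: work lives in a
# queue, so a handler crash means the message is re-enqueued and retried.
import asyncio
import random

async def flaky_handler(msg):
    if random.random() < 0.3:            # simulate a component failure
        raise RuntimeError(f"handler died on {msg}")
    print(f"processed {msg}")

async def worker(queue):
    while True:
        msg = await queue.get()
        try:
            await flaky_handler(msg)
        except RuntimeError:
            await queue.put(msg)         # retry instead of losing the message
        finally:
            queue.task_done()

async def main():
    queue = asyncio.Queue()
    for i in range(10):
        queue.put_nowait(f"msg-{i}")
    workers = [asyncio.create_task(worker(queue)) for _ in range(3)]
    await queue.join()                   # returns once every message succeeds
    for w in workers:
        w.cancel()

asyncio.run(main())
```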
It will be nice to hear how this research goes, since it will hopefully uncover potential problems and solutions that will be useful in the coming decades.
some amusing calculations (Score:5, Interesting)
number of neurons in the brain: 100 billion
http://hypertextbook.com/facts/2002/AniciaNdabaha
transistor count per CPU: ~300 million
http://www.anandtech.com/cpuchipsets/showdoc.aspx
average synaptic connections per neuron: 7000
http://en.wikipedia.org/wiki/Neuron [wikipedia.org]
total number of synapses: 100 to 500 trillion
Since a 'calculation' for one artificial neuron mostly involves a summation of weights, we can view one total step as 2x the number of synapses we wish to analyze, or 200-1000 trillion calculations for one step. By 'step' I mean summing all inputs and pushing the result to an output for each neuron (see the sketch just below).
http://en.wikipedia.org/wiki/Artificial_neuron [wikipedia.org]
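For concreteness, here is what one 'step' of a single artificial neuron looks like, assuming the usual weighted-sum-plus-activation model: one multiply and one add per synapse, which is where the 2x factor comes from.

```python
# One artificial-neuron "step": a multiply and an add per synapse (the 2x
# factor used above), then an activation applied to the total.
import math
import random

def neuron_step(inputs, weights):
    total = sum(w * x for w, x in zip(weights, inputs))  # 2 ops per synapse
    return 1.0 / (1.0 + math.exp(-total))                # sigmoid activation

synapses = 7000                                          # average per neuron
inputs = [random.random() for _ in range(synapses)]
weights = [random.uniform(-1, 1) for _ in range(synapses)]
print(neuron_step(inputs, weights))
```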
fastest computer in the world FLOPs: 280 trillion
http://en.wikipedia.org/wiki/Blue_Gene [wikipedia.org]
Pentium 4 FLOPs: 40 GFLOPs
Using the fastest computer in the world, one step would take around 1 to 4 seconds, not counting the cost of storing all of that information.
So how fast do we think? Well, I couldn't find anything on this, so let's make a quick estimate. The average neuron is...
So, assuming our computers could network instantly and store the data instantly, at one step every 3.3 microseconds we would need roughly 200,000 to 1 million Blue Gene supercomputers to simulate the human brain in real time, or about 1.5 to 7.5 billion Pentium 4s.
Man, that's a lot of CPUs.
number of computers in the world: ~300 million
http://www.aneki.com/computers.html [aneki.com]
guess at average FLOPs per computer: 40 GFLOPs
total FLOPs of the world's personal computers: ~12 exaFLOPS
time to calculate one brain step if all the computers in the world were networked: about 17 to 83 microseconds, still 5 to 25 times too slow for real time (one step every 3.3 microseconds)
Using Moore's law, when will a single computer be fast enough to simulate the human brain in real time?
200-1000 trillion calculations per step = ~600 trillion every 3.3 microseconds = 1.81x10^20, or 181 exaFLOPS
181 exaFLOPS / 40 GFLOPS = 4.5x10^9 ≈ 2^n, so n = 32
32 x 18 months = 48 years based on personal-computer technology,
or about 29 years based on supercomputer technology (181 exaFLOPS / 280 TFLOPS ≈ 2^19)
Of course, a real neural network is highly parallel, so with a purpose-built chip design we will probably be able to simulate a brain much sooner, perhaps on the order of 10-20 years.
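For anyone who wants to check the arithmetic, here is the whole back-of-envelope redone in a few lines of Python; the inputs are the figures quoted above, and the rest is straight division and a log.

```python
# Re-running the back-of-envelope numbers above; inputs are the poster's
# figures (synapse counts, a step every 3.3 microseconds, 280 TFLOPS,
# 40 GFLOPS), and everything else is straight arithmetic.
import math

SYNAPSES_LOW, SYNAPSES_HIGH = 100e12, 500e12
STEP_TIME = 3.3e-6                       # seconds per step for real time
BLUE_GENE = 280e12                       # FLOPS
PENTIUM_4 = 40e9                         # FLOPS

for synapses in (SYNAPSES_LOW, SYNAPSES_HIGH):
    ops_per_step = 2 * synapses          # multiply + add per synapse
    flops_needed = ops_per_step / STEP_TIME
    print(f"{ops_per_step:.0e} ops/step -> {flops_needed:.2e} FLOPS needed "
          f"= {flops_needed / BLUE_GENE:,.0f} Blue Genes "
          f"or {flops_needed / PENTIUM_4:,.0f} Pentium 4s")

# Moore's law: one doubling per 18 months until a single machine suffices.
flops_needed = 2 * 300e12 / STEP_TIME    # ~1.8e20 FLOPS for ~300e12 synapses
for name, flops in (("Pentium 4", PENTIUM_4), ("Blue Gene", BLUE_GENE)):
    doublings = math.log2(flops_needed / flops)
    print(f"{name}: {doublings:.0f} doublings = ~{doublings * 1.5:.0f} years")
```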
Re:some amusing calculations (Score:3, Interesting)
For simple addition tasks, an "operation" can take seconds.
For calculating the kinetics of arm motion needed to juggle 5 balls, there aren't even any "operations" to clock the speed of. It's just a continuous dynamical system.
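A quick illustration of that last point: a damped pendulum (a crude stand-in for limb dynamics, with arbitrary constants) has no natural "operations" to count; the op count below is purely an artifact of the step size we pick to simulate it.

```python
# A continuous dynamical system has no intrinsic "op count": how many
# operations this simulation takes depends entirely on the step size dt.
import math

theta, omega = 0.5, 0.0          # angle (rad) and angular velocity
g_over_l, damping, dt = 9.81, 0.2, 1e-3

for _ in range(5000):            # 5 simulated seconds of Euler integration
    alpha = -g_over_l * math.sin(theta) - damping * omega
    omega += alpha * dt
    theta += omega * dt
print(f"theta after 5 s: {theta:.4f} rad")
```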
Re:Pray to god that they fail. (Score:3, Interesting)
The parent post really hit on a profound reality. As we progressively render human beings obsolete, we face a horrid prospect.
The real issue of the 21st century is: will we build a world where human beings serve the industrialists' machines, or a world where the industrialists' machines serve human beings? All the jokes about serving humans come to mind. This decision will be made. If it is made in ignorance, human beings will serve the machines. If it is made with wisdom, the machines may come to serve mankind.
In either case, the usefulness of human beings to the system will diminish toward zero. The question then will be how we give human beings purpose. We already see the problems arising from this in the scuttling of careers, in people deciding they are worth nothing, and in the resulting suicide and drug abuse. We are facing a problem set completely different from what economics professors of any persuasion are discussing.
Worldwide productivity per hour is rising about 25% a year, while industry is shedding people from productive work at nearly the same rate. (This means individual productivity is rising 50% or more a year.) Productivity says we should all get half the year off for vacation and that retirement should come earlier and earlier in life; instead we are telling people to work until they drop dead while denying retirement benefits to the elderly.
I am just reporting the reality here. If people don't like the obvious conclusions, which I note run entirely counter to the accepted logic, I would suggest they wake up and see that reality. Mods, if you don't like this reality, get a life!
Re:Downside of biological computing (Score:2, Interesting)
I understand what you are saying. However, there are variations that can avoid this problem to some extent, for example genetic programming [wikipedia.org] rather than genetic algorithms [wikipedia.org]. The main difference is that where a genetic algorithm is used directly to find a solution, genetic programming is used to create a program that finds a solution (the resulting program usually being neither biologically inspired nor stochastic). In fact, since the result of genetic programming is an algorithm, it can be reverse-engineered to yield insights into the problem, possibly aiding other research.
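To make the GA/GP distinction concrete, here is a very small genetic-programming toy (mine, not related to the poster's system) that evolves an expression tree to fit f(x) = x*x + x; the point is that the output is a readable program, not a raw parameter vector.

```python
# Tiny genetic-programming demo: evolve an expression tree over {+, -, *,
# x, constants} to fit f(x) = x*x + x. The evolved result is itself a
# program (a readable tree), which is what makes GP inspectable.
import random

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_tree(depth):
    if depth == 0 or random.random() < 0.3:     # leaf: variable or constant
        return "x" if random.random() < 0.7 else random.uniform(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if not isinstance(tree, tuple):             # numeric constant
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(tree):
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    if random.random() < 0.2:                   # replace this subtree
        return random_tree(2)
    if isinstance(tree, tuple):
        return (tree[0], mutate(tree[1]), mutate(tree[2]))
    return tree

random.seed(0)
population = [random_tree(3) for _ in range(200)]
for generation in range(40):
    population.sort(key=error)
    survivors = population[:50]                 # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]
best = min(population, key=error)
print("best tree:", best, " squared error:", error(best))
```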
I know this is a very "toy problem" example, but while I was an undergrad I wrote a genetic programming system to evolve a Reversi [wikipedia.org] (also called "Othello") game-playing program. I'm by no means an expert Reversi player, so I set my goal at creating a program that could play better than I can without using techniques I can't use (for example, I only let it think 3 turns ahead, since it gets hard for me to see the possibilities much further than that). My system's output was a C function that, given a board state and a possible next move, would use an evolved set of rules to give that move a score. The framework would call the function on all the possible moves (there were rarely more possible moves than a semi-experienced human would notice) and choose the highest-scoring one.
I succeeded in my goal -- it could consistently beat me, as well as my most intelligent friends, sometimes by a landslide. But the most interesting part to me was that I could inspect the generated C code and see how it was making its decisions. One of the more unexpected rules I found in the best-resulting programs favored letting the opponent take more pieces early in the game (definitely counter-intuitive, because the object of the game is to have more pieces than your opponent at the end). After some thought, and after watching this rule in action over a few games, I realized it made sense: letting your opponent grab more pieces early limits their possible moves while generally creating more options for yourself, increasing the chance that you can make the important plays (like taking corner or side pieces) and then mount an overwhelming comeback later in the game.
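For readers unfamiliar with this setup, the framework described above reduces to a few lines; everything below is an illustrative stand-in (in the poster's system, the scoring function was the evolved C code).

```python
# Skeleton of the move-selection framework: score every legal move with an
# evaluation function and play the best one. flips_for and score_move are
# dummy stand-ins for the evolved logic.
def flips_for(board, move):
    """Stand-in: pretend we counted the pieces this move would flip."""
    return (move[0] + move[1]) % 4          # dummy value for the demo

def score_move(board, move):
    corner = 10 if move in {(0, 0), (0, 7), (7, 0), (7, 7)} else 0
    return corner - flips_for(board, move)  # fewer early flips scores higher

def choose_move(board, legal_moves):
    """The fixed framework: score every legal move and play the best one."""
    return max(legal_moves, key=lambda move: score_move(board, move))

print(choose_move(board=None, legal_moves=[(2, 3), (0, 0), (4, 5)]))
```

Note how the "let the opponent take pieces early" discovery lives entirely inside score_move; the surrounding framework never changes, which is why the evolved function could be read and understood in isolation.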