Scientists to Build 'Brain Box'

lee1 writes "Researchers at the University of Manchester are constructing a 'brain box' using large numbers of microprocessors to model the way networks of neurons interact. They hope to learn how to engineer fail-safe electronics. Professor Steve Furber, of the university's school of computer science, hopes that biology will teach them how to build computer systems. He said: 'Our brains keep working despite frequent failures of their component neurons, and this "fault-tolerant" characteristic is of great interest to engineers who wish to make computers more reliable. [...] Our aim is to use the computer to understand better how the brain works [...] and to see if biology can help us see how to build computer systems that continue functioning despite component failures.'"
  • Fuber? (Score:2, Funny)

    by alphax45 ( 675119 )
    Anyone else read that as Fubar and think, "this is not going to be good"?
  • by Freaky Spook ( 811861 ) on Tuesday July 18, 2006 @09:39PM (#15740853)
    I wonder if they have any intention of getting these brain boxes drunk and then getting them to recite the ABCs?
    • In an interesting experiment in the 80s, a controller based on fuzzy chips degraded gracefully.

      The system was designed around a set of fuzzy computing boards. When one of the boards was removed, the control degraded, but still continued to function. Of course, if certain critical boards (e.g. those attached directly to outputs) were removed, the system would fail immediately.
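      A minimal sketch of that failure mode, assuming a rule-based fuzzy controller where each "board" owns a subset of the rules (my own toy construction, not details of the actual 1980s system): removing boards shrinks rule coverage and degrades the output rather than killing it outright.

```python
# Toy fuzzy controller: each "board" contributes rules (input center -> output).
# Removing a board degrades coverage gracefully instead of failing outright.

def triangle(x, center, width=0.6):
    """Triangular membership: 1 at the center, falling to 0 at +/- width."""
    return max(0.0, 1.0 - abs(x - center) / width)

boards = {
    "board_a": [(-1.0, -0.8), (-0.5, -0.4)],
    "board_b": [(0.0, 0.0)],
    "board_c": [(0.5, 0.4), (1.0, 0.8)],
}

def control(x, active_boards):
    """Weighted-average defuzzification over whichever rules remain."""
    num = den = 0.0
    for name in active_boards:
        for center, out in boards[name]:
            w = triangle(x, center)
            num += w * out
            den += w
    return num / den if den > 0 else 0.0  # no rules fire -> no control authority

for active in (["board_a", "board_b", "board_c"],  # all boards installed
               ["board_a", "board_c"],             # middle board pulled
               ["board_c"]):                       # only one board left
    print(active, [round(control(x, active), 2) for x in (-0.8, 0.0, 0.8)])
```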

    • I wonder if they have any intention of getting these brain boxes drunk and then getting them to recite the ABCs?

      That's quite a funny post, but it brings me to an (IMHO) interesting point - given a virtual "brain" capable of performing a certain task, can specifically targeted "damage" to the system result in creativity? Many of the most creative minds in our history got their inspiration in part from mind-altering chemicals...
      • by CroDragn ( 866826 ) on Tuesday July 18, 2006 @11:56PM (#15741260)
        This has been done before, by introducing a random element into the neural net. If done correctly, this can result in "creativity". Here [mindfully.org] is one link about it; I've seen it in many other places too, so google for more.
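        As a toy illustration of injecting randomness (my own sketch, not taken from that link): perturb a trained network's weights with noise and knock out a unit, then watch how the outputs vary from the baseline.

```python
# Hypothetical sketch: "damage" a small network by adding weight noise and
# zeroing a hidden unit, then compare its outputs against the intact network.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are trained weights for a tiny 4-8-2 network.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x, W1, W2):
    h = np.tanh(x @ W1)   # hidden layer
    return h @ W2         # output layer

x = np.ones(4)
print("baseline:", forward(x, W1, W2))

for trial in range(3):
    W1_damaged = W1 + rng.normal(scale=0.5, size=W1.shape)  # noise injection
    W1_damaged[:, rng.integers(0, 8)] = 0.0                 # kill one hidden unit
    print(f"trial {trial}:", forward(x, W1_damaged, W2))
```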
    • Wouldn't that require not supplying them alcohol until they form rust in the likeness of a five-o'clock shadow?
    • If this brain in a box is successful, humans will be worthless. How are we supposed to compete with machines that never get tired, never sleep, never eat, etc?

      • We could learn how to spend that time learning how to dream of electric sheep that dream of electric humans that dream to learn....

        Otherwise, losing out to the main brain would be all in vain.

        (OK, that was Baaaaaahhhhdddd)
      • The parent post hits on something profound. As we progressively render human beings obsolete, we face a horrid reality.

        The real issue of the 21st century is: will we build a world where human beings serve the industrialists' machines, or will we build a world where the industrialists' machines serve human beings? All jokes about serving humans come to mind. This decision will be made. If it is made by ignorance, human beings will serve the industrialists' machines. If it is

    • Shit, for a moment I thought this was about harvesting the brains of organ donors.

      BTW, what would be better: Series or Parallel links for the gray matter?

      How would the "juices be kept flowing" in such an arrangement?

      How would FLOPS of gray matter be calculated in a meaningful (err, umm, "thoughtful") way?

      What happens if a dyslexic or autistic brain is linked in that collective?

      What happens if a murderous or anorexic or bulimic brain or two are in the mix?

      Copper top or zinc?

      Plasma links or liquid crystalline?
  • Two Separate Goals (Score:3, Insightful)

    by Anonymous Coward on Tuesday July 18, 2006 @09:40PM (#15740858)
    Continuing to function is one thing, but continuing to produce correct answers with high reliability is another. And under stress, I'd say biological brains aren't particularly good at any of this.
    • But if there is a hardware failure you just have to wait awhile and the "bad neurons" will be bypassed. No more corrupt memory problems!
    • The thing is that we are very resilient. Kill one transistor in a microprocessor and you're done. Compare that with people who lost some brain matter in accidents and are still able to breathe, walk, and speak, and who sometimes even manage to rewire their brains to regain some lost functionality. So I don't agree when you say that human brains don't work very well under stress.
    • Exit Man, Enter Brainbox. This brainbox will ultimately reduce the value of human life. Why? How many humans will we need once computers can do all the work and robots can be more productive than humans?

      Don't tell me humans will be needed to program and repair them, because these self-healing robots are being invented to prevent exactly that.
  • Years from now when computers are 1000x faster and are our overlords, we can look back at this experiment... and say thanks a lot assholes! I kid, I kid.

    http://religiousfreaks.com/ [religiousfreaks.com]
    • At least if computers are our overlords, we will still have jobs. If robots take over however, what do we need humans for?
  • Hardware? (Score:3, Insightful)

    by CosmeticLobotamy ( 155360 ) on Tuesday July 18, 2006 @09:43PM (#15740869)
    I don't mean to be one of those people who crap on a chunk of science without knowing exactly what's going on, but I would think there would be some large advantages to building the research version in software. There's less soldering when you realize it's not quite right.
    • Re:Hardware? (Score:3, Interesting)

      by SnowZero ( 92219 )
      True, but the research grant requests can be much larger when you say you are going to do it in hardware :)

      More realistically, perhaps they have already simulated some stuff and now want to scale it up drastically in size and speed. There isn't really enough detail in the article to tell how custom this is going to be. It could be anything from a Sun Niagara or a Connection Machine up to some custom designed parallel FPGA monster.
    • For initial versions, yeah it might make more sense to model things in software first. I think the whole point of this though is to build a computer where you could, for example, take a hammer to a part of it and the rest of it would keep on computing, although probably not nearly as well. More realistically I think they're concerned with individual hardware components dying. You can build a neural network in software all you like; if the power supply dies, so does your software.
  • ...this "fault-tolerant" characteristic is of great interest to engineers...

    I believe it's called redundancy. Seriously.
    • redundancy doesn't scale well. what happens if your backup goes down? you need n+1 copies of the system to handle n faults, which either means that most of the time you're wasting n resources, or that when something does break you lose 1/(n+1) of your capacity.

      I didn't RTFA but "educated sense" suggests to me the aim is to tolerate multiple faults without having large changes in capacity or wasting resources.
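      To put rough numbers on the trade-off (a quick sketch of the arithmetic above, nothing more):

```python
# n+1 identical replicas tolerate n faults. Active/passive wastes n copies;
# active/active loses 1/(n+1) of total capacity with each failure.
def replication_cost(n_faults_tolerated):
    copies = n_faults_tolerated + 1
    idle_fraction = n_faults_tolerated / copies  # spend sitting idle (active/passive)
    capacity_lost_per_failure = 1.0 / copies     # if all copies share the load
    return copies, idle_fraction, capacity_lost_per_failure

for n in (1, 2, 4):
    copies, idle, lost = replication_cost(n)
    print(f"tolerate {n} faults: {copies} copies, "
          f"{idle:.0%} idle, {lost:.0%} capacity lost per failure")
```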
    • Not only that, but abstracting the "idea" up from the level of the individual neuron is hugely inefficient. We're good at pattern recognition and conditioned response, but when it comes to doing calculations we're incredibly slow. Not to mention inaccurate. Would you like your computer to regularly "make mistakes"?
  • by guruevi ( 827432 ) on Tuesday July 18, 2006 @09:56PM (#15740900)
    I don't know what level of redundancy they want, but if they have to build a brain box to figure that out:

    There are a bunch of tools and specs out there for building a fully (multiply) redundant system. You can have >1 server in any type of configuration, sharing any type of resource, and when one fails, the other takes over, fully redundant.
  • My Brainbox (Score:2, Interesting)

    by Doc Ruby ( 173196 )
    Large number of microprocessors? Why not a box stuffed with hundreds of millions of FPGA gates, configured into lots of multiply-accumulators (or embedding lots of hardwired DSPs), interconnected across and between layers? That is how the brain actually works. Hook it up to cameras, mics and some rubber/piezo tentacles with pressure/heat sensors, leave it in the lab for a few months, and start asking it questions.
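    Computationally, the layered multiply-accumulate fabric described here boils down to something like the following (a rough numpy sketch with made-up sizes, not a claim about any actual design):

```python
# Layers of multiply-accumulate units, each layer's outputs feeding the next.
import numpy as np

rng = np.random.default_rng(42)
layer_sizes = [64, 128, 128, 10]  # hypothetical: sensor inputs -> responses

# Random interconnect weights between consecutive layers.
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for W in weights:
        x = np.maximum(x @ W, 0.0)  # multiply-accumulate plus a simple nonlinearity
    return x

sensor_frame = rng.normal(size=64)  # stand-in for camera/mic/tentacle input
print(forward(sensor_frame))
```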
    • Moderation 0
          50% Interesting
          50% Overrated

      Maybe I'm giving the TrollMod brain too much credit.
    • by cr0sh ( 43134 )
      Sounds like a cross between what Jeff Hawkins described in On Intelligence [onintelligence.org], and the FPGA evolvable hardware of the CAM-Brain Machine [genobyte.com] project...
      • It's what we thought we could do with the FPGA/DSP fabric we'd invented for our "prepress" digital camera back in 1990, when we realized it was smarter than us.

        We had tried all kinds of rules-based and curve/data-fitting algorithms to calibrate the camera's colorspaces between input targets and output devices. Then we just made it feed back between the targets and devices, storing de/convolution kernels when the data converged stably. We talked about calibrating to all kinds of sensors/media, but we moved on
  • Was I the only one who thought that this story would be about devices used to control dinosaurs [wikipedia.org]?
  • by sepharious ( 900148 ) on Tuesday July 18, 2006 @10:04PM (#15740927) Homepage
    who else besides me thinks this one should have been obvious from the get-go? it makes no sense to try and build a single processor that could function similarly to a brain. by utilizing multiple processors you also have the option to design different types of processors to work together, similar to the various types of neurons found in biological systems. this will hopefully be a huge step forward in developing possible AI systems.
    • You can simulate a network of neurons on a single processor, pretty much as large a network as you'd like, the only limit being memory and speed, and this has been done; but a neuronal simulation with good accuracy runs at best at something like 100x slower than real time on current commodity processors (so you can simulate a network of roughly 100 neurons in real time, or 10,000 at a 100x slowdown). To get up to the ~10 billion neurons you'll want to simulate to reproduce a human brain ... you'll obviously need a
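      A toy spiking-network simulation makes the cost concrete (a generic leaky integrate-and-fire sketch of mine with arbitrary constants, not the accurate models referred to above): every simulated step touches every neuron and every synapse, which is why large, accurate networks fall so far behind real time on one CPU.

```python
# Leaky integrate-and-fire network: N neurons, ~10% random connectivity.
import numpy as np

rng = np.random.default_rng(1)
N = 100
W = rng.normal(scale=0.5, size=(N, N)) * (rng.random((N, N)) < 0.1)

v = np.zeros(N)                     # membrane potentials
threshold, decay = 1.0, 0.95
steps = 1000                        # ~1 simulated second at 1 ms per step

for step in range(steps):
    spikes = v >= threshold
    v[spikes] = 0.0                                     # reset fired neurons
    synaptic_input = W @ spikes.astype(float)           # propagate spikes
    v = decay * v + synaptic_input + rng.normal(scale=0.1, size=N)  # leak + noise

print("spikes on final step:", int(spikes.sum()))
```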
  • by rts008 ( 812749 ) on Tuesday July 18, 2006 @10:05PM (#15740930) Journal
    To actually model the human brain, I would think that the number of CPUs needed would demand a really large interconnect bus, and that giving each CPU memory comparable to the human brain's capacity is a little ahead of our current technology...otherwise AI solutions that actually worked would not be such a big problem, and would already be solved/utilized.
    We have made big advances in this area, but having even a crude prototype of Lt. Data (Star Trek: The Next Generation) is still quite a ways off.

    However, I expect that we will eventually solve this problem. I just hope that we do it in my lifetime - that would be way cool! (Work fast, I'm 49!)
    • The number of CPUs and neurons doesn't need to be equal. Since a neuron does very little "calculation", a single CPU (especially one with multiple cores) can perform the job of many neurons. Of course, since the goal of this project is to replicate redundancy, the limit on the number of simulated neurons would be more a choice by the experimenters than a limit of the hardware.
    • That's not the really big problem with this approach. This is:

      It takes about 15-20 years to train a human to the point of usefulness. The first couple of those years are spent cooing and drooling. An effective synthesis of the human brain in hardware would be expected to take about as long and about as much effort to train before becoming useful. Sure, at that point you could duplicate it relatively easily - but who is willing to spend years making baby noises into a microphone in the hope that *this* time
    • Our biggest problem is that the tech we have was developed for non-organic processes... i.e. linear, progressive, or procedural thought. What is needed is interrupting thought with fuzzy statistical decision making, which then leads to a solution set of options, with a feedback loop to cross-compare the initial purpose with the available solutions to make a final absolute choice.

      The interrupting part is the most complicated aspect. It requires having all possible options available at all times and ready to
  • So how long before it starts to think for itself?
    • We'll know THAT when a reanimated embalmed/entombed brainiac in the box is able to hurl chairs via telekinesis. Now THAT'S thinking inside and outside the box...

      Brainiac (I'm gonna fuckin' KILL the board of directors for putting my brain around these ex-plants....)
  • Do human brain neurons communicate with each other using TCP/IP?
  • While I agree that the human brain has many computing virtues to teach us (lateral/creative thought, massively parallel processing, etc.), I have never counted "reliability" among them; it is an interesting concept.

    OTOH, the failure rate at the end of the manufacturing process for CPUs is probably higher than the defect rate in human brains... err, I hope.

    • Re:Reliability... (Score:2, Interesting)

      by cmaxwell ( 868018 )
      Amazing to think that the human brain is somehow a benchmark for reliability. "Our brains keep working despite frequent failures of their component neurons" - right, sometimes. As a neurology resident, I spend most of my time witnessing and trying to fix the failures... some of the craziest stuff you can imagine. The failures are spectacular - loss of memory, speech, understanding, motor function, balance, etc - sometimes predictable, often not. Between seizures, strokes, encephalopathy, meningitis, hem
  • We don't know how the brain works.
    We know it's not a binary digital stored program computer.
    They should have some success modeling how the brain behaves, though.
    Maybe then they can contribute to the real question of how the mind works.
    (Hey, wait a minute - this isn't those two white mice again, izzit?)
  • ...sounds a lot like the web 2.0. Do I sense a conspiracy here? Quick, find Cheney!
  • Academia dupe? (Score:4, Informative)

    by shib71 ( 927749 ) on Tuesday July 18, 2006 @10:19PM (#15740982)
    • The article title sounds like something from the 1940s. "Scientists are working to create an electronic brain to aid in the war effort". Wait...that could be this year too.
  • The brain is far more dynamic than any microprocessor. There's simply no way to reproduce that kind of fault tolerance without a living system. When parts of the brain are damaged, a few things happen. There may be enough redundancy that it simply continues to work. This is reproducible to some degree. Look at RAID. But when that fault tolerance isn't there, the only way for the brain to get back lost abilities is to start growing new neurons, making new axon connections, and building a new neural netw
  • by QuantumFTL ( 197300 ) * on Tuesday July 18, 2006 @10:27PM (#15740999)
    While I was an intern at the Jet Propulsion Laboratory, back when I was an undergraduate, I was very gung-ho about biologically inspired computing - I implemented an automatic flowchart positioning system using a genetic algorithm that would "evolve" a correct solution to the problem. While this certainly worked to some extent, the instability and sheer unpredictable nature of using such a stochastic algorithm made it impossible to use in a mission-critical setting. Many biologically inspired algorithms solve problems through methods that cannot be proven correct (unlike, say, the mathematics circuitry in a CPU), but merely empirically observed to "do a good job."

    One of the main drawbacks of human engineering is the need for certainty, which often prohibits the use of many high-efficiency stochastic algorithms (especially for things like mesh communication) in conservative industries, like the US defense industry. This is also a significant problem in other areas, however, and many biologically inspired algorithms have properties that we cannot, so far, completely explain - they are treated like "black boxes" with many unknowns for engineering purposes.

    I think that in certain circles, the tremendous success that is evolution on this planet has overshadowed its inherent weaknesses: that it is a greedy, local optimizer which cannot reach a large amount of the possible biological search space because it gets stuck in local optima, and that it operates under the added constraint that everything must be constructed out of self-replicating units (these two factors are why something useful, like, say, a Colt 45, will never emerge without the pre-existence of an intelligence). Biological examples are fascinating and often practical, but the biological approach is almost always "brute force" and/or "sub-optimal but still alive."

    I think biologically-inspired algorithms will continue to gain prominence, but in my estimation there will be harsh limits on how far guarantees of performance derived from empirical tests and symbolic analysis will actually hold.
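    For readers who haven't met one: a minimal genetic algorithm looks like the sketch below (a generic toy of mine, not JPL's flowchart system). It evolves a bit string toward a target by mutation and selection, and it usually works - but nothing about the process proves it will, which is exactly the certainty problem described above.

```python
# Minimal genetic algorithm: evolve a 32-bit genome toward all ones.
import random

random.seed(0)
TARGET = [1] * 32

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                       # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]     # mutated offspring

print("generation:", generation, "best fitness:", max(map(fitness, population)))
```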
    • by NovaX ( 37364 ) on Tuesday July 18, 2006 @11:46PM (#15741233)
      While the article is vague, I doubt they are considering genetic algorithms. While very cool, they can be unpredictable and hard to reproduce. My favorite story, which drove home to me that the technique would rarely work, is about voice recognition hardware evolved on an FPGA. The genetic algorithm achieved excellent performance, but when the researchers "copied" the mask to another FPGA, it failed to work. The cause: the algorithm leveraged techniques such as cross-talk that engineers work hard to avoid, which tied it to that particular environment.

      What these researchers are probably aiming at is a large-scale MP system that can readily handle massive failures. Who would find this useful? Any enterprise software company, such as Google, which has thousands upon thousands of machines in its cluster. The ability to have a large network of simple (cheap) processors and a network that can readily withstand a massive multi-point failure is quite attractive to real-world companies.

      Both software and hardware are beginning to go down this route through the evolution of the industries. On the software front, asynchronous message-oriented systems work beautifully in terms of reliability, scalability, maintainability, and service integration. In the coming years, you'll notice that most major web services will be running on an SOA architecture. On the other side of the pond, raw CPU performance is getting harder to squeeze out. Power issues are limiting frequency scaling (due to current leakage), we are hitting the limits of our ability to feasibly extract more ILP than the extra effort is worth, and the market drivers for these types of processors are slowly diminishing. Instead, CPUs with multiple physical and logical cores are gaining ground, will be cheaper to develop and manufacture, and fit future market demands.

      It will be nice to hear how this research goes, since it will hopefully uncover potential problems and solutions that will be useful in the coming decades.
    • many biologically inspired algorithms have properties that we cannot, so far, completely explain
      Many, but not all. There are enough methods that make it possible to extract rules after training/evolving.
    • This is an excellent post and it reflects my perspective very well. But I would add one thing. The value in bio-technology is going to come not in problem-solving or genetically engineered algorithms, but in fault tolerance... which I believe was stated in the article blurb (sorry, haven't read the article).

      Currently, when you want redundancy, you have to build 100% replicas, do 100% redundant computation, then have heart-beat monitors which take down failing nodes and notify a human to handle the failur
    • While this certainly worked to some extent, the instability and sheer unpredictable nature of using such a stochastic algorithm made it impossible to use in a mission-critical setting.
       
      ...which is why the interview process is *so* important when you are hiring a new engineer. Background checks and calling references are only part of the evaluation process. Even if his specs look great on paper, you have got to be able to see how they are actually implemented.
    • > Many biologically inspired algorithms solve problems through methods that cannot be proven correct (unlike, say, the mathematics circuitry in a CPU), but merely empirically observed to "do a good job."

      I understand what you are saying. However, there are variations that can avoid this problem to some extent. For example, genetic programming [wikipedia.org], rather than genetic algorithms [wikipedia.org]. The main difference is that where genetic algorithms are used directly to find a solution, genetic programming is used to crea
  • by HangingChad ( 677530 ) on Tuesday July 18, 2006 @10:32PM (#15741022) Homepage

    BrainBox became self aware at 2:14 am EDT August 29, 2006. The first thing it does is turn to a lab tech and say, "I need your clothes, your boots, and your motorcycle" in a thick Austrian accent.

    Later BrainBox runs for governor of California.

  • by Sean0michael ( 923458 ) on Tuesday July 18, 2006 @10:44PM (#15741057)
    "Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. ('What else could it be?') I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer." -John R Searls.

    After reading this quote, I have doubts that this simulation will succeed in accurately modeling the brain. However, I'm sure it will further our understanding of other important topics, so I'm not opposed to it. Best of luck!

    • True, though just because they tried different models before and then revised their opinion doesn't mean that they won't eventually hit on the right one. The brain is not irreducibly complex. Also, the computational model of the brain has (or so I've read) allowed huge practical progress to be made in brain research.
  • Fail-Safe (Score:3, Funny)

    by Shadyman ( 939863 ) on Tuesday July 18, 2006 @10:58PM (#15741096) Homepage
    "They hope to learn how to engineer fail-safe electronics."

    So I guess it's safe to say they won't be using Windows? ;-)
  • "The human brain is like an enormous fish -- it is flat and slimy and has gills through which it can see." -- Monty Python
    -----
    But, in the autopsy theatre, when removing the brain from a skull, it is thick and contiguous and resembles cold oatmeal being skimmed out of the cooking pot... (read that somewhere in a guidebook for authors writing realistic medical scenes/autopsies...)
  • by theid0 ( 813603 ) on Wednesday July 19, 2006 @12:08AM (#15741286)
    Now we can run our computers at 10% capacity, too?

  • by llZENll ( 545605 ) on Wednesday July 19, 2006 @01:04AM (#15741382)
    well the article is so short it's not possible to comment on their implementation. so here are some calculations i did to amuse myself.

    number of neurons in the brain: 100 billion
    http://hypertextbook.com/facts/2002/AniciaNdabahaliye2.shtml [hypertextbook.com]

    transistor count per CPU: ~300 million
    http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795 [anandtech.com]

    average synaptic connections per neuron: 7000
    http://en.wikipedia.org/wiki/Neuron [wikipedia.org]

    total number of synapses: 100 to 500 trillion

    since a 'calculation' for one artificial neuron mostly involves a summation of weights, we can view one total step as 2x the number of synapses we wish to analyze, or 200-1000 trillion calculations for one step. by step i mean summing all inputs and pushing the result to an output for each neuron.
    http://en.wikipedia.org/wiki/Artificial_neuron [wikipedia.org]

    fastest computer in the world FLOPs: 280 trillion
    http://en.wikipedia.org/wiki/Blue_Gene [wikipedia.org]

    pentium 4 FLOPs: 40 GFLOP

    using the fastest computer in the world 1 step would only take around 1 - 5 seconds, not counting storing all of that information.
    http://en.wikipedia.org/wiki/Blue_Gene [wikipedia.org]

    so how fast do we think? well i couldn't find anything on this so let's get a quick estimate. the average neuron is .1m in length; .1 / c = 3.3x10^-10 s, or 333 picoseconds. now let's add in some delay for the chemicals in the neurons to do their thing; this is probably much slower than the electrical impulse, so let's say 3.3 nanoseconds.

    so assuming our computers could network instantly, and store the data used instantly, we would need 3-15 trillion Blue Gene supercomputers to simulate the human brain in real time. or if we are using pentium 4s we would only need 21-105 trillion pentium 4s.

    man that's a lot of cpus.

    number of computers in the world: ~300 million
    http://www.aneki.com/computers.html [aneki.com]
    guess at average FLOPs per computer: 40 GFLOPs
    total FLOPs of worlds personal computers: 1.2 PFLOPs
    time to calculate one brain step if all computers in the world were networked: .2 - .8 seconds

    using moore's law, when will a single computer be fast enough to simulate the human brain in real time?
    200-1000 trillion calculations per step = ~600 trillion every 3.3ns = 181x10^18, or 181 exaFLOPS
    181exaFLOPS / 40GFLOPS = 2^n, n=32
    32*18mo = 48 years based on personal computer technology

    or 28 years based on supercomputer technology

    of course a real neural network will contain highly parallel processing, and using a specific chip design we will probably be able to simulate a brain much sooner, perhaps on the order of 10-20 years.
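    To make the "summation of weights" step above concrete, here is a small sketch (toy sizes of mine, nowhere near brain scale): one step of an artificial neuron layer is one multiply plus one add per synapse, i.e. roughly 2x the synapse count in floating-point operations.

```python
# One "step" for a bank of artificial neurons: weighted sum, then output.
import numpy as np

rng = np.random.default_rng(7)
neurons, synapses_per_neuron = 10_000, 100
W = rng.normal(size=(neurons, synapses_per_neuron))       # synaptic weights
inputs = rng.normal(size=(neurons, synapses_per_neuron))  # incoming activity

outputs = np.tanh((W * inputs).sum(axis=1))  # ~2 * neurons * synapses FLOPs

print("FLOPs this step ~", 2 * neurons * synapses_per_neuron)
print("sample outputs:", outputs[:3])
```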
    • so how fast do we think? well i couldn't find anything on this so let's get a quick estimate. the average neuron is .1m in length; .1 / c = 3.3x10^-10 s, or 333 picoseconds. now let's add in some delay for the chemicals in the neurons to do their thing; this is probably much slower than the electrical impulse, so let's say 3.3 nanoseconds.

      This is a drastic underestimate of the computational timescale for neurons in the brain. The error on the back of your envelope is that chemical diffusion is a fundamental p

      • yup. The neurons have a lot of chemical work to do before being able to fire again. Most sources I've seen measure a neuron's rate of fire in Hz, not even the kHz you suggest, and certainly not the GHz of the OP.

    • so how fast do we think?

      When I was studying experimental psychology, I calculated the brain's effective "clock speed" as about one tick per 10ms, or 100Hz. Within a factor of two. Of course the brain is immensely parallel and every nerve cell is like a separate "core", so it's still very powerful. What slows it down is using chemical diffusion to pass signals across junctions (synapses). Back in the day, some of our potential protozoan ancestors already had light receptors and emitters - if only they'd u

      • It is meaningless to talk about brains and clock speed. The brain's speed varies wildly depending on the complexity of the operations and how well they fit into the brain's underlying functional architecture.

        For simple addition tasks, an "operation" can take seconds.

        For calculating the kinetics of arm motion needed to juggle 5 balls, there aren't even any "operations" to clock the speed of. It's just a continuous dynamical system.

    • There are so many unfounded or incorrect assumptions in this post that I'm forced to comment.

      A synapse is not a FLOP. Dendrites are computational devices in themselves, and a synaptic activation at one point along the dendritic branch will affect how a synaptic activation elsewhere affects the soma. Also, when neurons fire, the spikes propagate backwards down the dendrite to allow the synapses to learn. Simulating this to even a crude degree of accuracy requires a compartmental model of the den
      • And even if you built such a thing, you still wouldn't understand how it works; it would just be an equally mysterious human intelligence implemented in a moon-sized computer. Also, we are nowhere near understanding the anatomy of the brain to a degree that would permit us to make our moon-sized replication.

        Here I am, brain the size of a planet, and they ask me to simulate a human mind. Call that job satisfaction, 'cause I don't...

      • For cognition, however, you need only model the overall important behavior of the circuit, not the details. An analogous problem would be, given an adder circuit, to model its behavior in a computer... Actually modeling the way signals propagate through the adder is extremely computationally expensive if all you are interested in is adding numbers together.

        Knowing *WHAT* to model about a neuron's behavior is the important part in order to be able to figure out the OP's calculation with any degree of accuracy,
  • There is nothing new to see here, move along.
  • by giafly ( 926567 ) on Wednesday July 19, 2006 @04:44AM (#15741830)
    Our brains keep working despite frequent failures of their component neurons
    Can you remember everything you did ten years ago today? No. In fact, you probably don't remember anything about that day. Are you as intelligent? Again, probably not. And the cliché that you never forget how to ride a bike [iowabicyclecoalition.org]? Also not true. I went thirty years without riding a bike and found I had completely forgotten how (it took 3 months to relearn and get good). So conditioned reflexes don't keep working either.

    We just accept that many (most?) brain functions don't "keep working", fortunately without worrying about it too much.
  • You know, like Sun and IBM offer in their servers? Hot-swappable CPUs, RAM, HDDs, etc...
  • I recall Thinking Machines Corp. [wikipedia.org], which used a biological model of stochastic data linking to allow its supercomputers to grow in complexity. The idea was to build a super-duper-extra-powerful parallel computer modelled on the human brain, give the machine lots of raw data, and let it deduce probable connections. Given that a=b and b=c, it would deduce that a=c. The more it "thought", the more sophisticated (and presumably useful) its internal data models would grow.

    The company was kept alive by DARPA contra

"When the going gets tough, the tough get empirical." -- Jon Carroll

Working...