Do you:
1) Slavishly reimplement, in the new medium, millions of models of the target system's physical construction, to emulate its quirks and behaviors -- wasting huge amounts of energy and producing a system that is actually *MORE* complex than the original...
OR
2) Deconstruct all the mechanisms at work in the physical system that currently performs $BAR to get $FOO, evaluate which of these are hardware-dependent and can be removed or adapted into high-efficiency analogues on the new hardware platform -- and produce only the components needed for $BAR to be accomplished, to generate $FOO?
The former will most certainly get you $FOO, but is HORRIBLY INEFFICIENT, and does not really shed light on what is actually needed to get $FOO.
The latter is MUCH HARDER to do, as it requires actually understanding the process, $BAR, through which $FOO is attained. It will, however, yield the higher-efficiency synthetic system, AND the means to prove that it is the best possible implementation.
Basically, it's the difference between building a Rube Goldberg contraption VS an efficient machine.
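To make the contrast concrete, here's a toy analogy of my own (not from the thread, and obviously far simpler than a nervous system): option 1 emulates the target's physical construction -- an 8-bit ripple-carry adder, gate by gate -- while option 2 re-expresses the same function, $BAR, in the new medium's native primitive. Both produce $FOO (the sum); only one carries the cost of modeling the mechanism.

```python
def ripple_carry_add(a, b, bits=8):
    """Option 1: emulate the adder's construction, gate by gate."""
    result, carry = 0, 0
    for i in range(bits):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                    # sum bit (two XOR gates)
        carry = (x & y) | (carry & (x ^ y))  # carry-out (AND/OR gates)
        result |= s << i
    return result  # final carry drops off the end, as in the hardware

def native_add(a, b, bits=8):
    """Option 2: same function, using the new platform's primitive."""
    return (a + b) % (1 << bits)
```

Both return 44 for inputs 200 and 100 (wrapping at 8 bits), but the first one tells you *how* the original hardware did it, at the price of simulating every gate; the second tells you nothing about the mechanism, and that gap is exactly what option 2 requires you to have already closed.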
We've been trying, in various ways, to do #2, but we can't do it yet. So instead we're trying to do #1, analyse it, and then do #2. You say that we should 'produce only the components needed', but really, that's the crux of the matter: we don't know what the needed components are. We can't yet simulate even a worm at either the individual-cell OR the functional level; see the OpenWorm project (http://www.openworm.org/) for an attempt at the former. We can use that sort of model organism to figure out what the important features are, model those, and move forward -- but it seems unreasonable to complain that full nervous-system modeling is the wrong approach when the alternatives haven't worked yet.