Think about it this way--
You have a system that does $FOO.
You aren't sure exactly how it does $FOO. You see that inputs go in, some magical process $BAR happens inside, and $FOO comes out.
Strong AI strives to reproduce this $FOO.
The issue is that the process $BAR is very much dependent on what the system is built from (in this case, complex organic molecules and saline ions). Understanding $BAR is insanely hard, because $BAR is carried out in a highly parallelized fashion, with many, many subprocesses going on, many of which are highly dependent upon how the system is physically constructed, and exist solely because of that method of construction.
So, you want to build an artificial system that takes the same inputs, does $BAR, and gets $FOO.
Do you:
1) Slavishly reimplement millions of models in the new medium's physical construction, to emulate the quirks and behaviors of the target system's physical construction, wasting huge amounts of energy and producing a system that is actually *MORE* complex than the original...
OR
2) Deconstruct all the mechanisms at work in the physical system that currently performs $BAR to get $FOO, evaluate which of those are hardware-dependent and can be removed or adapted into high-efficiency analogues on the new hardware platform, and build only the components actually needed for $BAR to be accomplished, to generate $FOO?
The former will most certainly get you $FOO, but is HORRIBLY INEFFICIENT, and does not really shed light on what is actually needed to get $FOO.
The latter is MUCH HARDER to do, as it requires actually understanding the process, $BAR, through which $FOO is attained. It will, however, yield the higher-efficiency synthetic system, AND the means to prove that it is the best possible implementation.
Basically, it's the difference between building a Rube Goldberg contraption vs. an efficient machine.
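Here's a deliberately silly toy sketch of the contrast, in Python. All the names are made up for the example: $FOO here is just "add two numbers", and the "original hardware" is an imaginary marble-drop adding machine.

    # Toy illustration (hypothetical names): $FOO = "add two numbers".

    def emulated_adder(a: int, b: int) -> int:
        # Approach 1: slavishly emulate the original mechanism.
        # Simulate each marble dropping through the machine one at a
        # time, reproducing a quirk (one-by-one counting) that exists
        # only because of how the physical machine happens to be built.
        total = 0
        for _ in range(a):   # drop a's marbles, one per step
            total += 1
        for _ in range(b):   # then drop b's marbles
            total += 1
        return total

    def understood_adder(a: int, b: int) -> int:
        # Approach 2: once we understand that $BAR is just addition,
        # we use the new platform's native operation directly.
        return a + b

    assert emulated_adder(3, 4) == understood_adder(3, 4) == 7

Both produce the same $FOO, but the emulation burns a+b steps of work on what the new platform does in one native operation -- and you only get to write the one-liner once you actually understand $BAR.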