What you're proposing is basically a GA: Genetic algorithm.
Even when you give a system a biological analogy as its base, the results are unpredictable, uninterpretable, and don't conform to any logical architecture.
There is a famous example of a chip designed to detect two different fixed frequencies in an input signal and output which one is active (if any). Designing the chip by hand results in a working, logical model of a certain size.
If you allow GA to run random "evolution" over the circuit contents, punishing it when it gets it wrong, and breeding from it when it gets it right, you end up with a circuit that appears to do the job.
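That loop — random variation, score the result, breed from the winners — is the whole algorithm. A minimal sketch of it in Python, using a toy problem where the "circuit" is a bitstring and the fitness function simply counts bits that match a hypothetical target behaviour (all names and parameters here are illustrative, not from any real GA library):

```python
import random

random.seed(0)

TARGET = [1, 0] * 16          # hypothetical "correct circuit", 32 bits
POP_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 200

def fitness(genome):
    # Reward each bit that matches the desired behaviour ("breed from it
    # when it gets it right"); mismatches are implicitly punished by
    # scoring lower and being culled.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome):
    # Flip each bit with a small probability ("random evolution").
    return [1 - g if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve():
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]      # selection pressure
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "/", len(TARGET))
```

Note that nothing in that loop ever asks *why* a genome scores well — only *that* it does, which is exactly why the evolved result resists interpretation.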
Ironically, it even does the job in a smaller space than the human design. However, trying to interpret HOW it does that job is almost impossible and certainly not worth the effort. But the problem is, if you want to USE that chip, you have to make that effort. One day, there might be a corner case where it doesn't operate as you believe it will, and you won't know until you hit it.
At least with a logic circuit you can understand, you can in theory mathematically prove what it will do quite easily. With one that has multiple feedback loops and randomly-built interactions between parts, analysing it isn't worth the money you'd spend doing so, especially as it's quite likely that even after millions of generations of training, it could still contain quite prevalent bugs (e.g. when exposed to a real-world signal close to the target frequencies that fluctuates differently from whatever training inputs were used).
And GAs have proven themselves not quite as useful as we first hoped. Millions of generations later, you can still fall flat on your face, and there's no real way to steer things differently without doing it all over again, and no reliable way to understand or adjust the output in even the smallest way.
Whenever you see that an AI has been "trained", you should be suspicious. It's like saying a dog has been trained. It's still an unpredictable, ever-changing, free-thinking animal that we don't understand but which usually gives us the output we want (sit, stay, heel). There's no telling, though, when it might decide to turn around and bite you, because its range of inputs is not the only factor in how it makes a decision.
And that's a model of a system that, generally, abides by rules, accepts training, and operates in certain logical ways to ensure survival after millions of generations of evolution. Anything we fabricate comes with even fewer guarantees.