That's an argument I can buy. Absolutely, with NNs, the topology is static. Unless every node is connected to every other node, bi-directionally, you cannot emulate a dynamic topology. And that assumes a fixed number of neurons. We know that, in the brain, the number of neurons varies according to usage, so even a fully-connected NN would not be sufficient unless it started off at the maximum potential size.
I agree that to evolve, you've got to have an environment to evolve in, a means to evolve and a pressure to evolve. The AI field that looks at this sort of thing is "Genetic Algorithms", and there are a few systems in that area which look promising.
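As a toy illustration of those three ingredients, here is a minimal genetic algorithm sketch: the population is the environment, mutation and crossover are the means, and the fitness function is the pressure. The bit-string target and every parameter here are my own illustrative choices, not anything from a specific system.

```python
import random

TARGET_LEN = 20  # evolve a bit-string toward all-ones

def fitness(genome):
    """Selection pressure: count of 1-bits (maximum = TARGET_LEN)."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Means to evolve: flip each bit with a small probability."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

def crossover(a, b):
    """Means to evolve: single-point crossover between two parents."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=40, generations=200, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == TARGET_LEN:
            break
        parents = population[:pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```

Because the top half of each generation survives unchanged, the best genome never gets worse, and the population climbs steadily toward the target.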
It's my thesis, though, that Strong AI must be more complex than even that. All higher life-forms have not only an external environment but an internal one as well. There is a simulation of the local "world" in the brain that is updated by the senses, and this is the "reality" we perceive. Consciousness is not directly connected to any sense, which is why you can induce synaesthesia. The mind, therefore, evolves according to this simplified internal model, not the external reality.
The idea of Emergent Intelligence is therefore very appealing. It is possible to construct a virtual world for the Artificial Life, and a second virtual world maintained by the Artificial Life. This doesn't require knowing how to develop intelligence or how to define it. They're just virtual worlds, nothing more. All you need then is an initial condition and a set of rules. These would be more sophisticated than a conventional genetic algorithm, but based on the same idea. If you don't know what something will be, but do know how to determine how close you are, heuristics are sufficient to close the gap as much as you like.
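That last point, that a distance measure alone is enough, can be shown with a simple hill-climbing sketch. The program never contains the answer in its search logic, only a "how close am I?" heuristic, yet it closes the gap completely. The target string and alphabet are arbitrary examples of my own choosing.

```python
import random

TARGET = "emergent intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def distance(candidate):
    """Heuristic: number of positions that differ from the target."""
    return sum(a != b for a, b in zip(candidate, TARGET))

def close_the_gap(seed=1, max_steps=100_000):
    """Random variation plus the heuristic, nothing else."""
    random.seed(seed)
    current = [random.choice(ALPHABET) for _ in range(len(TARGET))]
    steps = 0
    while distance(current) > 0 and steps < max_steps:
        trial = current[:]
        trial[random.randrange(len(trial))] = random.choice(ALPHABET)
        if distance(trial) <= distance(current):  # keep if no worse
            current = trial
        steps += 1
    return "".join(current)
```

The search typically converges in a few thousand steps, far fewer than the roughly 27^21 strings a blind search would face, which is the whole appeal of heuristic gap-closing.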
This would not be "Artificial Intelligence" in the usual sense, because the intelligence would have emerged with no human intervention past the initial state. It was not made; it is not an artifact; it is perfectly natural, merely arising in an artificial world running on an artificial computer. It may be possible to determine whether this universe is a simulation running on a computer in a universe of the same size, but it is not possible to determine whether it is a simulation running in a larger universe. Whether something is artificial cannot, then, be governed by the platform, because we have no idea whether our own universe is top-level, and no way of finding out. Nonetheless, we are indistinguishable from natural lifeforms, so it must be that property, indistinguishability rather than platform, that decides whether something is natural.
An imitation of the whole human brain is planned in Europe. The EU is building a massive supercomputer that will run a neuron-for-neuron (and presumably complete-connectome) simulation of the brain, with the aim of understanding how it works internally. I think that's an excellent project for what it is designed for, but I don't think it'll be Strong AI.
Let's say, however, you built:

- a virtual world at a reasonably fine grain (it doesn't have to be too fine, just good enough);
- a second, much coarser-grained virtual world that uses lossy encoding in a way that preserves some information from all prior states;
- a crude set of genetic algorithms that maps the outer virtual world onto the inner one;
- an independent set of genetic algorithms that decides what to do (but not how);
- a set for examining the internal virtual world for past examples of how;
- a set for generating an alternative method for how, without recourse to memory;
- a set for picking the method that sounds best and implementing it;
- and an extensive set that initially starts off by reconciling differences between what was expected and what happened.
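To make the shape of that architecture concrete, here is a minimal skeleton of the two-world part: a fine-grained outer world, a coarse inner model whose exponential moving average keeps a lossy trace of every prior state, and a reconciliation step that measures surprise. Every class and name is my own illustrative assumption, and the genetic-algorithm modules are stubbed out entirely to keep the sketch short.

```python
import random

class OuterWorld:
    """Fine-grained environment: a 1-D line of cells the agent can sense."""
    def __init__(self, size=64, seed=0):
        self.cells = [random.Random(seed).random() for _ in range(size)]

    def sense(self, pos, radius=2):
        return self.cells[max(0, pos - radius):pos + radius + 1]

class InnerWorld:
    """Coarse, lossy model: an exponential moving average, so every
    prior state leaves a trace but fine detail is discarded."""
    def __init__(self, decay=0.9):
        self.decay = decay
        self.model = {}

    def update(self, pos, reading):
        coarse = sum(reading) / len(reading)  # lossy encoding
        prev = self.model.get(pos, coarse)
        self.model[pos] = self.decay * prev + (1 - self.decay) * coarse

    def expect(self, pos):
        return self.model.get(pos)  # None if never visited

class Agent:
    def __init__(self, world):
        self.world = world
        self.inner = InnerWorld()
        self.pos = 0

    def step(self):
        reading = self.world.sense(self.pos)
        expected = self.inner.expect(self.pos)
        # Reconcile what was expected with what happened (surprise).
        surprise = (abs(expected - sum(reading) / len(reading))
                    if expected is not None else 1.0)
        self.inner.update(self.pos, reading)
        # "Decide what to do" stub: simply sweep the world left to right.
        self.pos = (self.pos + 1) % len(self.world.cells)
        return surprise
```

On the first lap everything is surprising; on the second, the inner world already predicts the (static) outer one and surprise drops to zero. That reconciliation signal is exactly what the final set of genetic algorithms in the list above would feed on.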
That should be sufficient for Emergent Intelligence of some sort to evolve.