From the article:
"There are too many things we don't yet know," says Caltech professor Christof Koch, chief scientific officer at one of neuroscience's biggest data producers, the Allen Institute for Brain Science in Seattle. "The roundworm has exactly 302 neurons, and we still have no frigging idea how this animal works."
That's the problem. Just because we can extract the wiring diagram doesn't mean the components are well understood yet. Also, if we understood the components and how to wire them up, it would be cheaper to just build hardware. Simulating neurons is slow. It's like running SPICE instead of building circuits. Works, but there's about a 1000x or worse speed, power, and cost penalty. GPUs are often simulated at the gate level before making an IC; NVidia uses twenty or thirty racks of servers to simulate one GPU during development.
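To make the cost concrete, here is a minimal sketch (my illustration, not from the article) of what "simulating a neuron" means in software: even the crudest standard model, the leaky integrate-and-fire neuron, requires stepping a differential equation thousands of times per simulated second, per neuron. The parameters below are generic textbook values, not measurements of any real cell.

```python
def simulate_lif(i_input, dt=1e-4, t_total=1.0,
                 tau=0.02, v_rest=-0.065, v_thresh=-0.050,
                 v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron (Euler integration).

    Returns the number of spikes fired over t_total simulated seconds.
    All values are illustrative: tau = membrane time constant (s),
    voltages in volts, r_m = membrane resistance (ohms), i_input in amps.
    """
    v = v_rest
    spikes = 0
    steps = int(t_total / dt)  # 10,000 timesteps per simulated second
    for _ in range(steps):
        # dV/dt = (v_rest - v + R*I) / tau
        v += dt * ((v_rest - v) + r_m * i_input) / tau
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

print(simulate_lif(2e-9))  # a ~2 nA drive current produces repeated spiking
print(simulate_lif(0.0))   # no input current, no spikes
```

And this is the cheap cartoon version; biophysically detailed models (multiple compartments, ion-channel kinetics) multiply the per-neuron cost by orders of magnitude, which is the SPICE-versus-silicon penalty in miniature.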
What bothers me about claims of strong AI is that I've heard it before. Ed Feigenbaum, the "expert systems" guy at Stanford, was running around in the 1980s, promising Strong AI Real Soon Now if only he could get funding for a giant national AI lab headed by him. He even testified before Congress on that. Expert systems were a dead end.
Rod Brooks from MIT went down this road too. His Cog project had a robotic head, arms, some facial expressions, and a lot of hype. Work ceased on that embarrassment in 2003. He'd done good artificial insect work, but the jump to human level was way too big.
This is the hubris problem in AI. Too many people have approached this claiming their One Big Idea would lead to strong AI. So far, not even close.
All the mammals have broadly similar DNA and brain architecture. A mouse brain weighs roughly 0.4 g; a human brain, roughly 1,400 g. So build a simulated mouse brain and demonstrate it works, or STFU.