This idea that in order to achieve intelligence you need to understand how the brain works is preposterous.
We don't understand how grandmasters play chess, and yet we can build machines that play chess better than any grandmaster. The same thing will happen with more and more skills, and we'll get to a point where it will be clear that machines are more intelligent than humans.
2029 sounds optimistic to me, but the arguments in TFA are very weak:
* "What exactly does as-smart-as-humans mean?" It means "as good as humans at most tasks". The precise definitions won't matter when you actually see the machine in action.
* "Human intelligence is embodied." But artificial intelligence need not be embodied. If we can make a machine as smart as Stephen Hawking, I think we have done OK. I don't think his embodiment is a key part of his intelligence.
* "As-smart-as-humans probably doesn’t mean as-smart-as newborn babies, or even two year old infants." Of course not, but there is no reason a machine would have to learn at the same pace we do, or from the same sources, or in a similar fashion. Going back to the computer chess analogy, a grandmaster requires years of experience to learn how to play well, while a program can parse a large database of games and learn from them in a matter of hours or days.
* "Moore’s Law will not help." This is absurd. The paragraph itself goes on to acknowledge that it will help, but that computing power is not the whole story. Of course it's not the whole story! But it will certainly help.
* "The hard problem of learning and the even harder problem of consciousness." Machine Learning is a very active discipline, with many recent successes. I don't think learning is a serious obstacle. I don't see a problem of consciousness anywhere. "Consciousness" sounds like a new name for "the soul" to me: It's likely to be an attribute that we assign to people as part of the theory of mind, not an actual thing we need to produce and insert into our machines. In any case, it has very little to do with intelligence.
It won't matter whether we know what makes humans intelligent, or what intelligence is, or what consciousness is: the proof will be in the pudding. When you see machines that surpass humans at most tasks we think of as requiring intelligence, we'll have intelligent machines. And philosophers can continue to argue about definitions all they want.