Probably not - weak AI is typified by directly encoding domain knowledge about human capabilities into state machines, and isn't usually meant to be neuroplausible or human-like. I believe the substrate here is wrong: real organisms learn how to do these things (either as individuals or across generations, through selection that builds and encodes instinct), and that knowledge is integrated. I don't think weak AI research methods are likely to produce an integrated being with all of these capabilities.
I'm sticking my neck out a bit here, though; I'm not claiming that weak AI research would be useless. Sufficiency versus usefulness is a complicated topic.
Also, my research was in neuroscience (led by cognitive modeling), not AI. It's a neighbouring field, but take what I say with at least a grain of salt.