Kurzweil's smart-machine predictions are, last I checked anyway, based on a rather brute-force approach to machine intelligence. We understand the basic structure of the brain pretty well: a very slow, massively parallel analog computer. We understand less about the mind, the software that runs on the brain's hardware and manages to simulate a reasonably fast linear computing engine. There's interesting work being done on this, but it hasn't yet been applied to building machine minds.
So one way to get there anyway is basically what Kurzweil's suggesting. Since we understand the basic structure of the brain itself, at some point our man-made computers, extremely fast, somewhat parallel digital computers, will be able to run a full-speed simulation of the brain's actual engine. The mind, the brain's own software, would then be able to run on that engine. Maybe we don't figure that part out for a while, or maybe it's an emergent property of the right brain simulation.
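To get a feel for the scale of that "full-speed simulation", here's a common back-of-envelope estimate of the brain's raw compute. All of the figures are rough, commonly cited ballpark numbers I'm assuming here, not measurements, and real simulation cost could differ by orders of magnitude either way:

```python
# Rough estimate of raw compute to simulate the brain at the "engine" level.
# Every constant below is an assumed ballpark figure, not a measured value.
neurons = 1e11              # ~100 billion neurons
synapses_per_neuron = 1e4   # ~10,000 connections per neuron
updates_per_second = 1e2    # neurons fire on the order of 100 Hz

ops_per_second = neurons * synapses_per_neuron * updates_per_second
print(f"~{ops_per_second:.0e} synaptic ops/sec")  # ~1e+17
```

On this estimate, simulating the engine takes somewhere around 10^17 simple operations per second, which is exactly the kind of target the exponential-growth argument says our hardware eventually hits.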
Naturally, the first machines big enough to do this won't fit on a robot... which is why something like Skynet makes sense in the doomsday scenario. Google already built Skynet, and now they're building the robot army; kind of interesting. The actual thinking part is ultimately "just a simple matter of software". Maybe we never figure out that mind part, maybe we do. The cool thing is that, once the machine brain gets to human level, it'll be a really short time before it gets much, much better. After all, while the human brain simulation is the tricky part, all the regular computer bits still work. So that neural-net simulation will be able to interface with the perfect memory of the underlying computing platform, and everything else that kind of computation does well. It will be able to replace some of the brute-force brain-computing functions with much faster heuristics that do the same job. It'll be able to improve its own means of thinking pretty quickly, to the point that the revised artificial mind will run on lesser hardware. And it may well be that years or decades pass between matching the neural compute capacity of the human mind and successfully writing the code for such a mind. So that first sentient program could conceivably improve itself to run everywhere.
Possibly frightening, which I think is one reason people like to say it'll never happen, even knowing that just about every other prediction about computing growth didn't just come true, but was usually so conservative it missed reality by light-years. And hopefully, unlike in all the doomsday scenarios that make fun summer blockbusters, we'll at least not forget the one critical thing: these machines still need an off switch, a plug to manually pull. In the fiction, it always seems we decide, just before the machines go sentient and conclude we're a virus or whatever, that the off switch isn't needed anymore.