They aren't that smart if they think machines could ever be sentient. Machines are deterministic. They do what you tell them to.
And what happens if we tell them to behave randomly? A particle filter, for instance, uses randomness to generate a set of candidate states for evaluation. Sensor fusion takes large numbers of highly error-prone sensor readings and merges them into state estimates. Both methods introduce uncertainty into state estimation and therefore give reasoning a non-deterministic foundation. Even if the reasoning process itself is strictly deterministic, its behavior varies with the noisy inputs, so you still get non-deterministic behavior, and that's without introducing any explicit behavioral randomness.
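To make the particle-filter point concrete, here is a minimal sketch. Everything in it (the 1-D stationary target, the Gaussian noise levels, the resampling scheme) is an assumption chosen for the demo, not anything from this thread: randomness drives both the prediction and resampling steps, yet the filter's estimate still converges on the true state.

```python
import math
import random

def gauss_weight(x, sigma):
    # Unnormalized Gaussian likelihood; random.choices normalizes for us.
    return math.exp(-0.5 * (x / sigma) ** 2)

def particle_filter_step(particles, measurement, sensor_noise=1.0, motion_noise=0.5):
    # Predict: jitter each particle with random motion noise.
    particles = [p + random.gauss(0.0, motion_noise) for p in particles]
    # Update: weight each particle by how well it explains the noisy reading.
    weights = [gauss_weight(measurement - p, sensor_noise) for p in particles]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

# Track a stationary target at position 3.0 through noisy sensor readings.
random.seed(0)  # remove the seed and each run behaves differently
particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
for _ in range(20):
    reading = 3.0 + random.gauss(0.0, 1.0)  # error-prone sensor
    particles = particle_filter_step(particles, reading)

estimate = sum(particles) / len(particles)
```

Without the fixed seed, two runs of this same deterministic program produce different particle sets and slightly different estimates, which is exactly the kind of non-deterministic behavior described above.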
But let's be honest: no one (that I've seen) has ever provided a definition of sentience that precludes deterministic responses. Are you proposing irrationality as a fundamental marker of intelligence? I'm not sure I'd call it a feature, but maybe it's an inevitable consequence.
Anyone with enough insight and humility knows there's still an extremely large piece of the puzzle missing in our understanding of life. And you need to understand how something works before you can create it.
I don't think there is an extremely large piece. I think there are hundreds of thousands of little pieces. Also, we create things all the time without understanding them. I mean, most people don't have any idea how mitosis works, and yet we don't have much problem reproducing. Anyway, the point is that I don't think we need a full understanding of human intelligence to create some kind of intelligent agent.