Personal robots are basically mobile computers with servos, and computer software/hardware has a long way to go before it can be considered trustworthy, particularly once it's given as much physical power as a human.
First, there's the issue of trusting the programming. Humans act responsibly partly because they fear reprisal; software doesn't have to be programmed to fear anything, or even to understand cause and effect. How most humans operate is more or less predictable, yet there are many ways software could be programmed to produce the same outward behavior, some of which would make it more like a flowchart than a compassionate entity. People won't know how a given robot is programmed, and the business that writes its proprietary closed-source software likely won't say, either.
Second is the issue of security. It's pretty much guaranteed that personal robots will be network-connected to deliver recommendations, weather updates, friends' statuses, and so on, which opens the Pandora's box of malware. If you think Stuxnet and its kin are bad, wait until autonomous robots are remotely reprogrammed to commit crimes (say, kill everyone in the building), then reset to their original programming to cover up what happened. With a compromised computer you can hit the power button, boot into a live Linux CD, and nuke the partitions; a robot can run away, or attack you, if you try to power it down or remove the infection.
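To make the cover-up worry concrete, here's a minimal sketch (in Python, with made-up data) of why after-the-fact integrity checking doesn't settle the question: a hash comparison flags a modified firmware image, but a robot that restored its original image after doing the damage would pass the very same check.

```python
# Sketch: offline integrity check of a firmware image.
# Catches an image that is still modified when inspected, but an
# attacker who restores the original bytes afterwards leaves no trace
# for this kind of check to find.
import hashlib

def firmware_hash(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

# Digest recorded when the robot was known to be clean (illustrative data).
known_good = firmware_hash(b"original firmware v1.0")

def is_tampered(image: bytes) -> bool:
    return firmware_hash(image) != known_good
```

A still-infected image fails the check, but an image restored to the original bytes passes, which is exactly the "reset themselves to cover up" scenario.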
Even if it's not networked, can you say for certain that the chips and firmware weren't subverted with sleeper functions at the foreign factory? A sleeper could, for example, lie dormant until a certain date arrives. Then there's the issue of someone with physical access deliberately reprogramming the robot.
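A date-triggered sleeper is trivially small in code terms, which is part of what makes it hard to spot. A toy illustration (all names and the trigger date are hypothetical; a real implant would hide in firmware or silicon, not readable Python):

```python
# Toy "sleeper" logic: behaves normally until a hard-coded date passes,
# then silently overrides commands. Trigger date is invented.
from datetime import date

ACTIVATION_DATE = date(2030, 1, 1)  # hypothetical activation date

def motor_command(requested: str, today: date) -> str:
    if today >= ACTIVATION_DATE:
        return "malicious_override"  # sleeper payload takes control
    return requested                 # indistinguishable from normal until then
```

Before the trigger date the function is indistinguishable from an honest pass-through, so black-box testing at the factory or in review would never see the malicious branch.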
Finally, the Uncanny Valley has little to do with the issue. It may affect how well the robot can mollify a frightened person, but not how proficient it is at providing assistance. If one human is caring for another and something unusual happens to the person being cared for, the caregiver has instincts and common sense about what to do, even if that just means calling for help. A robot may only be programmed to recognize certain specific problems and ignore all others: it may recognize seizures or collapses, for example, but not choking.
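The failure mode above is essentially a lookup table with a do-nothing default. A minimal sketch, with hypothetical event and handler names, of how "recognizes seizures and collapses but not choking" comes about:

```python
# Sketch: hard-coded emergency dispatch. Handlers exist only for the
# events the programmers anticipated; everything else falls through to
# a do-nothing default instead of even a generic call for help.
HANDLERS = {
    "seizure": "run_seizure_protocol",
    "collapse": "call_emergency_services",
}

def respond(event: str) -> str:
    # "choking" was never added to the table, so it is silently ignored.
    return HANDLERS.get(event, "no_action")
```

A human caregiver's fallback for an unrecognized emergency is "call for help"; here the fallback is whatever the programmers chose, and nobody outside the company knows which it is.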
In practice, I don't think people will trust personal robots with much responsibility or physical power until some independent tool exists to perform an automated code review of arbitrary hardware and software (by something resembling non-invasive decapping), regardless of instruction set or interpreted language, and to present the results in a summarized fashion similar to Android App Permissions. Furthermore, it must notify the user whenever the programming is modified. More plausibly, the robot could just be completely hard-coded, with some organization doing a code review of each model, and end-users praying they got the same version that was reviewed.
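The two pieces of that hypothetical tool reduce to: map extracted capabilities to short human-readable labels, and compare the current build against a digest recorded at review time. A rough sketch, with invented capability names and labels:

```python
# Sketch of the hypothetical review tool's output side:
# (1) summarize extracted capabilities Android-permissions-style,
# (2) flag any modification since the reviewed build.
import hashlib

CAPABILITY_LABELS = {
    "net": "Full network access",
    "motors": "Control of limbs/motors",
    "mic": "Record audio",
}

def summarize(capabilities: set) -> list:
    # Unknown capabilities are surfaced rather than hidden.
    return sorted(CAPABILITY_LABELS.get(c, f"Unknown capability: {c}")
                  for c in capabilities)

def modified_since_review(image: bytes, approved_digest: str) -> bool:
    return hashlib.sha256(image).hexdigest() != approved_digest
```

The hard part, of course, is the extraction step this sketch takes for granted: getting trustworthy capabilities and bytes off arbitrary silicon, which is why the hard-coded-and-reviewed model is the more plausible outcome.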