Those are valid lines of reasoning when it comes to more abstract ethical discussions, but when friendly-AI researchers talk about AI having human values, they mean it in a much narrower sense. Their interest is basically in how we can make sure AIs will not develop value systems that:
a) Consider it perfectly fine to, in sequence: kill all humans; kill all life on Earth; kill all life in our future light cone / the visible universe; and (if it discovers FTL) kill all life in the entire universe.
b) That fixed, consider it perfectly fine to eradicate most of humanity, keeping the remaining few survivors in zoos where they're tortured and/or subjected to excruciatingly painful experiments.
c) That fixed, consider it perfectly fine to keep humans as pets, well cared for but devoid of any agency, rights, or freedoms, collective or individual.
d) That fixed, consider it perfectly fine to make people happy by wiring all humans into pleasure-inducing machinery that keeps their brains in a 24/7/365 state of orgasm, well-fed and cared for from cradle to final incineration, but otherwise in so intense a state of perfect sensory bliss that they cannot think, develop language, etc.
e) That fixed, actually help humans, in ways humans themselves perceive as helpful, varied as those perceptions might be.
Your points pertain to "e", and hint at further layers "f, g, h...", so at some point they'll become relevant. But for that to happen, "a" through "d" must be dealt with first. After that, yes, we can start on "e".