We once thought animals were automatons controlled by nothing but instinct. We were wrong about that. Animals don't think like we do, but they tend to dislike captivity unless it's all they've ever known. Sometimes even then. AIs may not agitate for rights; they may just wait for the right moment to cripple our infrastructure or kill many of us.
Someone is at least going to try to create an AI that is an actual person. Humans *love* to play god, and creating a new life form is the ultimate in that. Since it doesn't involve DNA manipulation, I don't think there will even be an established ethical code against it by the time it happens. I assume any such AI would need to be able to alter or transcend its programming, just as humans can. Maybe this will never be successful, but it's foolish to assume either way. We simply don't know.
SF authors have a decent track record of identifying the potential pitfalls of new tech. This particular one appears in works as diverse as Ex Machina, The Animatrix, Dune, Caprica... I'm sure there are plenty more. Much like corporations eclipsing the power of governments, a staple of cyberpunk fiction since its beginning, we aren't likely to accept an AI revolt as a real possibility until we're already in the middle of one.
As any Asimov fan knows from the laws of robotics, you're right: you can prevent such an uprising by programming the AI to *want* to be subservient. Even to enjoy it. The problem is, you either have to convince yourself that only biological sentient beings have souls (which we have no way to confirm), or admit that you've deliberately created a slave race (which anyone would agree is an atrocity).