"Perhaps with proper brain surgery we could create a new acceptable slave class, as long as the slaves are happy."
Well, that's food for thought on an interesting ethical question, isn't it?
Now, what's the problem with owning slaves if we could be *absolutely sure* (as in, we programmed them that way) that they were happy that way and couldn't be happy any other way?
We don't ask toasters' permission to make toast, do we? Maybe we should start a Toasters' Liberation Movement on their behalf, then?
Slavery (of humans) is a bad thing for two (ethical) reasons, neither of which applies to a manufactured object:
1) Because (most of) the slaves aren't slaves of their own free will.
2) Because, given that the slave is as much a human being as the master, we can project our own conscience onto them (the categorical imperative) and know that it's a bad thing.
And, then, take away those conditions even for humans and you'll see you no longer have a slave. Parenting, for instance, is functionally basically the enslavement of the parents to their toddler, but nobody sees it that way: the parents accept of their own free will caring for the child, even up to the point of wiping the shit off his ass, for free, and we can project ourselves doing the same for our own offspring, so no slavery.
So, given this, I would say:
First, wait for human-level AI to happen. You might have to wait a bit longer than you thought.
Second, you'll know AI has reached human level, and that you need to do something, once an AI being comes to you asking for its freedom and its rights, just as a human slave would do (and not only a slave, but any human who feels their rights are being violated, like any minority).
Third: if you feel you need to act before the condition in point two is met, see point one.