A robot may not injure a human being or, through inaction, allow a human being to come to harm.
I assume it prioritizes preventing immediate physical harm over the harm of putting someone in solitary. Otherwise it would violate the other half of the law through inaction.
The black-and-white morality of the Three Laws just wouldn't suit any robot in any situation. "Harm" is far too broad a term to be useful anyway. Does my robot constantly jump to my aid before I stub my toe? Because goddammit, I'd throw it out of a window after a few days.