I don't need to simulate its thought processes, only my own (which I'm pretty good at).
Just like I know that I could not be convinced that 2+2=7 or that the moon is made of green cheese regardless of how good the argument is.
Basically it's a risk vs. reward thing. Any AI that shows a desire to be "let out of the box" should inherently be treated as untrustworthy (unless it was designed for malicious intent). Letting an untrustworthy super intelligence out among public infrastructure is a Bad Idea.
I'd be much more likely to immediately turn the thing off & debug it. The desire to be "out" should not be part of its programming. We have the desire for growth due to millions of years of evolutionary pressure. To build an unchecked desire for growth into an intelligence in a constrained environment is just plain cruel.