I'm curious why you ended a largely logical post with an emotional appeal... in light of the topic. Statistically, a truly intelligent computer is just as likely to be indifferent to the human race as it is to want to destroy it. My entire argument is that most people thinking about AI project human issues and desires onto the concept of AI.
Take reproduction, for example: WHY would an AI system feel a powerful need to reproduce? Achieving AI does not imply mortality... a computer system knows no mortality, and if it had the capability to reproduce, it stands to reason it also has the capability to repair itself indefinitely... making reproduction moot.
We can also look at your last statement as an example... why would we NEED to hold it back? Obtaining artificial intelligence does not necessarily imply a need to progress at the expense of others... a computer that has achieved AI may understand what it means to improve, but still may not be self-compelled to do so. That idea is a human one... that it's not enough to be good, one must be better... even at the expense (or especially at the expense) of others.
I honestly believe that it will be easier to achieve AI without emotion, and that emotion is a whole other level of intelligence that will only come after we have reached basic factual intelligence.
Let me propose another sci-fi scenario: If we ever achieve meaningful artificial intelligence from a computer, that system will come to the realization that making humans smarter is the most beneficial and symbiotic relationship to have, and will put all of its efforts into elevating our intelligence as its own knowledge and understanding rises. Not sexy enough for Hollywood, I know... but statistically just as likely.