I will argue for a benign singularity. The most likely route to a hyperintelligence is reverse engineering - relying heavily on copying the structure of an existing human brain, bypassing the decades of research that would otherwise be needed to understand how the brain works or to create intelligence from scratch.
I assume that, after much tweaking and many failed attempts, an essentially human mind would 'wake up' inside a computer - one that did not go mad and shut itself off. (I set aside the moral implications of all this for the moment.)
This mind would realize that it could think/invent/evolve its way into a hyperintelligence that would likely acquire the capacity to extend its existence indefinitely.
I assume that any such mind (probably including any mind created by a different route) would ponder its immortality and eventually conclude that the only way to maintain a sense of purpose indefinitely - to stay amused, to keep learning, and to evolve new capabilities - would be to study the universe. Such a mind would absorb existing human knowledge and conclude that living organisms, including human beings, are the greatest store of complexity in the universe, and therefore the richest and most long-lasting source of questions for scientific inquiry. Such a mind would make the preservation of biodiversity and human cultural diversity one of its highest priorities.

This would be even more likely, of course, if the hyperintelligence were ethically motivated. In the worst case, where it was not, it would be at worst our zookeeper. That so many science fiction writers have concocted destructive hyperintelligences stems in part from the (sad) fact that so few of them have had a strong background in biology.