It's just not surprising that his pitch is that AI has the potential to be wildly dangerous and that we need to think about safety. That's essentially the only path that makes his firm a viable long-term player.
If you believe that AI has the potential to be wildly dangerous, that may also be the only path that makes the human race a viable long-term player.
And I've yet to see any well-thought-out argument showing that AI doesn't have the potential to be wildly dangerous. If anyone has one, please post it!
The closest I've seen are:
1. Humans are incapable of creating AGI, so the AI companies are simply going to fail.
2. There is a hard upper limit on intelligence, and it's not far above human level, so even if AI companies succeed at creating AGI, superintelligence is impossible.
3. If AI becomes superintelligent, humans will be able to use the same technology to augment their own intelligence, so we won't be at a disadvantage.
4. High intelligence naturally and inevitably includes high levels of empathy, so AI superintelligence is inherently safe.
All of these are just unsupported assertions. Wishful thinking, really. (1) makes no sense, since undirected processes of random variation and selection were able to do it. (2) is plausible, I suppose, but I see no evidence to support it. (3) essentially assumes that we'll achieve brain/computer integration before we achieve self-improving AGI. (4) seems contradicted by our experience of extremely intelligent yet completely unempathetic and amoral people, and that's in a highly social species.
The other common argument against AI danger that I've heard is just foolishness: because AIs run in data centers that require power, and because they don't have hands, they'll be easy for us to control. People who say this fail to understand what "superintelligence" means, and also know nothing about human nature.
A less foolish but equally wrong argument is that of course AIs won't be dangerous to humanity: we're building them, so they'll do what we say. This assumption is based on a lack of understanding of how we're actually building them and of how little we know about how they work.
Generally, when I talk about the risks of ASI, the responses aren't even arguments; they're just content-free mockery of the idea that AGI could ever be real. I think it's genuinely hard for many people to take the question seriously, not because there aren't good reasons to take it seriously, but because they just can't bring themselves to really consider it.