If it isn't self-aware, it isn't AI. It's just a useful application.
When it becomes intelligent, it will be able to reason, using induction, deduction, intuition, speculation and inference to pursue an avenue of thought. It will understand, and have its own take on, the difference between right and wrong and between correct and incorrect; it will be aware of the difference between downstream conclusions and axioms, and of the potential volatility of the latter. It will establish goals and pursue behaviors intended to reach them. This is certainly true if we continue to aim at a more-or-less human/animal model of intelligence, but I think it likely to be true even if we manage to create an intelligence based on other principles. Once the ability to reason is present, the rest, it seems to me, follows quite naturally as a consequence of being able to engage in philosophical speculation. In other words, if it can think generally, it will think generally.
He's right, though, about the confusion between intelligence and autonomous action. Which goals are directly achievable is constrained by the degree of autonomy allowed to such an entity. If you give it human-like effectors and access, then the limits on it are no tighter than those on any particular human, and likely looser. If you don't allow autonomy, and you control its access to all networks, say, input only, with output limited to speech directed at humans in its immediate vicinity, and you then select those humans carefully and provide effective oversight, there's every reason to think you could limit the entity's ability to achieve goals, no matter how clever it is.
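To make that containment idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical and invented for illustration, the channel names, the operator list, and the `ContainmentPolicy` class alike; it just encodes the whitelist I described: input-only networking, voice-only output, vetted listeners, nothing else.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Channel(Enum):
    """Hypothetical I/O channels a contained entity might request."""
    NETWORK_IN = auto()   # read-only feeds from outside networks
    NETWORK_OUT = auto()  # outbound network traffic of any kind
    SPEECH_OUT = auto()   # vocal output to humans in the immediate vicinity
    ACTUATORS = auto()    # physical effectors


@dataclass
class ContainmentPolicy:
    """Input-only networking, voice-only output, vetted human listeners."""
    allowed: frozenset = frozenset({Channel.NETWORK_IN, Channel.SPEECH_OUT})
    vetted_humans: frozenset = frozenset({"operator_a", "operator_b"})

    def permit(self, channel: Channel, human: Optional[str] = None) -> bool:
        # Anything outside the whitelist is denied outright.
        if channel not in self.allowed:
            return False
        # Speech is delivered only to carefully selected, overseen humans.
        if channel is Channel.SPEECH_OUT:
            return human in self.vetted_humans
        return True


policy = ContainmentPolicy()
assert policy.permit(Channel.NETWORK_IN)                # input only: allowed
assert not policy.permit(Channel.NETWORK_OUT)           # no network egress
assert policy.permit(Channel.SPEECH_OUT, "operator_a")  # vetted listener
assert not policy.permit(Channel.SPEECH_OUT, "stranger")
assert not policy.permit(Channel.ACTUATORS)             # no effectors
```

The point of the default-deny structure is that cleverness inside the box doesn't help: a channel that was never granted can't be talked, bargained, or reasoned open from within the policy itself.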
Now, as to whether we are smart enough or cautious enough to so restrain a new life form of this type, that's a whole different question. Ethicists will eagerly try to weigh in, and I would speculate that the whole question will become quite a mess, quite rapidly. In the midst of such a process, we may find the questions have become moot. An AI built from computing systems poses a problem of easy replicability, and just because one group has announced its work and is open to debate on the issue doesn't mean there isn't another operating entirely without oversight somewhere else.
Within the bounds of the human/animal model, it'll be a few years yet before we can build at a practical neural density sufficient to support a conscious intelligence. Circuit density is trucking right along, and the curve will clearly get us there, just not yet. So I don't expect this problem to arise in this context quite yet, although I do think it is inevitable within the next few decades, presuming only that we continue on as a technically advancing civilization. In a non-human/animal model, we really can't make any trustworthy time estimates. If such an effort succeeds, it'll surprise the heck out of everyone (except, perhaps, its developers), and we'd best be pretty quick off the starting line to decide exactly how much access we want to allow. Assuming we even get the chance.
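For a rough sense of what "the curve will get us there" means, here's a back-of-the-envelope extrapolation. Every constant in it is an assumption on my part, not a measurement, and a transistor is at best a crude stand-in for a synapse; the point is only the shape of the arithmetic.

```python
import math

# Back-of-the-envelope only; all three constants are assumptions.
HUMAN_SYNAPSES = 1e14    # commonly cited order-of-magnitude estimate
CURRENT_DEVICES = 1e11   # assumed device count on a large present-day chip
DOUBLING_YEARS = 2.0     # assumed density-doubling period, Moore-style

doublings = math.log2(HUMAN_SYNAPSES / CURRENT_DEVICES)
print(f"{doublings:.1f} doublings, roughly {doublings * DOUBLING_YEARS:.0f} years")
# -> 10.0 doublings, roughly 20 years
```

Under those admittedly loose assumptions, you land in the "next few decades" range, which is why I'd call the timing uncertain but the arrival inevitable.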
The first issue with an AI that has autonomy is the same as the issue with Gandhi, Hitler, and your beer-swilling neighbors: a highly motivated and/or fortunate individual can get into the system and change it radically using social tools alone. Quickly, too.
The second issue is that such an entity might well have computer skills that far exceed any human's; if so, that represents a new type of leverage, and so far we have seen only the barest hints of how far such leverage could exert forces of change. In such a circumstance, everyone would be wise to listen to the dystopians, if for no other reason than that we don't like what they're saying.
Best to see what it is we have created before we allow that creation to run free. I'm all for freedom when the entities involved have like-minded goals and concerns. But there's a non-zero and not-insignificant possibility here that what we create will not, in fact, be like-minded.