This is absolutely true, and it's related to the whole "what is intelligence?" question. We tend to equate intelligence with intent and free will because we're used to dealing with humans, or at least with animals that have a survival instinct and thus exhibit a 'will' to go with it.
There's no reason to believe that artificial 'intelligence' will ever exhibit the same sort of intent. There's very little reason to endow a stock-trading application with a sense of self or a need to survive. Even if it had some rudimentary form of these things, for whatever reason, they need not take the form of concern for itself; it's more likely that such a program would be designed to protect its funds!
I can imagine some far future where self-driving cars have, in some sense, a 'will to survive' that makes them 'want' to avoid crashes, but they're unlikely to have the kind of self-awareness or ability to generalize that would be required to turn that into a desire to do anything except avoid traffic accidents, which is all they will ever need to understand about the world.

You could imagine an interstellar space probe built to the level of a full general AI, simply because we'd have virtually no idea what it would run into, but we're centuries from being able to build such a thing. As long as a probe can phone home fairly quickly, self-driving-car-level 'reflexes' are all it needs. Even New Horizons gets by as a totally dumb, remotely controlled instrument, as do the Mars rovers. They don't need to be true AIs, not even close.