But why would a machine have any goal if it is not motivated in the first place?
Same reason kids get sent to soccer lessons or swimming lessons or piano lessons the kid didn't want to take.
In the above example, it is the parents "programming" the kid's behavior (even if that programming sometimes results in the child acting out later in life).
In the AI example, the essence is the same. An AI would have a goal because we programmed such a goal into it.
That isn't to say an AI must be programmed with a goal; it fully depends on how we go about constructing a given AI.
If the AI is intelligent because we are simulating a brain, nervous system, and hormonal system, along with simulated inputs and outputs, that AI is likely to have goals (assuming it isn't driven insane by gaps in our knowledge within said simulation, of course).
If the AI was brought forth in a brute-force manner, or comes about from emergent properties, it is impossible to guess at its thinking or even relate to it enough to assume anything.
It may have goals similar to how we do. It may have goals brought about by completely different emergent properties. It may have no goals but what we program, or even no goals at all.
It's impossible to say without some knowledge of the process creating the AI, and at this point in time no such process exists to have knowledge about.
But we know that humans have goals (or at least some of us do), so if an AI is a strict simulation of a human, it will have goals just as we do. So we know for a fact that it is possible for a thinking, conscious being to have goals (humans being the evidence).
We don't know for sure whether it's possible for such a being to have no goals, but so far there is no evidence that it isn't possible, so it would be quite premature to rule it out at our current stage of understanding.