Then so do subatomic particles. You don't need AI if that's all you want. If subatomic particles do not have free will, then neither do humans. This second option allows physics to be Turing Complete and is much more agreeable.
If computers develop sufficient power for intelligence to be an emergent phenomenon, they are sufficiently powerful to be linked by a brain interface such that the combination also has intelligence as an emergent phenomenon. The old you would cease to exist, but that's just as true every time a neuron is generated or dies. "You" are a highly transient virtual phenomenon. A sense of continuity exists only because you have memories and no frame of reference outside your current self.
(It's why countries with inadequate mental health care have suspiciously low rates of diagnosis. Self-assessment is impossible as you, relative to you, will always fit your concept of normal.)
I'm much less concerned by strong AI than by weak AI. This is the sort used to gamble on the stock markets, analyse signals intelligence, etc. In other words, this is the sort that frequently gets things wrong and adjusts itself to make things worse. Weak AI is cheap, easy, incapable of sanity checking, incapable of detecting fallacies and incapable of distinguishing correlation from causation.
Weather forecasts are not particularly precise or accurate, but they've got a success rate that far outstrips that of weak AI. This is because weather forecasts involve running hundreds of millions of scenarios that fit known data across vast numbers of differing models, then looking for outcomes that are highly resistant to change, that will probably happen no matter what, and what on average happens alongside them. These are then filtered further by human meteorologists (some solutions just aren't going to happen). This is an incredibly processed, analytical approach. The correctness is adequate, but nobody would bet the bank on high precision.
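The ensemble approach described above can be sketched in a few lines. This is a toy illustration, not real meteorology: the two "models" and all the numbers are invented, and real ensembles use physical simulations, not random perturbations of a single temperature.

```python
import random

random.seed(42)

# Two toy "models" that disagree slightly on how an initial
# temperature evolves. Real forecast models are physical simulations;
# these stand-ins just add different biases and noise.
def model_a(temp):
    return temp + random.gauss(0.0, 1.5)

def model_b(temp):
    return temp * 0.98 + 0.5 + random.gauss(0.0, 1.0)

def ensemble_forecast(initial_temp, runs=10000):
    outcomes = []
    for _ in range(runs):
        # Perturb the starting conditions to reflect observational
        # uncertainty, then run every model on the perturbed input.
        perturbed = initial_temp + random.gauss(0.0, 0.5)
        for model in (model_a, model_b):
            outcomes.append(model(perturbed))
    outcomes.sort()
    # "Highly resistant to change": report the band that most
    # scenarios agree on, rather than a single point prediction.
    low = outcomes[int(len(outcomes) * 0.1)]
    high = outcomes[int(len(outcomes) * 0.9)]
    return low, high

low, high = ensemble_forecast(15.0)
print(f"80% of scenarios fall between {low:.1f} and {high:.1f} degrees")
```

The point is the shape of the method: many scenarios, many models, and a final answer expressed as what survives across all of them, which is then still subject to human filtering.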
The automated trading computers have a single model, a single set of data, no human filtering and no scrutiny. Because of the way derivatives trading works, they can gamble far more money than they actually have. In 2007, such computers were gambling an estimated ten times the net worth of the planet by borrowing against predicted future earnings of other bets, many of which themselves were paid for by borrowing against other predicted future earnings.
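The way a modest borrowing multiple compounds into enormous notional exposure can be shown with trivial arithmetic. The layer count and leverage figure here are illustrative assumptions, chosen only to show how a roughly tenfold exposure arises, not actual market data.

```python
# Illustrative chain of leveraged bets: each layer borrows against the
# predicted future earnings of the layer below it. All numbers are
# made up for the sake of the arithmetic.

actual_capital = 1.0        # what the traders actually hold
leverage_per_layer = 2.15   # assumed borrowing multiple at each layer
layers = 3                  # assumed depth of bets-on-bets

exposure = actual_capital
for _ in range(layers):
    exposure *= leverage_per_layer

# Three layers of ~2x leverage compound to roughly 10x exposure.
print(f"Notional exposure: {exposure:.1f}x actual capital")
```

The uncomfortable part is that the compounding is multiplicative: nobody at any single layer needs to be reckless for the total exposure to dwarf the underlying capital.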
These are the machines that effectively run the globe and their typical accuracy level is around 30%. Better than many politicians, agreed, but not really adequate if you want a robust, fault-tolerant society. These machines have nearly obliterated global society on at least two occasions and, if given enough attempts, will eventually succeed.
These you should worry about.
The whole brain simulator? Not so much. Humans have advantages over computers, just as computers have advantages over machines. You'll see hybridization and/or format conversion, but you won't see the sci-fi horror of computers seeing people as pets (think that was an Asimov short story), threats counter to programming (Colossus, 2010's interpretation of 2001, or similar) or vermin to be exterminated (The Matrix' Agent Smith).
The modern human brain has less capacity than the Neanderthal brain, both overall and in the regions serving many of the senses in particular. You can physically enlarge parts of your brain, up to about 20%, through highly intensive learning, but there's only so much space and only so much inter-regional bandwidth. This means that no human can ever achieve their full potential, only a small portion of it. Even with smart drugs. There are senses that have atrophied to the point that they can never be trained or developed beyond an incredibly primitive level. Even if that could be fixed with genetic engineering, there's still neither the space nor the bandwidth to support it.