I read the article just to see how it addresses this obvious objection, but it does not.
Almost everywhere in the article, you could replace the role of AI with nuclear weapons - it's basically just "what if technological development leads inevitably to self-annihilation." (And for now, nuclear weapons are a much stronger contender for this role than AI).
Right you are. You can get a publication out of a monocausal theory explaining the Fermi Paradox, so every time a real or (in this case) supposed danger of technology comes up, it gets proposed as the explanation of the Fermi Paradox. All of these proposals fail to understand Fermi's original insight.
To explain the apparent absence of extraterrestrial intelligence, under the assumption that the evolution of species similar in abilities to humans is common in the Milky Way, these "explanations" have to apply to every such species. This is what makes the Fermi Paradox a paradox: there cannot be a single one that escapes the supposed filter, or it would fill the galaxy in a cosmically and geologically brief time, even under a very slow model of the spread. Why would some particular filter happen to every species everywhere? The notion that every species would have a nuclear war that wipes it out, or create "grey goo" nanotech, or be destroyed by super AI - and that none would itself become a spacefaring civilization - as if this were a physical law that cannot be escaped (like the speed-of-light limit), is not really a theory; it is a science fiction story premise. It would rise to the level of a theory if anyone could show a plausible reason why it would be universal, but no one peddling these papers attempts to do that, most likely because they can't.
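To see why even a very slow spread is "cosmically brief," here is a back-of-envelope sketch. Every number in it (galaxy size, ship speed, hop distance, settling pause) is my own illustrative assumption, not something from the argument above; the point is only that the crossing time comes out tiny next to the galaxy's ~10-billion-year age.

```python
# Illustrative numbers only - a deliberately slow expansion model.
GALAXY_DIAMETER_LY = 100_000   # Milky Way stellar disk, order of magnitude
SHIP_SPEED_C = 0.001           # assumed 0.1% of light speed (no exotic propulsion)
HOP_LY = 10                    # assumed distance to the next settled star
PAUSE_YEARS = 5_000            # assumed settling time before launching again

travel_per_hop = HOP_LY / SHIP_SPEED_C        # 10,000 years in transit per hop
years_per_hop = travel_per_hop + PAUSE_YEARS  # 15,000 years per 10 light-years
hops = GALAXY_DIAMETER_LY / HOP_LY            # 10,000 hops to cross the disk
crossing_time = hops * years_per_hop

print(f"{crossing_time:.1e} years")  # ~1.5e8 years - brief on a ~1e10-year timescale
```

So even with slow ships and long pauses at each stop, one surviving species settles the whole galaxy in roughly 150 million years, which is why a filter that merely delays expansion resolves nothing.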
Then there is a trope that gets repeated when this subject comes up: that we have only "one data point of evidence" bearing on it - which sounds like a wise observation... until you think about it carefully. In terms of the development of human-like intelligence, we have had an astronomical number of experiments conducted by evolution with animals here on Earth over the last 400 million years. Whether you count separate animal-years, animal-species-years, or the total number of animal species that have ever existed, and even if you throw out all the insect species, you still end up with an astronomical number of data points where human-style intelligence never developed. And if you take a "micro-look" just at the mammal lineages that finally led to humans, there is no evident tendency to develop human-type intelligence. Our closest living relatives (the other great apes) developed dexterity, social organization, and intelligence similar to early Homo several million years ago and have shown no tendency to develop further toward Homo capabilities. Even within Homo, what we suspect from the fossil evidence is that a peculiar combination of environmental events caused an abrupt progression to much larger forebrains and the development of language and symbolic thought - no indication of inevitability can be seen.
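The "astronomical number" claim is just multiplication. The figure below uses my own rough assumption for the average standing count of non-insect animal species (the true value is debated); whatever defensible number you plug in, the tally of species-years without human-level minds comes out enormous.

```python
# Rough illustrative arithmetic - the species count is an assumption, not data.
YEARS_OF_COMPLEX_ANIMALS = 400e6  # ~400 million years of complex animal life
AVG_NONINSECT_SPECIES = 1e6       # assumed average non-insect animal species alive at once

species_years = YEARS_OF_COMPLEX_ANIMALS * AVG_NONINSECT_SPECIES
print(f"{species_years:.0e} species-years")  # 4e+14 "experiments" without human-style intelligence
```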
This evidence suggests that human-style intelligence is a very low probability event, something astronomically rare. At the same time, the fact that we can now study a large number of exoplanet systems (over 4,000 systems confirmed to date) has given us hard data to estimate how frequent Earth-like planets really are. The developing science here suggests that we are a "rare Earth" indeed. The combination of planetary system configuration (with a single large Jupiter) and early large-Moon formation, in the Goldilocks zone around a non-flaring single star, is very uncommon - thus far we do not have a single candidate system that matches the minimum requirements, and that does not even include the improbable Moon filter. If Earth-like planets are rare, and a human-like species is incredibly unlikely even when one occurs, that starts making the case that the expected number of planets with civilizations is itself a very small number.
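The shape of that argument is a Drake-style product: two independently rare factors multiply into an expectation below one. The probabilities below are placeholders I chose purely for illustration, not estimates from the comment or from the exoplanet data.

```python
# Minimal Drake-style sketch; both probabilities are assumed placeholder values.
STARS_IN_GALAXY = 2e11   # order-of-magnitude Milky Way star count
P_EARTHLIKE = 1e-6       # assumed: right planet + lone Jupiter + large Moon + quiet star
P_INTELLIGENCE = 1e-6    # assumed: human-style intelligence arising given such a planet

expected_civilizations = STARS_IN_GALAXY * P_EARTHLIKE * P_INTELLIGENCE
print(f"{expected_civilizations:.1f}")  # 0.2 - an expectation below one, no paradox left
```

When the expected count is below one, no universal filter is needed: the apparent silence is just what a small expectation looks like.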