Most of these new AI tools have gained their leap in performance by incorporating the Transformer architecture in some form, in part or in whole. Transformers are the backbone of LLMs.
Even where Transformers aren't used these days, they're often imitated. For example, the top vision-model leaderboards are a mix of ViTs (Vision Transformers) and hybrids (CNN + Transformer), though some "pure CNNs" still rank highly. But the best-performing pure CNNs now borrow techniques modeled on what Transformers do, e.g. filtering information through an attention-like weighting mechanism.
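To make "attention" concrete, here's a minimal sketch of scaled dot-product attention, the core Transformer operation, in plain NumPy. It's deliberately simplified: a single head, no masking, and no learned projection matrices, just the weighting step itself.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: each output row is a weighted average of
    the rows of V, with weights set by how well the corresponding query
    matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    # Softmax over each row so the weights are positive and sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # blend the values using those weights

# Toy usage: 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (4, 8)
```

The point of the mechanism is that every token's output can draw on every other token, with data-dependent weights, which is the property the attention-imitating CNN designs are trying to approximate.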
The simple fact is that what enabled LLMs is enabling most of this other stuff too.