(*) On the Internet. Badly.
Research moves in cycles set by the length of research careers. A paradigm shift can happen when most of the current population of researchers stops working in AI and thereby stops flooding the world with variations on the same brute-force trick. That will give a new generation of students breathing room to be genuinely creative and innovative. But before any of that happens, people like Musk and Zuckerberg need to run out of money or die. When the money and the interest dry up, AI research will no longer be attractive. That's when the dedicated, methodical kids with a real interest in the field will get their chance to shine.
The people you mention who didn't use AI are essentially victims of the AI cheaters, whose behaviour provokes predictable countermeasures. So is the wider journal readership, who are being hoodwinked with fake papers and fraudulent datasets.
In other parts of the world there are fledgling maggot movements too. What is particularly interesting and relevant about them is that they often parrot ideas and misconceptions that simply do not apply where those movements are forming, because of cultural and legal differences in those countries.
You can see this by watching marches, protests, and interviews in other countries. Most of the time the slogans and demands make no sense locally, yet they are carbon copies of American ideas.
This tells you two things: 1) the maggots in America and abroad are being paid to propagate conservative hate speech in their own countries; 2) the groups paying them are American, because the talking points are American conservative talking points even in the rest of the world, where they make no sense. The local maggot movements are simply paid to push the American talking points in their local cultures, and nobody bothers to adapt them or check whether they make any sense at all.
The last thing this tells you is this: if you follow the money to the source, then you will know who needs to be stopped for the good of the world. When the payola stops, the movements will stop. The ball is in Americans' court (For now. Don't sit on your ass too long).
The deep learning revolution did not solve the problem you claim it solved. What deep learning does is let much more complex piecewise linear functions be modelled efficiently (if you use relu, that is, which is the most popular activation (*)). That's both a blessing and a curse.
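To make the "piecewise linear" point concrete, here's a minimal sketch (assuming only numpy; the weights are random and illustrate the function class, not any trained model). A one-hidden-layer relu net is affine wherever the on/off pattern of its hidden units is fixed, so on a 1-D input it is piecewise linear with at most (hidden units + 1) pieces:

    # Minimal sketch: a relu net is piecewise linear (random weights,
    # chosen only to show the function class, not a trained model).
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)  # 8 hidden units
    W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

    x = np.linspace(-5.0, 5.0, 10_001)
    pre = W1 @ x[None, :] + b1[:, None]   # pre-activations, shape (8, N)
    out = (W2 @ np.maximum(0.0, pre) + b2[:, None]).ravel()  # the net output

    # Within each distinct on/off pattern of the hidden units the net is
    # affine, so each pattern seen along the line is one linear piece.
    patterns = {tuple(col) for col in (pre > 0).T.tolist()}
    print("linear pieces on this interval:", len(patterns))  # at most 9 here

Stacking more layers multiplies the number of pieces, which is the "more complex" part of the claim; every piece is still linear, which is the curse.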
What actually happened in the deep learning revolution is that humans solved the problem of designing basic features over many generations of papers, progressively simplifying the solution and discovering what matters and what doesn't. Algorithms were weeded out until we reached the point we are at now: the data input is matched to the algorithm, and the algorithm of choice happens to be of the deep learning type. It only looks as though deep learning is good for every dataset; that's not true.
For example, in vision problems, try training a deep network on input that is not in the form of pixels and not in the form of multiple colour planes. It will fail miserably; the quality of recognition will be abysmal. That's why data design is so important: you have to know what the strengths of the AI model actually are. In this case, the statistical regularities between neighbouring pixels are what enable the CNN layers to extract information. These regularities are an artefact of choosing to stack pixels and colour planes into a rectangular grid. That choice solves most of the problem.
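Here's a hedged sketch of that point (assuming only numpy, and faking "natural" images by smoothing white noise): the correlation between adjacent pixels is large on grid-structured images and vanishes once you apply a fixed random shuffle to the pixel positions, even though every pixel value is preserved. Convolutions exploit exactly this local regularity:

    # Sketch: the neighbour-pixel regularity a CNN relies on, and how a
    # fixed permutation of pixel positions destroys it (numpy only;
    # "natural" images are faked by box-blurring white noise).
    import numpy as np

    rng = np.random.default_rng(0)

    def smooth_images(n, size=32, passes=4):
        imgs = rng.normal(size=(n, size, size))
        for _ in range(passes):  # crude box blur to create local structure
            imgs = (imgs
                    + np.roll(imgs, 1, axis=1) + np.roll(imgs, -1, axis=1)
                    + np.roll(imgs, 1, axis=2) + np.roll(imgs, -1, axis=2)) / 5
        return imgs

    def neighbour_corr(imgs):
        a = imgs[:, :, :-1].ravel()   # each pixel ...
        b = imgs[:, :, 1:].ravel()    # ... and its right-hand neighbour
        return np.corrcoef(a, b)[0, 1]

    imgs = smooth_images(200)
    perm = rng.permutation(32 * 32)   # one fixed shuffle of pixel positions
    shuffled = imgs.reshape(200, -1)[:, perm].reshape(200, 32, 32)

    print("neighbour correlation, grid-structured:", round(neighbour_corr(imgs), 3))
    print("neighbour correlation, shuffled:       ", round(neighbour_corr(shuffled), 3))

A CNN trained on the shuffled version is given exactly the same information, but none of it sits in the small neighbourhoods its filters look at.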
Now, pixels didn't always exist; they were invented quite recently. Look up TV technologies of the 1930s and you'll find it's all about deflecting electron beams. There's really nothing natural about pixels; they're just what our current technologies are based on. And so there's nothing natural about what a deep network does either: it's a system that has been selected for fitness against our current tech stack, for a handful of high-value problem domains. That implies nothing about other problem domains that haven't been studied so intensively.
(*) If you don't use relu but some other smooth activation family for your deep network, there will always be a close piecewise linear approximation, because piecewise linear functions are dense in the continuous functions (uniformly on compact sets). So assuming relu everywhere is not a big loss of generality.
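A quick numerical illustration of this footnote (assuming only numpy): linearly interpolate tanh on a growing set of knots and watch the worst-case error on [-4, 4] shrink. Any fixed smooth activation can be matched this way to whatever precision you like:

    # Sketch of the density claim: piecewise linear interpolation of tanh
    # converges uniformly as the number of knots grows (numpy only).
    import numpy as np

    x_dense = np.linspace(-4.0, 4.0, 100_001)
    target = np.tanh(x_dense)

    for n_knots in (5, 9, 17, 33, 65):
        knots = np.linspace(-4.0, 4.0, n_knots)
        approx = np.interp(x_dense, knots, np.tanh(knots))  # piecewise linear
        print(f"{n_knots:3d} knots -> max error {np.max(np.abs(target - approx)):.5f}")

Each doubling of the knot count roughly quarters the error, as expected for a function with a bounded second derivative.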
If you didn't have to work so hard, you'd have more time to be depressed.