The people you mention who didn't use AI are essentially victims of the AI cheaters, whose behaviour provokes predictable countermeasures. The wider journal readership are victims too, hoodwinked by fake papers and fraudulent datasets.
In other parts of the world there are fledgling maggot movements too. What is particularly interesting and relevant about them is that they often repeat ideas and misconceptions that simply do not apply where those movements are forming, because of cultural and legal differences in those countries.
You can see this by watching marches, protests, and interviews in other countries. Most of the time the slogans and demands make no sense locally, yet they are carbon copies of American ones.
This tells you two things: 1) the maggots in America and abroad are being paid to propagate conservative hate speech in their own countries; 2) the groups paying them are American, because the talking points are American conservative talking points even in parts of the world where they make no sense. The local maggot movements are simply paid to propagate the American talking points in their local cultures, and nobody bothers to adapt them or check whether they make sense at all.
The last thing this tells you is this: follow the money to its source and you will know who needs to be stopped for the good of the world. When the payola stops, the movements stop. The ball is in Americans' court (for now; don't sit on your ass too long).
The deep learning revolution did not solve the problem you claim. What deep learning does is allow more complex piecewise linear functions to be modelled efficiently (if you use ReLU, that is, which is the most popular activation (*)). That's both a blessing and a curse.
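To make the piecewise linear point concrete, here is a minimal sketch (my own toy example, assuming NumPy; the network sizes are arbitrary): a tiny random ReLU network on a 1-D input, with linearity between kinks checked via second differences.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
    W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

    def net(x):
        # one hidden ReLU layer: the output is piecewise linear in x
        h = np.maximum(W1 @ x[None, :] + b1[:, None], 0.0)
        return (W2 @ h + b2[:, None]).ravel()

    x = np.linspace(-3.0, 3.0, 10001)
    d2 = np.diff(net(x), 2)  # second differences of the output
    # prints ~1.0: zero curvature almost everywhere, i.e. the function
    # is linear between kinks, with at most one kink per hidden unit
    print(np.mean(np.abs(d2) < 1e-8))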
What actually happened in the deep learning revolution is that humans solved the problem of designing basic features over many generations of papers, progressively simplifying the solution and discovering what is important and what isn't. Algorithms were weeded out until we reached the current point, where the data input is matched to the algorithm, and the algorithm of choice happens to be of the deep learning type. It only looks as though deep learning is good for every dataset; that's not true.
For example, in vision problems, try training a deep network on input that is not in the form of pixels and not in the form of multiple colour planes. It will fail miserably; the quality of recognition will be abysmal. That's why data design is so important: you have to know what the strengths of the AI model actually are. In this case, the statistical regularities between neighbouring pixels are what enables the CNN layers to extract information. Those regularities are an artefact of choosing to stack pixels and colour planes into a rectangular grid, and that choice solves most of the problem.
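A small illustration of the neighbouring-pixel point (my own toy, using a synthetic image rather than a real dataset): a fixed random permutation of pixel positions loses no information, yet destroys exactly the local regularity that convolution layers rely on.

    import numpy as np

    rng = np.random.default_rng(0)
    # a smooth-ish synthetic "image": low-frequency field upsampled to 128x128
    img = np.kron(rng.normal(size=(8, 8)), np.ones((16, 16)))

    def neighbour_gap(a):
        # mean absolute difference between horizontally adjacent pixels
        return np.mean(np.abs(np.diff(a, axis=1)))

    shuffled = img.ravel()[rng.permutation(img.size)].reshape(img.shape)

    print(neighbour_gap(img))       # small: neighbours carry shared signal
    print(neighbour_gap(shuffled))  # much larger: local regularity gone
    # a CNN fed the shuffled encoding has lost precisely the statistical
    # regularity its convolutions were designed to exploit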
Now, pixels didn't always exist; they were invented by people quite recently. Look up the TV technologies of the 1930s and you'll find it's all about deflecting electron beams. There's really nothing natural about pixels; they're just what our current technologies are based on. So there's nothing natural about what a deep network does either: it's a system that has been selected for fitness against our current tech stack, for a handful of high-value problem domains. That implies nothing about other problem domains that haven't been studied so intensively.
(*) If you don't use ReLU but some other smooth activation family for your deep network, there will always be a close piecewise linear approximation, because piecewise linear functions are dense in the continuous functions. So assuming ReLU everywhere is not a big loss of generality.
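Spelling that footnote out (the uniform norm and compact domain are my assumptions; the footnote doesn't pin them down):

    % Piecewise linear functions (e.g. ReLU networks) are dense in C(K):
    \[
      \forall f \in C(K),\ K \subset \mathbb{R}^n \text{ compact},\
      \forall \varepsilon > 0,\ \exists g \text{ piecewise linear s.t. }
      \sup_{x \in K} \lvert f(x) - g(x) \rvert < \varepsilon .
    \]
    % A network with a smooth activation computes a continuous f, so it
    % always has such a nearby piecewise linear (ReLU-representable) g.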
This is not a problem with AI; it's inherent in the design of the models.
Each output is effectively a dart thrown at the dartboard, with a wide error distribution. Fixes are more darts thrown at the same dartboard, with the same error distribution. It's a stationary process, which must reproduce the same constant variance throughout the iterations.

The outputs will come arbitrarily close to the target eventually, but the number of iterations needed grows exponentially. In practice, the human asking for another iteration runs out of patience and money way too soon.
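A quick simulation of the dartboard argument (my own sketch; the Gaussian error model, tolerance, and dimensions are assumptions, not anything from the original):

    import numpy as np

    rng = np.random.default_rng(0)

    def iterations_to_hit(dim, eps, sigma=1.0, max_iter=1_000_000):
        # every attempt, original or "fix", is an i.i.d. draw with the
        # same spread around the target: a stationary process
        for i in range(1, max_iter + 1):
            miss = np.linalg.norm(sigma * rng.normal(size=dim))
            if miss < eps:
                return i
        return max_iter

    for dim in (1, 2, 4, 6):
        trials = [iterations_to_hit(dim, eps=0.5) for _ in range(20)]
        # mean count ~ 1 / P(hit), which grows roughly like (sigma/eps)^dim
        print(dim, np.mean(trials))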
Sort of, but not quite. AI is not *actually* good at finding patterns. The truth is that AI models depend on humans setting up the problem first, and on humans creating the class of features that will expose the patterns easily. This has been the case since the dawn of time, ca. 1958.
To put it another way: AI cannot find patterns if the inputs don't show the patterns clearly. The wildly successful applications of AI to date have used human insight and experience to narrow down and curate the input signals, and that is what made those successes possible.
If you merely throw an AI model at a dataset that hasn't been carefully thought out, you'll just get garbage. The model won't find any patterns that actually hold up out-of-sample.
Now to your example: you cannot use just any X and Y coordinates; they have to have semantically meaningful connections to reality, and that is achieved by the curation and selection of the datasets. By the time the AI model sees the X and Y coordinates in your example, the problem is already 90% solved.
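Here is a toy version of that point (my own example, assuming scikit-learn; the circle task is hypothetical): the same linear model is barely above majority-class guessing on raw (X, Y) coordinates, and near perfect once a human hands it the semantically meaningful feature X^2 + Y^2.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(2000, 2))       # raw X, Y coordinates
    y = (X[:, 0]**2 + X[:, 1]**2 < 0.5).astype(int)  # inside/outside a circle

    raw = LogisticRegression().fit(X[:1000], y[:1000])
    print(raw.score(X[1000:], y[1000:]))   # ~0.6: majority-class guessing

    R = (X[:, 0]**2 + X[:, 1]**2)[:, None]           # the curated feature
    cur = LogisticRegression().fit(R[:1000], y[:1000])
    print(cur.score(R[1000:], y[1000:]))   # ~1.0: the pattern is now exposed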
In effect, it was proposing to solve the spam problem by forcing every human being to have a second gig on the side (or pay someone to handle it for them). Either way, the recipients are paying to receive unsolicited messages, with the option of a rebate if they don't like the message.
Thanks for the clarification, I had forgotten some of the details.
Secondly, markets always operate on the assumption that participants do not want to pay more; it's a built-in assumption that precludes the kind of behavioural symmetry you are invoking in your example. It would be a crazy kind of market if that symmetry were commonplace; for one thing, it would probably break the theories of equilibrium.
I think those guys ended up at Facebook.
"Consistency requires you to be as ignorant today as you were a year ago." -- Bernard Berenson