There's a large gap here, and essentially a false dichotomy. The person you are replying to didn't claim that humans read identically to how AIs read. They did correctly note that the claim that a human learns to read from a single book is obviously false.
That said, it is pretty clear that AIs are doing a lot of things far less efficiently than humans in terms of training data, which shouldn't be surprising given that human brains have had millions of years of evolution to optimize learning. It is also clear that humans have other high-bandwidth data channels, including vision, hearing, and other sensory input.
These are not "hard" problems. These are problems nobody competent has yet looked into.
With all due respect, I'm a mathematician and you don't know what you are talking about. These problems are not "hard" in the sense of being the sort of problems people get fame for solving. They are hard in the sense that it would likely take days or weeks for an expert human to solve them, and even that isn't guaranteed.
Hence it is entirely plausible OpenAI had somebody competent solve them in secret and put that in the training data. They certainly have lied enough so far to make that entirely plausible.
Much of the work on these problems has not been done by anyone affiliated with OpenAI or other AI groups. Your scenario would require them to have spent time having experts solve the problems in secret, and then to have enlisted people outside OpenAI to try those same problems. And that's before we even get to the problem of turnaround time: this is happening as part of a new systematic attempt to list and attack all the Erdős problems, and it isn't even obvious there would have been enough time for humans to do this under your proposal. At this point, your entire "plausible" (and no, it isn't) scenario is a conspiracy theory constructed to grasp at straws so you can ignore the evidence you don't want to hear.
Hahahha, no. You should maybe try to replicate this to see how incapable LLMs actually are. LLMs cannot do anything even mildly non-trivial.
I'm a mathematician. I've talked explicitly before on Slashdot about personal experiments using LLMs, such as here https://slashdot.org/comments.pl?sid=23789930&cid=65646656 where I discussed that yes, they could do non-trivial work. And I'm not the only example. Terry Tao, for example, has used them, and that's a name you should at least have heard of: https://mathoverflow.net/questions/501066/is-the-least-common-multiple-sequence-textlcm1-2-dots-n-a-subset-of-t The fact that multiple mathematicians are now telling you that these systems are doing non-trivial work and you just ignore it says much more about you than it does about LLMs. But I suppose I shouldn't be surprised, since the last time you and I discussed a similar topic, you claimed that what LLMs were doing could be done by software such as Mathematica or Maple, and then refused to show that even after you were given a direct incentive to make your case: https://slashdot.org/comments.pl?sid=23748766&cid=65535428. I'm really struggling to imagine what would be sufficient evidence to change your mind, which says something about you, not about LLM systems.