We use AI to help with paper writing in my lab, mostly because only two of us are native English speakers. It relieves me, the lab head (and one of those two), of the extensive copy-editing needed to make stilted English readable. I still read every word that gets published from the lab, but using AI for copy-editing is no different from using a human-based writing service to fix poor language. It's just cheaper and orders of magnitude faster.
So, for us, the response to this report would be a big "so what?"
But if people are starting to use AI to write entire papers, that's a different story. My experience is that current models hallucinate ideas and, especially, references at far, far too high a rate to be seriously useful as anything other than a tool requiring full manual verification. I half-jokingly say that if a reference is hallucinated, it means the AI couldn't find the right citation, and the gap represents a hole in the field's knowledge that we could address. The amazing thing about the hallucinations is how convincingly real they sound: the right authors, the right titles, the right journals. These are publications that *should* exist but don't, at least in my experience.
As a recent example, while writing a grant application I tried to use an LLM to find citations for an idea that is widely held in the field. Everyone knows it to be true; it's obvious that it should be true. And yet no publication has actually discussed the idea, so the LLM dutifully hallucinated a citation with exactly the author list you would expect to have studied the question, a title that hits the nail on the head, and exactly the journal where you might expect the paper to appear. I've told my staff that we need to get that paper written and submitted immediately, to fill that obvious gap before someone else does. It will likely be cited widely.