Note to BeauHD ...
... remember to read the headlines of currently posted stories before starting work. It doesn't take long, and skipping it does nothing to repair the tarnished reputation of the Slashdot editorial crew.
What, in your argument, is the difference between LLM-copy-edited text and for-hire human-copy-edited text? The editorial services I have seen *sometimes* try to find editors who are kinda-sorta near the correct field of expertise, but there's no guarantee you'll get someone with even a passing familiarity with your field, and at some services, all they have is a degree in English.
So, again, what's the difference between linguistic polishing by machine and linguistic polishing by semi-qualified human?
Following up on that idea, there are various copy-editing services that many non-native English speakers use, and are encouraged to use, to help improve their writing. The main difference, from the perspective of forensic detection, is that AI copy-editing comes in a very small number of styles compared to the likely thousands of individual human copy-editors' styles, which makes it easier to detect. I'll bet dollars to donuts that if you trained a model on the output of a single human copy-editor, you'd be able to identify all papers that used their services.
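To make that concrete, here's a minimal stylometry sketch, not anything from a real detection pipeline; all the data and names are placeholders. Character n-grams are a standard way to pick up an editor's punctuation and phrasing habits:

# Hypothetical sketch: classify text known to have passed through one
# copy-editor ("edited") vs. text that didn't. Placeholder data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

edited = ["paragraph known to have passed through editor X ..."]  # placeholder
unedited = ["paragraph from an unrelated source ..."]             # placeholder

texts = edited + unedited
labels = [1] * len(edited) + [0] * len(unedited)

# Character n-grams capture punctuation and phrasing quirks, which is
# where an individual editor's fingerprint tends to live.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Probability that a new paragraph was polished by editor X.
print(model.predict_proba(["some suspect paragraph"])[0][1])

With enough genuine samples per editor, the same setup extends to a multi-class classifier over many editors' styles, which is the scenario described above.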
We use AI to help with paper writing in my lab, mostly because there are only two native English speakers, and it relieves me, the lab head (and one of the two native speakers), of having to do extensive copy-editing in order to make stilted English more readable. I still read every word that gets published from the lab, but using AI for copy-editing is no different from using a human-based writing service to fix poor language. It's just cheaper and orders of magnitude faster.
So, for us, the response to this report would be a big "so what?"
But if people are starting to use AI to write entire papers, that's a different story. My experience is that current models hallucinate ideas and, especially, references at far, far too high a rate to be seriously useful as anything other than a tool that requires full, manual verification. I half-jokingly say that if a paper is hallucinated, it means the AI could not find the right citation, and that represents a gap in the field's knowledge that we could address. The amazing thing about the hallucinations is how convincingly real they sound: the right authors, the right titles, the right journals. These are publications that *should* exist, but don't, at least in my experience.
As a recent example, while writing a grant application I tried to use an LLM to find citations for an idea that is widely held in the field. Everyone knows it to be true. It's obvious that it should be true. And yet no publication has actually discussed the idea, so the LLM dutifully hallucinated a citation with exactly the author list you would expect to have studied the question, a title that hits the nail on the head, and a journal exactly where you might expect the paper to appear. I've told my staff that we need to get that paper written and submitted immediately, to fill that obvious gap before someone else does. It will likely be cited widely.
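For what it's worth, this is why every LLM-suggested reference should be run through a bibliographic database before being trusted. A minimal sketch using the Crossref REST API (the endpoint is real; the citation string below is a made-up placeholder):

# Sketch: sanity-check an LLM-suggested reference against Crossref.
import requests

def crossref_lookup(citation, rows=3):
    """Return the closest bibliographic matches Crossref knows about."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [(i.get("title", ["<no title>"])[0], i.get("DOI")) for i in items]

# If none of the top hits resembles the claimed authors/title/journal,
# treat the reference as a likely hallucination and verify it by hand.
for title, doi in crossref_lookup("Placeholder et al., A Plausible Title, J. Field, 2021"):
    print(doi, "--", title)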
It's almost certainly because you didn't do enough programming in college.
I agree entirely. I teach an intro-to-programming course at a well-known university. It is a lab course with 2 hours of teaching contact time per week, 2 hours of reading time per week, and 8 hours of expected programming time per week. The students learn by doing.
Aw, c'mon. BASIC is a horrible, horrible language that teaches poor habits and carries almost nothing over to serious programming.
If you want to have kids learn something easy that at least teaches good organization and thought processes, then teach them Scheme.
Respectfully, I don't think you understand the concept of "no reason." Just because there are other ways to do a thing doesn't mean there's "no reason" to do it a particular way.
So, specifically, from which scientific fields will we lose all of this talent?
Microbiology, neuroscience, solid-state physics, particle physics, robotics, ...
and to which countries will these people be moving?
Canada, England, France, Germany, and Switzerland, primarily. Portugal has gone on a hiring spree, as have Poland and Australia. I haven't seen any postings from Spain or Italy, but maybe that's just my field.
Further, in what ways will the NSF counterparts in these supposed other countries benefit R&D by foreign researchers?
I guess you don't understand how IP works. When a researcher works at an institution, the IP they generate is owned by that institution, and the society where that institution is located is typically the big winner as a result. Have you ever looked, for example, at how much the US government gets in royalties from PCR?
No scientific talent will be "lost to overseas competitors".
The issue is that it isn't just DEI funding that's being cancelled; DEI is just the focus of the most bitter ire. There is a broad anti-science, anti-knowledge tone to the current administration, and I have many colleagues who have already left the US because of it. The number of available post-docs far outstrips the current number of open positions, and that talent is quickly leaving US shores for greener pastures.
I don't understand the use case.
Some things need to work when most everything else on the network is broken. Think: out-of-band access to the DNS server (DRAC, iLO, IPMI).
So, the certificate tells me "Yes, this really is 42.42.42.42." But I knew that already.
No, you know that some machine out there responded to that IP address. You don't know whether it's the one you meant or, say, the hotel's captive portal.
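In other words, the certificate binds the endpoint to a key vouched for by a CA you trust, which is exactly what a captive portal can't fake. A minimal sketch of the check, assuming Python 3.7+ (which matches IP literals against iPAddress subjectAltName entries); 42.42.42.42 is just the placeholder address from the comment above:

# Sketch: verify that the machine answering at an IP really holds a
# CA-signed certificate for that IP, not just that something responded.
import socket
import ssl

addr = "42.42.42.42"  # placeholder IP from the comment above
ctx = ssl.create_default_context()  # verifies the chain and the IP SAN

with socket.create_connection((addr, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=addr) as tls:
        # Reaching this point means a trusted CA vouched that the
        # private key on the other end belongs to this IP address.
        print("verified SANs:", tls.getpeercert().get("subjectAltName"))

# A captive portal intercepting the connection fails the handshake: it
# can answer on the IP, but it can't present a CA-signed cert for it.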
About SpaceX in particular? I'm not. About the sheer number of companies that behave similarly? Yeah, I'm bitter.
Actually, I declined the interview. This was during the pandemic, before the vaccine. During the phone screen the recruiter told me all work was required to be on site and asked if I was okay with that. I said: sure, but only if I have an office, so I can set up an air filter and generally control my working environment. The recruiter said no one gets an office, not even Musk. I said thank you and goodbye.
What does "open office" mean in this regard?
No partitions. No walls. No doors. Just desks and chairs.
Some jobs simply require teams to work in the same physical location.
The complaint wasn't about being in the same physical location. It was about the compulsory open-office configuration, even back in the middle of the pandemic.
The complaint was that Musk insisted people work specifically in open offices, or they could not work for him.
Correct. Even in the middle of the pandemic he demanded on-site work and would not allow private offices in the building.
NASA hired women as scientists and engineers back when that wasn't a thing. If a woman's talents were worth it, that was that.
Musk won't hire people unwilling to work in an open office. And forget about telework. It doesn't matter what skills you bring to the table; Musk having his way is more important.
That's how NASA landed people on the moon while SpaceX's rocket keeps blowing up.
I tell them to turn to the study of mathematics, for it is only there that they might escape the lusts of the flesh. -- Thomas Mann, "The Magic Mountain"