Note to BeauHD
... remember to read the headlines of currently posted stories before starting work. It doesn't take long and forgetting to do it does not help repair the tarnished reputation of the Slashdot editorial crew.
What, in your argument, is the difference between LLM copy-edited text and for-hire human copy-edited text? The editorial services I have seen *sometimes* try to find editors who are kinda-sorta near the correct field of expertise, but there's no guarantee you'll get someone with even a passing familiarity with your field, and for some services, all they have is a degree in English.
So, again, what's the difference between linguistic polishing by machine and linguistic polishing by semi-qualified human?
Following up on that idea, there are various copy-editing services that many non-native English speakers use, and are encouraged to use, to help improve their writing. The main difference, from the perspective of forensic detection of AI-copy-edited text, is that there are only a handful of LLM styles compared to the likely thousands of individual copy-editors' styles, making automated copy-editing easier to detect. I'll bet dollars to donuts that if you trained a model on the output of a single human copy-editor, you'd be able to identify all the papers that used their services.
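The detection idea can be sketched with a toy, stdlib-only character n-gram comparison. To be clear, this is a crude stand-in for actually training a model, and the "editor" texts and names in the usage below are invented for illustration:

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-gram frequency profile -- a crude stylometric fingerprint."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two n-gram frequency profiles."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def closest_editor(sample, profiles):
    """Attribute a text sample to the most stylistically similar profile."""
    probe = char_ngrams(sample)
    return max(profiles, key=lambda name: cosine(probe, char_ngrams(profiles[name])))
```

With reference text from each copy-editor (or from an LLM's output), `closest_editor` picks whichever profile a new paper's prose most resembles; real stylometry would use far richer features, but the matching principle is the same.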
We use AI to help with paper writing in my lab, mostly because there are only two native English speakers, and it relieves me, the lab head (and one of the two native speakers), of having to do extensive copy-editing in order to make stilted English more readable. I still read every word that gets published from the lab, but using AI for copy-editing is no different from using a human-based writing service to fix poor language. It's just cheaper and orders of magnitude faster.
So, for us, the response would be a big, "so what?" to this report.
But, if people are starting to use AI to write entire papers, that's a different story. My experience is that current models hallucinate ideas and, especially, references at far, far too high a rate to be seriously useful as anything other than a tool that requires full, manual verification. I half-jokingly say that if a paper is hallucinated, it means the AI was unable to find the right citation, and it represents a gap in the field's knowledge that we could address. The amazing thing about the hallucinations is how convincingly real they sound: the right authors, the right titles, the right journals. These are publications that *should* exist, but don't, at least in my experience.
As a recent example, when writing a grant application, I tried to find citations using an LLM for an idea that is widely held in the field. Everyone knows it to be true. It's obvious that it should be true. And yet, there have so far been no publications that actually discuss the idea, so the LLM dutifully hallucinated a citation with exactly the author list you would expect to have studied the question, a title that hits the nail on the head, and a journal exactly where you might expect the paper to appear. I've told my staff that we need to get that paper written and submitted, immediately, to fill that obvious gap, before someone else does. It will likely be cited widely.
It's almost certainly because you didn't do enough programming in college.
I agree entirely. I teach an intro to programming course at a well-known university. It is a lab course with 2 hours of teaching contact time per week, 2 hours of reading time per week, and 8 hours of expected programming time per week. The students learn by doing.
Aw, c'mon. Basic is a horrible, horrible language that teaches poor habits and has almost no translation to serious programming.
If you want to have kids learn something easy that at least teaches good organization and thought processes, then teach them Scheme.
So, specifically, from which scientific fields will we lose all of this talent?
Microbiology, neuroscience, solid-state physics, particle physics, robotics, ...
and to which countries will these people be moving?
Canada, England, France, Germany, and Switzerland, primarily. Portugal has gone on a hiring spree, as have Poland and Australia. I haven't seen any postings from Spain or Italy, but maybe that's just my field.
Further, in what ways will the NSF counterparts in these supposed other countries benefit R&D by foreign researchers?
I guess you don't understand how IP works. When a researcher works at an institution, the IP they generate is owned by that institution. The society where that institution is located is typically the big winner as a result. Have you ever looked, for example, at how much the US government gets in royalties from PCR?
No scientific talent will be "lost to overseas competitors".
The issue is that it isn't just DEI funding that's being cancelled. DEI is just the focus of the most bitter ire. There is a broad anti-science, anti-knowledge tone to the current administration, and I have many colleagues who have already left the US because of it. The number of available post-docs far outstrips the current number of open positions, and that talent is quickly leaving the US shores for greener pastures.
I hate Wayland. Still so frelling buggy. So many unfulfilled promises. So many things that just worked, and worked well, under X have been broken for so very, very long. I hope the teenagers who replied "pffft" when the graybeards said "windowing is hard, secure remote windowing is really hard" have learned their lesson; that those who replied "X is just too complicated" now recognize that they have something worse; and that those who opined "the API is too obscure" have been brought to awareness.
Just because something is new does not mean it is better. Keep repeating that. If an old, working system appears to be complex, there just might be good reasons for it.
I used to be able to run remote windows on kinda slow cable with reasonable responsiveness, back in the day, under X. I could even run a browser. I haven't been able to do any of that under Wayland; opening a remote browser window now takes *minutes*, if it works at all, and I've got fat pipes now, compared to back in the day. Wayland, from the user's perspective, has been and remains an unmitigated disaster.
I'm all for bringing back X. Maybe those guys at MIT knew what they were doing.
*All* of the immunotherapy treatments can be considered vaccinations, not just the ones that we give as preventative medicine.
And there are some new ones that are just stunningly good. I've recently seen a presentation on a vaccination for hard cancers that is injected directly into the cancerous mass and doesn't just improve things, like most radio- and chemotherapies, but *eliminates* the cancer by activating the latent immune cells within the mass. It allows the body to cure itself by removing the cloak of invisibility that cancer creates. This fellow might just win a Nobel. The idea is simple, brilliant, and shockingly effective.
I read TFA, and it specifically says that this is the matter that's causing the gravitational effects that were attributed to dark matter. To me, as a layman, that means that dark matter is no longer required to make things come out right. If you don't agree, please explain why, preferably with citations so that people like me can understand it.
Here's a quote from the article:
This "missing matter" doesn't refer to dark matter, the mysterious stuff that remains effectively invisible because it doesn't interact with light (sadly, that remains an ongoing puzzle).
And there isn't a single instance of the word "gravity" or "gravitation" or "gravitational" on that page until you get to the comments and related readings after the editorial portion is done.
Maybe time to get the eyeglass prescription updated?
Just like an ever-increasing amount of "news" in the US, there's some narrative scrambling for clicks rather than facts. US news is starting to push beyond politically driven narratives straight into "lying because it's more profitable and we need marketshare".
Directly blame Google for that situation. They're the ones who have been aggressively pushing the ad-revenue model. Back before Google, media was supported by a mixture of advertising and subscription. We even had a few media sources that were truly independent, both government-backed and privately supported by individual subscriptions and contributions. Google poisoned the well, and we are all paying the price.
We disagree.
If I'm running an application on one desktop and explicitly select a different desktop to do something else, I want any new child windows from the first application that open when I've shifted to a different desktop to be created on the original desktop. After all, I explicitly directed a change in my attention to do something else while the first computation was churning on something. When I return to the first desktop, I want to have all of the windows from that application right there, rather than scattered across my N virtual desktops.
Years ago, things worked exactly as I described. Now, for whatever reason, there is no way to implement that behavior unless you hardwire specific applications to specific desktops, which is unnecessarily restrictive.
The instances where I want a new window to open on a different desktop are exceedingly rare, and I'm happy to move them explicitly under those circumstances. Right now, I have to explicitly move windows all the time. It's just the wrong model for my use case, and, I argue, should at least be a selectable alternative default behavior.
No, it doesn't do what I want, as far as I have been able to determine. I want the new child window to always appear on the desktop that its parent is on, no matter what desktop that might be. What I can find is that I can tell it to appear on a specific virtual desktop. That's not what I want.
The default now, to open new windows on the current desktop, is almost never the right action, at least for me --- it should instead be what I described: open on the desktop of the parent window.
Moreover, child windows should default to open on top of all other windows (sometimes they do, sometimes they don't, and for the life of me I can't tell why). If the new window is not a dialog, it should *never* steal focus; if it is a dialog, it should *always* steal focus (and there should be ways to set overrides, like for Window Rules). As far as I can tell, those settings are not possible with KDE.
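For anyone wanting to try the hardwired workaround anyway: KWin's Window Rules (System Settings → Window Management → Window Rules) are stored in ~/.config/kwinrulesrc, and an entry forcing an application onto a fixed virtual desktop looks roughly like the sketch below. The key names and numeric policy values here are recalled from older KDE releases and may differ in current ones; the GUI is the supported way to edit these.

```
[General]
count=1

[1]
Description=Keep Konsole on desktop 2
wmclass=konsole
wmclassmatch=1
desktop=2
desktoprule=2
```

As I recall, `desktoprule=2` means "Force", so every matching window lands on desktop 2 --- which is exactly the per-application hardwiring I called unnecessarily restrictive, since there is still no "follow the parent window's desktop" policy.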
There's a brand-new Animations page in System Settings that groups all the settings for purely visual animated effects into one place, making them easier to find and configure. Aurorae, a newly added SVG vector graphics theme engine, enhances KWin window decorations.
Oh, good, that makes it easier to turn all of them frelling off.
Now, don't get me wrong, I enjoy using KDE. It has been remarkably rock solid for my use cases. There are some settings that are always hard to find, but it mostly just works. And I can ignore some of the features that they try to push and for which I've had better solutions for years (like Activities, which is better managed by having just a fixed number of desktops with simple keyboard shortcuts, something I've been doing for, literally, 30 years now, or KDE Wallet, or Dolphin).
The one aspect of KDE that drives me nuts, however, is that when a process opens a new window, the default should be to open that window on the desktop that the process has been assigned to rather than the current desktop (who, in their right mind, thinks the latter behavior is the right choice?). That, and there's no setting for focus that matches what I want, and the descriptions, despite multiple revisions, remain opaque.
Not only that, but both launch and re-entry are physically taxing, as long as there are rockets involved. For someone who has cancer, that's probably not a good idea.
All in all, someone wasn't thinking through the details. Cancer drugs don't dissolve well in water, so microgravity is the answer, rather than finding chemical agents that solve that problem in normal gravity?
"Laugh while you can, monkey-boy." -- Dr. Emilio Lizardo