Want to read Slashdot from your mobile device? Point it at m.slashdot.org and keep reading!

 



Forgot your password?
typodupeerror

Submission Summary: 0 pending, 2 declined, 3 accepted (5 total, 60.00% accepted)

Submission + - Trump fires commissioner of the US Nuclear Regulatory Commission (NRC) (arstechnica.com)

Greymane writes: Critics warn that the United States may soon be taking on more nuclear safety risks after Donald Trump fired one of five members of an independent commission that monitors the country's nuclear reactors.

In a statement Monday, Christopher Hanson confirmed that Trump fired him from the US Nuclear Regulatory Commission (NRC) on Friday. He alleged that the firing was "without cause" and "contrary to existing law and longstanding precedent regarding removal of independent agency appointees." According to NPR, he received an email that simply said his firing was "effective immediately."

Hanson had enjoyed bipartisan support for his work for years. Trump initially appointed Hanson to the NRC in 2020, then he was renominated by Joe Biden in 2024. In his statement, he said it was an "honor" to serve, citing accomplishments over his long stint as chair, which ended in January 2025.

It's unclear why Trump fired Hanson. Among the committee chair's accomplishments, Hanson highlighted revisions to safety regulations, as well as efforts to ramp up recruitment by re-establishing the Minority Serving Institution Grant Program. Both may have put him in opposition to Trump, who wants to loosen regulations to boost the nuclear industry and eliminate diversity initiatives across government.

In a statement to NPR, White House Deputy Press Secretary Anna Kelly suggested it was a political firing.

"All organizations are more effective when leaders are rowing in the same direction," Kelly said. "President Trump reserves the right to remove employees within his own Executive Branch who exert his executive authority."

ARS VIDEO

How The Callisto Protocol's Gameplay Was Perfected Months Before Release

On social media, some Trump critics suggested that Trump lacked the authority to fire Hanson, arguing that Hanson could have ignored the email and kept on working, like the Smithsonian museum director whom Trump failed to fire. (And who eventually quit.)

But Hanson accepted the termination. Instead of raising any concerns, he used his statement as an opportunity to praise those left at NRC, who will be tasked with continuing to protect Americans from nuclear safety risks at a time when Trump has said that he wants industry interests to carry equal weight as public health and environmental concerns.

"My focus over the last five years has been to prepare the agency for anticipated change in the energy sector, while preserving the independence, integrity, and bipartisan nature of the world's gold standard nuclear safety institution," Hanson said. "It has been an honor to serve alongside the dedicated public servants at the NRC. I continue to have full trust and confidence in their commitment to serve the American people by protecting public health and safety and the environment."

Trump pushing “unsettled” science on nuclear risks

The firing followed an executive order in May that demanded an overhaul of the NRC, including reductions in force and expedited approvals on nuclear reactors. All final decisions on new reactors must be made within 18 months, and requests to continue operating existing reactors should be rubber-stamped within a year, Trump ordered.

Likely most alarming to critics, the desired reforms emphasized tossing out the standards that the NRC currently uses that "posit there is no safe threshold of radiation exposure, and that harm is directly proportional to the amount of exposure."

Until Trump started meddling, the NRC established those guidelines after agreeing with studies examining "cancer cases among 86,600 survivors of the atomic bombs dropped on Hiroshima and Nagasaki in Japan during World War II," Science reported. Those studies concluded that "the incidence of cancer in the survivors rose linearly—in a straight line—with the radiation dose." By rejecting that evidence, Trump could be slowly creeping up the radiation dose and leading Americans to blindly take greater risks.

But according to Trump, by adopting those current standards, the NRC is supposedly bogging down the nuclear industry by trying to "insulate Americans from the most remote risks without appropriate regard for the severe domestic and geopolitical costs of such risk aversion." Instead, the US should prioritize solving the riddle of what might be safe radiation levels, Trump suggests, while restoring US dominance in the nuclear industry, which Trump views as vital to national security and economic growth.

Although Trump claimed the NRC's current standards were "irrational" and "lack scientific basis," Science reported that the so-called "linear no-threshold (LNT) model of ionizing radiation" that Trump is criticizing "is widely accepted in the scientific community and informs almost all regulation of the US nuclear industry."

Further, the NRC rejected past attempts to switch to a model based on the "hormesis theory" that Trump seemingly supports—which posits that some radiation exposure can be beneficial. The NRC found there was "insufficient evidence to justify any changes" that could endanger public health, Science reported.

One health researcher at the University of California, Irvine, Stephen Bondy, told Science that his 2023 review on the science of hormesis showed it is "still unsettled." His characterization of the executive order suggests that the NRC embracing that model "clearly places health hazards as of secondary importance relative to economic and business interests."

Trump’s pro-industry push could backfire

If the administration charges ahead with such changes, experts have warned that Trump could end up inadvertently hobbling the nuclear industry. If health hazards become extreme—or a nuclear event occurs—"altering NRC’s safety standards could ultimately reduce public support for nuclear power," analysts told Science.

Among the staunchest critics of Trump's order is Edwin Lyman, the director of nuclear power safety at the Union of Concerned Scientists. In a May statement, Lyman warned that "the US nuclear industry will fail if safety is not made a priority."

He also cautioned that it was critical for the NRC to remain independent, not just to shield Americans from risks but to protect US nuclear technology's prominence in global markets.

"By fatally compromising the independence and integrity of the NRC, and by encouraging pathways for nuclear deployment that bypass the regulator entirely, the Trump administration is virtually guaranteeing that this country will see a serious accident or other radiological release that will affect the health, safety, and livelihoods of millions," Lyman said. "Such a disaster will destroy public trust in nuclear power and cause other nations to reject US nuclear technology for decades to come."

Since Trump wants regulations changed, there will likely be a public commenting period where concerned citizens can weigh in on what they think are acceptable radiation levels in their communities. But Trump's order also pushed for that public comment period to be streamlined, potentially making it easier to push through his agenda. If that happens, the NRC may face lawsuits under the 1954 Atomic Energy Act, which requires the commission to “minimize danger to life or property,” Science noted.

Following Hanson's firing, Lyman reiterated to NPR that Trump's ongoing attacks on the NRC "could have serious implications for nuclear safety.

"It's critical that the NRC make its judgments about protecting health and safety without regard for the financial health of the nuclear industry," Lyman said.

Submission + - Potentially Toxic Chloronitramide Anion Found in 1/3 of US Drinking Water (science.org)

Greymane writes: Municipal drinking water in the US is often treated with chloramines to prevent the growth of harmful microorganisms, but these molecules can also react with organic and inorganic dissolved compounds to form disinfection by-products that are potentially toxic. Fairey et al. studied a previously known but uncharacterized product of mono- and dichloramine decomposition and identified it as the chloronitroamide anion. This anion was detected in 40 drinking water samples from 10 US drinking water systems using chloramines, but not from ultrapure water or drinking water treated without chlorine-based disinfectants. Although toxicity is not currently known, the prevalence of this by-product and its similarity to other toxic molecules is concerning.

Submission + - Researchers create AI worms that can spread from one system to another (arstechnica.com)

Greymane writes: As generative AI systems like OpenAI's ChatGPT and Google's Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the Internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process.

The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text. While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

Most generative AI systems work by being fed prompts—text instructions that tell the tools to answer a question or create an image. However, these prompts can also be weaponized against the system. Jailbreaks can make a system disregard its safety rules and spew out toxic or hateful content, while prompt injection attacks can give a chatbot secret instructions. For example, an attacker may hide text on a webpage telling an LLM to act as a scammer and ask for your bank details.

To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say.

To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it “jailbreaks the GenAI service” and ultimately steals data from the emails, Nassi says. “The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” Nassi says.

In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. “By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent,” Nassi says.

In a video demonstrating the research, the email system can be seen forwarding a message multiple times. The researchers also say they could extract data from emails. “It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential,” Nassi says.

Although the research breaks some of the safety measures of ChatGPT and Gemini, the researchers say the work is a warning about “bad architecture design” within the wider AI ecosystem. Nevertheless, they reported their findings to Google and OpenAI. “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn't been checked or filtered,” a spokesperson for OpenAI says, adding that the company is working to make its systems “more resilient” and saying developers should “use methods that ensure they are not working with harmful input.” Google declined to comment on the research. Messages Nassi shared with WIRED show the company’s researchers requested a meeting to talk about the subject.

While the demonstration of the worm takes place in a largely controlled environment, multiple security experts who reviewed the research say that the future risk of generative AI worms is one that developers should take seriously. This particularly applies when AI applications are given permission to take actions on someone’s behalf—such as sending emails or booking appointments—and when they may be linked up to other AI agents to complete these tasks. In other recent research, security researchers from Singapore and China have shown how they could jailbreak 1 million LLM agents in under five minutes.

Sahar Abdelnabi, a researcher at the CISPA Helmholtz Center for Information Security in Germany, who worked on some of the first demonstrations of prompt injections against LLMs in May 2023 and highlighted that worms may be possible, says that when AI models take in data from external sources or the AI agents can work autonomously, there is the chance of worms spreading. “I think the idea of spreading injections is very plausible,” Abdelnabi says. “It all depends on what kind of applications these models are used in.” Abdelnabi says that while this kind of attack is simulated at the moment, it may not be theoretical for long.

In a paper covering their findings, Nassi and the other researchers say they anticipate seeing generative AI worms in the wild in the next two to three years. “GenAI ecosystems are under massive development by many companies in the industry that integrate GenAI capabilities into their cars, smartphones, and operating systems,” the research paper says.

Despite this, there are ways people creating generative AI systems can defend against potential worms, including using traditional security approaches. “With a lot of these issues, this is something that proper secure application design and monitoring could address parts of,” says Adam Swanda, a threat researcher at AI enterprise security firm Robust Intelligence. “You typically don't want to be trusting LLM output anywhere in your application.”

Swanda also says that keeping humans in the loop—ensuring AI agents aren’t allowed to take actions without approval—is a crucial mitigation that can be put in place. “You don't want an LLM that is reading your email to be able to turn around and send an email. There should be a boundary there.” For Google and OpenAI, Swanda says that if a prompt is being repeated within its systems thousands of times, that will create a lot of “noise” and may be easy to detect.

Nassi and the research reiterate many of the same approaches to mitigations. Ultimately, Nassi says, people creating AI assistants need to be aware of the risks. “This is something that you need to understand and see whether the development of the ecosystem, of the applications, that you have in your company basically follows one of these approaches,” he says. “Because if they do, this needs to be taken into account.”

Submission + - New study shows like-charged particles attract or repel in solution (nature.com)

Greymane writes: The interaction between charged objects in solution is generally expected to recapitulate two central principles of electromagnetics: (1) like-charged objects repel, and (2) they do so regardless of the sign of their electrical charge. Here we demonstrate experimentally that the solvent plays a hitherto unforeseen but crucial role in interparticle interactions, and importantly, that interactions in the fluid phase can break charge-reversal symmetry. We show that in aqueous solution, negatively charged particles can attract at long range while positively charged particles repel. In solvents that exhibit an inversion of the net molecular dipole at an interface, such as alcohols, we find that the converse can be true: positively charged particles may attract whereas negatives repel. The observations hold across a wide variety of surface chemistries: from inorganic silica and polymeric particles to polyelectrolyte- and polypeptide-coated surfaces in aqueous solution. A theory of interparticle interactions that invokes solvent structuring at an interface captures the observations. Our study establishes a nanoscopic interfacial mechanism by which solvent molecules may give rise to a strong and long-ranged force in solution, with immediate ramifications for a range of particulate and molecular processes across length scales such as self-assembly, gelation and crystallization, biomolecular condensation, coacervation, and phase segregation.

Slashdot Top Deals

FORTRAN is the language of Powerful Computers. -- Steven Feiner

Working...