Submission Summary: 0 pending, 5 declined, 1 accepted (6 total, 16.67% accepted)

Submission + - Militaries are going autonomous—but will AI lead to new wars? (foommagazine.org)

Gazelle Bay writes: The invasion of Ukraine in February 2022 has resulted in hundreds of thousands of casualties and provided a sickening laboratory for the development of the technology of war. Since then, major advancements have been made in unmanned drones and, more generally, in lethal autonomous weapon systems (LAWS), defined by their ability to search for and engage targets without a human operator.

Although the conflict has not yet produced the queasy spectacle of a fully autonomous battlefield, a conversion to fully autonomous forces is being actively pursued. "We strive for full autonomy," Mykhailo Fedorov, the deputy prime minister of Ukraine, told the Guardian in a June article. Others have long called for regulations or bans on LAWS. "Human control over the use of force is essential," said United Nations Secretary-General António Guterres at a meeting in May. "We cannot delegate life-or-death decisions to machines." However, substantive, binding regulations have yet to be adopted by any of the nations that lead in the development of LAWS, as surveyed in a September 2025 book by Matthijs Maas.

Large-scale deployment of LAWS therefore looks increasingly likely, even though researchers like Maas caution against seeing autonomous warfare as inevitable. "The military AI landscape at present is at a crossroads," Maas wrote. Regulations could still be adopted after deployment, or in response to the stigmatization of the technology that deployment might cause.

Nonetheless, the reality that AI is likely to go to war has driven researchers to expand from a "prevailing preoccupation" with how AI will be used—for example, in the form of LAWS—to whether this use will significantly alter geopolitical norms. This was the intriguing argument made by scholars Toni Erskine and Steven Miller in a January article, as well as by articles in an accompanying issue of the Cambridge Forum on AI: Law and Governance.

Amongst scholars, this shift from seeing LAWS as tools to seeing them as strategic influences has been neither uniform nor entirely new. Research on military AI and LAWS is spread across many sectors of academic study. Nonetheless, it is possible to sketch how and why such a shift has happened, and to explain some of the findings of the new research.

Surprisingly, some scholars have come to somewhat comforting conclusions. For example, in a July 2025 study from the RAND Corporation, the authors assessed that AI is unlikely to lead to major new wars. "AI’s net effect may tend toward strengthening rather than eroding international stability," the authors wrote.

Submission + - Is research into recursive self-improvement becoming a safety hazard? (foommagazine.org) 1

Gazelle Bay writes: One of the earliest speculations about machine intelligence was that, because it would be made of much simpler components than biological intelligence, like source code instead of cellular tissue, the machine would have a much easier time modifying itself. In principle, it would also have a much easier time improving itself, and therefore improving its ability to improve itself, thereby potentially leading to exponential growth in cognitive performance—or an 'intelligence explosion,' as envisioned in 1965 by the mathematician Irving John Good.
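
Good's feedback loop can be sketched as simple compound growth. The following Python sketch is purely illustrative, with arbitrary numbers that model nothing about real AI systems; it only shows why "improving the ability to improve" implies exponential growth.

```python
# Toy illustration of recursive self-improvement as compound growth.
# All numbers are invented; this is not a model of any real AI system.
capability = 1.0
improvement_rate = 0.1   # fraction of current capability converted into improvement per step

trajectory = []
for step in range(50):
    capability += improvement_rate * capability   # a more capable system improves itself faster
    trajectory.append(capability)

print(f"capability after 50 steps: {trajectory[-1]:.1f}x the starting level")
```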

Recently, this historically envisioned objective, called recursive self-improvement (RSI), has started to be publicly pursued by scientists and openly discussed by senior leadership at AI corporations. Perhaps the most visible signature of this trend is that a group of academic and corporate researchers will host, in April, the first formal workshop explicitly focused on the subject, held at the International Conference on Learning Representations (ICLR), a premier conference for AI research. In their workshop proposal, the organizers state that they expect more than 500 attendees.

However, prior to recent discussions of the subject, RSI was often—but not always—seen as raising serious concerns about any AI system that executed it. These concerns were typically focused less on RSI itself and more on its consequences, like the intelligence explosion it might (hypothetically) generate. Were such an explosion not carefully controlled, or perhaps even if it were, various researchers argued that it might not preserve the values or ethics of the system, even while bringing about exponential improvements to its problem-solving capabilities—thereby making the system unpredictable or dangerous.

Recent developments have therefore raised questions about whether the topic is being treated with a sufficient safety focus. David Scott Krueger of the University of Montreal and Mila, the Quebec Artificial Intelligence Institute, is critical of the research. "I think it's completely wild and crazy that this is happening, it's unconscionable," said Krueger to Foom in an interview. "It's being treated as if researchers are just trying to solve some random, arcane math problem ... it shows you how unserious the field is about the social impact of what it's doing."

Submission + - Language models resemble more than just language cortex, show neuroscientists (foommagazine.org)

Gazelle Bay writes: In a paper presented in November 2025 at the Empirical Methods in Natural Language Processing (EMNLP) conference, researchers at the Swiss Federal Institute of Technology (EPFL), the Massachusetts Institute of Technology (MIT), and Georgia Tech revisited earlier findings that language models, the engines of commercial AI chatbots, show strong signal correlations with the human language network, the brain region responsible for processing language.

In their new results, they found that signal correlations between model and brain region change significantly over the course of the 'training' process, where models are taught to autocomplete as many as trillions of elided words (or sub-words, known as tokens) from text passages.
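
For readers unfamiliar with this "autocomplete" objective, here is a minimal sketch, assuming PyTorch, of next-token prediction with a toy stand-in model; real training runs use vastly larger models and trillions of tokens.

```python
# Minimal sketch of the next-token-prediction training signal described above.
# The tiny embedding + linear head stands in for a full language model.
import torch
import torch.nn.functional as F

vocab_size, seq_len, d_model = 1000, 16, 32

embed = torch.nn.Embedding(vocab_size, d_model)
head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (4, seq_len))   # a batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]        # each position predicts the elided next token

logits = head(embed(inputs))                           # (batch, seq_len - 1, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                        # gradients like these drive training
```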

The correlations between the signals in the model and the signals in the language network reach their highest levels relatively early on in training. While further training continues to improve the functional performance of the models, it does not increase the correlations with the language network.
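
The paper's own pipeline is not reproduced here, but a common encoding-model recipe for such model-to-brain comparisons fits a regularized linear map from model activations to each measured brain response and scores held-out predictions with a Pearson correlation. The sketch below uses random arrays as stand-ins for real activations and recordings, purely to illustrate the idea.

```python
# Illustrative model-to-brain correlation, with random stand-in data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_stimuli, n_model_units, n_brain_units = 200, 512, 100

model_acts = rng.normal(size=(n_stimuli, n_model_units))   # model activations per stimulus
brain_resp = rng.normal(size=(n_stimuli, n_brain_units))   # measured responses per stimulus

scores = []
for unit in range(n_brain_units):
    # Fit on the first half of stimuli, evaluate on the held-out second half.
    fit = Ridge(alpha=1.0).fit(model_acts[:100], brain_resp[:100, unit])
    pred = fit.predict(model_acts[100:])
    r = np.corrcoef(pred, brain_resp[100:, unit])[0, 1]    # Pearson correlation
    scores.append(r)

print("mean model-brain correlation:", float(np.mean(scores)))
```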

The results lend clarity to the surprising picture that has been emerging from the last decade of neuroscience research: that AI programs can show strong resemblances to large-scale brain regions—performing similar functions, and doing so using highly similar signal patterns.

Such resemblances have been exploited by neuroscientists to build much better models of cortical regions. Perhaps more importantly, the links between AI and cortex support an interpretation of commercial AI technology as profoundly brain-like, validating both its capabilities and the risks it might pose for society as the first-ever synthetic braintech.

"It is something we, as a community, need to think about a lot more," said Badr AlKhamissi, doctoral student in neuroscience at EPFL and first author of the preprint, in an interview with Foom. "These models are getting better and better every day. And their similarity to the brain [or brain regions] is also getting better—probably. We're not 100% sure about it."

Submission + - The moral critic of the AI industry—a Q&A with Holly Elmore (foommagazine.org)

Gazelle Bay writes: Since AI was first conceived of as a serious technology, some people have wondered whether it might bring about the end of humanity. For some, this concern was simply logical. Human individuals have caused catastrophes throughout history, and powerful AI, which would not be bounded in the same way, might therefore pose even worse dangers.

In recent times, as the capabilities of AI have grown, one might have expected its existential risks to become more obvious as well. And in some ways, they have. It is increasingly easy to see how AI could pose severe risks now that it is being endowed with agency, for example, or put in control of military weaponry.

On the other hand, the existential risks of AI have become murkier. Corporations increasingly sell powerful AI as just another consumer technology. They talk blandly about giving it the capability to improve itself, without setting any boundaries. They perform safety research even while racing to increase performance. And while they might acknowledge the existential risks of AI in some cases, they tend to disregard serious problems with other, closely related technologies.

The rising ambiguity of the AI issue has led to introspection and self-questioning in the AI safety community, which is chiefly concerned with existential risks to humanity. Consider what happened in November, when Joe Carlsmith, a prominent researcher who had worked at the grantmaking organization Open Philanthropy (recently renamed Coefficient Giving), announced that he would be joining the leading generative AI company Anthropic.

One community member on Twitter/X, Holly Elmore, offered a characteristically critical commentary: "Sellout," she wrote, succinctly.

Submission + - Scientists make sense of shapes in the minds of the models (foommagazine.org)

Gazelle Bay writes: Since at least 2021, according to the authors of a preprint from March, researchers have been seeing something interesting on the insides of their models.

A model, also known as an AI program and created from a neural network architecture, processes a word by learning to represent it as an arrow, or vector, within a high-dimensional space. The directions of these word vectors, each of which ends up at one single point, become the model's carriers of information.

While these spaces are already strange in their vastness, often consisting of thousands of dimensions, researchers were noticing something even more peculiar: sometimes, inputs would form clouds of points that were distinctively shaped, looking, for example, like 'Swiss rolls' or cylinders after being projected back down to just three dimensions using standard methods. Over the next few years, they started to see other cloudy shapes, too: curves, loops, circles; helices, tori; even trees and fractal geometries.
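
One such standard projection method is principal component analysis (PCA). The sketch below is illustrative rather than drawn from the studies: it hides a helix inside a 512-dimensional space, mimicking structure buried in a model's activations, and recovers a three-dimensional view of it.

```python
# Projecting a high-dimensional point cloud down to 3-D with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 500)
helix3d = np.stack([np.cos(t), np.sin(t), t / (4 * np.pi)], axis=1)   # a helix in 3-D

# Embed the helix in a 512-dimensional "representation space" via a random linear map.
embedding = helix3d @ rng.normal(size=(3, 512))

projected = PCA(n_components=3).fit_transform(embedding)
print(projected.shape)   # (500, 3): the cloud can now be plotted and its shape inspected
```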

That models might learn to organize information in shapes did not necessarily surprise people. It was natural to think that a model might learn that certain categories of inputs could all be clumped together, like inputs describing calendar dates, or colors, or arithmetical operations.

But in 2023, when others discovered a new method for understanding the insides of their models, called sparse autoencoders (SAEs), the observations began to seem a little odder. This method, which quickly gained traction, was suggestive of a very different picture—that the most important concepts a model learned, like love, or logic, or the identities of different people, were highly fragmented, each one tearing off in a very different direction. But why then were certain inputs found close together?
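
In rough terms, a sparse autoencoder maps a model's internal activations into a much wider dictionary of features, under a penalty that keeps most features silent for any given input. Below is a minimal PyTorch sketch of one SAE training step, with illustrative sizes; it is not the code from the studies mentioned.

```python
# One training step of a minimal sparse autoencoder over model activations.
import torch
import torch.nn.functional as F

d_model, d_dict = 512, 4096                      # activation width, dictionary width

encoder = torch.nn.Linear(d_model, d_dict)
decoder = torch.nn.Linear(d_dict, d_model)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

activations = torch.randn(64, d_model)           # stand-in for a batch of model activations

features = F.relu(encoder(activations))          # feature activations; the L1 penalty pushes most toward zero
reconstruction = decoder(features)

loss = F.mse_loss(reconstruction, activations) + 1e-3 * features.abs().mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```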

Almost as soon as this hint of contradiction surfaced, it was quelled by other findings. Both the March study and an October study by researchers at the company Anthropic have shown that models learn shapes in ways that complement the tendencies suggested by other methods. As a consequence, we are increasingly making sense of why models learn to make shapes in the high-dimensional minds they live in.

"There's a lot of confusion, but it also feels like there's been a lot of progress," said Eric Michaud of the Massachusetts Institute of Technology (MIT), who spoke to Foom in an interview. "I don't know where it's all going to go. But overall, it feels healthy."

Submission + - Plans to build AGI with nuclear reactor-like safety lack 'systematic thinking' (foommagazine.org)

Gazelle Bay writes: In a preprint from October 13, two researchers from the Ruhr University Bochum and the University of Bonn in Germany found that while leading AI companies say they will design their most general-purpose AI, often called AGI, according to the most stringent safety principles—adapted from fields like nuclear engineering—the safety techniques they actually apply do not satisfy those principles.

In particular, the authors note that existing proposals fail to satisfy the principle known as defense in depth, which calls for the application of multiple, redundant, and independent safety mechanisms. The conventional safety methods that companies are known to apply are not independent; in certain problematic scenarios, which are relatively easy to foresee, they all tend to fail simultaneously.
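
The point about independence can be made with a back-of-the-envelope calculation: three safeguards that each fail 10% of the time jointly fail only 0.1% of the time if their failures are independent, but offer essentially one layer's worth of protection if a single foreseeable scenario defeats them all. A small Python sketch, with invented numbers used only for illustration:

```python
# Why independence matters for defense in depth (illustrative numbers only).
layer_failure = 0.1                        # each safeguard fails 10% of the time

independent_failure = layer_failure ** 3   # all three layers fail by independent chance
correlated_failure = layer_failure         # one foreseeable scenario defeats all three at once

print(f"independent layers fail together: {independent_failure:.3%}")
print(f"correlated layers fail together:  {correlated_failure:.1%}")
```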

Many leading AI companies, including Anthropic, Microsoft, and OpenAI, have published safety documents that explicitly mention their intention to implement defense in depth in the design of their most advanced AI systems.

In an interview with Foom, the first co-author of the study, Leonard Dung of the Ruhr University Bochum, said that it was not surprising that many of the methods for designing AI systems to be safe might fail. Research on making powerful AI systems safe is broadly viewed to be at an early stage of maturity.

More surprising for Dung, and also concerning, was that it fell to him and his co-author, academic scholars in philosophy and machine learning, to make what is arguably a foundational contribution to the safety literature of a new branch of industrial engineering.

"There has not been much systematic thinking about what exactly does it mean to take a defense-in-depth approach to safety," said Dung. "The sort of basic way of thinking about risk that you would expect these companies—and policymakers who regulate these companies—to implement has not been implemented."
