Cellphones

Can Using a Dumber Phone Cure 'Brain Rot'? (seattletimes.com) 87

Brain rot — the inability to think deeply after too much scrolling on a phone — afflicts "plenty of people," writes the New York Times' lead consumer technology writer. [Alternate URL here.] He's suffering from it too — "These days, it's tough to even finish a book."

But is the answer just avoiding distractions with a stripped-down $600 phone "that barely does anything"? For a week he tested the Light Phone III... The newest version, which began shipping in March and is set for a broader release in July, can place calls, send texts, take photos, show map directions, play music and podcasts and not do much else. There is no web browser. There is also no app store, meaning there's no Uber to hail a ride, no Slack and no social media. There isn't even email... There were moments I enjoyed it. While waiting for a train, resting at the gym or eating alone, I was not tempted to stare at the phone screen, and I felt more mindful of my surroundings. Phone calls sounded nice and clear. The maps app did a fine job navigating me around town.

It reminded me of simpler times when we used phones primarily to converse before putting them away to focus on other tasks. But over the week, the downsides of a dumber phone chipped away at my enjoyment, and overall I felt more stressed and less capable. I suddenly found myself unable to get into a train station, look up the name of a new restaurant or control my garage door. Some of that has less to do with the Light Phone itself, which is a so-so product, and more to do with how society as a whole has become dependent on advanced smartphone features.

For example, his old smartphone supported virtual phone-based passes for mass transit and speedy access to his gym. (And his friends made fun of the Light Phone's photos.) But at least he never felt tempted to check his email when out to dinner.

"While I admire the goal of the Light Phone, my experience demonstrates there's nothing we can realistically do or buy to bring us back to simpler times. So many aspects of our lives, including getting around town, working, paying for things and controlling home appliances, revolve around our highly capable smartphones.

"This Light Phone experiment reminded me of glamping: paying a lot to have an artificially crummier experience."
Microsoft

Microsoft's Copilot Roasts Bill Gates, Steve Ballmer, and Satya Nadella (youtube.com) 35

"Three Microsoft CEOs walk into a room..." Satya Nadella posted on X.com, "and are interviewed by Copilot!"

As part of Microsoft's 50th anniversary celebration, Microsoft CEO Nadella joined former Microsoft CEOs Bill Gates and Steve Ballmer in a funny two-minute video (also available on YouTube) in which Nadella asks Copilot for an AI-powered roast of each of them.

"All right Satya," Copilot answers. "Let's spice it up..." Copilot: Steve, we'll start with you. Your enthusiasm is legendary. But do you ever worry that your energy might short-circuit the AI? I mean, even robots need a coffee break after your pep talks.

Steve Ballmer: Come on dude! If you can't handle me, you can't handle a huge percentage of the world's population! We're counting on you!

Copilot: Ha ha, fair point Steve. Now Bill, you've got that signature thoughtful gaze. But do you ever think the AI might feel intimidated by your intense thinking face? I mean, it's like it's waiting for a blue screen moment.

Bill Gates: I hope so. I mean, it's all I've got left as the AIs get so damn intelligent. It's just that gaze and my willingness to criticize that differentiates me.

Copilot: And it's a formidable gaze indeed, Bill.

Steve Ballmer: Copilot, would you ever be willing to criticize him?

Copilot: Absolutely. Criticism is where growth happens. Now Satya, AI seems to be your best buddy — but ever worry that your deep passion for AI could one day have it running the show, leaving you to just enjoy the ride?

Satya Nadella: Well, the day AI can play like the best cricket player that I enjoy, that's the day it can run the show, as far as I'm concerned.

Copilot: Touche, Satya. Let's see if it can hit a century in cricket and then we'll talk.

This has been a blast, gentlemen. Here's to another 50 years of innovation and adventure. Cheers.

Ballmer, Nadella, and Gates: Cheers.

Bill Gates: Whatever that means to an AI. Does it drink?

AI

New Tinder Game 'Lets You Flirt With AI Characters. Three of Them Dumped Me' (msn.com) 72

Tinder "is experimenting with a chatbot that claims to help users improve their flirting skills," notes Washington Post internet-culture reporter Tatum Hunter. The chatbot is available only to users in the United States on iPhones for a limited time, and is powered by OpenAI's GPT-4o. Each character "kicks off an improvised conversation, and the user responds out loud with something flirty..."

"Three of them dumped me." You can win points for banter the app deems "charming" or "playful." You lose points if your back-and-forth seems "cheeky" or "quirky"... It asked me to talk out loud into my phone and win the romantic interest of various AI characters.

The first scenario involved a financial analyst named Charles, whom I've supposedly run into at the Tokyo airport after accidentally swapping our luggage. I tried my best to be polite to the finance guy who stole my suitcase, asking questions about his travel and agreeing to go to coffee. But the game had some critical feedback: I should try to connect more emotionally using humor or stories from my life. My next go had me at a Dallas wedding trying to flirt with Andrew, a data analyst who had supposedly stumbled into the venue, underdressed, because he'd been looking for a quiet spot to ... analyze data. This time I kept things playful, poking fun at Andrew for crashing a wedding. Andrew didn't like that. I'd "opted to disengage" by teasing this person instead of helping him blend in at the wedding, the app said. A failure on my part, apparently — and also a reminder why generative AI doesn't belong everywhere...

Going in, I was worried Tinder's AI characters would outperform the people I've met on dating apps and I'd fall down a rabbit hole of robot love. Instead, they behaved in a way typical for chatbots: Drifting toward biased norms and failing to capture the complexity of human emotions and interactions. The "Game Game" seemed to replicate the worst parts of flirting — the confusion, the unclear expectations, the uncomfortable power dynamics — without the good parts, like the spark of curiosity about another person. Tinder released the feature on April Fools' Day, likely as a bid for impressions and traffic. But its limitations overshadowed its novelty...

Hillary Paine, Tinder's vice president of product, growth and revenue, said in an email that AI will play a "big role in the future of dating and Tinder's evolution." She said the game is meant to be silly and that the company "leaned into the campiness." Gen Z is a socially anxious generation, Paine said, and this age group is willing to endure a little cringe if it leads to a "real connection."

The article suggests it's another example of companies "eager to incorporate this newish technology, often without considering whether it adds any value for users." But "As apps like Tinder and Bumble lose users amid 'dating app burnout,' the companies are turning to AI to win new growth." (The dating app Rizz "uses AI to autosuggest good lines to use," while Teaser "spins up a chatbot that's based on your personality, meant to talk and behave like you would during a flirty chat," and people "are forming relationships with AI companion bots by the millions.") And the companion-bot company Replika "boasts more than 30 million users..."
Sci-Fi

'Tron' Sequel Trailer Released by Disney (arstechnica.com) 148

This October will see the release of a film that's nearly 43 years in the making, reports Ars Technica: It's difficult to overstate the massive influence that Disney's 1982 cult science fiction film, TRON, had on both the film industry — thanks to combining live action with what were then groundbreaking visual effects rife with computer-generated imagery — and on nerd culture at large. Over the ensuing decades there has been one sequel, an animated TV series, a comic book miniseries, video games, and theme park attractions, all modeled on director Steven Lisberger's original fictional world.

Now we're getting a third installment in the film franchise: TRON: Ares, directed by Joachim Rønning (Pirates of the Caribbean: Dead Men Tell No Tales, Maleficent: Mistress of Evil), that serves as a standalone sequel to 2010's TRON: Legacy. Disney just released the first trailer and poster art, and while the footage is short on plot, it's got the show-stopping visuals we've come to expect from all things TRON.

The film's director says it "builds upon the legacy of cutting-edge design, technology and storytelling," according to an official statement from Disney. And here's how they describe the plot. "TRON: Ares follows a highly sophisticated Program, Ares, who is sent from the digital world into the real world on a dangerous mission, marking humankind's first encounter with A.I. beings."

Share your thoughts in the comments. (Anyone remember playing the Tron videogame?)

The first episode of 2012's animated Tron: Uprising is available on the Disney XD YouTube channel...
AI

Microsoft's New AI-Generated Version of 'Quake 2' Now Playable Online (microsoft.com) 31

Microsoft has created a real-time AI-generated rendition of Quake II gameplay (playable on the web).

On Friday, Xbox's general manager of gaming AI posted the startling link to "an AI-generated gaming experience" at Copilot.Microsoft.com: "Move, shoot, explore — and every frame is created on the fly by an AI world model, responding to player inputs in real-time. Try it here."

They started with their "Muse" videogame world models, adding "a real-time playable extension" that players can interact with through keyboard/controller actions, "essentially allowing you to play inside the model," according to a Microsoft blog post. A concerted effort by the team resulted in both planning out what data to collect (what game, how should the testers play said game, what kind of behaviours might we need to train a world model, etc), and the actual collection, preparation, and cleaning of the data required for model training. Much to our initial delight we were able to play inside the world that the model was simulating. We could wander around, move the camera, jump, crouch, shoot, and even blow-up barrels similar to the original game. Additionally, since it features in our data, we can also discover some of the secrets hidden in this level of Quake II. We can also insert images into the models' context and have those modifications persist in the scene...

We do not intend for this to fully replicate the actual experience of playing the original Quake II game. This is intended to be a research exploration of what we are able to build using current ML approaches. Think of this as playing the model as opposed to playing the game... The interactions with enemy characters is a big area for improvement in our current WHAMM model. Often, they will appear fuzzy in the images and combat with them (damage being dealt to both the enemy/player) can be incorrect.

They warn that the model "can and will forget about objects that go out of view" for longer than 0.9 seconds. "This can also be a source of fun, whereby you can defeat or spawn enemies by looking at the floor for a second and then looking back up. Or it can let you teleport around the map by looking up at the sky and then back down. These are some examples of playing the model."

This generative AI model was trained on Quake II "with just over a week of data," reports Tom's Hardware — a dramatic reduction from the seven years of gameplay data used to train the original Muse model launched in February.

Some context from The Verge: "You could imagine a world where from gameplay data and video that a model could learn old games and really make them portable to any platform where these models could run," said Microsoft Gaming CEO Phil Spencer in February. "We've talked about game preservation as an activity for us, and these models and their ability to learn completely how a game plays without the necessity of the original engine running on the original hardware opens up a ton of opportunity."
"Is porting a game like Gameday 98 more feasible through AI or a small team?" asks the blog Windows Central. "What costs less or even takes less time? These are questions we'll be asking and answering over the coming decade as AI continues to grow. We're in year two of the AI boom; I'm terrified of what we'll see in year 10."

"It's clear that Microsoft is now training Muse on more games than just Bleeding Edge," notes The Verge, "and it's likely we'll see more short interactive AI game experiences in Copilot Labs soon." Microsoft is also working on turning Copilot into a coach for games, allowing the AI assistant to see what you're playing and help with tips and guides. Part of that experience will be available to Windows Insiders through Copilot Vision soon.
Earth

A Busy Hurricane Season is Expected. Here's How It Will Be Different From the Last (washingtonpost.com) 60

An anonymous reader shares a report: Yet another busy hurricane season is likely across the Atlantic this year -- but some of the conditions that supercharged storms like Hurricanes Helene and Milton in 2024 have waned, according to a key forecast issued Thursday.

A warm -- yet no longer record-hot -- strip of waters across the Atlantic Ocean is forecast to help fuel development of 17 named tropical cyclones during the season that runs from June 1 through Nov. 30, according to Colorado State University researchers. Of those tropical cyclones, nine are forecast to become hurricanes, with four of those expected to reach "major" hurricane strength.

That would mean a few more tropical storms and hurricanes than in an average year, yet slightly quieter conditions than those observed across the Atlantic basin last year. This time last year, researchers from CSU were warning of an "extremely active" hurricane season with nearly two dozen named tropical storms. The next month, the National Oceanic and Atmospheric Administration released an aggressive forecast, warning the United States could face one of its worst hurricane seasons in two decades.

The forecast out Thursday underscores how warming oceans and cyclical patterns in storm activity have primed the Atlantic basin for what is now a decades-long string of frequent, above-normal -- but not necessarily hyperactive -- seasons, said Philip Klotzbach, a senior research scientist at Colorado State and the forecast's lead author.

Science

Bonobos May Combine Words In Ways Previously Thought Unique To Humans (theguardian.com) 27

A new study shows bonobos can combine vocal calls in ways that mirror human language, producing phrases with meanings beyond the sum of individual sounds. "Human language is not as unique as we thought," said Dr Melissa Berthet, the first author of the research from the University of Zurich. Another author, Dr Simon Townsend, said: "The cognitive building blocks that facilitate this capacity is at least 7m years old. And I think that is a really cool finding." The Guardian reports: Writing in the journal Science, Berthet and colleagues said that in human language, words were often combined to produce phrases that either had a meaning that was simply the sum of its parts, or a meaning that was related to, but differed from, those of the constituent words. "'Blond dancer' -- it's a person that is both blond and a dancer, you just have to add the meanings. But a 'bad dancer' is not a person that is bad and a dancer," said Berthet. "So bad is really modifying the meaning of dancer here." It was previously thought animals such as birds and chimpanzees were only able to produce the former type of combination, but scientists have found bonobos can create both.

The team recorded 700 vocalizations from 30 adult bonobos in the Democratic Republic of the Congo, checking the context of each against a list of 300 possible situations or descriptions. The results reveal bonobos have seven different types of call, used in 19 different combinations. Of these, 15 require further analysis, but four appear to follow the rules of human sentences. Yelps -- thought to mean "let's do that" -- followed by grunts -- thought to mean "look at what I am doing" -- were combined to make "yelp-grunt," which appeared to mean "let's do what I'm doing." The combination, the team said, reflected the sum of its parts and was used by bonobos to encourage others to build their night nests.

The other three combinations had a meaning apparently related to, but different from, their constituent calls. For example, the team found a peep -- which roughly means "I would like to ..." -- followed by a whistle -- which appears to mean "let's stay together" -- could be combined to create "peep-whistle." This combination was used to smooth over tense social situations, such as during mating or displays of prowess. The team speculated its meaning was akin to "let's find peace." The team said the findings in bonobos, together with the previous work in chimps, had implications for the evolution of language in humans, given all three species showed the ability to combine words or vocalizations to create phrases.

Security

Google Launches Sec-Gemini v1 AI Model To Improve Cybersecurity Defense 2

Google has introduced Sec-Gemini v1, an experimental AI model built on its Gemini platform and tailored for cybersecurity. BetaNews reports: Sec-Gemini v1 is built on top of Gemini, but it's not just some repackaged chatbot. Actually, it has been tailored with security in mind, pulling in fresh data from sources like Google Threat Intelligence, the OSV vulnerability database, and Mandiant's threat reports. This gives it the ability to help with root cause analysis, threat identification, and vulnerability triage.

Google says the model performs better than others on two well-known benchmarks. On CTI-MCQ, which measures how well models understand threat intelligence, it scores at least 11 percent higher than competitors. On CTI-Root Cause Mapping, it edges out rivals by at least 10.5 percent. Benchmarks only tell part of the story, but those numbers suggest it's doing something right.
Access is currently limited to select researchers and professionals for early testing. If you meet those criteria, you can request access here.
The Courts

AI Avatar Tries To Argue Case Before a New York Court (apnews.com) 24

An anonymous reader quotes a report from the Associated Press: It took only seconds for the judges on a New York appeals court to realize that the man addressing them from a video screen -- a person about to present an argument in a lawsuit -- not only had no law degree, but didn't exist at all. The latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world unfolded March 26 under the stained-glass dome of New York State Supreme Court Appellate Division's First Judicial Department, where a panel of judges was set to hear from Jerome Dewald, a plaintiff in an employment dispute. "The appellant has submitted a video for his argument," said Justice Sallie Manzanet-Daniels. "Ok. We will hear that video now."

On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater. "May it please the court," the man began. "I come here today a humble pro se before a panel of five distinguished justices." "Ok, hold on," Manzanet-Daniels said. "Is that counsel for the case?" "I generated that. That's not a real person," Dewald answered. It was, in fact, an avatar generated by artificial intelligence. The judge was not pleased. "It would have been nice to know that when you made your application. You did not tell me that sir," Manzanet-Daniels said before yelling across the room for the video to be shut off. "I don't appreciate being misled," she said before letting Dewald continue with his argument.

Dewald later penned an apology to the court, saying he hadn't intended any harm. He didn't have a lawyer representing him in the lawsuit, so he had to present his legal arguments himself. And he felt the avatar would be able to deliver the presentation without his own usual mumbling, stumbling and tripping over words. In an interview with The Associated Press, Dewald said he applied to the court for permission to play a prerecorded video, then used a product created by a San Francisco tech company to create the avatar. Originally, he tried to generate a digital replica that looked like him, but he was unable to accomplish that before the hearing. "The court was really upset about it," Dewald conceded. "They chewed me up pretty good." [...] As for Dewald's case, it was still pending before the appeals court as of Thursday.

Operating Systems

Coreboot 25.03 Released With Support For 22 More Motherboards (phoronix.com) 26

Coreboot 25.03 has been released with support for 22 new motherboards and several other significant updates, including enhanced display handling, USB debugging, RISC-V support, and RAM initialization for older Intel platforms. Phoronix reports: Coreboot 25.03 delivers display handling improvements, a better USB debugging experience, CPU topology updates, various improvements to the open-source RAM initialization for aging Intel Haswell platforms, improved USB Type-C and Thunderbolt handling, various embedded controller (EC) improvements, better RISC-V architecture support, DDR5-7500 support, and many bug fixes across the sprawling Coreboot codebase. More details, including a full list of the supported boards, can be found here.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

Power

Open-Source Tool Designed To Throttle PC and Server Performance Based On Electricity Pricing (tomshardware.com) 56

Robotics and machine learning engineer Naveen Kul developed WattWise, a lightweight open-source CLI tool that monitors power usage via smart plugs and throttles system performance based on electricity pricing and peak hours. Tom's Hardware reports: The simple program, called WattWise, came about when Naveen built a dual-socket EPYC workstation with plans to add four GPUs. It's a power-intensive setup, so he wanted a way to monitor its power consumption using a Kasa smart plug. The enthusiast has released the monitoring portion of the project to the public now, but the portion that manages clocks and power will be released later. Unfortunately, the Kasa Smart app and the Home Assistant dashboard were inconvenient and couldn't do everything he desired. He already had a terminal window running monitoring tools like htop, nvtop, and nload, and decided to take matters into his own hands rather than dealing with yet another app.

Naveen built a terminal-based UI that shows power consumption data through Home Assistant and the TP-Link integration. The app monitors real-time power use, showing wattage and current, as well as providing historical consumption charts. More importantly, it is designed to automatically throttle CPU and GPU performance. Naveen's power provider uses Time-of-Use (ToU) pricing, so using a lot of power during peak hours can cost significantly more. The workstation can draw as much as 1400 watts at full load, but by reducing the CPU frequency from 3.7 GHz to 1.5 GHz, he's able to reduce consumption by about 225 watts. (No mention is made of GPU throttling, which could potentially allow for even higher power savings with a quad-GPU setup.)

Results will vary based on the hardware being used, naturally, and servers can pull far more power than a typical desktop -- even one designed and used for gaming. WattWise optimizes the system's clock speed based on the current system load, power consumption as reported by the smart plug, and the time -- with the latter factoring in peak pricing. From there, it uses a Proportional-Integral (PI) controller to manage the power and adapts system parameters based on the three variables.
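The control loop described above can be sketched in a few lines. This is a minimal illustration, not WattWise's actual code: the class name, power budgets, gains, and frequency bounds are all illustrative assumptions, and the real tool also weighs system load alongside measured power and peak-hour timing.

```python
# Sketch of a Proportional-Integral (PI) power throttle: pick a CPU
# frequency cap from measured wall power (e.g. a smart-plug reading)
# and whether the current hour falls in the utility's peak-pricing
# window. All constants here are illustrative, not WattWise's values.

class PowerThrottler:
    def __init__(self, kp=2.0, ki=0.5, f_min=1500, f_max=3700):
        self.kp, self.ki = kp, ki              # proportional / integral gains
        self.f_min, self.f_max = f_min, f_max  # frequency bounds in MHz
        self.integral = 0.0                    # accumulated error term

    def target_power(self, peak_hours: bool) -> float:
        # Tighter power budget (watts) during peak-priced hours.
        return 600.0 if peak_hours else 1400.0

    def step(self, measured_watts: float, peak_hours: bool) -> int:
        # Positive error means we are over budget and should slow down.
        error = measured_watts - self.target_power(peak_hours)
        self.integral += error
        # PI output maps the power error to a frequency reduction (MHz).
        adjustment = self.kp * error + self.ki * self.integral
        freq = self.f_max - adjustment
        # Clamp to the hardware's supported frequency range.
        return int(max(self.f_min, min(self.f_max, freq)))
```

At full off-peak budget the controller leaves the clock alone; once peak pricing kicks in, the same 1400 W draw produces a large error and the cap drops toward the lower bound, mirroring the 3.7 GHz to 1.5 GHz reduction described above. The integral term is what distinguishes this from a simple threshold: persistent overshoot keeps pushing the cap down until consumption settles at the budget.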
A blog post with more information is available here.

WattWise is also available on GitHub.
Biotech

Open Source Genetic Database Shuts Down To Protect Users From 'Authoritarian Governments' (404media.co) 28

An anonymous reader quotes a report from 404 Media: The creator of an open source genetic database is shutting it down and deleting all of its data because he has come to believe that its existence is dangerous with "a rise in far-right and other authoritarian governments" in the United States and elsewhere. "The largest use case for DTC genetic data was not biomedical research or research in big pharma," Bastian Greshake Tzovaras, the founder of OpenSNP, wrote in a blog post. "Instead, the transformative impact of the data came to fruition among law enforcement agencies, who have put the genealogical properties of genetic data to use."

OpenSNP has collected roughly 7,500 genomes over the last 14 years, primarily by allowing people to voluntarily submit their own genetic information they have downloaded from 23andMe. With the bankruptcy of 23andMe, increased interest in genetic data by law enforcement, and the return of Donald Trump and rise of authoritarian governments worldwide, Greshake Tzovaras told 404 Media he no longer believes it is ethical to run the database. "I've been thinking about it since 23andMe was on the verge of bankruptcy and been really considering it since the U.S. election. It definitely is really bad over there [in the United States]," Greshake Tzovaras told 404 Media. "I am quite relieved to have made the decision and come to a conclusion. It's been weighing on my mind for a long time."

Greshake Tzovaras said that he is proud of the OpenSNP project, but that, in a world where scientific data is being censored and deleted and where the Trump administration has focused on criminalizing immigrants and trans people, he now believes that the most responsible thing to do is to delete the data and shut down the project. "Most people in OpenSNP may not be at particular risk right now, but there are people from vulnerable populations in here as well," Greshake Tzovaras said. "Thinking about gender representation, minorities, sexual orientation -- 23andMe has been working on the whole 'gay gene' thing, it's conceivable that this would at some point in the future become an issue."
"Across the globe there is a rise in far-right and other authoritarian governments. While they are cracking down on free and open societies, they are also dedicated to replacing scientific thought and reasoning with pseudoscience across disciplines," Greshake Tzovaras wrote. "The risk/benefit calculus of providing free & open access to individual genetic data in 2025 is very different compared to 14 years ago. And so, sunsetting openSNP -- along with deleting the data stored within it -- feels like it is the most responsible act of stewardship for these data today."

"The interesting thing to me is there are data preservation efforts in the U.S. because the government is deleting scientific data that they don't like. This is approaching that same problem from a different direction," he added. "We need to protect the people in this database. I am supportive of preserving scientific data and knowledge, but the data comes second -- the people come first. We prefer deleting the data."
Programming

How Rust Finally Got a Specification - Thanks to a Consultancy's Open-Source Donation (rustfoundation.org) 16

As Rust approaches its 10th anniversary, "there is an important piece of documentation missing that many other languages provide," notes the Rust Foundation.

While there's documentation and tutorials — there's no official language specification: In December 2022, an RFC was submitted to encourage the Rust Project to begin working on a specification. After much discussion, the RFC was approved in July 2023, and work began.

Initially, the Rust Project specification team (t-spec) was interested in creating the document from scratch, using the Rust Reference as a guiding marker. However, the team knew there was already an external Rust specification being used successfully for compiler qualification purposes — the FLS.

Thank Berlin-based Ferrous Systems, a Rust consultancy that assembled that description "some years ago," according to a post on the Rust blog: They've since been faithfully maintaining and updating this document for new versions of Rust, and they've successfully used it to qualify toolchains based on Rust for use in safety-critical industries. [The Rust Foundation notes it's part of the consultancy's "Ferrocene" Rust compiler/toolchain.] Seeing this success, others have also begun to rely on the FLS for their own qualification efforts when building with Rust.
The Rust Foundation explains: The FLS provides a structured and detailed reference for Rust's syntax, semantics, and behavior, serving as a foundation for verification, compliance, and standardization efforts. Since Rust did not have an official language specification back then, nor a plan to write one, the FLS represented a major step toward describing Rust in a way that aligns with industry requirements, particularly in high-assurance domains.
And the Rust Project is "passionate about shipping high quality tools that enable people to build reliable software at scale," adds the Rust blog. So... It's in that light that we're pleased to announce that we'll be adopting the FLS into the Rust Project as part of our ongoing specification efforts. This adoption is being made possible by the gracious donation of the FLS by Ferrous Systems. We're grateful to them for the work they've done in assembling the FLS, in making it fit for qualification purposes, in promoting its use and the use of Rust generally in safety-critical industries, and now, for working with us to take the next step and to bring the FLS into the Project.

With this adoption, we look forward to better integrating the FLS with the processes of the Project and to providing ongoing and increased assurances to all those who use Rust in safety-critical industries and, in particular, to those who use the FLS as part of their qualification efforts.

More from the Rust Foundation: The t-spec team wanted to avoid potential confusion from having two highly visible Rust specifications in the industry and so decided it would be worthwhile to try to integrate the FLS with the Rust Reference to create the official Rust Project specification. They approached Ferrous Systems, which agreed to contribute its FLS to the Rust Project and allow the Rust Project to take over its development and management... This generous donation will provide a clearer path to delivering an official Rust specification. It will also empower the Rust Project to oversee its ongoing evolution, providing confidence to companies and individuals already relying on the FLS, and marking a major milestone for the Rust ecosystem.

"I really appreciate Ferrous taking this step to provide their specification to the Rust Project," said Joel Marcey, Director of Technology at the Rust Foundation and member of the t-spec team. "They have already done a massive amount of legwork...." This effort will provide others who require a Rust specification with an official, authoritative reference for their work with the Rust programming language... This is an exciting outcome. A heartfelt thank you to the Ferrous Systems team for their invaluable contribution!

Marcey said the move allows the team "to supercharge our progress in the delivery of an official Rust specification."

And the co-founder of Ferrous Systems, Felix Gilcher, also sounded excited. "We originally created the Ferrocene Language Specification to provide a structured and reliable description of Rust for the certification of the Ferrocene compiler. As an open source-first company, contributing the FLS to the Rust Project is a logical step toward fostering the development of a unified, community-driven specification that benefits all Rust users."
Cloud

Microsoft Announces 'Hyperlight Wasm': Speedy VM-Based Security at Scale with a WebAssembly Runtime (microsoft.com) 18

Cloud providers like the security of running things in virtual machines "at scale" — even though VMs "are not known for having fast cold starts or a small footprint..." noted Microsoft's Open Source blog last November. So Microsoft's Azure Core Upstream team built an open source Rust library called Hyperlight "to execute functions as fast as possible while isolating those functions within a VM."

But that was just the beginning... Then, we showed how to run Rust functions really, really fast, followed by using C to [securely] run JavaScript. In February 2025, the Cloud Native Computing Foundation (CNCF) voted to onboard Hyperlight into their Sandbox program [for early-stage projects].

[This week] we're announcing the release of Hyperlight Wasm: a Hyperlight virtual machine "micro-guest" that can run wasm component workloads written in many programming languages...

Traditional virtual machines do a lot of work to be able to run programs. Not only do they have to load an entire operating system, they also boot up the virtual devices that the operating system depends on. Hyperlight is fast because it doesn't do that work; all it exposes to its VM guests is a linear slice of memory and a CPU. No virtual devices. No operating system. But this speed comes at the cost of compatibility. Chances are that your current production application expects a Linux operating system running on the x86-64 architecture (hardware), not a bare linear slice of memory...

[B]uilding Hyperlight with a WebAssembly runtime — wasmtime — enables any programming language to execute in a protected Hyperlight micro-VM without any prior knowledge of Hyperlight at all. As far as program authors are concerned, they're just compiling for the wasm32-wasip2 target... Executing workloads in the Hyperlight Wasm guest isn't just possible for compiled languages like C, Go, and Rust, but also for interpreted languages like Python, JavaScript, and C#. The trick here, much like with containers, is to also include a language runtime as part of the image... Programming languages, runtimes, application platforms, and cloud providers are all starting to offer rich experiences for WebAssembly out of the box. If we do things right, you will never need to think about whether your application is running inside of a Hyperlight Micro-VM in Azure. You may never know your workload is executing in a Hyperlight Micro VM. And that's a good thing.
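To make "just compiling for the wasm32-wasip2 target" concrete, here is a minimal sketch of what guest code looks like: ordinary Rust with no Hyperlight-specific imports at all. The function name and messages are illustrative, not part of any Hyperlight API; the point is that the same source builds natively or, with the target installed, as a wasm component.

```rust
// Ordinary Rust guest code: nothing here is Hyperlight-specific.
// The same source compiles natively or for the wasm32-wasip2 target:
//   rustup target add wasm32-wasip2
//   cargo build --target wasm32-wasip2
// Function name and strings are illustrative, not a real Hyperlight API.

fn greet(host: &str) -> String {
    format!("Hello from a wasm component, {host}!")
}

fn main() {
    println!("{}", greet("Hyperlight"));
}
```

This is the compatibility trade described above in action: because the guest targets the WebAssembly System Interface rather than Linux on x86-64, it needs no operating system or virtual devices inside the micro-VM.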

While a traditional virtual-device-based VM takes about 125 milliseconds to load, "When the Hyperlight VMM creates a new VM, all it needs to do is create a new slice of memory and load the VM guest, which in turn loads the wasm workload. This takes about 1-2 milliseconds today, and work is happening to bring that number to be less than 1 millisecond in the future."

And there's also double security due to Wasmtime's software-defined runtime sandbox within Hyperlight's larger VM...
Transportation

'Why Did the Government Declare War on My Adorable Tiny Truck?' (bloomberg.com) 176

Automotive historian Dan Albert loves the "adorable tiny truck" he's driving. It's one of the small Japan-made "kei" pickups and minivans that "make up about a third of car sales in Japan." Americans can legally import older models for less than $10,000, and, getting 40 miles per gallon, they're "Cheap to buy and run... rugged, practical, no-frills machines — exactly what the American-built pickup truck used to be."

But unfortunately, kei buyers face "bureaucratic roadblocks that states like Massachusetts have erected to keep kei cars and trucks out of the hands of U.S. drivers." Several state departments of motor vehicles (DMVs) have balked at registering the imported machines, saying that they're too unsafe for American streets. Owners have responded with a righteous mix of good humor, lobbying and lawsuits... Kei trucks do not meet the Federal Motor Vehicle Safety Standards, or FMVSS — the highly specific rules US-market new cars must meet. But since 1988, the Imported Vehicle Safety Compliance Act has exempted vehicles that are at least 25 years old from these crash safety standards, allowing drivers to bring over vintage European and Asian market models...

Getting insurance coverage was the next barrier, as the company that had long been underwriting the Albert family's fleet also rejected me, forcing me to seek out a specialty "collector car" insurer. (I did eventually get regular coverage....) Maine, Rhode Island, New York, Pennsylvania, Georgia, Virginia, and Michigan also tightened their rules on registering small Japanese imports in recent years. The culprit, according to the auto enthusiast press, was the American Association of Motor Vehicle Administrators, the trade organization that serves as the lobbying and policy arm of DMVs across North America. Much of AAMVA's work involves integrating the databases of the 69 US and Canadian motor vehicle jurisdictions that are its members, so that a car stolen in one state can't be titled in another... The kei truck's regulatory troubles can be traced to a 2011 AAMVA report, "Best Practices Regarding Registration and Titling of Mini-Trucks," which called for outright bans and encouraged DMVs to lobby state legislatures to outlaw keis entirely.

The Insurance Institute for Highway Safety concurred, telling AAMVA that its recommendation did not go far enough: The IIHS said that keis should join the class of conveyances that the U.S. government calls Low Speed Vehicles, which are mechanically limited to 25 miles per hour or less and should be used only for short local trips on low-speed-limit roads because they can't protect occupants in the event of a collision with a regular vehicle... [But] By 2008, Japan's kei trucks did feature crumple zones and driver airbags in compliance with that country's safety standards...

Despite its name, the Imported Vehicle Safety Compliance Act that lets older cars into the US from overseas isn't really about safety: Car industry lobbyists secured passage of the law to protect dealer profits. Newer keis — which are banned — are safer and cleaner than the 25-year-old ones that can be imported now. (Battery-powered keis debuted in 2009.) But even mine has an airbag, front crumple zone, seatbelt pretensioners, and anti-lock brakes.

The article notes that kei fans have "a distinctly libertarian streak... Some owners I've talked to report forging titles, setting up shell companies in Montana and finding other means of skirting DMV rules."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Facebook

'An Open Letter To Meta: Support True Messaging Interoperability With XMPP' (xmpp.org) 31

In 1999 Slashdot reader Jeremie announced "a new project I recently started to create a complete open-source platform for Instant Messaging with transparent communication to other IM systems (ICQ, AIM, etc)." It was the first release of the eXtensible Messaging and Presence Protocol, and by 2008 Slashdot was asking if XMPP was "the next big thing." Facebook even supported it for third-party chat clients until 2015.

And here in 2025, the chair of the nonprofit XMPP Standards Foundation is long-time Slashdot reader ralphm, who is now issuing this call to action at XMPP.org: The European Digital Markets Act (DMA) is designed to break down walled gardens and enforce messaging interoperability. As a designated gatekeeper, Meta—controlling WhatsApp and Messenger—must comply. However, its current proposal falls short, risking further entrenchment of its dominance rather than fostering genuine competition. [...]

A Call to Action

The XMPP Standards Foundation urges Meta to adopt XMPP for messaging interoperability. It is ready to collaborate, continue to evolve the protocol to meet modern needs, and ensure true compliance with the DMA. Let's build an open, competitive messaging ecosystem—one that benefits both users and service providers.

It's time for real interoperability. Let's make it happen.

AI

First Trial of Generative AI Therapy Shows It Might Help With Depression 42

An anonymous reader quotes a report from MIT Technology Review: The first clinical trial of a therapy bot that uses generative AI suggests it was as effective as human therapy for participants with depression, anxiety, or risk for developing eating disorders. Even so, it doesn't give a go-ahead to the dozens of companies hyping such technologies while operating in a regulatory gray area. A team led by psychiatric researchers and psychologists at the Geisel School of Medicine at Dartmouth College built the tool, called Therabot, and the results were published on March 27 in the New England Journal of Medicine. Many tech companies are building AI therapy bots to address the mental health care gap, offering more frequent and affordable access than traditional therapy. However, challenges persist: poorly worded bot responses can cause harm, and forming meaningful therapeutic relationships is hard to replicate in software. While many bots rely on general internet data, researchers at Dartmouth developed "Therabot" using custom, evidence-based datasets. Here's what they found: To test the bot, the researchers ran an eight-week clinical trial with 210 participants who had symptoms of depression or generalized anxiety disorder or were at high risk for eating disorders. About half had access to Therabot, and a control group did not. Participants responded to prompts from the AI and initiated conversations, averaging about 10 messages per day. Participants with depression experienced a 51% reduction in symptoms, the best result in the study. Those with anxiety experienced a 31% reduction, and those at risk for eating disorders saw a 19% reduction in concerns about body image and weight. These measurements are based on self-reporting through surveys, a method that's not perfect but remains one of the best tools researchers have.

These results ... are about what one finds in randomized control trials of psychotherapy with 16 hours of human-provided treatment, but the Therabot trial accomplished it in about half the time. "I've been working in digital therapeutics for a long time, and I've never seen levels of engagement that are prolonged and sustained at this level," says [Michael Heinz, a research psychiatrist at Dartmouth College and Dartmouth Health and first author of the study].
Crime

Trump Pardons Founder of Electric Vehicle Start-Up Nikola, Trevor Milton (theguardian.com) 64

An anonymous reader quotes a report from The Guardian: Trevor Milton, the founder of electric vehicle start-up Nikola who was sentenced to prison last year, was pardoned by Donald Trump late on Thursday, the White House confirmed on Friday. The pardon of Milton, who was sentenced to four years in prison for exaggerating the potential of his technology, could wipe out hundreds of millions of dollars in restitution that prosecutors were seeking for defrauded investors. Milton and his wife donated more than $1.8 million to a Trump re-election campaign fund less than a month before the November election, according to the Federal Election Commission.

At Milton's trial, prosecutors said a company video of a prototype truck appearing to be driven down a desert highway was actually a video of a non-functioning Nikola that had been rolled down a hill. Milton had not been incarcerated pending an appeal. Milton said late on Thursday on social media and via a press release that he had been pardoned by Trump. "I am incredibly grateful to President Trump for his courage in standing up for what is right and for granting me this sacred pardon of innocence," Milton said.
Here's a timeline of notable events surrounding Nikola:

June, 2016: Nikola Motor Receives Over 7,000 Preorders Worth Over $2.3 Billion For Its Electric Truck
December, 2016: Nikola Motor Company Reveals Hydrogen Fuel Cell Truck With Range of 1,200 Miles
February, 2020: Nikola Motors Unveils Hybrid Fuel-Cell Concept Truck With 600-Mile Range
June, 2020: Nikola Founder Exaggerated the Capability of His Debut Truck
September, 2020: Nikola Motors Accused of Massive Fraud, Ocean of Lies
September, 2020: Nikola Admits Prototype Was Rolling Downhill In Promo Video
September, 2020: Nikola Founder Trevor Milton Steps Down as Chairman in Battle With Short Seller
October, 2020: Nikola Stock Falls 14 Percent After CEO Downplays Badger Truck Plans
November, 2020: Nikola Stock Plunges As Company Cancels Badger Pickup Truck
July, 2021: Nikola Founder Trevor Milton Indicted on Three Counts of Fraud
December, 2021: EV Startup Nikola Agrees To $125 Million Settlement
September, 2022: Nikola Founder Lied To Investors About Tech, Prosecutor Says in Fraud Trial
December, 2023: Nikola Founder Trevor Milton Sentenced To 4 Years For Securities Fraud
February 19, 2025: Nikola Files for Bankruptcy With Plans To Sell Assets, Wind Down
Operating Systems

Linux Kernel 6.14 Is a Big Leap Forward In Performance, Windows Compatibility (zdnet.com) 34

An anonymous reader quotes a report from ZDNet, written by Steven Vaughan-Nichols: Despite the minor delay, Linux 6.14 arrives packed with cutting-edge features and improvements to power upcoming Linux distributions, such as the forthcoming Ubuntu 25.04 and Fedora 42. The big news for desktop users is the improved NTSYNC driver, especially for those who like to play Windows games or run Windows programs on Linux. This driver is designed to emulate Windows NT synchronization primitives. What that feature means for you and me is that it will significantly improve the performance of Windows programs running on Wine and Steam Play. [...] Gamers always want the best possible graphics performance, so they'll also be happy to see that Linux now supports recently launched AMD RDNA 4 graphics cards. This includes support for the AMD Radeon RX 9070 XT and RX 9070 graphics cards. Combine this support with the recently improved open-source RADV driver and AMD gamers should see the best speed yet on their gaming rigs.

Of course, the release is not just for gamers. Linux 6.14 also includes several AMD and Intel processor enhancements. These boosts focus on power management, thermal control, and compute performance optimizations. These updates are expected to improve overall system efficiency and performance. This release also comes with the AMDXDNA driver, which provides official support for AMD's neural processing units based on the XDNA architecture. This integration enables efficient execution of AI workloads, such as convolutional neural networks and large language models, directly on supported AMD hardware. While Rust has faced some difficulties in recent months in Linux, more Rust programming language abstractions have been integrated into the kernel, laying the groundwork for future drivers written in Rust. [...] Besides drivers, Miguel Ojeda, Rust for Linux's lead developer, said recently that the introduction of the macro for smart pointers with Rust 1.84: derive(CoercePointee) is an "important milestone on the way to building a kernel that only uses stable Rust functions." This approach will also make integrating C and Rust code easier. We're getting much closer to Rust being grafted into Linux's tree.

In addition, Linux 6.14 supports Qualcomm's latest Snapdragon 8 Elite mobile processor, enhancing performance and stability for devices powered by this chipset. That support means you can expect to see much faster Android-based smartphones later this year. This release includes a patch for the so-called GhostWrite vulnerability, which can be used to root some RISC-V processors. This fix will block such attacks. Additionally, Linux 6.14 includes improvements for the copy-on-write Btrfs file system/logical volume manager. These primarily read-balancing methods offer flexibility for different RAID hardware configurations and workloads. Additionally, support for uncached buffered I/O optimizes memory usage on systems with fast storage devices.
Linux 6.14 is available for download here.
