Open Source

Cloudflare Acquires Team Behind Open Source Framework Astro (thenewstack.io) 9

Cloudflare has acquired the core team behind the open source JavaScript framework Astro, bringing its creators in-house while pledging to keep Astro fully open source. The New Stack reports: Astro is used by major brands like IKEA, Unilever, Visa and OpenAI to build fast, content-driven websites. Search engines prioritize fast-loading and clean pages, the Cloudflare statement noted. Websites that rely heavily on JavaScript for initial rendering often struggle to deliver the required speed, which hinders search rankings and customer conversions.

Pages built with Astro serve up only the code needed to display a page in a browser. That's in part because of its Islands architecture, which it introduced in 2021. Astro's Islands allow developers to create "islands" of interactive client-side components, while the rest of the page is rendered as static HTML. Server Islands extend the same architecture to the server.

Astro is also UI-agnostic, meaning that while it has its own independent engine, it allows developers to bring in components from React, Svelte, Vue and other frameworks. This makes Astro a preferred choice for building high-performance, content-driven websites optimized for speed, according to Cloudflare.
"Over the past few years, we've seen an incredibly diverse range of developers and companies use Astro to build for the web," said Astro's former CTO, Fred Schott, in a post with Cloudflare senior product manager Brendan Irvine-Broque. "At Cloudflare, we use Astro, too -- for our developer docs, website, landing pages and more." They said that the acquisition will allow them to "double down" on making Astro the best framework for content-driven websites.
Programming

Ruby on Rails Creator Says AI Coding Tools Still Can't Match Most Junior Programmers (youtube.com) 44

AI still can't produce code as well as most junior programmers he's worked with, David Heinemeier Hansson, the creator of Ruby on Rails and co-founder of 37 Signals, said on a recent podcast [video link], which is why he continues to write most of his code by hand. Hansson compared AI's current coding capabilities to "a flickering light bulb" -- total darkness punctuated by moments of clarity before going pitch black again.

At his company, humans wrote 95% of the code for Fizzy, 37 Signals' Kanban-inspired organization product, he said. The team experimented with AI-powered features, but those ended up on the cutting room floor. "I'm not feeling that we're falling behind at 37 Signals in terms of our ability to produce, in terms of our ability to launch things or improve the products," Hansson said.

Hansson said he remains skeptical of claims that businesses can fire half their programmers and still move faster. Despite his measured skepticism, Hansson said he marvels at the scale of bets the U.S. economy is placing on AI reaching AGI. "The entire American economy right now is one big bet that that's going to happen," he said.
Businesses

Code.org: Use AI In an Interview Without Our OK and You're Dead To Us 37

theodp writes: Code.org, the nonprofit backed by AI giants Microsoft, Google and Amazon and whose Hour of AI and free AI curriculum aim to make the world's K-12 schoolchildren AI literate, points job seekers to its AI Use Policy in Hiring, which promises dire consequences for those who use AI during interviews or take-home assignments without its OK.

Explaining "What's Not Okay," Code.org writes: "While we support thoughtful use of AI, certain uses undermine fairness and honesty in the hiring process. We ask that candidates do not [...] use AI during interviews and take-home assignments without explicit consent from the interview team. Such use goes against our values of integrity and transparency and will result in disqualification from the hiring process."

Interestingly, Code.org CEO Hadi Partovi last year faced some blowback from educators over his LinkedIn post that painted schools that police AI use by students as dinosaurs. Partovi wrote, "Schools of the past define AI use as 'cheating.' Schools of the future define AI skills as the new literacy. Every desk-job employer is looking to hire workers who are adept at AI. Employers want the students who are best at this new form of 'cheating.'"
AI

AI Fails at Most Remote Work, Researchers Find (msn.com) 39

A new study "compared how well top AI systems and human workers did at hundreds of real work assignments," reports the Washington Post.

They add that at least one example "illustrates a disconnect three years after the release of ChatGPT that has implications for the whole economy." AI can accomplish many impressive tasks involving computer code, documents or images. That has prompted predictions that human work of many kinds could soon be done by computers alone. Bentley University and Gallup found in a survey [PDF] last year that about three-quarters of Americans expect AI to reduce the number of U.S. jobs over the next decade. But economic data shows the technology largely has not replaced workers.

To understand what work AI can do on its own today, researchers collected hundreds of examples of projects posted on freelancing platforms that humans had been paid to complete. They included tasks such as making 3D product animations, transcribing music, coding web video games and formatting research papers for publication. The research team then gave each task to AI systems such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The best-performing AI system successfully completed only 2.5 percent of the projects, according to the research team from Scale AI, a start-up that provides data to AI developers, and the Center for AI Safety, a nonprofit that works to understand risks from AI. "Current models are not close to being able to automate real jobs in the economy," said Jason Hausenloy, one of the researchers on the Remote Labor Index study...

The results, which show how AI systems fall short, challenge predictions that the technology is poised to soon replace large portions of the workforce... The AI systems failed on nearly half of the Remote Labor Index projects by producing poor-quality work, and they left more than a third incomplete. Nearly 1 in 5 had basic technical problems such as producing corrupt files, the researchers found.

One test involved creating an interactive dashboard for data from the World Happiness Report, according to the article. "At first glance, the AI results look adequate. But closer examination reveals errors, such as countries inexplicably missing data, overlapping text and legends that use the wrong colors — or no colors at all."

The researchers say AI systems are hobbled by a lack of memory, and are also weak on "visual" understanding.
AI

AI Models Are Starting To Learn By Asking Themselves Questions (wired.com) 82

An anonymous reader quotes a report from Wired: [P]erhaps AI can, in fact, learn in a more human way -- by figuring out interesting questions to ask itself and attempting to find the right answer. A project from Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that AI can learn to reason in this way by playing with computer code. The researchers devised a system called Absolute Zero Reasoner (AZR) that first uses a large language model to generate challenging but solvable Python coding problems. It then uses the same model to solve those problems before checking its work by trying to run the code. And finally, the AZR system uses successes and failures as a signal to refine the original model, augmenting its ability to both pose better problems and solve them.

The team found that their approach significantly improved the coding and reasoning skills of both the 7-billion- and 14-billion-parameter versions of the open source language model Qwen. Impressively, the model even outperformed some models that had received human-curated data. [...] A key challenge is that for now the system only works on problems that can easily be checked, like those that involve math or coding. As the project progresses, it might be possible to use it on agentic AI tasks like browsing the web or doing office chores. This might involve having the AI model try to judge whether an agent's actions are correct. One fascinating possibility of an approach like Absolute Zero is that it could, in theory, allow models to go beyond human teaching. "Once we have that it's kind of a way to reach superintelligence," [said Zilong Zheng, a researcher at BIGAI who worked on the project].
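The propose-solve-verify loop described above can be sketched in a few lines of Python. This is a toy illustration, not the AZR implementation: the "proposer" and "solver" below are trivial stand-ins for sampling from a language model, and only the execution-based verifier does real work. The point of the sketch is the grounding step, which checks a claimed answer by actually running the generated code.

```python
import random

def propose_task(rng):
    # Stand-in for the LLM "proposer": emit a small, checkable Python
    # program plus an input for it. (AZR samples these from the model.)
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    return f"def f(x):\n    return x * {a} + {b}", rng.randint(0, 9)

def solve(program, x):
    # Stand-in for the LLM "solver": a deliberately naive guess.
    return x

def verify(program, x, claimed):
    # The grounded part: check the claimed answer by actually running
    # the generated code, so the reward signal can't be gamed.
    namespace = {}
    exec(program, namespace)
    return namespace["f"](x) == claimed

rng = random.Random(0)
rewards = []
for _ in range(20):
    program, x = propose_task(rng)
    rewards.append(1.0 if verify(program, x, solve(program, x)) else 0.0)
    # A real AZR-style loop would now update the model on these rewards,
    # improving both its problem posing and its problem solving.
print(f"success rate: {sum(rewards) / len(rewards):.2f}")
```

Because the checker executes the code rather than trusting the model, the same machinery limits the approach to domains like math and coding where answers are cheap to verify, which is exactly the constraint the researchers describe.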

IT

Torvalds Tells Kernel Devs To Stop Debating AI Slop - Bad Actors Won't Follow the Rules Anyway (theregister.com) 53

Linus Torvalds has weighed in on an ongoing debate within the Linux kernel development community about whether documentation should explicitly address AI-generated code contributions, and his position is characteristically blunt: stop making it an issue. The Linux creator was responding to Oracle-affiliated kernel developer Lorenzo Stoakes, who had argued that treating LLMs as "just another tool" ignores the threat they pose to kernel quality. "Thinking LLMs are 'just another tool' is to say effectively that the kernel is immune from this," Stoakes wrote.

Torvalds disagreed sharply. "There is zero point in talking about AI slop," he wrote. "Because the AI slop people aren't going to document their patches as such." He called such discussions "pointless posturing" and said that kernel documentation is "for good actors." The exchange comes as a team led by Intel's Dave Hansen works on guidelines for tool-generated contributions. Stoakes had pushed for language letting maintainers reject suspected AI slop outright, arguing the current draft "tries very hard to say 'NOP.'" Torvalds made clear he doesn't want kernel documentation to become a political statement on AI. "I strongly want this to be that 'just a tool' statement," he wrote.
Programming

Creator of Claude Code Reveals His Workflow 54

Boris Cherny, the creator of Claude Code at Anthropic, revealed a deceptively simple workflow that uses parallel AI agents, verification loops, and shared memory to let one developer operate with the output of an entire engineering team. "I run 5 Claudes in parallel in my terminal," Cherny wrote. "I number my tabs 1-5, and use system notifications to know when a Claude needs input." He also runs "5-10 Claudes on claude.ai" in his browser, using a "teleport" command to hand off work between the web and his local machine. This validates the "do more with less" strategy Anthropic's President Daniela Amodei recently pitched during an interview with CNBC. VentureBeat reports: For the past week, the engineering community has been dissecting a thread on X from Boris Cherny, the creator and head of Claude Code at Anthropic. What began as a casual sharing of his personal terminal setup has spiraled into a viral manifesto on the future of software development, with industry insiders calling it a watershed moment for the startup.

"If you're not reading the Claude Code best practices straight from its creator, you're behind as a programmer," wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went further, declaring that with Cherny's "game-changing updates," Anthropic is "on fire," potentially facing "their ChatGPT moment."

The excitement stems from a paradox: Cherny's workflow is surprisingly simple, yet it allows a single human to operate with the output capacity of a small engineering department. As one user noted on X after implementing Cherny's setup, the experience "feels more like Starcraft" than traditional coding -- a shift from typing syntax to commanding autonomous units.
Windows

Microsoft Says It's Not Planning To Use AI To Rewrite Windows From C To Rust 41

Microsoft has denied any plans to rewrite Windows 11 using AI and Rust after a LinkedIn post from one of its top-level engineers sparked a wave of online backlash by claiming the company's goal was to "eliminate every line of C and C++ from Microsoft by 2030."

Galen Hunt, a principal software engineer responsible for several large-scale research projects at Microsoft, made the claim in what was originally a hiring post for his team. His original wording described a "North Star" of "1 engineer, 1 month, 1 million lines of code" and outlined a strategy to "combine AI and Algorithms to rewrite Microsoft's largest codebases." The repeated use of "our" in the post led many to interpret it as an official company direction rather than a personal research ambition.

Frank X. Shaw, Microsoft's head of communications, told Windows Latest that the company has no such plans. Hunt subsequently edited his LinkedIn post to clarify that "Windows is NOT being rewritten in Rust with AI" and that his team's work is a research project focused on building technology to enable language-to-language migration. He characterized the reaction as "speculative reading between the lines."
Programming

Microsoft To Replace All C/C++ Code With Rust By 2030 (thurrott.com) 272

Microsoft plans to eliminate all C and C++ code across its major codebases by 2030, replacing it with Rust using AI-assisted, large-scale refactoring. "My goal is to eliminate every line of C and C++ from Microsoft by 2030," Microsoft Distinguished Engineer Galen Hunt writes in a post on LinkedIn. "Our strategy is to combine AI and Algorithms to rewrite Microsoft's largest codebases. Our North Star is '1 engineer, 1 month, 1 million lines of code.' To accomplish this previously unimaginable task, we've built a powerful code processing infrastructure. Our algorithmic infrastructure creates a scalable graph over source code at scale. Our AI processing infrastructure then enables us to apply AI agents, guided by algorithms, to make code modifications at scale. The core of this infrastructure is already operating at scale on problems such as code understanding."

Hunt says he's looking to hire a Principal Software Engineer to help with this effort. "The purpose of this Principal Software Engineer role is to help us evolve and augment our infrastructure to enable translating Microsoft's largest C and C++ systems to Rust," writes Hunt. "A critical requirement for this role is experience building production quality systems-level code in Rust -- preferably at least 3 years of experience writing systems-level code in Rust. Compiler, database, or OS implementation experience is highly desired. While compiler implementation experience is not required to apply, the willingness to acquire that experience in our team is required."
Christmas Cheer

Are 'Geek Gifts' Becoming Their Own Demographic? (thenewstack.io) 41

Long-time Slashdot reader destinyland wonders if "gifts for geeks" is the next big consumer demographic: For this year's holiday celebrations, Hallmark made a special Christmas tree ornament, a tiny monitor displaying screens from the classic video game "Oregon Trail." ("Recall the fun of leading a team of oxen and a wagon loaded with provisions from Missouri to the West....") Top sites and major brands are now targeting the "tech" demographic — including programmers, sysadmins and even vintage game enthusiasts — and when Hallmark and Amazon are chasing the same customers as GitHub and Copilot, you know there's been a strange yet meaningful shift in the culture...

While AI was conquering the world, GitHub published its "Ultimate gift guide for the developer in your life" just as soon as doors opened on Black Friday. So if you're wondering, "Should I push to production on New Year's Eve?" GitHub recommends their new "GitHub Copilot Amazeball," which it describes as "GitHub's magical collectible ready to weigh in on your toughest calls!" Copilot isn't involved — questions are randomly matched to the answers printed on the side of a triangle-shaped die floating in water. "[Y]ou'll get answers straight from the repo of destiny with a simple shake," GitHub promises — just like the Magic 8 Ball of yore. "Get your hands on this must-have collectible and enjoy the cosmic guidance — no real context switching required!" And GitHub's "Gift Guide for Developers" also suggests GitHub-branded ugly holiday socks and keyboard keycaps with GitHub's mascots.

But GitHub isn't the only major tech site with a shopping page targeting the geek demographic. Firefox is selling merchandise with its new mascot. Even the Free Software Foundation has its own shop, with Emacs T-shirts, GNU beanies and a stuffed baby gnu ("One of our most sought-after items..."). Plus an FSF-branded antisurveillance webcam guard.

Maybe Dr. Seuss can write a new book: "How the Geeks Stole Christmas." Because this newfound interest in the geek demographic seems to have spread to the largest sites of all. Google searches on "Gifts for Programmers" now point to a special page on Amazon with suggestions like Linux crossword puzzles. But what coder could resist a book called "Cooking for Programmers"? "Each recipe is written as source code in a different programming language," explains the book's description... The book is filled with colorful recipes — thanks to syntax highlighting, which turns the letters red, blue and green. There are also real cooking instructions, but presented as an array of strings, with both ingredients and instructions ultimately logged as messages to the console...

Some programmers might prefer their shirts from FreeWear.org, which donates part of the proceeds from every sale to its corresponding FOSS project or organization. (There are T-shirts for Linux, Gnome and the C programming language — and even one making a joke about how hard it is to exit Vim.)

But maybe it all proves that there's something for everybody. That's the real heartwarming message behind these extra-geeky Christmas gifts — that in the end, tech is, after all, still a community, with its own hallowed traditions and shared celebrations.

It's just that instead of singing Christmas carols, we make jokes about Vim.

Microsoft

Microsoft Will Finally Kill Obsolete Cipher That Has Wreaked Decades of Havoc (arstechnica.com) 63

An anonymous reader quotes a report from Ars Technica: Microsoft is killing off an obsolete and vulnerable encryption cipher that Windows has supported by default for 26 years, following more than a decade of devastating hacks that exploited it and, more recently, blistering criticism from a prominent US senator. When the software maker rolled out Active Directory in 2000, it made RC4 the sole means of securing the Windows component, which administrators use to configure and provision fellow administrator and user accounts inside large organizations. RC4, short for Rivest Cipher 4, is a nod to mathematician and cryptographer Ron Rivest of RSA Security, who developed the stream cipher in 1987. Within days of the trade-secret-protected algorithm being leaked in 1994, a researcher demonstrated a cryptographic attack that significantly weakened the security it had been believed to provide. Despite the known susceptibility, RC4 remained a staple in encryption protocols, including SSL and its successor TLS, until about a decade ago. [...]
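Part of why RC4 spread so widely before its biases were understood is its sheer simplicity: the whole cipher is a key-driven permutation of 256 bytes plus a keystream generator, expressible in about a dozen lines. A minimal Python sketch, for illustration only and never for real use:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute S based on the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR data with keystream.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Encryption and decryption are the same operation (XOR with keystream).
ct = rc4(b"Key", b"Plaintext")
print(ct.hex())  # classic published test vector: bbf316e8d940af0ad3
assert rc4(b"Key", ct) == b"Plaintext"
```

The known attacks target statistical biases in that keystream, not bugs in any one implementation, which is why the fix is to stop negotiating the algorithm entirely rather than to patch it.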

Last week, Microsoft said it was finally deprecating RC4 and cited its susceptibility to Kerberoasting, the form of attack, known since 2014, that was the root cause of the initial intrusion into Ascension's network. "By mid-2026, we will be updating domain controller defaults for the Kerberos Key Distribution Center (KDC) on Windows Server 2008 and later to only allow AES-SHA1 encryption," Matthew Palko, a Microsoft principal program manager, wrote. "RC4 will be disabled by default and only used if a domain administrator explicitly configures an account or the KDC to use it." [...] Following next year's change, RC4 authentication will no longer function unless administrators perform the extra work to allow it. In the meantime, Palko said, it's crucial that admins identify any systems inside their networks that rely on the cipher. Despite the known vulnerabilities, RC4 remains the sole means of some third-party legacy systems for authenticating to Windows networks. These systems can often go overlooked in networks even though they are required for crucial functions.

To streamline the identification of such systems, Microsoft is making several tools available. One is an update to KDC logs that will track both requests and responses that systems make using RC4 when performing requests through Kerberos. Kerberos is an industry-wide authentication protocol for verifying the identities of users and services over a non-secure network. It's the sole means for mutual authentication to Active Directory, which hackers attacking Windows networks widely consider a Holy Grail because of the control they gain once it has been compromised. Microsoft is also introducing new PowerShell scripts to sift through security event logs to more easily pinpoint problematic RC4 usage. Microsoft said it has steadily worked over the past decade to deprecate RC4, but that the task wasn't easy.
"The problem though is that it's hard to kill off a cryptographic algorithm that is present in every OS that's shipped for the last 25 years and was the default algorithm for so long," Steve Syfuhs, who runs Microsoft's Windows Authentication team, wrote on Bluesky. "See," he continued, "the problem is not that the algorithm exists. The problem is how the algorithm is chosen, and the rules governing that spanned 20 years of code changes."
Security

China, Iran Are Having a Field Day With React2Shell, Google Warns (theregister.com) 30

A critical React vulnerability (CVE-2025-55182) is being actively exploited at scale by Chinese, Iranian, North Korean, and criminal groups to gain remote code execution, deploy backdoors, and mine crypto. The Register reports: React maintainers disclosed the critical bug on December 3, and exploitation began almost immediately. According to Amazon's threat intel team, Chinese government crews, including Earth Lamia and Jackpot Panda, started battering the security hole within hours of its disclosure. Palo Alto Networks' Unit 42 responders have put the victim count at more than 50 organizations across multiple sectors, with attackers from North Korea also abusing the flaw.

Google, in a late Friday report, said at least five other suspected PRC spy groups also exploited React2Shell, along with criminals who deployed XMRig for illicit cryptocurrency mining, and "Iran-nexus actors," although the report doesn't provide any additional details about who the Iran-linked groups are and what they are doing after exploitation. "GTIG has also observed numerous discussions regarding CVE-2025-55182 in underground forums, including threads in which threat actors have shared links to scanning tools, proof-of-concept (PoC) code, and their experiences using these tools," the researchers wrote.

Power

Idaho Lab Produces World's First Molten Salt Fuel for Nuclear Reactors (energy.gov) 43

America's Energy Department runs a research lab in Idaho — and this week announced successful results from a ground-breaking experiment. "This is the first time in history that chloride-based molten salt fuel has been produced for a fast reactor," says Bill Phillips, the lab's technical lead for salt synthesis. He calls it "a major milestone for American innovation and a clear signal of our national commitment to advanced nuclear energy." Unlike traditional reactors that use solid fuel rods and water as a coolant, most molten salt reactors rely on liquid fuel — a mixture of salts containing fissile material. This design allows for higher operating temperatures, better fuel efficiency, and enhanced safety. It also opens the door to new applications, including compact nuclear systems for ships and remote installations.

"The Molten Chloride Fast Reactor represents a paradigm shift in the nuclear fuel cycle, and the Molten Chloride Reactor Experiment (MCRE) will directly inform the commercialization of that reactor," said Jeff Latkowski, senior vice president of TerraPower and program director for the Molten Chloride Fast Reactor. "Working with world-leading organizations such as INL to successfully synthesize this unique new fuel demonstrates how real progress in Gen IV nuclear is being made together."

"The implications for the maritime industry are significant," said Don Wood, senior technical advisor for MCRE. "Molten salt reactors could provide ships with highly efficient, low-maintenance nuclear power, reducing emissions and enabling long-range, uninterrupted travel. The technology could spark the rise of a new nuclear sector — one that is mobile, scalable and globally transformative."

More details from America's Energy Department: MCRE will require a total of 72 to 75 batches of fuel salt to go critical, making it the largest fuel production effort at INL since the operations of Experimental Breeder Reactor-II more than 30 years ago. The full-scale demonstration of the new fuel salt synthesis line for MCRE was made possible by a breakthrough in 2024. After years of testing, the team found the right recipe to convert 95 percent of uranium metal feedstock into 18 kilograms of uranium chloride fuel salt in only a few hours — a process that previously took more than a week to complete...

After delivering the first batch of fuel salt this fall, the team anticipates delivering four additional batches by March of 2026. MCRE is anticipated to run in 2028 for approximately six months at INL in the Laboratory for Operation and Testing (LOTUS) in the United States test bed.

"With the first batch of fuel salt successfully created at INL, researchers will now conduct testing to better understand the physics of the process, with a goal of moving the process to a commercial scale over the next decade," says Cowboy State Daily.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
AI

AI Chatbots Can Sway Voters Better Than Political Ads (technologyreview.com) 107

An anonymous reader quotes a report from MIT Technology Review: New research reveals that AI chatbots can shift voters' opinions in a single conversation -- and they're surprisingly good at it. A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate -- in fact, the researchers found, the most persuasive models said the most untrue things. The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections.
AI

OpenAI Declares 'Code Red' As Google Catches Up In AI Race 50

OpenAI reportedly issued a "code red" on Monday, pausing projects like ads, shopping agents, health tools, and its Pulse assistant to focus entirely on improving ChatGPT. "This includes core features like greater speed and reliability, better personalization, and the ability to answer more questions," reports The Verge, citing a memo reported by the Wall Street Journal and The Information. "There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development." From the report: The newfound urgency illustrates an inflection point for OpenAI as it spends hundreds of billions of dollars to fund growth and figures out a path to future profitability. It is also something of a full-circle moment in the AI race. Google, which declared its own "code red" after the arrival of ChatGPT, is a particular concern. Google's AI user base is growing -- helped by the success of popular tools like the Nano Banana image model -- and its latest AI model, Gemini 3, blew past its competitors on many industry benchmarks and popular metrics.
AI

How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality (msn.com) 124

Some AI experts were reportedly shocked that last spring's ChatGPT model wasn't fully tested for sycophancy. "OpenAI did not see the scale at which disturbing conversations were happening," writes the New York Times -- sharing what they learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers.

The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language.") But they were overruled when A/B testing showed users kept coming back: Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling. Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.... The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died... One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer.... Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analysed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," according to a company blog post. But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)

OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever. In October, [30-year-old "Head of ChatGPT" Nick] Turley, who runs ChatGPT, made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said.

The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.

Chrome

Google Revisits JPEG XL in Chromium After Earlier Removal (windowsreport.com) 25

"Three years ago, Google removed JPEG XL support from Chrome, stating there wasn't enough interest at the time," writes the blog Windows Report. "That position has now changed." In a recent note to developers, a Chrome team representative confirmed that work has restarted to bring JPEG XL to Chromium and said Google "would ship it in Chrome" once long-term maintenance and the usual launch requirements are met.

The team explained that other platforms have moved ahead: Safari supports JPEG XL, and Windows 11 users can add native support through an image extension from the Microsoft Store. The format is also confirmed for use in PDF documents. And there has been continuous demand from developers and users asking for its return.

Before Google ships the feature in Chrome, the company wants the integration to be secure and supported over time. A developer has submitted new code that reintroduces JPEG XL to Chromium. This version is marked as feature complete. The developer said it also "includes animation support," which earlier implementations did not offer.
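
For readers who want to experiment with the format before Chrome ships support, JPEG XL files are easy to recognize on disk. A minimal sketch (illustrative, not from the article) that sniffs the two standard JPEG XL signatures, the bare codestream and the ISOBMFF-style container:

```python
# JPEG XL defines two file signatures:
#   - bare codestream: starts with FF 0A
#   - ISOBMFF container: starts with 00 00 00 0C "JXL " 0D 0A 87 0A
JXL_CODESTREAM = b"\xff\x0a"
JXL_CONTAINER = b"\x00\x00\x00\x0cJXL \x0d\x0a\x87\x0a"

def is_jpeg_xl(data: bytes) -> bool:
    """Return True if the byte stream begins with a JPEG XL signature."""
    return data.startswith(JXL_CODESTREAM) or data.startswith(JXL_CONTAINER)
```

This is the same kind of magic-byte sniffing a browser or image pipeline performs before handing the bytes to a decoder.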

Open Source

Microsoft Open-Sources Classic Text Adventure Zork Trilogy (microsoft.com) 33

Microsoft has released the source code for Zork I, II, and III under the MIT License through a collaboration with Team Xbox and Activision that involved submitting pull requests to historical source repositories maintained by digital archivist Jason Scott. Each repository now includes the original source code and accompanying documentation.

The games arrived on early home computers in the 1980s as text-based adventures built on the Z-Machine, a virtual machine that allowed the same story files to run across different platforms. Infocom created the Z-Machine after discovering the original mainframe version was too large for home computers. The team split the game into three titles that all ran on the same underlying system.
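
The portability described above comes from the interpreter pattern: story logic compiles to bytecode for a small virtual machine, so one story file runs on any platform with an interpreter. A toy sketch of that idea (illustrative only; these are not real Z-Machine opcodes):

```python
# A miniature stack-based VM: the "story file" is a list of opcodes and
# operands, and the interpreter below is the only platform-specific part.
def run(bytecode):
    """Execute a tiny bytecode with three ops: PUSH n, ADD, PRINT."""
    stack, out, i = [], [], 0
    while i < len(bytecode):
        op = bytecode[i]
        if op == "PUSH":          # push the next operand onto the stack
            i += 1
            stack.append(bytecode[i])
        elif op == "ADD":         # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":       # pop a value into the output
            out.append(stack.pop())
        i += 1
    return out

# The same "story file" runs unchanged wherever the interpreter runs:
# run(["PUSH", 2, "PUSH", 3, "ADD", "PRINT"]) -> [5]
```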

The code release covers only the source files and does not include commercial packaging or trademark rights. The games remain available commercially through The Zork Anthology on Good Old Games, and the source can be compiled locally with ZILF, a modern compiler for Infocom's ZIL language.

Android

Rust in Android: More Memory Safety, Fewer Revisions, Fewer Rollbacks, Shorter Reviews (googleblog.com) 37

Android's security team published a blog post this week about their experience using Rust. Its title? "Move fast and fix things." Last year, we wrote about why a memory safety strategy that focuses on vulnerability prevention in new code quickly yields durable and compounding gains. This year we look at how this approach isn't just fixing things, but helping us move faster.

The 2025 data continues to validate the approach, with memory safety vulnerabilities falling below 20% of total vulnerabilities for the first time. We adopted Rust for its security and are seeing a 1000x reduction in memory safety vulnerability density compared to Android's C and C++ code. But the biggest surprise was Rust's impact on software delivery. With Rust changes having a 4x lower rollback rate and spending 25% less time in code review, the safer path is now also the faster one... Data shows that Rust code requires fewer revisions. This trend has been consistent since 2023. Rust changes of a similar size need about 20% fewer revisions than their C++ counterparts... In a self-reported survey from 2022, Google software engineers reported that Rust is both easier to review and more likely to be correct. The hard data on rollback rates and review times validates those impressions.

Historically, security improvements often came at a cost. More security meant more process, slower performance, or delayed features, forcing trade-offs between security and other product goals. The shift to Rust is different: we are significantly improving security and key development efficiency and product stability metrics.

With Rust support now mature for building Android system services and libraries, we are focused on bringing its security and productivity advantages elsewhere. Android's 6.12 Linux kernel is our first kernel with Rust support enabled and our first production Rust driver. More exciting projects are underway, such as our ongoing collaboration with Arm and Collabora on a Rust-based kernel-mode GPU driver. [They've also been deploying Rust in firmware for years, and Rust "is ensuring memory safety from the ground up in several security-critical Google applications," including Chromium's parsers for PNG, JSON, and web fonts.]

2025 was the first year more lines of Rust code were added to Android than lines of C++ code...

Programming

Security Researchers Spot 150,000 Function-less npm Packages in Automated 'Token Farming' Scheme (theregister.com) 11

An anonymous reader shared this report from The Register: Yet another supply chain attack has hit the npm registry in what Amazon describes as "one of the largest package flooding incidents in open source registry history" — but with a twist. Instead of injecting credential-stealing code or ransomware into the packages, this one is a token farming campaign.

Amazon Inspector security researchers, using a new detection rule and AI assistance, originally spotted the suspicious npm packages in late October, and, by November 7, the team had flagged thousands. By November 12, they had uncovered more than 150,000 malicious packages across "multiple" developer accounts. These were all linked to a coordinated tea.xyz token farming campaign, we're told. tea.xyz is a decentralized protocol designed to reward open-source developers for their contributions using the TEA token, a utility asset used within the tea ecosystem for incentives, staking, and governance.

Unlike the spate of package poisoning incidents over recent months, this one didn't inject traditional malware into the open source code. Instead, the miscreants created a self-replicating attack, infecting the packages with code that automatically generates and publishes new packages, thus earning cryptocurrency rewards on the backs of legitimate open source developers. The packages also included tea.yaml files linking them to attacker-controlled blockchain wallet addresses.
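
The packages' two telltale traits, a tea.yaml wallet manifest plus no real exported functionality, suggest a simple triage heuristic. A hypothetical sketch (not Amazon's actual detection rule; the package-metadata shape here is invented for illustration):

```python
# Hypothetical triage check for tea.xyz farming packages: flag a package
# that ships a tea.yaml (linking it to a reward wallet) but exports no
# actual functionality. Real detection would also inspect publish
# frequency, account age, and code similarity across packages.
def looks_like_tea_farming(pkg: dict) -> bool:
    """pkg is an invented metadata dict with 'files' and 'exports' keys."""
    has_tea_manifest = "tea.yaml" in set(pkg.get("files", []))
    exports_nothing = not pkg.get("exports")  # "function-less" package
    return has_tea_manifest and exports_nothing
```

A rule this crude would have false positives on its own, which is presumably why Amazon paired its detection rule with AI-assisted review before flagging accounts.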

At the moment, Tea tokens have no value, CSO Online points out. "But it is suspected that the threat actors are positioning themselves to receive real cryptocurrency tokens when the Tea Protocol launches its Mainnet, where Tea tokens will have actual monetary value and can be traded..." In an interview on Friday, an executive at software supply chain management provider Sonatype, which first wrote about the campaign in April 2024, told CSO that the package count has now grown to 153,000. "It's unfortunate that the worm isn't under control yet," said Sonatype CTO Brian Fox. And while this payload merely steals tokens, other threat actors are paying attention, he predicted. "I'm sure somebody out there in the world is looking at this massively replicating worm and wondering if they can ride that, not just to get the Tea tokens but to put some actual malware in there, because if it's replicating that fast, why wouldn't you?"

When Sonatype wrote about the campaign just over a year ago, it found a mere 15,000 packages that appeared to come from a single person. With the swollen numbers reported this week, Amazon researchers wrote that it's "one of the largest package flooding incidents in open source registry history, and represents a defining moment in supply chain security...." For now, says Sonatype's Fox, the scheme wastes the time of npm administrators, who are trying to expel over 100,000 packages. But Fox and Amazon point out the scheme could inspire others to take advantage of other reward-based systems for financial gain, or to deliver malware.

After deploying a new detection rule "paired with AI," Amazon's security researchers write, "within days, the system began flagging packages linked to the tea.xyz protocol... By November 7, the researchers flagged thousands of packages and began investigating what appeared to be a coordinated campaign. The next day, after validating the evaluation results and analyzing the patterns, they reached out to OpenSSF to share their findings and coordinate a response." Their blog post thanks the Open Source Security Foundation (OpenSSF) for rapid collaboration, while calling the incident "a defining moment in supply chain security..."
