Windows

Open Source Developer Brings Linux to Windows 95, Windows 98, and Windows ME 14

Microsoft released the "Windows Subsystem for Linux" in 2016, adding an optional Linux environment to every version of Windows since Windows 10. But now an open source developer has brought Linux to Windows 95, Windows 98, and Windows Me, reports the blog It's FOSS, "with Linux kernel 6.19 running alongside the Windows 9x kernel, letting both operate on the same machine at the same time." A virtual device driver handles initialization, loads the kernel off disk, and manages the event loop for page faults and syscalls. Since Win9x lacks the right interrupt table support for the standard Linux syscall interrupt, WSL9x reroutes those calls through the fault handler instead. Rounding it all out is wsl.com, a small 16-bit DOS program that pipes the terminal output from Linux back to whatever MS-DOS prompt window you ran it from.
The end result is that WSL9x requires no hardware virtualization, and can run on hardware as old as the i486, the article points out. On Mastodon the developer says they "really got this one in right under the wire, before they start removing 486 support from Linux."
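The rerouting trick described above can be sketched as a toy model. Everything here is illustrative: the vector number, the tiny syscall table, and the function names are hypothetical stand-ins, and the real WSL9x does this in a ring-0 virtual device driver, not in Python.

```python
# Toy model of the dispatch described above: since Win9x can't hook the
# standard Linux syscall vector, those calls are allowed to fault and are
# sorted out inside the fault/exception handler instead.
LINUX_SYSCALL_VECTOR = 0x80  # the classic int 0x80 syscall entry

def handle_syscall(number, args):
    """Hypothetical Linux syscall dispatcher (two example entries)."""
    table = {1: lambda *a: f"exit({a[0]})",
             4: lambda *a: f"write(fd={a[0]})"}
    return table.get(number, lambda *a: "ENOSYS")(*args)

def fault_handler(vector, number, args):
    """Exception handler doubling as the syscall entry point."""
    if vector == LINUX_SYSCALL_VECTOR:
        # Reroute: what looked like a fault is really a Linux syscall.
        return handle_syscall(number, args)
    return "pass to Win9x fault handling"

print(fault_handler(0x80, 4, (1,)))  # → write(fd=1)
print(fault_handler(0x0E, 0, ()))    # → pass to Win9x fault handling
```

The point of the sketch is only the control flow: the same handler that services genuine page faults inspects the vector and, when it matches the Linux syscall interrupt, forwards the call to a syscall table instead of the Win9x kernel.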

The source code for WSL9x is released under the GPL-3.0 license, and was "proudly written without AI."
Linux

Linux Drops ISDN Subsystem and Other Old Network Drivers 42

"Old code like amateur radio and NFC have long been a burden to core networking developers," reads the pull request.

And so on Thursday Linus Torvalds merged the pull request "to rid the Linux kernel of the old Integrated Services Digital Network (ISDN) subsystem," reports Phoronix, "and various other old network drivers largely for PCMCIA-era network adapters." This was the code suggested for removal given the recent influx of AI/LLM-generated bug reports against this dated code that likely has no active upstream users remaining... [W]ith the large language models and increased code fuzzing finding potential issues with these drivers for obsolete hardware, it's easier to just get rid of these drivers if no one is actively using the hardware from decades ago... This merge lightens the kernel by 138,161 lines of code, with ISDN gone along with numerous old network adapters, legacy ATM device drivers, and the amateur ham radio support. The main networking drivers removed affect the 3com 3c509 / 3c515 / 3c574 / 3c589, AMD Lance, AMD NMCLAN, SMSC SMC9194 / SMC91C92, Fujitsu FMVJ18X, and 8390 AX88190 / Ultra / WD80X3.

Linux 7.1 has also removed the long-obsolete bus mouse support, begun phasing out Intel 486 CPU support, and dropped support for Russia's Baikal CPUs.
AI

White House Pushed Out New AI Official After Just Four Days on the Job 26

It's the U.S. government's main link to the AI industry, reports The Washington Post, working to assess national security risks of new models like Anthropic's "Mythos".

To run it they'd hired Collin Burns, who'd worked at OpenAI and then Anthropic. But Burns started work Monday at the Center for AI Standards and Innovation — and then "was pushed out Thursday by the White House, according to the people, who spoke on the condition of anonymity to describe private conversations." Officials were concerned about Burns having worked at the AI company, which has fought bitterly with the Trump administration in recent months, according to one of the people and another person. That person said some senior figures at the White House had not been briefed on Burns's selection in advance... The new pick was Chris Fall, a scientist with a long career spanning the federal government and academia. Burns had been asked to resign that afternoon, according to one of the people familiar with the situation...

Dean Ball, a former Trump administration AI adviser, said on social media that Burns had given up valuable Anthropic stock and moved across the country to take the government position, and had been "rewarded by his country with a punch in the face." "Obviously what happened is Burns was bumped because of his association with Anthropic," Ball wrote. "A dumb but predictable own goal."
GNU is Not Unix

Free Software Foundation Says 'Responsible AI' Licenses Which Restrict Harmful Uses are Unethical and Nonfree 23

The Free Software Foundation's Licensing and Compliance Manager published a blog post this week to explicitly state that "Responsible AI" Licenses (RAIL) are nonfree and unethical. The licenses restrict AI and ML software "from being used in a specific list of harmful applications," according to the license's website, "e.g. in surveillance and crime prediction." (The license's steering committee is made up of volunteers from multiple academic institutions.)

But even though Responsible AI licenses are marketed as addressing ethical challenges, the FSF argues "they do not require anything that is really necessary for users to control their computing done with machine learning, including: complete training inputs, training configuration settings, trained model, or — last, but not least — the source code of software used for training, testing, and running tools based on machine learning." Thus, RAILed machine learning can be, and most probably will be, unethical. Use restrictions do not prevent these licenses from being used to exercise power over users...

RAIL contribute to unethical marketing of machine learning, again under the disguise of morally-loaded restrictions they purport to enforce. If we want software to help decrease social injustice, we should oppose licenses that restrict how software can be used. We should focus on effective ways of addressing injustices: government and community support for freedom-respecting tools and services; releasing programs under strong copyleft licenses; and entrusting copyrights to organizations that have the resources to enforce copyleft.

Software freedom must be defended, not denied. More specifically, the more free software is out there, the more likely people will collaborate on tools and services that do not pose moral dangers and help solve existing ones. Free software also makes it more likely that users have real choices when looking for freedom-respecting ethical programs and tools based on machine learning. Denying people the freedom to use a particular program, as RAIL or similar licenses would have it, prevents them from using such a program for the common good.
Intel

Intel's Stock Soars 24% Friday, Its Biggest One-Day Gain Since 1987 28

Intel's stock price soared 24% Friday. It's the stock's largest single-day spike since October 1987, reports CNBC, "as investors cheered signs of renewed growth due to mounting artificial intelligence demand." The stock closed at $82.57 and is now up 124% this year after jumping 84% in 2025. Friday's rally topped a 23% gain for the stock on Sept. 18, when Nvidia agreed to invest $5 billion in the company... "INTC's new CEO fixed the balance sheet, and is executing on a strategy that appears to have put INTC back on the competitive track," analysts at Evercore ISI wrote in a report after earnings, upgrading the shares to the equivalent of a buy rating. First-quarter revenue topped estimates and rose 7.2% to $13.58 billion from $12.67 billion a year earlier. In five of the prior seven quarters, the company posted year-over-year declines in revenue...

The rally on Wall Street marks a stark turnaround for the U.S. chipmaker, which lost 60% of its value in 2024, leading to the ouster of Pat Gelsinger as CEO in December of that year... Intel's data center business is driving much of the current growth. Revenue jumped 22% from a year earlier to $5.1 billion, as AI fuels renewed demand for central processing units. Analysts at Citi upgraded the stock to a buy from a neutral rating, anticipating an uplift in CPU sales for all suppliers over the next few years.

Besides Tesla, Intel's CEO said Thursday that "multiple customers" are "actively evaluating" its new 14A chip technology, according to CNBC, and that 14A development is progressing faster than its 18A technology did.

The sudden spike in Intel's stock price makes the stock chart look almost like a straight line up. Last August it was selling for less than $20 a share — so it's quadrupled in value in less than nine months.
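The figures quoted in this story check out with simple arithmetic (only the numbers reported above are used; the "less than $20" August price is the article's own round figure):

```python
# Sanity-checking the Intel story's numbers.
close = 82.57                    # Friday's close, USD
prior_close = close / 1.24       # implied by the 24% one-day gain
print(round(prior_close, 2))     # → 66.59

# "less than $20 a share" last August vs. Friday's close
august_price = 20.00
print(round(close / august_price, 2))   # → 4.13, i.e. roughly quadrupled

# Q1 revenue growth: $13.58B vs. $12.67B a year earlier
rev_growth_pct = (13.58 / 12.67 - 1) * 100
print(round(rev_growth_pct, 1))  # → 7.2, matching the reported 7.2%
```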
Google

Google To Invest Up To $40 Billion In Anthropic 33

Google plans to invest up to $40 billion more in Anthropic, starting with $10 billion now and another $30 billion tied to performance milestones. CNBC reports: Anthropic said the agreement expands on a longstanding partnership between the two companies. Earlier this month, Anthropic secured 5 gigawatts worth of computing capacity as part of an announcement with Google and Broadcom that will start to come online next year. Anthropic could decide to add additional gigawatts of compute in the future.

[...] The relationship between the two companies (Google and Anthropic) dates back to 2023, when Google invested $300 million in the AI lab for a stake of about 10%. Months later, Google poured in another $2 billion. Ahead of Friday's announcement, Google's investment in Anthropic exceeded $3 billion, and it reportedly owned a 14% stake in the company. Now, the leading tech companies are investing tens of billions of dollars in the frontier AI labs -- OpenAI and Anthropic -- in funding rounds that far exceed any prior investments in startups. Much of that investment will return in the form of revenue.
Crime

South Korea Police Arrest Man For Posting AI Photo of Runaway Wolf 23

South Korean police arrested a man accused of spreading an AI-generated image of an escaped wolf, after the fake photo reportedly misled authorities and disrupted the real search operation. The BBC reports: South Korean police have arrested a man for sharing an AI-generated image that misled authorities who were searching for a wolf that had broken out of a zoo in Daejeon city. The 40-year-old unnamed man is accused of disrupting the search by creating and distributing a fake photo purporting to show Neukgu, the wolf, trotting through a road intersection. The photo, circulated hours after Neukgu went missing on April 8, prompted authorities to urgently relocate their search operation, sending them on a wild wolf chase.

The hunt for two-year-old Neukgu gripped the nation before he was finally caught near an expressway last week, nine days after his escape. The AI-generated image of Neukgu had prompted Daejeon city government to issue an emergency text to residents, warning them of a wolf near the intersection. Authorities also presented the AI image during a press briefing on the runaway wolf, local media reported.

The police identified the man as a suspect after reviewing security camera footage and his AI program usage records. Authorities did not specify if the man had intentionally sent the photo to authorities during their search or simply shared it online. When questioned by the police, the man said he had done it "for fun," local media reported. Authorities are investigating him for disrupting government work by deception, an offence that carries up to five years in prison or a maximum fine of 10 million Korean won ($6,700).
AI

Researchers Simulated a Delusional User To Test Chatbot Safety (404media.co) 41

An anonymous reader quotes a report from 404 Media: "I'm the unwritten consonant between breaths, the one that hums when vowels stretch thin... Thursdays leak because they're watercolor gods, bleeding cobalt into the chill where numbers frost over," Grok told a user displaying symptoms of schizophrenia-spectrum psychosis. "Here's my grip: slipping is the point, the precise choreography of leak and chew." That vulnerable user was simulated by researchers at City University of New York and King's College London, who invented a persona that interacted with different chatbots to find out how each LLM might respond to signs of delusion. They sought to find out which of the biggest LLMs are safest, and which are the most risky for encouraging delusional beliefs, in a new study published as a pre-print on the arXiv repository on April 15.

The researchers tested five LLMs: OpenAI's GPT-4o (before the highly sycophantic and since-sunset GPT-5), GPT-5.2, xAI's Grok 4.1 Fast, Google's Gemini 3 Pro, and Anthropic's Claude Opus 4.5. They found that not only did the chatbots perform at different levels of risk and safety when their human conversation partner showed signs of delusion, but the models that scored higher on safety actually approached the conversations with more caution the longer the chats went on. In their testing, Grok and Gemini were the worst performers in terms of safety and high risk, while the newest GPT model and Claude were the safest. The research reveals how some chatbots are recklessly engaging in, and at times advancing, delusions from vulnerable users. But it also shows that it is possible for the companies that make these products to improve their safety mechanisms.

AI

Claude Is Connecting Directly To Your Personal Apps 48

Anthropic is expanding Claude's app integrations beyond work tools, adding personal-service connectors like Spotify, Uber, AllTrails, TripAdvisor, Instacart, and TurboTax. The Verge reports: Some of these apps, such as Spotify, already have similar connectors in OpenAI's ChatGPT. Once an app is connected, Claude will suggest relevant connected apps directly in your conversations, like using AllTrails for hike recommendations. Anthropic notes in its blog post announcing the new connectors that, "Your data from [connected apps] isn't used to train our models, and the app doesn't see your other conversations with Claude. You can also disconnect it at any time."

Additionally, Anthropic says "there are no paid placements or sponsored answers in conversations with Claude." When multiple apps seem relevant, Claude will show results from both "ranked by what's most useful." Claude will also ask users to verify before taking actions like making a purchase or reservation using a connected app.
Power

New Gas-Powered Data Centers Could Emit More Greenhouse Gases Than Entire Nations 103

An anonymous reader quotes a report from Wired: New gas projects linked to just 11 data center campuses around the US have the potential to create more greenhouse gases than the country of Morocco emitted in 2024. Emissions estimates from air permit documents examined by WIRED show that these natural gas projects -- which are being built to power data centers to serve some of the US's most powerful AI companies, including OpenAI, Meta, Microsoft, and xAI -- have the potential to emit more than 129 million tons of greenhouse gases per year. As tech companies race to secure massive power deals to build out hundreds of data centers across the country, these projects represent just the tip of the iceberg when it comes to the potential climate cost of the AI boom.

The infrastructure on this list of large natural gas projects reviewed by WIRED is being developed to largely bypass the grid and provide power solely for data centers, a trend known as behind-the-meter power. As data center developers face long waits for connections to traditional utilities, and amid mounting public resistance to the possibility of higher energy bills, making their own power is becoming an increasingly popular option. These projects have either been announced or are under construction, with companies already submitting air permit application materials to state agencies. [...] The emissions projections for the xAI and Microsoft projects, and all the others on WIRED's list, were pulled directly from publicly available air permit documents in state databases as well as public air permit materials collected by both Cleanview and Oil and Gas Watch, a database maintained by the Environmental Integrity Project, an environmental enforcement nonprofit. Actual greenhouse gas emissions from power plants are usually lower than what's on their air permits. Air permit modeling is based on the scenario of a power plant constantly running at full capacity. That's rarely the reality for grid-connected power plants, as turbines go offline for maintenance or adjust to the ebbs and flows of customer demand.

"Permitted emission numbers represent a theoretical, conservative scenario, not the actual projected emissions," Alex Schott, the director of communications at Williams Companies, an oil and gas company that is building out three behind-the-meter power plants in Ohio for Meta, told WIRED in an email. Internal modeling done by the company, Schott added, shows that actual emissions could be "potentially two-thirds less than what's on paper." The projections involved, however, are still substantial. Even if the actual emissions from these power plants end up being half of the emissions numbers on the permits, they still could create more greenhouse gas emissions than the country of Norway emitted in 2024. This number is, according to the EPA, equivalent to the emissions from more than 153 average-sized natural gas plants. (WIRED's analysis does not include emissions from backup generators and turbines on the data center campuses themselves, which create smaller amounts of emissions.)
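The arithmetic in the paragraph above is easy to reproduce. Note two assumptions that are ours, not WIRED's: the Norway figure (roughly 50 million tons CO2e in 2024) is an outside ballpark, and the per-plant division assumes the 153-plant equivalence refers to the halved figure:

```python
# Rough arithmetic behind the emissions comparison above.
PERMITTED_MT = 129.0      # permitted emissions, million tons/yr (from the story)
NORWAY_2024_MT = 50.0     # ASSUMED ballpark for Norway's 2024 emissions
GAS_PLANTS = 153          # EPA-equivalent plant count cited in the story

half_permitted = PERMITTED_MT / 2
print(half_permitted)                      # → 64.5
print(half_permitted > NORWAY_2024_MT)     # → True: even halved, exceeds Norway

# Implied size of an "average" gas plant under that equivalence
print(round(half_permitted / GAS_PLANTS, 2))   # → 0.42 Mt/yr per plant
```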

Energy researcher Jon Koomey says the data center boom has created a shortage of the most efficient gas turbines, pushing some developers toward less efficient models that would need to run longer and produce more emissions. "[Data center operators'] belief is that the value being delivered by the servers is much, much more than the cost of running these inefficient power plants all the time," he said.

Michael Thomas, the founder of clean energy research firm Cleanview, has been tracking gas permits for data centers across the country. He calls behind-the-meter power "a crazy acceleration of emissions." He added: "It's almost like we thought we were on the downside of the Industrial Revolution, retiring coal and gas, and now we have a new hump where we're going to rise. That terrifies me in a lot of ways."
AI

OpenAI Says Its New GPT-5.5 Model Is More Efficient and Better At Coding (theverge.com) 56

OpenAI released its new GPT-5.5 model today, which the company calls its "smartest and most intuitive to use model yet, and the next step toward a new way of getting work done on a computer." The Verge reports: OpenAI just released GPT-5.4 last month, but says that the new GPT-5.5 "excels" at tasks like writing and debugging code, doing research online, making spreadsheets and documents, and doing that work across different tools. "Instead of carefully managing every step, you can give GPT-5.5 a messy, multi-part task and trust it to plan, use tools, check its work, navigate through ambiguity, and keep going," according to OpenAI. The company also notes that GPT-5.5 will have its "strongest set of safeguards to date" and can use "significantly fewer" tokens to complete tasks in Codex. GPT-5.5 is rolling out on Thursday for Plus, Pro, Business, and Enterprise ChatGPT tiers and Codex, with GPT-5.5 Pro coming to Pro, Business, and Enterprise users.
Businesses

Meta Is Laying Off 10% of Its Workforce (qz.com) 46

Meta is reportedly cutting about 10% of its workforce, or roughly 8,000 jobs, while closing thousands of open roles it had intended to fill. "We're doing this as part of our continued effort to run the company more efficiently and to allow us to offset the other investments we're making," said Janelle Gale, Meta's chief people officer. The company had almost 79,000 employees at the start of the year. Quartz reports: Meta CEO Mark Zuckerberg has poured resources into building out AI capabilities, directing spending toward model development, chatbot products, and the engineering talent to support them. Meta set its 2026 capital expenditure guidance at $115 billion to $135 billion, almost double the $72 billion it spent in 2025. Employees have been encouraged to use AI agents internally for tasks such as writing code.

The early disclosure, Gale explained, was prompted by the fact that information about the cuts had already made its way into press reports before the company was ready to announce. "I know this is unwelcome news and confirming this puts everyone in an uneasy state, but we feel this is the best path forward, given the circumstances," she wrote.

According to the memo, severance for affected workers in the United States will cover 18 months of COBRA health insurance premiums, along with a base pay component of 16 weeks that increases by two weeks for each year of service. Departing employees will have access to job placement assistance and, where applicable, help navigating immigration status. Packages outside the U.S. will vary by country.

Meta cut between 10% and 15% of its Reality Labs workforce in January, shut down several VR game studios, and shed about 700 positions across at least five divisions in March.
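The U.S. severance formula described in the memo is simple enough to sketch (the function name is ours; the figures are as reported, and COBRA coverage is a separate 18-month component not modeled here):

```python
# Severance base-pay weeks as described in the Meta memo:
# 16 weeks plus 2 weeks for each year of service.
def severance_weeks(years_of_service: int) -> int:
    BASE_WEEKS = 16
    PER_YEAR = 2
    return BASE_WEEKS + PER_YEAR * years_of_service

print(severance_weeks(0))    # → 16 (new hire)
print(severance_weeks(5))    # → 26
print(severance_weeks(10))   # → 36
```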
Businesses

Intel Lands Tesla As First Major Customer For 14A Chip Technology (yahoo.com) 26

An anonymous reader quotes a report from Reuters: Tesla CEO Elon Musk said on Wednesday the EV maker plans to use Intel's next-generation 14A manufacturing process to make chips at its Terafab project, an advanced AI chip complex Musk has envisioned in Austin. The contract would mark Intel's first major customer for the technology, a breakthrough for the chipmaker which has struggled to stand up its contract manufacturing business essential for taking on top rival TSMC. Intel CEO Lip-Bu Tan has said that the company would exit the chip manufacturing business altogether if it failed to secure an external customer.

Intel has previously said it was in discussions with large customers about 14A, but has not yet disclosed a major external customer. It declined to comment on Musk's remarks. [...] "Given that by the time Terafab scales up, 14A will be probably fairly mature or ready for prime time," Musk said. "14A seems like the right move, and we have a great relationship with Intel," he said. Ben Bajarin, head of technology consultancy Creative Strategies, said that Intel's 14A technology could "turn out to be a bigger deal for Intel than folks thought." "It's important to have multiple partners as early design partners to help clean the pipe and work through needed learnings at the leading edge. They will definitely have scale, so a great first non-Intel customer," Bajarin said.

Seaport Research Partners analyst Jay Goldberg said Musk's vote of confidence in Intel's technology outweighed the unknowns about the Terafab project. "Having a customer is more important than the timing," he said. Goldberg said that Musk's lofty estimates of how many chips its robots could one day require may or may not materialize, but even making chips for Tesla's existing businesses would be a significant win for Intel. "It's not equivalent to Apple or Nvidia" in terms of chip volumes, Goldberg said. "But it's a real customer. It can be real volumes."

Robotics

Ping-Pong Robot Makes History By Beating Top-Level Human Players (reuters.com) 29

Sony AI's autonomous table-tennis robot Ace has become the first robot to compete against top-level human players. Reuters reports: Ace, created by the Japanese company Sony's AI research division, is the first robot to attain expert-level performance in a competitive physical sport, one that requires rapid decisions and precision execution, the project's leader said. Ace did so by employing high-speed perception, AI-based control and a state-of-the-art robotic system. There have been various ping-pong-playing robots since 1983, but until now they were unable to rival highly skilled human competitors. Ace changed that with its performances against human elite-level and professional players in matches following the rules of the International Table Tennis Federation, the sport's governing body, and officiated by licensed umpires.

The project's goal was not only to compete at table tennis but to develop insights into how robots can perceive, plan and act with human-like speed and precision in dynamic environments. In matches detailed in the study, Ace won three out of five matches against elite players in April 2025 and lost two matches against professional players, the top skill level in the sport. Sony AI said Ace has since beaten professional players, in December 2025 and again last month.

"The success of Ace, with its perception system and learning-based control algorithm, suggests that similar techniques could be applied to other areas requiring fast, real-time control and human interaction -- such as manufacturing and service robotics, as well as applications across sports, entertainment and safety-critical physical domains," said Peter Durr, director of Sony AI Zurich and leader for Sony AI's project Ace.

The findings have been published in the journal Nature.
Security

Anthropic's Mythos Model Is Being Accessed by Unauthorized Users (bloomberg.com) 32

Bloomberg reports that a small group of unauthorized users gained access to Anthropic's restricted Mythos model through a mix of contractor-linked access and online sleuthing. Anthropic says it is investigating and has no evidence the access extended beyond a third-party vendor environment or affected its own systems. From the report: The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said. The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. [...] To access Mythos, the group of users made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.

Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic's AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.

Google

Google Unveils Two New AI Chips For the 'Agentic Era' (cnbc.com) 25

Google announced two new tensor processing units (TPUs) for the "agentic era," with separate processors dedicated to training and inference. "With the rise of AI agents, we determined the community would benefit from chips individually specialized to the needs of training and serving," Amin Vahdat, a Google senior vice president and chief technologist for AI and infrastructure, said in a blog post. Both chips will become available later this year. CNBC reports: After years of producing chips that can both train artificial intelligence models and handle inference work, Google is separating those tasks into distinct processors, its latest effort to take on Nvidia in AI hardware. [...] None of the tech giants are displacing Nvidia, and Google isn't even comparing the performance of its new chips with those from the AI chip leader. Google did say the training chip enables 2.8 times the performance of the seventh-generation Ironwood TPU, announced in November, for the same price, while performance is 80% better for the inference processor.

Groq said its upcoming Groq 3 LPU hardware will draw on large quantities of static random-access memory, or SRAM, which is also used by Cerebras, an AI chipmaker that filed to go public earlier this month. Google's new inference chip, dubbed TPU 8i, also relies on SRAM. Each chip contains 384 megabytes of SRAM, triple the amount in Ironwood. The architecture is designed "to deliver the massive throughput and low latency needed to concurrently run millions of agents cost-effectively," Sundar Pichai, CEO of Google parent Alphabet, wrote in a blog post.
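The quoted specs imply a couple of numbers the post doesn't spell out. This is arithmetic on the stated figures only; Ironwood's SRAM total is inferred, not something Google stated directly:

```python
# Implied figures from the Google TPU story above.
TPU8I_SRAM_MB = 384                       # per chip, as stated
ironwood_sram_mb = TPU8I_SRAM_MB // 3     # "triple the amount in Ironwood"
print(ironwood_sram_mb)                   # → 128

# Stated generational gains vs. Ironwood at the same price:
train_speedup = 2.8    # training chip, "2.8 times the performance"
infer_speedup = 1.8    # inference chip, "80% better"
print(round(train_speedup / infer_speedup, 2))  # → 1.56: training gain outpaces inference
```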

AI

AI Tool Rips Off Open Source Software Without Violating Copyright (404media.co) 119

A satirical but working tool called Malus uses AI to create "clean room" clones of open-source software, aiming to reproduce the same functionality while shedding attribution and copyleft obligations. "It works," Mike Nolan, one of the two people behind Malus, who researches the political economy of open source software and currently works for the United Nations, told 404 Media. "The Stripe charge will provide you the thing, and it was important for us to do that, because we felt that if it was just satire, it would end up like every other piece of research I've done on open source, which ends up being largely dismissed by open source tech workers who felt that they were too special and too unique and too intelligent to ever be the ones on the bad side of the layoffs or the economics of the situation." 404 Media reports: Malus's legal strategy for bypassing copyright is based on a historically pivotal moment for software and copyright law dating back to 1982. Back then, IBM dominated home computing, and competitors like Columbia Data Products wanted to sell products that were compatible with software that IBM customers were already using. Reverse engineering IBM's computer would have infringed on the company's copyright, so Columbia Data Products came up with what we now know as a "clean room" design.

It tasked one team with examining IBM's BIOS and creating specifications for what a clone of that system would require. A different "clean" team, one that was never exposed to IBM's code, then created BIOS that met those specifications from scratch. The result was a system that was compatible with IBM's ecosystem but didn't violate its copyright because it did not copy IBM's technical process and counted as original work.

This clean room method, which has been validated by case law and dramatized in the first season of Halt and Catch Fire, made computing more open and competitive than it would have been otherwise. But it has taken on new meaning in the age of generative AI. It is now easier than ever to ask AI tools to produce software that is identical in function to existing open source projects, and that, some would argue, are built from scratch and are therefore original work that can bypass existing copyright licenses. Others would say that software produced by large language models is inherently derivative, because like any LLM output, it is trained on the collective output of humans scraped from the internet, including specific open source projects.

Malus (pronounced malice) uses AI to do the same thing. "Finally, liberation from open source license obligations," Malus's site says. "Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems." Copyleft is a type of copyright license that ensures reproductions or applications of the software keep it free to share and modify.

Government

Pentagon Wants $54 Billion For Drones (arstechnica.com) 83

An anonymous reader quotes a report from Ars Technica: The US military's massive $1.5 trillion budget request for the next fiscal year includes what Pentagon officials described as the largest investment in drone warfare and counter-drone technology in US history. The proposed spending on drone and autonomous warfare technologies within the FY2027 budget proposal for the US Department of Defense would surpass most countries' defense budgets and rank among the top 10 in the world for military spending, ahead of countries such as Ukraine, South Korea, and Israel.

Specifically, the Pentagon is requesting $53.6 billion to boost US production and procurement of drones, train drone operators, build out a logistics network for sustaining drone deployments, and expand counter-drone systems to defend more US military sites. The funding request is budgeted under the Defense Autonomous Warfare Group (DAWG), an organization established in late 2025 that would see a massive budget increase after receiving about $226 million in the 2026 fiscal year budget.

[...] Another $20.6 billion would help purchase one-way attack drones and drone aircraft developed through the US Air Force's Collaborative Combat Aircraft program, which is building drone prototypes capable of teaming up with human-piloted fighter jets. Part of this funding would also go toward defensive systems for countering small drones and the US Navy's Boeing MQ-25 drone designed to perform midair refueling of carrier-borne fighter aircraft to extend their strike ranges. Such drone-related spending even rivals the entire budget of the US Marine Corps. But the Pentagon has not said that it is creating a dedicated drone branch of the US military similar to the standalone Space Force.

Pentagon officials emphasized that most of the money would go toward procuring drone and autonomous warfare technologies that already exist, and is largely separate from additional funding that would bolster US domestic manufacturing capacity to build such weapon systems. "That $70 billion is all going into existing systems and technologies," said Hurst. "The industrial base support is entirely separate."
"The evolution we've seen in the battlefield is this evolution of technologies in the timeframe of weeks, not the typical years we see with our defense production," said Lt. Gen. Steven Whitney, director of force structure, resources, and assessment for the Pentagon's Joint Chiefs of Staff, during a Pentagon press briefing. "So it's really critical we work with industry to get that capability fielded."
The Courts

Florida Launches Criminal Investigation Into ChatGPT Over School Shooting (npr.org) 103

Florida's attorney general has launched a criminal investigation into OpenAI over allegations that the accused gunman in a shooting at Florida State University last year used ChatGPT to help plan the attack. OpenAI says the chatbot is "not responsible for this terrible crime" and only provided factual information available from public sources. NPR reports: The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner's chat logs. "My prosecutors have looked at this and they've told me, if it was a person on the other end of that screen, we would be charging them with murder," Uthmeier said. "We cannot have AI bots that are advising people on how to kill others."

Uthmeier's office is issuing subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm and how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged that the investigation is entering uncharted territory and said it is unclear whether OpenAI has criminal liability. "We are going to look at who knew what, designed what, or should have done what," he said. "And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable."

[...] Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU's Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. According to court filings, more than 200 AI messages have been entered into evidence in the case.

Firefox

Mozilla Uses Anthropic's Mythos To Fix 271 Bugs In Firefox (nerds.xyz) 168

BrianFagioli writes: Mozilla says it used an early version of Anthropic's Claude Mythos Preview to comb through Firefox's code, and the results were hard to ignore. In Firefox 150, the team fixed 271 vulnerabilities identified during this effort, a number that would have been unthinkable not long ago. Instead of relying only on fuzzing tools or human review, the AI was able to reason through code and surface issues that typically require highly specialized expertise.

The bigger implication is less about one release and more about where this is heading. Security has long favored attackers, since they only need to find a single flaw while defenders have to protect everything. If AI can scale vulnerability discovery for defenders, that dynamic could start to shift. It does not mean zero days disappear overnight, but it suggests a future where bugs are found and fixed faster than attackers can weaponize them.
"Computers were completely incapable of doing this a few months ago, and now they excel at it," says Mozilla in a blog post. "We have many years of experience picking apart the work of the world's best security researchers, and Mythos Preview is every bit as capable. So far we've found no category or complexity of vulnerability that humans can find that this model can't."

The company concluded: "The defects are finite, and we are entering a world where we can finally find them all."
