Open Source

New Project Brings Strong Linux Compatibility To More Classic Windows Games (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: For years now, Valve has been slowly improving the capabilities of the Proton compatibility layer that lets thousands of Windows games work seamlessly on the Linux-based SteamOS. But Valve's Windows-to-Linux compatibility layer generally only extends back to games written for Direct3D 8, the proprietary Windows graphics API Microsoft released in late 2000. Now, a new open source project is seeking to extend Linux interoperability further back into PC gaming history. The d7vk project describes itself as "a Vulkan-based translation layer for Direct3D 7 [D3D7], which allows running 3D applications on Linux using Wine."

The new project isn't the first attempt to get Direct3D 7 games running on Linux. Wine's own built-in WineD3D compatibility layer has supported D3D7 in some form or another for at least two decades now. But the new d7vk project instead branches off the existing dxvk compatibility layer, which is already used by Valve's Proton for SteamOS and which reportedly offers better performance than WineD3D on many games. D7vk project author WinterSnowfall writes that while they don't expect this new project to be upstreamed into the main dxvk in the future, the new version should have "the same level of per application/targeted configuration profiles and fixes that you're used to seeing in dxvk proper." And though d7vk might not perform universally better than the existing alternatives, WinterSnowfall writes that "having more options on the table is a good thing in my book at least."
The report notes that the PC Gaming Wiki lists more than 400 games built on the aging D3D7 APIs, spanning mostly early-2000s releases but with a trickle of new titles still appearing through 2022. Notable classics include Escape from Monkey Island and Hitman: Codename 47.
AI

Magika 1.0 Goes Stable As Google Rebuilds Its File Detection Tool In Rust (googleblog.com)

BrianFagioli writes: Google has released Magika 1.0, a stable version of its AI-based file type detection tool, and rebuilt the entire engine in Rust for speed and memory safety. The system now recognizes more than 200 file types, up from about 100, and is better at distinguishing look-alike formats such as JSON vs JSONL, TSV vs CSV, C vs C++, and JavaScript vs TypeScript. The team used a 3TB training dataset and even relied on Gemini to generate synthetic samples for rare file types, allowing Magika to handle formats that don't have large, publicly available corpora. The tool supports Python and TypeScript integrations and offers a native Rust command-line client.

Under the hood, Magika uses ONNX Runtime for inference and Tokio for parallel processing, allowing it to scan around 1,000 files per second on a modern laptop core and scale further with more CPU cores. Google says this makes Magika suitable for security workflows, automated analysis pipelines, and general developer tooling. Installation is a single curl or PowerShell command, and the project remains fully open source.
The project and its documentation are available on GitHub.
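As a rough illustration of why look-alike text formats are genuinely hard to tell apart, here is a toy, stdlib-only sniffer built on hand-written heuristics. This is not Magika's API or approach (Magika replaces exactly this kind of brittle guesswork with a trained model), but it shows the JSON-vs-JSONL and CSV-vs-TSV ambiguities the tool is designed to resolve:

```python
import json

def sniff(text: str) -> str:
    """Toy detector for a few look-alike text formats.

    Illustration only: Magika uses a trained model, not
    hand-written heuristics like these.
    """
    lines = [ln for ln in text.splitlines() if ln.strip()]
    try:  # parses as a single document -> JSON
        json.loads(text)
        return "json"
    except ValueError:
        pass
    try:  # every non-empty line parses on its own -> JSONL
        for ln in lines:
            json.loads(ln)
        if lines:
            return "jsonl"
    except ValueError:
        pass
    # CSV vs TSV: prefer the delimiter that yields a consistent,
    # nonzero count on every line.
    for delim, label in (("\t", "tsv"), (",", "csv")):
        counts = {ln.count(delim) for ln in lines}
        if len(counts) == 1 and 0 not in counts:
            return label
    return "unknown"
```

Heuristics like these break down quickly (a CSV whose fields contain tabs, a log file that happens to be line-delimited JSON), which is precisely the gap a learned classifier is meant to close.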
Power

Ukraine First To Demo Open Source Security Platform To Help Secure Power Grid (theregister.com)

concertina226 shares a report from The Register: [A massive power outage in April left tens of millions across Spain, Portugal, and parts of France without electricity for hours due to cascading grid failures, exposing how fragile and interconnected Europe's energy infrastructure is. The incident, though not a cyberattack, reignited concerns about the vulnerability of aging, fragmented, and insecure operational technology systems that could be easily exploited in future cyber or ransomware attacks.] This headache is one the European Commission is focused on. It is funding several projects looking at making electric grids more resilient, such as the eFort framework being developed by cybersecurity researchers at the independent non-profit Netherlands Organisation for Applied Scientific Research (TNO) and the Delft University of Technology (TU Delft).

TNO's SOARCA tool is the first open source security orchestration, automation, and response (SOAR) platform designed to protect power plants by automating the orchestrated response to both cyberattacks and physical attacks on substations and the network; Ukraine will be the first country to demo it this year. At the moment, SOAR systems exist only for dedicated IT environments. The researchers' design places a SOAR system in each layer of the power station: the substation, the control room, the enterprise layer, the cloud, and the security operations centre (SOC). The SOC and the control room then work together to detect anomalies in the network, whether it's an attacker exploiting a vulnerability, a malicious device being plugged into a substation, or a physical attack like a missile strike. The idea is to isolate potential problems and prevent lateral movement from one device to another, or privilege escalation, so an attacker cannot traverse the network to reach the central IT management system of the electricity grid. [...]

The SOARCA tool is underpinned by CACAO Playbooks, an open source specification developed by the OASIS Open standards body and its members (which include many tech giants and US government agencies) to create standardized, predefined, automated workflows that can detect intrusions and changes made by malicious actors, and then carry out a series of steps to protect the network and mitigate the attack. Experts largely agree that the problems facing critical infrastructure are only worsening, and that every ad hoc Windows system added to the network widens the attack surface. [...] TNO's Wolthuis said the energy industry is likely to be pushed to act soon by regulators, particularly once the Network Code on Cybersecurity (NCCS), which lays out rules requiring cybersecurity risk assessments in the electricity sector, is formalized.
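To make the playbook idea concrete, here is a heavily simplified sketch in the spirit of CACAO: steps form a graph, each step names its successor, and action steps carry commands for an orchestrator to run. The field names and structure below are loose approximations for illustration, not the normative CACAO schema:

```python
# A schematic, simplified playbook: a workflow graph where each step
# points at the next via "on_completion", and "action" steps carry
# commands. Real CACAO playbooks are richer (conditions, parallel
# branches, agent targets); this sketch only keeps the core shape.
PLAYBOOK = {
    "name": "Isolate compromised substation device",
    "workflow_start": "step--detect",
    "workflow": {
        "step--detect": {"type": "start", "on_completion": "step--isolate"},
        "step--isolate": {
            "type": "action",
            "commands": ["block switch port of flagged device"],
            "on_completion": "step--notify",
        },
        "step--notify": {
            "type": "action",
            "commands": ["raise incident ticket in SOC"],
            "on_completion": "step--end",
        },
        "step--end": {"type": "end"},
    },
}

def run_playbook(playbook, execute):
    """Walk the workflow graph, invoking `execute` for each command."""
    step_id = playbook["workflow_start"]
    while step_id:
        step = playbook["workflow"][step_id]
        for cmd in step.get("commands", []):
            execute(cmd)
        step_id = step.get("on_completion")

executed = []
run_playbook(PLAYBOOK, executed.append)
# `executed` now holds the isolation and notification commands, in order.
```

The point of standardizing the playbook format rather than the orchestrator is that the same predefined response can be executed by SOAR systems at different layers (substation, control room, SOC) without each vendor inventing its own workflow language.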

Open Source

International Criminal Court To Ditch Microsoft Office For European Open Source Alternative (euractiv.com)

An anonymous reader shares a report: The International Criminal Court will switch its internal work environment away from Microsoft Office to openDesk, a European open source alternative, the institution confirmed to Euractiv. The switch comes amid rising concerns about public bodies being reliant on US tech companies to run their services, concerns that have stepped up sharply since the start of US President Donald Trump's second administration.

For the ICC, such concerns are not abstract: Trump has repeatedly lashed out at the court and slapped sanctions on its chief prosecutor, Karim Khan. Earlier this year, the AP also reported that Microsoft had cancelled Khan's email account, a claim the company denies. "We value our relationship with the ICC as a customer and are convinced that nothing impedes our ability to continue providing services to the ICC in the future," a Microsoft spokesperson told Euractiv.

Programming

Does Generative AI Threaten the Open Source Ecosystem? (zdnet.com)

"Snippets of proprietary or copyleft reciprocal code can enter AI-generated outputs, contaminating codebases with material that developers can't realistically audit or license properly."

That's the warning from Sean O'Brien, who founded the Yale Privacy Lab at Yale Law School. ZDNet reports: Open software has always counted on its code being regularly replenished. As part of the process of using it, users modify it to improve it. They add features and help to guarantee usability across generations of technology. At the same time, users improve security and patch holes that might put everyone at risk. But O'Brien says, "When generative AI systems ingest thousands of FOSS projects and regurgitate fragments without any provenance, the cycle of reciprocity collapses. The generated snippet appears originless, stripped of its license, author, and context." This means the developer downstream can't meaningfully comply with reciprocal licensing terms because the output cuts the human link between coder and code. Even if an engineer suspects that a block of AI-generated code originated under an open source license, there's no feasible way to identify the source project. The training data has been abstracted into billions of statistical weights, the legal equivalent of a black hole.

The result is what O'Brien calls "license amnesia." He says, "Code floats free of its social contract and developers can't give back because they don't know where to send their contributions...."

"Once AI training sets subsume the collective work of decades of open collaboration, the global commons idea, substantiated into repos and code all over the world, risks becoming a nonrenewable resource, mined and never replenished," says O'Brien. "The damage isn't limited to legal uncertainty. If FOSS projects can't rely upon the energy and labor of contributors to help them fix and improve their code, let alone patch security issues, fundamentally important components of the software the world relies upon are at risk."

O'Brien says, "The commons was never just about free code. It was about freedom to build together." That freedom, and the critical infrastructure that underlies almost all of modern society, is at risk because attribution, ownership, and reciprocity are blurred when AIs siphon up everything on the Internet and launder it (the analogy of money laundering is apt), so that all that code's provenance is obscured.

Open Source

Ladybird Browser Gains Cloudflare Support to Challenge the Status Quo (linuxiac.com)

An anonymous reader shared this report from the blog Linuxiac: In a somewhat unexpected move, Cloudflare has announced its sponsorship of the Ladybird browser, an independent (still-in-development) open-source initiative aimed at developing a modern, standalone web browser engine.

It's a project launched by GitHub's co-founder and former CEO, Chris Wanstrath, and tech visionary Andreas Kling. It's written in C++ and designed to be fast, standards-compliant, and free of external dependencies. Its main selling point? Unlike most alternative browsers today, Ladybird doesn't sit on top of Chromium or WebKit. Instead, it's building a completely new rendering engine from scratch, which is a rare thing in today's web landscape. For reference, the vast majority of web traffic currently runs through engines developed by Google (Blink/Chromium), Apple (WebKit), or Mozilla (Gecko).

The sponsorship means the Ladybird team will have more resources to accelerate development. This includes paying developers to work on crucial features, such as JavaScript support, rendering improvements, and compatibility with modern web applications. Cloudflare stated that its support is part of a broader initiative to keep the web open, where competition and multiple implementations can drive enhanced security, performance, and innovation.

The article adds that Cloudflare also chose to sponsor Omarchy, a tool that runs on Arch Linux and sets up a preconfigured Hyprland tiling window manager, along with a curated set of defaults and developer tools including Neovim, Docker, and Git.
Programming

Bundler's Lead Maintainer Asserts Trademark in Ongoing Struggle with Ruby Central (arko.net)

After the nonprofit Ruby Central removed all RubyGems' maintainers from its GitHub repository, André Arko — who helped build Bundler — wrote a new blog post on Thursday "detailing Bundler's relationship with Ruby Central," according to this update from The New Stack. "In the last few weeks, Ruby Central has suddenly asserted that they alone own Bundler," he wrote. "That simply isn't true. In order to defend the reputation of the team of maintainers who have given so much time and energy to the project, I have registered my existing trademark on the Bundler project."

He adds that trademarks do not affect copyright, which stays with the original contributors unchanged. "Trademarks only impact one thing: Who is allowed to say that what they make is named 'Bundler,'" he wrote. "Ruby Central is welcome to the code, just like everyone else. They are not welcome to the project name that the Bundler maintainers have painstakingly created over the last 15 years."

He is, however, not seeking the trademark for himself, noting that the "idea of Bundler belongs to the Ruby community." "Once there is a Ruby organization that is accountable to the maintainers, and accountable to the community, with openly and democratically elected board members, I commit to transfer my trademark to that organization," he said. "I will not license the trademark, and will instead transfer ownership entirely. Bundler should belong to the community, and I want to make sure that is true for as long as Bundler exists."

The blog It's FOSS also has an update on Spinel, the new worker-owned collective founded by Arko, Samuel Giddins (who led RubyGems security efforts), and Kasper Timm Hansen (who served on the Rails core team from 2016 to 2022 and was one of its top contributors): These guys aren't newcomers but some of the architects behind Ruby's foundational infrastructure. Their flagship offering is rv ["the Ruby swiss army knife"], a tool that aims to replace the fragmented Ruby tooling ecosystem. It promises to [in the future] handle everything from rvm, rbenv, chruby, bundler, rubygems, and others — all at once while redefining how Ruby development tools should work... Spinel operates on retainer agreements with companies needing Ruby expertise instead of depending on sponsors who can withdraw support or demand control. This model maintains independence while ensuring sustainability for the maintainers.
The Register had reported Thursday: Spinel's 'rv' project aims to supplant elements of RubyGems and Bundler with a more modular, version-aware manager. Some in the Ruby community have already accused core Rails figures of positioning Spinel as a threat. For example, Rafael França of Shopify commented that admins of the new project should not be trusted to avoid "sabotaging rubygems or bundler."
Ruby

Open Source Turmoil: RubyGems Maintainers Kicked Off GitHub

Ruby Central, a non-profit organization committed to "driving innovation and building community within the Ruby programming ecosystem since 2001," removed all RubyGems maintainers from the project's GitHub repository on September 18, granting administrative access exclusively to its employees and contractors following alleged pressure from Shopify, one of its biggest backers, according to Ruby developer Joel Drapper. The nonprofit organization, which operates RubyConf and RailsConf, cited fiduciary responsibility and supply chain security concerns following a recent audit.

The controversy began September 9 when HSBT (Hiroshi Shibata), a Ruby infrastructure maintainer, renamed the RubyGems GitHub enterprise to "Ruby Central" and added Director of Open Source Marty Haught as owner while demoting other maintainers. The action allegedly followed Shopify's threat to cut funding unless Ruby Central assumed full ownership of RubyGems and Bundler. Ruby Central had reportedly become financially dependent on Shopify after Sidekiq withdrew its $250,000 annual sponsorship over the organization platforming Rails creator DHH at RailsConf 2025. André Arko, a veteran contributor on-call for RubyGems.org at the time, was among those removed.

Maintainer Ellen Dash has characterized the action as a "hostile takeover" and also resigned. Executive Director Shan Cureton acknowledged poor communication in a YouTube video Monday, stating removals were temporary while finalizing operator agreements. Arko and others are launching Spinel, an alternative Ruby tooling project, though Shopify's Rafael França commented that Spinel admins shouldn't be trusted to avoid "sabotaging rubygems or bundler."
Software

Nova Launcher's Founder and Sole Developer Has Left (theverge.com)

Kevin Barry, founder and sole developer of Nova Launcher, has left parent company Branch Metrics after being told to stop work on both the launcher and an open-source release. While the app remains on Google Play, the launcher's website currently shows a 404 error. The Verge reports: Mobile analytics company Branch Metrics acquired Nova in 2022. The company's CEO at the time, co-founder Alex Austin, said on Reddit that if Barry were to leave Branch, "it's contracted that the code will be open-sourced and put in the hands of the community." Austin left Branch in 2023, and now with Barry officially gone from the company, too, it's unclear if the launcher will now actually be open-sourced.

"I think the newer leadership since Alex Austin left has put a different focus on the company and Nova simply isn't part of that focus in any way at all," Cliff Wade, Nova's former customer relations lead who left as part of the 2024 layoffs, tells The Verge. "It's just some app that they own but no longer feel they need or want." Wade also said that "I don't believe Branch will do the right thing any time soon with regards to open-sourcing Nova. I think they simply just don't care and don't want to invest time, unless of course, they get enough pressure from the community and individuals who care."

Users have started a change.org petition to ask for the project to be open-sourced, and Wade says it's a "great start" to apply that pressure. Wade said he hasn't personally seen Barry's contract, so couldn't corroborate the claim of a contractual obligation to open-source Nova. Still, he said that the community "deserves" for the launcher to be open-sourced. "Branch just simply needs to do the right thing here and honor what they as a company have stated as well as what then CEO Alex Austin has stated numerous times prior to him leaving Branch."

Microsoft

Microsoft's 6502 BASIC Is Now Open Source (microsoft.com)

alternative_right writes: For decades, fragments and unofficial copies of Microsoft's 6502 BASIC have circulated online, mirrored on retrocomputing sites, and preserved in museum archives. Coders have studied the code, rebuilt it, and even run it in modern systems. Today, for the first time, we're opening the hatch and officially releasing the code under an open-source license. Microsoft BASIC began in 1975 as the company's very first product: a BASIC interpreter for the Intel 8080, written by Bill Gates and Paul Allen for the Altair 8800. That codebase was soon adapted to run on other 8-bit CPUs, including the MOS 6502, Motorola 6800, and 6809.

The 6502 port was completed in 1976 by Bill Gates and Ric Weiland. In 1977, Commodore licensed it for a flat fee of $25,000, a deal that placed Microsoft BASIC at the heart of Commodore's PET computers and, later, the VIC-20 and Commodore 64. The version we are releasing here -- labeled "1.1" -- contains fixes to the garbage collector identified by Commodore and jointly implemented in 1978 by Commodore engineer John Feagans and Bill Gates, when Feagans traveled to Microsoft's Bellevue offices. This is the version that shipped as the PET's "BASIC V2." It even contains a playful Bill Gates Easter egg, hidden in the labels STORDO and STORD0, which Gates himself confirmed in 2010.

AI

Switzerland Releases Open-Source AI Model Built For Privacy

Switzerland has launched Apertus, a fully open-source, multilingual LLM trained on 15 trillion tokens and over 1,000 languages. "What distinguishes Apertus from many other generative AI systems is its commitment to complete openness," reports CyberInsider. From the report: Unlike popular proprietary models, where users can only interact via APIs or hosted interfaces, Apertus provides open access to its model weights, training datasets, documentation, and even intermediate checkpoints. The source code and all training materials are released under a permissive open-source license that allows commercial use. Since the full training process is documented and reproducible, researchers and watchdogs can audit the data sources, verify compliance with data protection laws, and inspect how the model was trained. Apertus' development explicitly adhered to Swiss data protection and copyright laws, and incorporated retroactive opt-out mechanisms to respect data source preferences.

From a privacy perspective, Apertus represents a compelling shift in the AI landscape. The model only uses publicly available data, filtered to exclude personal information and to honor opt-out signals from content sources. This not only aligns with emerging regulatory frameworks like the EU AI Act, but also provides a tangible example of how AI can be both powerful and privacy-respecting. According to Imanol Schlag, the project's technical lead at ETH Zurich, Apertus is "built for the public good" and is a demonstration of how AI can be deployed as a public digital infrastructure, much like utilities or transportation.
The model is available via Swisscom's Sovereign Swiss AI Platform. It's also available through Hugging Face and the Public AI Inference Utility.
Open Source

Linux Turns 34 (tomshardware.com)

Mark Tyson writes via Tom's Hardware: On this day 34 years ago, an unknown computer science student from Finland announced that a new free operating system project was "starting to get ready." Linus Benedict Torvalds elaborated by explaining that the OS was "just a hobby, [it] won't be big and professional like GNU." Of course, this was the first public outing for the colossal collaborative project that is now known as Linux. Torvalds made that first posting to the comp.os.minix newsgroup. The now famously caustic and cantankerous curmudgeon seemed relatively mild, meek, and malleable in this historic Linux milestone posting.

Torvalds asked the Minix community about their thoughts on a free new OS being prepared for Intel 386 and 486 clones. He explained that he'd been brewing the project since April (a few months prior), and asked for direction. Specifically, he sought input about other Minix users' likes and dislikes of that OS, in order to differentiate Linux. The now renowned developer then provided a rough summary of the development so far. Some features of Linux that Torvalds thought were important, or that he was particularly proud of, were then highlighted in the newsgroup posting. For example, the Linux chief mentioned his OS's multithreaded file system, and its absence of any Minix code. However, he humbly admitted the code as it stood was Intel x86 specific, and thus "is not portable."

Last but not least, Torvalds let it be known that version 0.01 of this free OS would be out in the coming month (September 1991). It was indeed released on September 17, 1991, but someone else decided on the OS name at the last minute. Apparently, Torvalds didn't want to release his new OS under the name of Linux, as it would be too egotistical, too self-aggrandizing. He preferred Freax, a portmanteau word formed from Free-and-X. However, one of Torvalds' colleagues, who was the administrator for the project's FTP server, did not think that 'Freax' was an appealing name for the OS. So this co-worker went ahead and uploaded the OS as 'Linux' on that date in September, without asking Torvalds.

Open Source

Remember the Companies Making Vital Open Source Contributions (infoworld.com)

Matt Asay answered questions from Slashdot readers in 2010 as the then-COO of Canonical. Today he runs developer marketing at Oracle (after holding similar positions at AWS, Adobe, and MongoDB).

And this week Asay contributed an opinion piece to InfoWorld reminding us of open source contributions from companies where "enlightened self-interest underwrites the boring but vital work — CI hardware, security audits, long-term maintenance — that grassroots volunteers struggle to fund." [I]f you look at the Linux 6.15 kernel contributor list (as just one example), the top contributor, as measured by change sets, is Intel... Another example: Take the last year of contributions to Kubernetes. Google (of course), Red Hat, Microsoft, VMware, and AWS all headline the list. Not because it's sexy, but because they make billions of dollars selling Kubernetes services... Some companies (including mine) sell proprietary software, and so it's easy to mentally bucket these vendors with license fees or closed cloud services. That bias makes it easy to ignore empirical contribution data, which indicates open source contributions on a grand scale.
Asay notes Oracle's many contributions to Linux: In the [Linux kernel] 6.1 release cycle, Oracle emerged as the top contributor by lines of code changed across the entire kernel... [I]t's Oracle that patches memory-management structures and shepherds block-device drivers for the Linux we all use. Oracle's kernel work isn't a one-off either. A few releases earlier, the company topped the "core of the kernel" leaderboard in 5.18, and it hasn't slowed down since, helping land the Maple Tree data structure and other performance boosters. Those patches power Oracle Cloud Infrastructure (OCI), of course, but they also speed up Ubuntu on your old ThinkPad. Self-interested contributions? Absolutely. Public benefit? Equally absolute.

This isn't just an Oracle thing. When we widen the lens beyond Oracle, the pattern holds. In 2023, I wrote about Amazon's "quiet open source revolution," showing how AWS was suddenly everywhere in GitHub commit logs despite the company's earlier reticence. (Disclosure: I used to run AWS' open source strategy and marketing team.) Back in 2017, I argued that cloud vendors were open sourcing code as on-ramps to proprietary services rather than end-products. Both observations remain true, but they miss a larger point: Motives aside, the code flows and the community benefits.

If you care about outcomes, the motives don't really matter. Or maybe they do: It's far more sustainable to have companies contributing because it helps them deliver revenue than to contribute out of charity. The former is durable; the latter is not.

There's another practical consideration: scale. "Large vendors wield resources that community projects can't match."

Asay closes by urging readers to "Follow the commits" and "embrace mixed motives... the point isn't sainthood; it's sustainable, shared innovation. Every company (and really every developer) contributes out of some form of self-interest. That's the rule, not the exception. Embrace it." Going forward, we should expect to see even more counterintuitive contributor lists. Generative AI is turbocharging code generation, but someone still has to integrate those patches, write tests, and shepherd them upstream. The companies with the most to lose from brittle infrastructure — cloud providers, database vendors, silicon makers — will foot the bill. If history is a guide, they'll do so quietly.
Open Source

China's Lead in Open-Source AI Jolts Washington and Silicon Valley (msn.com)

China has established a lead in the field of open-source AI, a development that is reportedly jolting both policymakers in Washington and the technology industry in Silicon Valley. From the report: The overall performance of China's best open-weight model has surpassed the American open-source champion since November, according to research firm Artificial Analysis. The firm, which rates the ability of models in math, coding and other areas, found a version of Alibaba's Qwen3 beat OpenAI's gpt-oss.

However, the Chinese model is almost twice as big as OpenAI's, suggesting that for simpler tasks, Qwen might consume more computing power to do the same job. OpenAI said its open-source model outperformed rivals of similar size on reasoning tasks and delivered strong performance at low cost.

Python

How Python is Fighting Open Source's 'Phantom' Dependencies Problem (blogspot.com)

Since 2023 the Python Software Foundation has had a Security Developer-in-Residence (sponsored by the Open Source Security Foundation's vulnerability-finding "Alpha-Omega" project). That developer has just published a new 11-page white paper about open source's "phantom dependencies" problem — suggesting a way to solve it.

"Phantom" dependencies aren't tracked with packaging metadata, manifests, or lock files, which makes them "not discoverable" by tools like vulnerability scanners or compliance and policy tools. So Python security developer-in-residence Seth Larson authored a recently-accepted Python Enhancement Proposal offering an easy way for packages to provide metadata through Software Bill-of-Materials (SBOMs). From the whitepaper: Python Enhancement Proposal 770 is backwards compatible and can be enabled by default by tools, meaning most projects won't need to manually opt in to begin generating valid PEP 770 SBOM metadata. Python is not the only software package ecosystem affected by the "Phantom Dependency" problem. The approach using SBOMs for metadata can be remixed and adopted by other packaging ecosystems looking to record ecosystem-agnostic software metadata...

Within Endor Labs' [2023 dependencies] report, Python is named as one of the packaging ecosystems most affected by the "Phantom Dependency" problem. There are multiple reasons Python is particularly affected:

- There are many methods for interfacing Python with non-Python software, such as through the C-API or FFI. Python can "wrap" and expose an easy-to-use Python API for software written in other languages like C, C++, Rust, Fortran, Web Assembly, and more.

- Python is the premier language for scientific computing and artificial intelligence, meaning many high-performance libraries written in system languages need to be accessed from Python code.

- Finally, Python packages have a distribution type called a "wheel", which is essentially a zip file that is "installed" by being unzipped into a directory, meaning there is no compilation step allowed during installation. This is great for being able to inspect a package before installation, but it means that all compiled languages need to be pre-compiled into binaries before installation...
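The "a wheel is just a zip file" point above can be demonstrated with nothing but the standard library; the toy `demo` package below is invented for illustration, and a real wheel carries additional metadata files (WHEEL, RECORD) that this sketch omits:

```python
import io
import zipfile

# A wheel is a zip archive whose contents are unpacked into
# site-packages at install time, with package metadata kept in a
# .dist-info directory alongside the code. Build a toy one in memory:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as whl:
    whl.writestr("demo/__init__.py", "VERSION = '1.0'\n")
    whl.writestr("demo-1.0.dist-info/METADATA", "Name: demo\nVersion: 1.0\n")

# Because there is no compile step, inspecting a wheel before
# installation is just listing the archive.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as whl:
    names = whl.namelist()
```

This install-by-unzip model is exactly why compiled extensions must be shipped pre-built inside the wheel, and why any bundled non-Python libraries become invisible "phantom" dependencies unless metadata declares them.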


When designing a new package metadata standard, one of the top concerns is reducing the amount of effort required from the mostly volunteer maintainers of packaging tools and the thousands of projects being published to the Python Package Index... By defining PEP 770 SBOM metadata as using a directory of files, rather than a new metadata field, we were able to side-step all the implementation pain...

We'll be working to submit issues on popular open source SBOM and vulnerability scanning tools, and gradually, Phantom Dependencies will become less of an issue for the Python package ecosystem.

The white paper "details the approach, challenges, and insights into the creation and acceptance of PEP 770 and adopting Software Bill-of-Materials (SBOMs) to improve the measurability of Python packages," explains an announcement from the Python Software Foundation. And the white paper ends with a helpful note.

"Having spoken to other open source packaging ecosystem maintainers, we have come to learn that other ecosystems have similar issues with Phantom Dependencies. We welcome other packaging ecosystems to adopt Python's approach with PEP 770 and are willing to provide guidance on the implementation."
AI

Initiative Seeks AI Lab to Build 'American Truly Open Models' (ATOM) (msn.com)

"Benchmarking firm Artificial Analysis found that only five of the top 15 AI models are open source," reports the Washington Post, "and all were developed by Chinese AI companies...."

"Now some American executives, investors and academics are endorsing a plan to make U.S. open-source AI more competitive." A new campaign called the ATOM Project, for American Truly Open Models, aims to create a U.S.-based AI lab dedicated to creating software that developers can freely access and modify. Its blueprint calls for access to serious computing power, with upward of 10,000 of the cutting-edge GPU chips used to power corporate AI development. The initiative, which launched Monday, has gathered signatures of support from more than a dozen industry figures. They include veteran tech investor Bill Gurley; Clement Delangue, CEO of Hugging Face, a repository for open-source AI models and datasets; Stanford professor and AI investor Chris Manning; chipmaker Nvidia's director of applied research, Oleksii Kuchaiev; Jason Kwon, chief strategy officer for OpenAI; and Dylan Patel, CEO and founder of research firm SemiAnalysis...

The lack of progress in open-source AI underscores the case for initiatives like ATOM: The U.S. has not produced a major new open-source AI release since Meta's launch of its Llama 4 model in April, which disappointed some AI experts... "A lot of it is a coordination problem," said ATOM's creator, Nathan Lambert, a senior research scientist at the nonprofit Allen Institute for AI who is launching the project in a personal capacity... Lambert said the idea was to develop much more powerful open-source AI models than existing U.S. efforts such as Bloom, an AI language model from Hugging Face, Pythia from EleutherAI, and others. Those groups were willing to take on more legal risk in the name of scientific progress but suffered from underfunding, said Lambert, who has worked at Google's DeepMind AI lab, Facebook AI Research and Hugging Face.

The other problem? The hefty cost of top-performing AI. Lambert estimates that getting access to 10,000 state-of-the-art GPUs will cost at least $100 million. But the funding must be found if American efforts are to stay competitive, he said.

The initiative's web page is seeking signatures, but also asks visitors to the site to "consider how your expertise or resources might contribute to building the infrastructure America needs."
China

Facing US Chip Restrictions, China Pitches Global Cooperation on AI (msn.com) 13

In Shanghai at the World Artificial Intelligence Conference (which ran until Tuesday), the Chinese government "announced an international organization for AI regulation and a 13-point action plan aimed at fostering global cooperation to ensure the technology's beneficial and responsible development," reports the Washington Post.

The theme of the conference was "Global Solidarity in the AI Era," the article notes, and "the expo is one part of Beijing's bid to establish itself as a responsible AI leader for the international community."

CNN points out that China's announcement comes "just days after the United States unveiled its own plan to promote U.S. dominance." Chinese Premier Li Qiang unveiled China's vision for future AI oversight at the World AI Conference, an annual gathering in Shanghai of tech titans from more than 40 countries... While Li did not directly refer to the U.S. in his speech, he alluded to the ongoing trade tensions between the two superpowers, which include American restrictions on exports of advanced semiconductors — components vital for powering and training AI — restrictions that are currently causing a shortage in China. "Key resources and capabilities are concentrated in a few countries and a few enterprises," said Li in his speech on Saturday. "If we engage in technological monopoly, controls and restrictions, AI will become an exclusive game for a small number of countries and enterprises...."

Secretary-General of the Association of Southeast Asian Nations, Dr. Kao Kim Hourn, also called for "robust governance" of artificial intelligence to mitigate potential threats, including misinformation, deepfakes, and cybersecurity threats... Former Google CEO Eric Schmidt reiterated the call for international collaboration, explicitly calling on the U.S. and China to work together... "We have a vested interest to keep the world stable, keep the world not at war, to keep things peaceful, to make sure we have human control of these tools."

China's plan "called for establishing an international open-source community," reports the Wall Street Journal, "through which AI models can be freely deployed and improved by users." Industry participants said that plan "showed China's ambition to set global standards for AI and could undermine the U.S., whose leading models aren't open-source... While the world's best large language model is still American, the best model that everyone can use free is now Chinese."

"The U.S. should commit to ensuring that powerful models remain openly available," argues an opinion piece in The Hill by Stability AI's former head of public policy. Ubiquity is a matter of national security: retreating behind paywalls will leave a vacuum filled by strategic adversaries. Washington should treat open technology not as a vector for Chinese Communist Party propaganda but as a vessel to transmit U.S. influence abroad, molding the global ecosystem around U.S. industry. If DeepSeek is China's open-source "Sputnik moment," we need a legislative environment that supports — not criminalizes — an American open-source Moon landing.
Security

CISA Open-Sources Thorium Platform For Malware, Forensic Analysis (bleepingcomputer.com) 7

CISA has publicly released Thorium, a powerful open-source platform developed with Sandia National Labs that automates malware and forensic analysis at massive scale. According to BleepingComputer, the platform can "schedule over 1,700 jobs per second and ingest over 10 million files per hour per permission group." From the report: Security teams can use Thorium for automating and speeding up various file analysis workflows, including but not limited to:

- Easily import and export tools to facilitate sharing across cyber defense teams,
- Integrate command-line tools as Docker images, including open-source, commercial, and custom software,
- Filter results using tags and full-text search,
- Control access to submissions, tools, and results with strict group-based permissions,
- Scale with Kubernetes and ScyllaDB to meet workload demands.

Defenders can find installation instructions and get their own copy of Thorium from CISA's official GitHub repository.

Open Source

Google's New Security Project 'OSS Rebuild' Tackles Package Supply Chain Verification (googleblog.com) 13

This week Google's Open Source Security Team announced "a new project to strengthen trust in open source package ecosystems" — by reproducing upstream artifacts.

It includes automation to derive declarative build definitions, new "build observability and verification tools" for security teams, and even "infrastructure definitions" to help organizations rebuild, sign, and distribute provenance by running their own OSS Rebuild instances. (And as part of the initiative, the team also published SLSA Provenance attestations "for thousands of packages across our supported ecosystems.") Our aim with OSS Rebuild is to empower the security community to deeply understand and control their supply chains by making package consumption as transparent as using a source repository. Our rebuild platform unlocks this transparency by utilizing a declarative build process, build instrumentation, and network monitoring capabilities which, within the SLSA Build framework, produces fine-grained, durable, trustworthy security metadata. Building on the hosted infrastructure model that we pioneered with OSS Fuzz for memory issue detection, OSS Rebuild similarly seeks to use hosted resources to address security challenges in open source, this time aimed at securing the software supply chain... We are committed to bringing supply chain transparency and security to all open source software development. Our initial support for the PyPI (Python), npm (JS/TS), and Crates.io (Rust) package registries — providing rebuild provenance for many of their most popular packages — is just the beginning of our journey...

OSS Rebuild helps detect several classes of supply chain compromise:

- Unsubmitted Source Code: When published packages contain code not present in the public source repository, OSS Rebuild will not attest to the artifact.

- Build Environment Compromise: By creating standardized, minimal build environments with comprehensive monitoring, OSS Rebuild can detect suspicious build activity or avoid exposure to compromised components altogether.

- Stealthy Backdoors: Even sophisticated backdoors like the one planted in xz Utils often exhibit anomalous behavioral patterns during builds. OSS Rebuild's dynamic analysis capabilities can detect unusual execution paths or suspicious operations that are otherwise impractical to identify through manual review.
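The "unsubmitted source code" check comes down to rebuilding a package from its public source repository and comparing the result against the artifact published on the registry. A simplified sketch of that comparison (not OSS Rebuild's actual implementation; the file paths are illustrative):

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def rebuild_matches_registry(rebuilt_path, registry_path):
    """Attest only when the independently rebuilt artifact matches the
    published one; any divergence may indicate code not present upstream."""
    return sha256_of(rebuilt_path) == sha256_of(registry_path)
```

In practice, getting a bit-identical rebuild requires the standardized, minimal build environments the announcement describes; the sketch only shows the final comparison step.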


For enterprises and security professionals, OSS Rebuild can...

- Enhance metadata without changing registries by enriching data for upstream packages. No need to maintain custom registries or migrate to a new package ecosystem.

- Augment SBOMs by adding detailed build observability information to existing Software Bills of Materials, creating a more complete security picture...

- Accelerate vulnerability response by providing a path to vendor, patch, and re-host upstream packages using our verifiable build definitions...


The easiest (but not only!) way to access OSS Rebuild attestations is to use the provided Go-based command-line interface.

"With OSS Rebuild's existing automation for PyPI, npm, and Crates.io, most packages obtain protection effortlessly without user or maintainer intervention."
AI

Nvidia's CUDA Platform Now Supports RISC-V (tomshardware.com) 20

An anonymous reader quotes a report from Tom's Hardware: At the 2025 RISC-V Summit in China, Nvidia announced that its CUDA software platform will be made compatible with the RISC-V instruction set architecture (ISA) on the CPU side of things. The news was confirmed during a presentation at the event. This is a major step toward enabling RISC-V-based CPUs in performance-demanding applications. The announcement makes it clear that RISC-V can now serve as the main processor for CUDA-based systems, a role traditionally filled by x86 or Arm cores. While few expect RISC-V in hyperscale datacenters any time soon, it can be used in CUDA-enabled edge devices, such as Nvidia's Jetson modules. However, it looks like Nvidia does indeed expect RISC-V to reach the datacenter.

Nvidia's profile at the RISC-V Summit was notably high: the keynote at the RISC-V Summit China was delivered by Frans Sijsterman, who appears to be Vice President of Hardware Engineering at Nvidia. The presentation outlined how CUDA components will now run on RISC-V. A diagram shown at the session illustrated a typical configuration: the GPU handles parallel workloads, while a RISC-V CPU executes CUDA system drivers, application logic, and the operating system. This setup enables the CPU to orchestrate GPU computations fully within the CUDA environment. Given Nvidia's current focus, the workloads are presumably AI-related, though the company did not confirm this. However, there is more.

Also featured in the diagram was a DPU handling networking tasks, rounding out a system consisting of GPU compute, CPU orchestration, and data movement. This configuration clearly suggests Nvidia's vision of heterogeneous compute platforms in which a RISC-V CPU can be central to managing workloads while Nvidia's GPUs, DPUs, and networking chips handle the rest. Yet again, there is more. Even with this low-profile announcement, Nvidia is essentially bridging its proprietary CUDA stack to an open architecture, one that appears to be developing fast in China. Unable to ship its flagship GB200 and GB300 offerings to China, the company has to find other ways to keep CUDA thriving there.
