Open Source

Linux Turns 34 (tomshardware.com) 65

Mark Tyson writes via Tom's Hardware: On this day 34 years ago, an unknown computer science student from Finland announced that a new free operating system project was "starting to get ready." Linus Benedict Torvalds elaborated by explaining that the OS was "just a hobby, [it] won't be big and professional like GNU." Of course, this was the first public outing for the colossal collaborative project that is now known as Linux. Torvalds made that first posting regarding Linux to the comp.os.minix newsgroup, and in this historic milestone posting the now famously caustic, cantankerous curmudgeon seemed relatively mild, meek, and malleable.

Torvalds asked the Minix community about their thoughts on a free new OS being prepared for Intel 386 and 486 clones. He explained that he'd been brewing the project since April (a few months prior), and asked for direction. Specifically, he sought input about other Minix users' likes and dislikes of that OS, in order to differentiate Linux. The now renowned developer then provided a rough summary of the development so far. Some features of Linux that Torvalds thought were important, or that he was particularly proud of, were then highlighted in the newsgroup posting. For example, the Linux chief mentioned his OS's multithreaded file system, and its absence of any Minix code. However, he humbly admitted the code as it stood was Intel x86 specific, and thus "is not portable."

Last but not least, Torvalds let it be known that version 0.01 of this free OS would be out in the coming month (September 1991). It was indeed released on September 17, 1991, but someone else decided on the OS name at the last minute. Apparently, Torvalds didn't want to release his new OS under the name Linux, as it would be too egotistical, too self-aggrandizing. He preferred Freax, a portmanteau formed from Free-and-X. However, one of Torvalds' colleagues, who was the administrator for the project's FTP server, did not think that 'Freax' was an appealing name for the OS. So this co-worker went ahead and uploaded the OS as 'Linux' on that date in September, without asking Torvalds.

Open Source

Remember the Companies Making Vital Open Source Contributions (infoworld.com) 22

Matt Asay answered questions from Slashdot readers in 2010 as the then-COO of Canonical. Today he runs developer marketing at Oracle (after holding similar positions at AWS, Adobe, and MongoDB).

And this week Asay contributed an opinion piece to InfoWorld reminding us of open source contributions from companies where "enlightened self-interest underwrites the boring but vital work — CI hardware, security audits, long-term maintenance — that grassroots volunteers struggle to fund." [I]f you look at the Linux 6.15 kernel contributor list (as just one example), the top contributor, as measured by change sets, is Intel... Another example: Take the last year of contributions to Kubernetes. Google (of course), Red Hat, Microsoft, VMware, and AWS all headline the list. Not because it's sexy, but because they make billions of dollars selling Kubernetes services... Some companies (including mine) sell proprietary software, and so it's easy to mentally bucket these vendors with license fees or closed cloud services. That bias makes it easy to ignore empirical contribution data, which indicates open source contributions on a grand scale.
Asay notes Oracle's many contributions to Linux: In the [Linux kernel] 6.1 release cycle, Oracle emerged as the top contributor by lines of code changed across the entire kernel... [I]t's Oracle that patches memory-management structures and shepherds block-device drivers for the Linux we all use. Oracle's kernel work isn't a one-off either. A few releases earlier, the company topped the "core of the kernel" leaderboard in 5.18, and it hasn't slowed down since, helping land the Maple Tree data structure and other performance boosters. Those patches power Oracle Cloud Infrastructure (OCI), of course, but they also speed up Ubuntu on your old ThinkPad. Self-interested contributions? Absolutely. Public benefit? Equally absolute.

This isn't just an Oracle thing. When we widen the lens beyond Oracle, the pattern holds. In 2023, I wrote about Amazon's "quiet open source revolution," showing how AWS was suddenly everywhere in GitHub commit logs despite the company's earlier reticence. (Disclosure: I used to run AWS' open source strategy and marketing team.) Back in 2017, I argued that cloud vendors were open sourcing code as on-ramps to proprietary services rather than end-products. Both observations remain true, but they miss a larger point: Motives aside, the code flows and the community benefits.

If you care about outcomes, the motives don't really matter. Or maybe they do: It's far more sustainable to have companies contributing because it helps them deliver revenue than to contribute out of charity. The former is durable; the latter is not.

There's another practical consideration: scale. "Large vendors wield resources that community projects can't match."

Asay closes by urging readers to "Follow the commits" and "embrace mixed motives... the point isn't sainthood; it's sustainable, shared innovation. Every company (and really every developer) contributes out of some form of self-interest. That's the rule, not the exception. Embrace it." Going forward, we should expect to see even more counterintuitive contributor lists. Generative AI is turbocharging code generation, but someone still has to integrate those patches, write tests, and shepherd them upstream. The companies with the most to lose from brittle infrastructure — cloud providers, database vendors, silicon makers — will foot the bill. If history is a guide, they'll do so quietly.
Open Source

China's Lead in Open-Source AI Jolts Washington and Silicon Valley (msn.com) 89

China has established a lead in open-source AI, a development that is sending jolts through both Washington and Silicon Valley. From the report: The overall performance of China's best open-weight model has surpassed the American open-source champion since November, according to research firm Artificial Analysis. The firm, which rates the ability of models in math, coding and other areas, found a version of Alibaba's Qwen3 beat OpenAI's gpt-oss.

However, the Chinese model is almost twice as big as OpenAI's, suggesting that for simpler tasks, Qwen might consume more computing power to do the same job. OpenAI said its open-source model outperformed rivals of similar size on reasoning tasks and delivered strong performance at low cost.

Python

How Python is Fighting Open Source's 'Phantom' Dependencies Problem (blogspot.com) 33

Since 2023 the Python Software Foundation has had a Security Developer-in-Residence (sponsored by the Open Source Security Foundation's vulnerability-finding "Alpha-Omega" project). And he's just published a new 11-page white paper about open source's "phantom dependencies" problem — suggesting a way to solve it.

"Phantom" dependencies aren't tracked with packaging metadata, manifests, or lock files, which makes them "not discoverable" by tools like vulnerability scanners or compliance and policy tools. So Python security developer-in-residence Seth Larson authored a recently-accepted Python Enhancement Proposal offering an easy way for packages to provide metadata through Software Bills of Materials (SBOMs). From the white paper: Python Enhancement Proposal 770 is backwards compatible and can be enabled by default by tools, meaning most projects won't need to manually opt in to begin generating valid PEP 770 SBOM metadata. Python is not the only software package ecosystem affected by the "Phantom Dependency" problem. The approach using SBOMs for metadata can be remixed and adopted by other packaging ecosystems looking to record ecosystem-agnostic software metadata...

Within Endor Labs' [2023 dependencies] report, Python is named as one of the packaging ecosystems most affected by the "Phantom Dependency" problem. There are multiple reasons that Python is particularly affected:

- There are many methods for interfacing Python with non-Python software, such as through the C-API or FFI. Python can "wrap" and expose an easy-to-use Python API for software written in other languages like C, C++, Rust, Fortran, WebAssembly, and more.

- Python is the premier language for scientific computing and artificial intelligence, meaning many high-performance libraries written in system languages need to be accessed from Python code.

- Finally, Python packages have a distribution type called a "wheel", which is essentially a zip file that is "installed" by being unzipped into a directory, meaning there is no compilation step allowed during installation. This is great for being able to inspect a package before installation, but it means that all compiled languages need to be pre-compiled into binaries before installation...


When designing a new package metadata standard, one of the top concerns is reducing the amount of effort required from the mostly volunteer maintainers of packaging tools and the thousands of projects being published to the Python Package Index... By defining PEP 770 SBOM metadata as using a directory of files, rather than a new metadata field, we were able to side-step all the implementation pain...

We'll be working to submit issues on popular open source SBOM and vulnerability scanning tools, and gradually, Phantom Dependencies will become less of an issue for the Python package ecosystem.

The white paper "details the approach, challenges, and insights into the creation and acceptance of PEP 770 and adopting Software Bill-of-Materials (SBOMs) to improve the measurability of Python packages," explains an announcement from the Python Software Foundation. And the white paper ends with a helpful note.

"Having spoken to other open source packaging ecosystem maintainers, we have come to learn that other ecosystems have similar issues with Phantom Dependencies. We welcome other packaging ecosystems to adopt Python's approach with PEP 770 and are willing to provide guidance on the implementation."
AI

Initiative Seeks AI Lab to Build 'American Truly Open Models' (ATOM) (msn.com) 20

"Benchmarking firm Artificial Analysis found that only five of the top 15 AI models are open source," reports the Washington Post, "and all were developed by Chinese AI companies...."

"Now some American executives, investors and academics are endorsing a plan to make U.S. open-source AI more competitive." A new campaign called the ATOM Project, for American Truly Open Models, aims to create a U.S.-based AI lab dedicated to creating software that developers can freely access and modify. Its blueprint calls for access to serious computing power, with upward of 10,000 of the cutting-edge GPU chips used to power corporate AI development. The initiative, which launched Monday, has gathered signatures of support from more than a dozen industry figures. They include veteran tech investor Bill Gurley; Clement Delangue, CEO of Hugging Face, a repository for open-source AI models and datasets; Stanford professor and AI investor Chris Manning; chipmaker Nvidia's director of applied research, Oleksii Kuchaiev; Jason Kwon, chief strategy officer for OpenAI; and Dylan Patel, CEO and founder of research firm SemiAnalysis...

The lack of progress in open-source AI underscores the case for initiatives like ATOM: The U.S. has not produced a major new open-source AI release since Meta's launch of its Llama 4 model in April, which disappointed some AI experts... "A lot of it is a coordination problem," said ATOM's creator, Nathan Lambert, a senior research scientist at the nonprofit Allen Institute for AI who is launching the project in a personal capacity... Lambert said the idea was to develop much more powerful open-source AI models than existing U.S. efforts such as Bloom, an AI language model from Hugging Face, Pythia from EleutherAI, and others. Those groups were willing to take on more legal risk in the name of scientific progress but suffered from underfunding, said Lambert, who has worked at Google's DeepMind AI lab, Facebook AI Research and Hugging Face.

The other problem? The hefty cost of top-performing AI. Lambert estimates that getting access to 10,000 state-of-the-art GPUs will cost at least $100 million. But the funding must be found if American efforts are to stay competitive, he said.

The initiative's web page is seeking signatures, but also asks visitors to the site to "consider how your expertise or resources might contribute to building the infrastructure America needs."
China

Facing US Chip Restrictions, China Pitches Global Cooperation on AI (msn.com) 13

In Shanghai at the World Artificial Intelligence Conference (which ran until Tuesday), the Chinese government "announced an international organization for AI regulation and a 13-point action plan aimed at fostering global cooperation to ensure the technology's beneficial and responsible development," reports the Washington Post.

The theme of the conference was "Global Solidarity in the AI Era," the article notes, and "the expo is one part of Beijing's bid to establish itself as a responsible AI leader for the international community."

CNN points out that China's announcement comes "just days after the United States unveiled its own plan to promote U.S. dominance." Chinese Premier Li Qiang unveiled China's vision for future AI oversight at the World AI Conference, an annual gathering in Shanghai of tech titans from more than 40 countries... While Li did not directly refer to the U.S. in his speech, he alluded to the ongoing trade tensions between the two superpowers, which include American restrictions on exports of advanced semiconductors, components vital for powering and training AI that are currently in short supply in China. "Key resources and capabilities are concentrated in a few countries and a few enterprises," said Li in his speech on Saturday. "If we engage in technological monopoly, controls and restrictions, AI will become an exclusive game for a small number of countries and enterprises...."

Secretary-General of the Association of Southeast Asian Nations, Dr. Kao Kim Hourn, also called for "robust governance" of artificial intelligence to mitigate potential threats, including misinformation, deepfakes, and cybersecurity threats... Former Google CEO Eric Schmidt reiterated the call for international collaboration, explicitly calling on the U.S. and China to work together... "We have a vested interest to keep the world stable, keep the world not at war, to keep things peaceful, to make sure we have human control of these tools."

China's plan "called for establishing an international open-source community," reports the Wall Street Journal, "through which AI models can be freely deployed and improved by users." Industry participants said that plan "showed China's ambition to set global standards for AI and could undermine the U.S., whose leading models aren't open-source... While the world's best large language model is still American, the best model that everyone can use free is now Chinese."

"The U.S. should commit to ensuring that powerful models remain openly available," argues an opinion piece in The Hill by Stability AI's former head of public policy. Ubiquity is a matter of national security: retreating behind paywalls will leave a vacuum filled by strategic adversaries. Washington should treat open technology not as a vector for Chinese Communist Party propaganda but as a vessel to transmit U.S. influence abroad, molding the global ecosystem around U.S. industry. If DeepSeek is China's open-source "Sputnik moment," we need a legislative environment that supports — not criminalizes — an American open-source Moon landing.
Security

CISA Open-Sources Thorium Platform For Malware, Forensic Analysis (bleepingcomputer.com) 7

CISA has publicly released Thorium, a powerful open-source platform developed with Sandia National Labs that automates malware and forensic analysis at massive scale. According to BleepingComputer, the platform can "schedule over 1,700 jobs per second and ingest over 10 million files per hour per permission group." From the report: Security teams can use Thorium for automating and speeding up various file analysis workflows, including but not limited to:

- Easily import and export tools to facilitate sharing across cyber defense teams,
- Integrate command-line tools as Docker images, including open-source, commercial, and custom software,
- Filter results using tags and full-text search,
- Control access to submissions, tools, and results with strict group-based permissions,
- Scale with Kubernetes and ScyllaDB to meet workload demands.

Defenders can find installation instructions and get their own copy of Thorium from CISA's official GitHub repository.

Open Source

Google's New Security Project 'OSS Rebuild' Tackles Package Supply Chain Verification (googleblog.com) 13

This week Google's Open Source Security Team announced "a new project to strengthen trust in open source package ecosystems" — by reproducing upstream artifacts.

It includes automation to derive declarative build definitions, new "build observability and verification tools" for security teams, and even "infrastructure definitions" to help organizations rebuild, sign, and distribute provenance by running their own OSS Rebuild instances. (And as part of the initiative, the team also published SLSA Provenance attestations "for thousands of packages across our supported ecosystems.") Our aim with OSS Rebuild is to empower the security community to deeply understand and control their supply chains by making package consumption as transparent as using a source repository. Our rebuild platform unlocks this transparency by utilizing a declarative build process, build instrumentation, and network monitoring capabilities which, within the SLSA Build framework, produces fine-grained, durable, trustworthy security metadata. Building on the hosted infrastructure model that we pioneered with OSS Fuzz for memory issue detection, OSS Rebuild similarly seeks to use hosted resources to address security challenges in open source, this time aimed at securing the software supply chain... We are committed to bringing supply chain transparency and security to all open source software development. Our initial support for the PyPI (Python), npm (JS/TS), and Crates.io (Rust) package registries — providing rebuild provenance for many of their most popular packages — is just the beginning of our journey...

OSS Rebuild helps detect several classes of supply chain compromise:

- Unsubmitted Source Code: When published packages contain code not present in the public source repository, OSS Rebuild will not attest to the artifact.

- Build Environment Compromise: By creating standardized, minimal build environments with comprehensive monitoring, OSS Rebuild can detect suspicious build activity or avoid exposure to compromised components altogether.

- Stealthy Backdoors: Even sophisticated backdoors like xz often exhibit anomalous behavioral patterns during builds. OSS Rebuild's dynamic analysis capabilities can detect unusual execution paths or suspicious operations that are otherwise impractical to identify through manual review.


For enterprises and security professionals, OSS Rebuild can...

- Enhance metadata without changing registries by enriching data for upstream packages. No need to maintain custom registries or migrate to a new package ecosystem.

- Augment SBOMs by adding detailed build observability information to existing Software Bills of Materials, creating a more complete security picture...

- Accelerate vulnerability response by providing a path to vendor, patch, and re-host upstream packages using our verifiable build definitions...


The easiest (but not only!) way to access OSS Rebuild attestations is to use the provided Go-based command-line interface.

"With OSS Rebuild's existing automation for PyPI, npm, and Crates.io, most packages obtain protection effortlessly without user or maintainer intervention."
AI

Nvidia's CUDA Platform Now Supports RISC-V (tomshardware.com) 20

An anonymous reader quotes a report from Tom's Hardware: At the 2025 RISC-V Summit in China, Nvidia announced that its CUDA software platform will be made compatible with the RISC-V instruction set architecture (ISA) on the CPU side of things. This is a major step toward enabling RISC-V ISA-based CPUs in performance-demanding applications. The announcement makes it clear that RISC-V can now serve as the main processor for CUDA-based systems, a role traditionally filled by x86 or Arm cores. While nobody expects RISC-V in hyperscale datacenters any time soon, RISC-V can be used on CUDA-enabled edge devices, such as Nvidia's Jetson modules. However, it looks like Nvidia does indeed expect RISC-V to be in the datacenter.

Nvidia's profile at the RISC-V Summit China was quite high: the keynote was delivered by Frans Sijsterman, who appears to be vice president of hardware engineering at Nvidia. The presentation outlined how CUDA components will now run on RISC-V. A diagram shown at the session illustrated a typical configuration: the GPU handles parallel workloads, while a RISC-V CPU executes CUDA system drivers, application logic, and the operating system. This setup enables the CPU to orchestrate GPU computations fully within the CUDA environment. Given Nvidia's current focus, the workloads must be AI-related, yet the company did not confirm this. However, there is more.

Also featured in the diagram was a DPU handling networking tasks, rounding out a system consisting of GPU compute, CPU orchestration, and data movement. This configuration clearly suggests Nvidia's vision to build heterogeneous compute platforms where a RISC-V CPU can be central to managing workloads while Nvidia's GPUs, DPUs, and networking chips handle the rest. Yet again, there is more. Even with this low-profile announcement, Nvidia essentially bridges its proprietary CUDA stack to an open architecture, one that seems to be developing fast in China. And, being unable to ship its flagship GB200 and GB300 offerings to China, the company has to find ways to keep CUDA thriving.

Open Source

NVIDIA Makes More Hopper, Blackwell Header Files Open-Source (phoronix.com) 4

NVIDIA has released additional open-source header files for its Blackwell and Hopper GPU architectures, continuing its effort to support open-source drivers like Nouveau/NVK and the NOVA Rust driver. Phoronix reports: Last week NVIDIA open-sourced 12k lines of C header files for Blackwell GPUs to help in the open-source driver efforts, namely for Nouveau / NVK and the in-development NOVA Rust driver. On Friday they made public some additional header files for helping in the Blackwell and Hopper open-source driver enablement.

Following the previously-covered open-source header activity, on Friday this commit was pushed to their open-source documentation repository that provides Hopper and Blackwell DMA-copy class header files. [...] In turn the code has already been imported into Mesa Git.

Open Source

Jack Dorsey Pumps $10M Into a Nonprofit Focused on Open Source Social Media (techcrunch.com) 20

Twitter co-founder/Block CEO Jack Dorsey isn't just vibe coding new apps like Bitchat and Sun Day. He's also "invested $10 million in an effort to fund experimental open source projects and other tools that could ultimately transform the social media landscape," reports TechCrunch, funding the projects through an online collective formed in May called "andOtherStuff": [T]he team at "andOtherStuff" is determined not to build a company but is instead operating like a "community of hackers," explains Evan Henshaw-Plath [who handles UX/onboarding and was also Twitter's first employee]. Together, they're working to create technologies that could include new consumer social apps as well as various experiments, like developer tools or libraries, that would allow others to build apps for themselves.

For instance, the team is behind an app called Shakespeare, which is like the app-building platform Lovable, but specifically for building Nostr-based social apps with AI assistance. The group is also behind heynow, a voice note app built on Nostr; Cashu wallet; private messenger White Noise; and the Nostr-based social community +chorus, in addition to the apps Dorsey has already released. Developments in AI-based coding have made this type of experimentation possible, Henshaw-Plath points out, in the same way that technologies like Ruby on Rails, Django, and JSON helped to fuel an earlier version of the web, dubbed Web 2.0.

Related to these efforts, Henshaw-Plath sat down with Dorsey for the debut episode of his new podcast, revolution.social with @rabble... Dorsey believes Bluesky faces the same challenges as traditional social media because of its structure — it's funded by VCs, like other startups. Already, it has had to bow to government requests and faced moderation challenges, he points out. "I think [Bluesky CEO] Jay [Graber] is great. I think the team is great," Dorsey told Henshaw-Plath, "but the structure is what I disagree with ... I want to push the energy in a different direction, which is more like Bitcoin, which is completely open and not owned by anyone from a protocol layer...."

Dorsey's initial investment has gotten the new nonprofit up and running, and he worked on some of its initial iOS apps. Meanwhile, others are contributing their time to build Android versions, developer tools, and different social media experiments. More is still in the works, says Henshaw-Plath.

"There are things that we're not ready to talk about yet that'll be very exciting," he teases.

Open Source

Intel Kills Clear Linux OS As Support Ends Without Warning (nerds.xyz) 95

BrianFagioli shares a report from NERDS.xyz: Intel has quietly pulled the plug on Clear Linux OS, officially ending support for the once-promising Linux distribution that it had backed for nearly a decade. Effective immediately, the company says it will no longer provide any updates, security patches, or maintenance for the operating system. In a final blow, the Clear Linux OS GitHub repository is now archived in read-only mode.

The move was announced with little fanfare, and for users still relying on Clear Linux OS, there's no sugarcoating it... you need to move on. Intel is urging everyone to migrate to an actively maintained Linux distribution as soon as possible to avoid running unpatched software.
"Rest assured that Intel remains deeply invested in the Linux ecosystem, actively supporting and contributing to various open-source projects and Linux distributions to enable and optimize for Intel hardware," the company said in a statement. "A heartfelt thank you to every developer, user, and contributor who helped shape Clear Linux OS over the last 10 years. Your feedback and contributions have been invaluable."
Open Source

Linux Reaches 5% On Desktop (ostechnix.com) 150

Longtime Slashdot reader bobdevine shares a report from OSTechNix: For the first time, Linux has officially broken the 5% desktop market share barrier in the United States of America! It's a huge milestone for open-source and our fantastic Linux community. While many might think of Linux as a niche choice, this new data shows a significant shift is happening.

According to the latest StatCounter Global Stats for June 2025, Linux now holds 5.03% of the desktop operating system market share in the United States of America. This is fantastic news! [...] One truly satisfying detail for me? Linux has finally surpassed the "Unknown" category in the USA! It shows that our growth is clear and recognized.
"It took eight years to go from 1% to 2% (by April 2021), then just 2.2 years to reach 3% (June 2023), and a mere 0.7 years to hit 4% (February 2024)," notes the report. "Now, here we are, at over 5% in the USA! This exponential growth suggests that we're on a promising upward trend."
Bitcoin

LibreOffice Lands Built-In Support For Bitcoin As Currency (phoronix.com) 32

An anonymous reader quotes a report from Phoronix: Merged yesterday into the latest development code for the LibreOffice open-source office suite is support for recognizing Bitcoin "BTC" as a currency within the Calc spreadsheet program and elsewhere within this cross-platform free software office suite. Stemming from a recent bug report requesting Bitcoin as an official currency option within LibreOffice Calc, the necessary additions are now in place so it's a built-in preset like USD and EUR, making it easier to manage Bitcoin transactions and the like from within LibreOffice Calc.
Software

Blender 4.5 LTS Released (nerds.xyz) 11

BrianFagioli shares a report from NERDS.xyz: Blender 4.5 has arrived and it's a long-term support release. That means users get two full years of updates and bug fixes, making it a smart choice for anyone looking for stability in serious projects. Whether you're a solo artist or part of a studio pipeline, this version is built to last. Here's a list of key features and changes in this release:

- Vulkan backend replaces OpenGL (faster, smoother UI)
- Adaptive subdivision up to 14x faster with multithreading
- New Geometry Nodes: Camera Info, Instance Bounds
- GPU-accelerated compositor nodes with standardized inputs
- New Boolean solver: Manifold (cleaner, faster mesh operations)
- UV maps visible in Object Mode + improved selection behavior
- Grease Pencil render pass and Geometry Nodes integration
- Improved file import support: PLY, OBJ, STL, CSV, VDB
- Deprecations: Collada, Big Endian, legacy .blend, Intel Mac support
- Cycles OptiX now requires NVIDIA driver v535+
- New shader variants for add-on developers (POLYLINES_*, POINT_*)
- ~500 bug fixes across all major systems
Graphics

Blender Studio Releases Free New Game 'Dogwalk' to Showcase Its Open Source Godot Game Engine (notebookcheck.net) 25

"Steam quietly welcomed another indie game this week, but this one is distinctly different for a lot of reasons," writes Notebookcheck: Dogwalk, which debuted on July 11, is the kind of short, gentle experience that almost forces you to smile. Developed by Blender Studio, the game introduces players to a gorgeous winter landscape. You play as a cute, fluffy dog, with a small child in tow...

What's particularly interesting here is that Dogwalk is more than just another charming indie project. It's Blender Studio's showcase for what's possible using fully open-source tools. The entire project — assets, animations, and code — is made with Blender and the popular Godot Game Engine. Unlike industry giants such as Unity or Unreal, Godot is completely open source, meaning it doesn't require developers to pay royalties or follow strict licensing agreements. This should make it great for small studios and independent creators, as it lowers the entry barrier to game creation.

Dogwalk is 100% free, which fits neatly into its open-source philosophy.

AI

Linux Foundation Adopts A2A Protocol To Help Solve One of AI's Most Pressing Challenges 38

An anonymous reader quotes a report from ZDNet: The Linux Foundation announced at the Open Source Summit in Denver that it will now host the Agent2Agent (A2A) protocol. Initially developed by Google and now supported by more than 100 leading technology companies, A2A is a crucial new open standard for secure and interoperable communication between AI agents. In his keynote presentation, Mike Smith, a Google staff software engineer, told the conference that the A2A protocol has evolved to make it easier to add custom extensions to the core specification. Additionally, the A2A community is working on making it easier to assign unique identities to AI agents, thereby improving governance and security.

The A2A protocol is designed to solve one of AI's most pressing challenges: enabling autonomous agents -- software entities capable of independent action and decision-making -- to discover each other, securely exchange information, and collaborate across disparate platforms, vendors, and frameworks. Under the hood, A2A does this work by creating an AgentCard. An AgentCard is a JavaScript Object Notation (JSON) metadata document that describes an agent's purpose and capabilities and provides instructions on how to access it via a web URL. A2A also leverages widely adopted web standards, such as HTTP, JSON-RPC, and Server-Sent Events (SSE), to ensure broad compatibility and ease of integration. By providing a standardized, vendor-neutral communication layer, A2A breaks down the silos that have historically limited the potential of multi-agent systems.
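The AgentCard concept can be sketched in a few lines. The field names below follow the general shape described above (name, endpoint URL, capabilities, skills), but they are illustrative for this sketch, not a faithful copy of the official A2A schema:

```python
import json

# A minimal AgentCard: a JSON metadata document that tells other agents
# what this agent does and where to reach it. Field names are illustrative.
agent_card = {
    "name": "hotel-booking-agent",
    "description": "Finds and books hotels for a given city and date range.",
    "url": "https://agents.example.com/hotel",  # HTTP endpoint (e.g. JSON-RPC)
    "version": "1.0.0",
    "capabilities": {"streaming": True},        # e.g. via Server-Sent Events
    "skills": [
        {
            "id": "book-hotel",
            "name": "Book hotel",
            "description": "Reserve a room matching the user's constraints.",
        }
    ],
}

# A peer agent would typically fetch this document from a well-known URL,
# parse it, and decide whether to hand a task off to this agent.
serialized = json.dumps(agent_card, indent=2)
print(serialized)
```

The point of the card is discoverability: a peer never needs vendor-specific knowledge, only the ability to fetch and parse this document.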

For security, A2A comes with enterprise-grade authentication and authorization built in, including support for JSON Web Tokens (JWTs), OpenID Connect (OIDC), and Transport Layer Security (TLS). This approach ensures that only authorized agents can participate in workflows, protecting sensitive data and agent identities. While the security foundations are in place, developers at the conference acknowledged that integrating them, particularly authenticating agents, will be a hard slog.
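To illustrate the token side of that authentication story, here is a minimal, stdlib-only sketch of HS256-style signing and verification, the HMAC scheme commonly used with JWTs. It is a deliberate simplification (no OIDC flow, no expiry or audience checks), and the secret and claim values are invented for the example:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_token(claims: dict, secret: bytes) -> str:
    # Header and payload are base64url-encoded, then an HMAC-SHA256
    # signature is computed over "header.payload" (the HS256 scheme).
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_token(token: str, secret: bytes) -> dict:
    # Recompute the signature and compare in constant time.
    header, payload, sig = token.split(".")
    signing_input = (header + "." + payload).encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected.decode(), sig):
        raise ValueError("bad signature: agent not authorized")
    padded = payload + "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

secret = b"shared-demo-secret"  # illustrative; real deployments use OIDC/TLS
token = sign_token({"sub": "hotel-booking-agent", "scope": "book"}, secret)
claims = verify_token(token, secret)
```

The "hard slog" the developers mention lives in everything around this core: distributing keys, mapping tokens to agent identities, and deciding which scopes an agent may exercise.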
Antje Barth, an Amazon Web Services (AWS) principal developer advocate for generative AI, explained what the adoption of A2A will mean for IT professionals: "Say you want to book a train ride to Copenhagen, then a hotel there, and look maybe for a fancy restaurant, right? You have inputs and individual tasks, and A2A adds more agents to this conversation, with one agent specializing in hotel bookings, another in restaurants, and so on. A2A enables agents to communicate with each other, hand off tasks, and finally brings the feedback to the end user."

Jim Zemlin, executive director of the Linux Foundation, said: "By joining the Linux Foundation, A2A is ensuring the long-term neutrality, collaboration, and governance that will unlock the next era of agent-to-agent powered productivity." Zemlin expects A2A to become a cornerstone for building interoperable, multi-agent AI systems.
Open Source

The Open-Source Software Saving the Internet From AI Bot Scrapers (404media.co) 33

An anonymous reader quotes a report from 404 Media: For someone who says she is fighting AI bot scrapers just in her free time, Xe Iaso seems to be putting up an impressive fight. Since she launched it in January, Anubis, a program "designed to help protect the small internet from the endless storm of requests that flood in from AI companies," has been downloaded nearly 200,000 times, and is being used by notable organizations including GNOME, the popular open-source desktop environment for Linux; FFmpeg, the open-source software project for handling video and other media; and UNESCO, the United Nations organization for education, science, and culture. [...]

"Anubis is an uncaptcha," Iaso explains on her site. "It uses features of your browser to automate a lot of the work that a CAPTCHA would, and right now the main implementation is by having it run a bunch of cryptographic math with JavaScript to prove that you can run JavaScript in a way that can be validated on the server." Essentially, Anubis verifies that any visitor to a site is a human using a browser, as opposed to a bot. One way it does this is by making the browser do a type of cryptographic math with JavaScript, or by relying on other subtle checks that browsers perform by default but bots must be explicitly programmed to perform. This check is invisible to the user, and most browsers released since 2022 are able to complete it. In theory, bot scrapers could pretend to be users with browsers as well, but the additional computational cost of doing so at the scale of scraping the entire internet would be huge. This way, Anubis creates a computational cost that is prohibitively expensive for AI scrapers hitting millions and millions of sites, but marginal for an individual user who is just using the internet like a human.
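The mechanism described here is a classic proof-of-work handshake. Anubis's real challenge runs as JavaScript in the browser; the Python sketch below shows the same asymmetry with an invented difficulty setting: the client grinds through nonces, while the server verifies with a single hash.

```python
import hashlib
import secrets

DIFFICULTY = 12  # leading zero bits required; illustrative, not Anubis's value

def issue_challenge() -> str:
    # Server side: a random challenge string sent to the visitor.
    return secrets.token_hex(16)

def solve(challenge: str) -> int:
    # Client side (in Anubis, done in JavaScript): brute-force a nonce
    # until SHA-256(challenge + nonce) has DIFFICULTY leading zero bits.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    # Server side: a single hash to check, so verification stays cheap
    # even though finding the nonce took the client many attempts.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

challenge = issue_challenge()
nonce = solve(challenge)
print(verify(challenge, nonce))  # True
```

The cost asymmetry is the whole trick: one visitor pays a few thousand hashes once, while a scraper hitting millions of pages pays it millions of times.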

Anubis is free, open source, lightweight, can be self-hosted, and can be implemented almost anywhere. It also appears to be a pretty good solution for what we've repeatedly reported is a widespread problem across the internet, which helps explain its popularity. But Iaso is still putting a lot of work into improving it and adding features. She told me she's working on a non-cryptographic challenge so it taxes users' CPUs less, and is also thinking about a version that doesn't require JavaScript, which some privacy-minded users disable in their browsers. The biggest challenge in developing Anubis, Iaso said, is finding the balance. "The balance between figuring out how to block things without people being blocked, without affecting too many people with false positives," she said. "And also making sure that the people running the bots can't figure out what pattern they're hitting, while also letting people that are caught in the web be able to figure out what pattern they're hitting, so that they can contact the organization and get help. So that's like, you know, the standard, impossible scenario."

Programming

Microsoft Open Sources Copilot Chat for VS Code on GitHub (nerds.xyz) 18

"Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer. This provides the community access to the full implementation of the chat-based coding assistant, including the implementation of "agent mode," what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools...

As the VS Code team explained previously, shifts in the AI tooling landscape, such as the rapid growth of the open-source AI ecosystem and a more level playing field for all, have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has increased the need for crowdsourced contributions to rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective.

"If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move offers something rare these days: transparency," writes Slashdot reader BrianFagioli. "Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non-negotiable.

It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self-host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it is planning to eventually merge inline code completions into the same open-source package too, which would make Copilot Chat the new hub for both chat and suggestions."

X

X11 Fork XLibre Released For Testing On Systemd-Free Artix Linux (webpronews.com) 134

An anonymous reader shared this report from WebProNews: The Linux world is abuzz with news of XLibre, a fork of the venerable X11 window display system, which aims to be an alternative to X11's successor, Wayland.

Much of the Linux world is working to adopt Wayland, the successor to X11. Wayland has been touted as a superior option, providing better security and performance. Yet despite Fedora and Ubuntu both going Wayland-only, the newer display protocol still lags behind X11 in terms of functionality, especially in areas like accessibility, screen recording, and session restore. In addition, despite the promise of improved performance, many users report performance regressions compared to X11.

While progress is being made, it has been slow going, especially for a project that is more than 17 years old. To make matters worse, Wayland is largely being improved by committee, with the various desktop environment teams trying to work together to further the protocol. Progress is further hampered by the fact that the GNOME developers often object to the implementation of some functionality that doesn't fit with their vision of what a desktop should be — despite those features being present and needed in every other environment.

In response, developer Enrico Weigelt has forked X11 into the XLibre project. Weigelt was already one of the most prolific X11 contributors, at a time when few improvements or new features were being added to the aging window system... Weigelt has wasted no time releasing the inaugural version of XLibre, XLibre 25.0. The release includes a slew of improvements.

MrBrklyn (Slashdot reader #4,775) adds that Artix Linux, a rolling-release distro based on Arch Linux which does not use systemd, now offers XLibre ISO images and packages for testing and use. They're all non-systemd based, and "It's a decent undertaking by the Artix development team. The ISO is considered to be testing, but it is quickly moving to the regular repos for broad public use."

Slashdot Top Deals