Programming

Does Generative AI Threaten the Open Source Ecosystem? (zdnet.com) 47

"Snippets of proprietary or copyleft reciprocal code can enter AI-generated outputs, contaminating codebases with material that developers can't realistically audit or license properly."

That's the warning from Sean O'Brien, who founded the Yale Privacy Lab at Yale Law School. ZDNet reports: Open software has always counted on its code being regularly replenished. As part of the process of using it, users modify it to improve it. They add features and help to guarantee usability across generations of technology. At the same time, users improve security and patch holes that might put everyone at risk. But O'Brien says, "When generative AI systems ingest thousands of FOSS projects and regurgitate fragments without any provenance, the cycle of reciprocity collapses. The generated snippet appears originless, stripped of its license, author, and context." This means the developer downstream can't meaningfully comply with reciprocal licensing terms because the output cuts the human link between coder and code. Even if an engineer suspects that a block of AI-generated code originated under an open source license, there's no feasible way to identify the source project. The training data has been abstracted into billions of statistical weights, the legal equivalent of a black hole.

The result is what O'Brien calls "license amnesia." He says, "Code floats free of its social contract and developers can't give back because they don't know where to send their contributions...."

"Once AI training sets subsume the collective work of decades of open collaboration, the global commons idea, substantiated into repos and code all over the world, risks becoming a nonrenewable resource, mined and never replenished," says O'Brien. "The damage isn't limited to legal uncertainty. If FOSS projects can't rely upon the energy and labor of contributors to help them fix and improve their code, let alone patch security issues, fundamentally important components of the software the world relies upon are at risk."

O'Brien says, "The commons was never just about free code. It was about freedom to build together." That freedom, and the critical infrastructure that underlies almost all of modern society, is at risk because attribution, ownership, and reciprocity are blurred when AIs siphon up everything on the Internet and launder it (the analogy of money laundering is apt), so that all that code's provenance is obscured.

Microsoft

28 Years After 'Clippy', Microsoft Upgrades Copilot With Cartoon Assistant 'Mico' (apnews.com) 19

"Clippy, the animated paper clip that annoyed Microsoft Office users nearly three decades ago, might have just been ahead of its time," writes the Associated Press: Microsoft introduced a new artificial intelligence character called Mico (pronounced MEE'koh) on Thursday, a floating cartoon face shaped like a blob or flame that will embody the software giant's Copilot virtual assistant and marks the latest attempt by tech companies to imbue their AI chatbots with more of a personality... "When you talk about something sad, you can see Mico's face change. You can see it dance around and move as it gets excited with you," said Jacob Andreou, corporate vice president of product and growth for Microsoft AI, in an interview with The Associated Press. "It's in this effort of really landing this AI companion that you can really feel."

In the U.S. only so far, Copilot users on laptops and phone apps can speak to Mico, which changes colors, spins around and wears glasses when in "study" mode. It's also easy to shut off, which is a big difference from Microsoft's Clippit, better known as Clippy and infamous for its persistence in offering advice on word processing tools when it first appeared on desktop screens in 1997. "It was not well-attuned to user needs at the time," said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology. "Microsoft pushed it, we resisted it and they got rid of it. I think we're much more ready for things like that today..."

Microsoft's product releases Thursday include a new option to invite Copilot into a group chat, an idea that resembles how AI has been integrated into social media platforms like Snapchat, where Andreou used to work, or Meta's WhatsApp and Instagram. But Andreou said those interactions have often involved bringing in AI as a joke to "troll your friends," in contrast to Microsoft's designs for an "intensely collaborative" AI-assisted workplace.

AI

Fedora Approves AI-Assisted Contributions 15

The Fedora Council has approved a new policy allowing AI-assisted code contributions, provided contributors fully disclose and take responsibility for any AI-generated work. Phoronix reports: AI-assisted code contributions can be used, but the contributor must take responsibility for that contribution and must transparently disclose the use of AI, such as with the "Assisted-by" tag. AI can assist human reviewers in evaluating contributions but must not be the sole or final arbiter. This AI policy also doesn't cover large-scale initiatives, which will need to be handled individually with the Fedora Council. [...] The Fedora Council does expect that this policy will need to be updated over time to stay current with AI technologies.

PHP

JetBrains Survey Declares PHP Declining, Then Says It Isn't (theregister.com) 29

JetBrains released its annual State of the Developer Ecosystem survey in late October, drawing more than twenty-four thousand responses from programmers worldwide. The survey declared that PHP and Ruby are in "long term decline" based on usage trends tracked over five years. Shortly after publication, JetBrains posted a separate statement asserting that "PHP remains a stable, professional, and evolving ecosystem." The company offered no explanation for the apparent contradiction, The Register reports.

The survey's methodology involves weighting responses to account for bias toward JetBrains users and regional distribution factors. The company acknowledges some bias likely remains since its own customers are more inclined to respond. The survey also found that 85% of developers now use AI coding tools.

Programming

A Plan for Improving JavaScript's Trustworthiness on the Web (cloudflare.com) 48

On Cloudflare's blog, a senior research engineer shares a plan for "improving the trustworthiness of JavaScript on the web."

"It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful." The main problem is code distribution. Consider an end-to-end-encrypted messaging web application. The application generates cryptographic keys in the client's browser that lets users view and send end-to-end encrypted messages to each other. If the application is compromised, what would stop the malicious actor from simply modifying their Javascript to exfiltrate messages? It is interesting to note that smartphone apps don't have this issue. This is because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide integrity, ensuring that apps being delivered are not tampered with, consistency, ensuring all users get the same app, and transparency, ensuring that the record of versions of an app is truthful and publicly visible.

It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like an app store. Further, such a system would benefit all in-browser uses of cryptography, not just end-to-end-encrypted apps. For example, many web-based confidential LLMs, cryptocurrency wallets, and voting systems use in-browser Javascript cryptography for the last step of their verification chains. In this post, we will provide an early look at such a system, called Web Application Integrity, Consistency, and Transparency (WAICT) that we have helped author. WAICT is a W3C-backed effort among browser vendors, cloud providers, and encrypted communication developers to bring stronger security guarantees to the entire web... We hope to build even wider consensus on the solution design in the near future....

We would like to have a way of enforcing integrity on an entire site, i.e., every asset under a domain. For this, WAICT defines an integrity manifest, a configuration file that websites can provide to clients. One important item in the manifest is the asset hashes dictionary, mapping the hash of an asset that the browser might load from that domain to the path of that asset.
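
To make the idea concrete, here is a minimal sketch of how a client might check a fetched asset against such a manifest. The manifest layout, field names, and digest below are hypothetical stand-ins, not the format from the draft specification.

```python
# Hypothetical sketch only: the manifest layout and field names are assumptions,
# not the WAICT draft's actual format.
import hashlib

manifest = {
    "asset_hashes": {
        # hex SHA-256 digest of an asset -> path the browser may load it from
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08": "/app.js",
    }
}

def asset_is_allowed(path: str, body: bytes) -> bool:
    """Accept an asset only if its digest appears in the manifest for this path."""
    digest = hashlib.sha256(body).hexdigest()
    return manifest["asset_hashes"].get(digest) == path

# The digest above is SHA-256("test"), so this toy check passes:
print(asset_is_allowed("/app.js", b"test"))  # True
```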

The blog post points out that the WEBCAT protocol (created by the Freedom of the Press Foundation) "allows site owners to announce the identities of the developers that have signed the site's integrity manifest, i.e., have signed all the code and other assets that the site is serving to the user... We've made WAICT extensible enough to fit WEBCAT inside and benefit from the transparency components." The proposal also envisions a service storing metadata for transparency-enabled sites on the web (along with "witnesses" who verify the prefix tree holding the hashes for domain manifests).
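
As a rough illustration of that signing step (using the third-party 'cryptography' package; this is not the actual WEBCAT protocol, key format, or distribution mechanism), a developer signature over the manifest might be checked like this:

```python
# Illustrative only: real WEBCAT/WAICT key distribution and formats will differ.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

manifest_bytes = b'{"asset_hashes": {"...": "/app.js"}}'  # stand-in manifest

dev_key = Ed25519PrivateKey.generate()      # developer's signing key (publish time)
signature = dev_key.sign(manifest_bytes)

announced_key = dev_key.public_key()        # identity the site owner announces
announced_key.verify(signature, manifest_bytes)  # raises InvalidSignature if tampered
print("manifest signature verified")
```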

"We are still very early in the standardization process," with hopes to soon "begin standardizing the integrity manifest format. And then after that we can start standardizing all the other features. We intend to work on this specification hand-in-hand with browsers and the IETF, and we hope to have some exciting betas soon. In the meantime, you can follow along with our transparency specification draft,/A>, check out the open problems, and share your ideas."
Programming

OpenAI Cofounder Builds New Open Source LLM 'Nanochat' - and Doesn't Use Vibe Coding (gizmodo.com) 25

An anonymous reader shared this report from Gizmodo: It's been over a year since OpenAI cofounder Andrej Karpathy exited the company. In the time since he's been gone, he coined and popularized the term "vibe coding" to describe the practice of farming out coding projects to AI tools. But earlier this week, when he released his own open source model called nanochat, he admitted that he wrote the whole thing by hand, vibes be damned.

Nanochat, according to Karpathy, is a "minimal, from scratch, full-stack training/inference pipeline" that is designed to let anyone build a large language model with a ChatGPT-style chatbot interface in a matter of hours and for as little as $100. Karpathy said the project contains about 8,000 lines of "quite clean code," which he wrote by hand — not necessarily by choice, but because he found AI tools couldn't do what he needed.

"It's basically entirely hand-written (with tab autocomplete)," he wrote. "I tried to use claude/codex agents a few times but they just didn't work well enough at all and net unhelpful."

Programming

GitHub Will Prioritize Migrating To Azure Over Feature Development (thenewstack.io) 32

An anonymous reader shares a report: After acquiring GitHub in 2018, Microsoft mostly let the developer platform run autonomously. But in recent months, that's changed. With GitHub CEO Thomas Dohmke leaving the company this August, and GitHub being folded more deeply into Microsoft's organizational structure, GitHub lost that independence. Now, according to internal GitHub documents The New Stack has seen, the next step of this deeper integration into the Microsoft structure is moving all of GitHub's infrastructure to Azure, even at the cost of delaying work on new features.

[...] While GitHub had previously started work on migrating parts of its service to Azure, our understanding is that these migrations have been halting and sometimes failed. Some projects already run solely on Azure's local cloud regions, such as GitHub's data residency initiative (internally referred to as Project Proxima), which will allow GitHub's enterprise users to store all of their code in Europe.

Programming

The Great Software Quality Collapse (substack.com) 187

Engineer Denis Stetskov, writing in a blog: The Apple Calculator leaked 32GB of RAM. Not used. Not allocated. Leaked. A basic calculator app is hemorrhaging more memory than most computers had a decade ago. Twenty years ago, this would have triggered emergency patches and post-mortems. Today, it's just another bug report in the queue. We've normalized software catastrophes to the point where a Calculator leaking 32GB of RAM barely makes the news. This isn't about AI. The quality crisis started years before ChatGPT existed. AI just weaponized existing incompetence.

[...] Here's what engineering leaders don't want to acknowledge: software has physical constraints, and we're hitting all of them simultaneously. Modern software is built on towers of abstractions, each one making development "easier" while adding overhead: Today's real chain: React > Electron > Chromium > Docker > Kubernetes > VM > managed DB > API gateways. Each layer adds "only 20-30%." Compound a handful and you're at 2-6x overhead for the same behavior. That's how a Calculator ends up leaking 32GB. Not because someone wanted it to -- but because nobody noticed the cumulative cost until users started complaining.
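
The arithmetic behind that claim is simple compounding; a quick back-of-the-envelope check, using the article's illustrative 20-30% figures rather than measured numbers:

```python
# Compound the article's "only 20-30% per layer" figure across a handful of layers.
for overhead in (0.20, 0.30):
    for layers in (4, 5, 6):
        total = (1 + overhead) ** layers
        print(f"{layers} layers at +{overhead:.0%} each -> {total:.1f}x total")
# 4-6 layers at 20% land around 2.1-3.0x; at 30% around 2.9-4.8x,
# i.e. roughly the 2-6x range the author describes.
```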

[...] We're living through the greatest software quality crisis in computing history. A Calculator leaks 32GB of RAM. AI assistants delete production databases. Companies spend $364 billion to avoid fixing fundamental problems. This isn't sustainable. Physics doesn't negotiate. Energy is finite. Hardware has limits. The companies that survive won't be those who can outspend the crisis. They'll be those who remember how to engineer.

AI

AI Slop? Not This Time. AI Tools Found 50 Real Bugs In cURL (theregister.com) 92

The Register reports: Over the past two years, the open source curl project has been flooded with bogus bug reports generated by AI models. The deluge prompted project maintainer Daniel Stenberg to publish several blog posts about the issue in an effort to convince bug bounty hunters to show some restraint and not waste contributors' time with invalid issues. Shoddy AI-generated bug reports have been a problem not just for curl, but also for the Python community, Open Collective, and the Mesa Project.

It turns out the problem is people rather than technology. Last month, the curl project received dozens of potential issues from Joshua Rogers, a security researcher based in Poland. Rogers identified assorted bugs and vulnerabilities with the help of various AI scanning tools. And his reports were not only valid but appreciated. Stenberg in a Mastodon post last month remarked, "Actually truly awesome findings." In his mailing list update last week, Stenberg said, "most of them were tiny mistakes and nits in ordinary static code analyzer style, but they were still mistakes that we are better off having addressed. Several of the found issues were quite impressive findings...."

Stenberg told The Register that about 50 bugfixes based on Rogers' reports have been merged. "In my view, this list of issues achieved with the help of AI tooling shows that AI can be used for good," he said in an email. "Powerful tools in the hand of a clever human is certainly a good combination. It always was...!" Rogers wrote up a summary of the AI vulnerability scanning tools he tested. He concluded that these tools — Almanax, Corgea, ZeroPath, Gecko, and Amplify — are capable of finding real vulnerabilities in complex code.

The Register's conclusion? AI tools "when applied with human intelligence by someone with meaningful domain experience, can be quite helpful."

jantangring (Slashdot reader #79,804) has published an article on Stenberg's new position, including recently published comments from Stenberg that "It really looks like these new tools are finding problems that none of the old, established tools detect."

AI

What If Vibe Coding Creates More Programming Jobs? (msn.com) 82

Vibe coding tools "are transforming the job experience for many tech workers," writes the Los Angeles Times. But Gartner analyst Philip Walsh said the research firm's position is that AI won't replace software engineers and will actually create a need for more. "There's so much software that isn't created today because we can't prioritize it," Walsh said. "So it's going to drive demand for more software creation, and that's going to drive demand for highly skilled software engineers who can do it..." The idea that non-technical people in an organization can "vibe-code" business-ready software is a misunderstanding [Walsh said]... "That's simply not happening. The quality is not there. The robustness is not there. The scalability and security of the code is not there," Walsh said. "These tools reward highly skilled technical professionals who already know what 'good' looks like."
"Economists, however, are also beginning to worry that AI is taking jobs that would otherwise have gone to young or entry-level workers," the article points out. "In a report last month, researchers at Stanford University found "substantial declines in employment for early-career workers'' — ages 22-25 — in fields most exposed to AI. Stanford researchers also found that AI tools by 2024 were able to solve nearly 72% of coding problems, up from just over 4% a year earlier."

And yet Cat Wu, project manager of Anthropic's Claude Code, doesn't even use the term vibe coding. "We definitely want to make it very clear that the responsibility, at the end of the day, is in the hands of the engineers." Wu said she's told her younger sister, who's still in college, that software engineering is still a great career and worth studying. "When I talk with her about this, I tell her AI will make you a lot faster, but it's still really important to understand the building blocks because the AI doesn't always make the right decisions," Wu said. "A lot of times the human intuition is really important."

Programming

Are Software Registries Inherently Insecure? (linuxsecurity.com) 41

"Recent attacks show that hackers keep using the same tricks to sneak bad code into popular software registries," writes long-time Slashdot reader selinux geek, suggesting that "the real problem is how these registries are built, making these attacks likely to keep happening." After all, npm wasn't the only software library hit by a supply chain attack, argues the Linux Security blog. "PyPI and Docker Hub both faced their own compromises in 2025, and the overlaps are impossible to ignore." Phishing has always been the low-hanging fruit. In 2025, it wasn't just effective once — it was the entry point for multiple registry breaches, all occurring close together in different ecosystems... The real problem isn't that phishing happened. It's that there weren't enough safeguards to blunt the impact. One stolen password shouldn't be all it takes to poison an entire ecosystem. Yet in 2025, that's exactly how it played out...

Even if every maintainer spotted every lure, registries left gaps that attackers could walk through without much effort. The problem wasn't social engineering this time. It was how little verification stood between an attacker and the "publish" button. Weak authentication and missing provenance were the quiet enablers in 2025... Sometimes the registry itself offers the path in. When the failure is at the registry level, admins don't get an alert, a log entry, or any hint that something went wrong. That's what makes it so dangerous. The compromise appears to be a normal update until it reaches the downstream system... It shifts the risk from human error to systemic design.

And once that weakly authenticated code gets in, it doesn't always go away quickly, which leads straight into the persistence problem... Once an artifact is published, it spreads into mirrors, caches, and derivative builds. Removing the original upload doesn't erase all the copies... From our perspective at LinuxSecurity, this isn't about slow cleanup; it's about architecture. Registries have no universally reliable kill switch once trust is broken. Even after removal, poisoned base images replicate across mirrors, caches, and derivative builds, meaning developers may keep pulling them in long after the registry itself is "clean."

The article concludes that "To us at LinuxSecurity, the real vulnerability isn't phishing emails or stolen tokens — it's the way registries are built. They distribute code without embedding security guarantees. That design ensures supply chain attacks won't be rare anomalies, but recurring events."

So in a world where "the only safe assumption is that the code you consume may already be compromised," they argue, developers should look to controls they can enforce themselves (a brief illustrative sketch follows the list):
  • Verify artifacts with signatures or provenance tools.
  • Pin dependencies to specific, trusted versions.
  • Generate and track SBOMs so you know exactly what's in your stack.
  • Scan continuously, not just at the point of install.
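
As a minimal sketch of the first two controls, here is one way to pin an exact version and verify a downloaded artifact against a digest recorded ahead of time. The package name, version, and hash are invented for illustration; real setups would use lockfiles, signatures, or provenance attestations.

```python
# Toy example: refuse to install anything not pinned, and anything whose bytes
# don't match the recorded digest. Names and hashes here are made up.
import hashlib

PINNED = {
    ("example-lib", "1.4.2"):
        "a665a45920422f9d417e4867efdc4fb8a04a1f3fff1fa07e998e86f7f7a27ae3",
}

def verify_artifact(name: str, version: str, data: bytes) -> None:
    expected = PINNED.get((name, version))
    if expected is None:
        raise RuntimeError(f"{name}=={version} is not pinned; refusing to install")
    if hashlib.sha256(data).hexdigest() != expected:
        raise RuntimeError(f"hash mismatch for {name}=={version}")

# A compromised registry that swaps the artifact's bytes now fails loudly at
# install time instead of propagating into mirrors, caches, and builds.
verify_artifact("example-lib", "1.4.2", b"123")  # passes; digest is SHA-256("123")
```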

Programming

Google's Jules Enters Developers' Toolchains As AI Coding Agent Competition Heats Up 2

An anonymous reader quotes a report from TechCrunch: Google is bringing its AI coding agent Jules deeper into developer workflows with a new command-line interface and public API, allowing it to plug into terminals, CI/CD systems, and tools like Slack -- as competition intensifies among tech companies to own the future of software development and make coding more of an AI-assisted task.

Until now, Jules -- Google's asynchronous coding agent -- was only accessible via its website and GitHub. On Thursday, the company introduced Jules Tools, a command-line interface that brings Jules directly into the developer's terminal. The CLI lets developers interact with the agent using commands, streamlining workflows by eliminating the need to switch between the web interface and GitHub. It allows them to stay within their environment while delegating coding tasks and validating results.
"We want to reduce context switching for developers as much as possible," Kathy Korevec, director of product at Google Labs, told TechCrunch.

Jules differs from Gemini CLI in that it focuses on "scoped," independent tasks rather than requiring iterative collaboration. Once a user approves a plan, Jules executes it autonomously, while the CLI needs more step-by-step guidance. Jules also has a public API for workflow and IDE integration, plus features like memory, a stacked diff viewer, PR comment handling, and image uploads -- capabilities not present in the CLI. Gemini CLI is limited to terminals and CI/CD pipelines and is better suited for exploratory, highly interactive use.

Android

Google Confirms Android Dev Verification Will Have Free and Paid Tiers, No Public List of Devs (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: As we careen toward a future in which Google has final say over what apps you can run, the company has sought to assuage the community's fears with a blog post and a casual "backstage" video. Google has said again and again since announcing the change that sideloading isn't going anywhere, but it's definitely not going to be as easy. The new information confirms app installs will be more reliant on the cloud, and devs can expect new fees, but there will be an escape hatch for hobbyists.

Confirming app verification status will be the job of a new system component called the Android Developer Verifier, which will be rolled out to devices in the next major release of Android 16. Google explains that phones must ensure each app has a package name and signing keys that have been registered with Google at the time of installation. This process may break the popular FOSS storefront F-Droid. It would be impossible for your phone to carry a database of all verified apps, so this process may require Internet access. Google plans to have a local cache of the most common sideloaded apps on devices, but for anything else, an Internet connection is required. Google suggests alternative app stores will be able to use a pre-auth token to bypass network calls, but it's still deciding how that will work.
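
Google hasn't published implementation details, but the flow described above can be read roughly as follows. Every name and branch in this sketch is an assumption for illustration, not the actual Android Developer Verifier logic:

```python
# Hypothetical reading of the described install-time check; not Google's code.
LOCAL_CACHE = {("org.example.app", "signer-fingerprint")}  # "most common sideloaded apps"

def may_install(package: str, signer: str, pre_auth_token=None, online_lookup=None) -> bool:
    if pre_auth_token is not None:
        # Alternative app stores could present a pre-auth token to skip network calls
        # (Google says it is still deciding how this will work).
        return validate_token(pre_auth_token)
    if (package, signer) in LOCAL_CACHE:
        return True                  # found in the on-device cache, no network needed
    if online_lookup is None:
        return False                 # anything else requires an Internet connection
    return online_lookup(package, signer)   # ask the verification service

def validate_token(token) -> bool:
    return bool(token)               # placeholder

print(may_install("org.example.app", "signer-fingerprint"))  # True: cache hit
```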

The financial arrangement has been murky since the initial announcement, but it's getting clearer. Even though Google's largely automated verification process has been described as simple, it's still going to cost developers money. The verification process will mirror the current Google Play registration fee of $25, which Google claims will go to cover administrative costs. So anyone wishing to distribute an app on Android outside of Google's ecosystem has to pay Google to do so. What if you don't need to distribute apps widely? This is the one piece of good news as developer verification takes shape. Google will let hobbyists and students sign up with only an email for a lesser tier of verification. This won't cost anything, but there will be an unclear limit on how many times these apps can be installed. The team in the video strongly encourages everyone to go through the full verification process (and pay Google for the privilege). We've asked Google for more specifics here.

AI

Tech Companies To K-12 Schoolchildren: Learn To AI Is the New Learn To Code 43

theodp writes: From Thursday's Code.org press release announcing the replacement of the annual Hour of Code for K-12 schoolkids with the new Hour of AI: "A decade ago, the Hour of Code ignited a global movement that introduced millions of students to computer science, inspiring a generation of creators. Today, Code.org announced the next chapter: the Hour of AI, a global initiative developed in collaboration with CSforALL and supported by dozens of leading organizations. [...] As artificial intelligence rapidly transforms how we live, work, and learn, the Hour of AI reflects an evolution in Code.org's mission: expanding from computer science education into AI literacy. This shift signals how the education and technology fields are adapting to the times, ensuring that students are prepared for the future unfolding now."

"Just as the Hour of Code showed students they could be creators of technology, the Hour of AI will help them imagine their place in an AI-powered world," said Hadi Partovi, CEO and co-founder of Code.org. "Every student deserves to feel confident in their understanding of the technology shaping their future. And every parent deserves the confidence that their child is prepared for it."

"Backed by top organizations such as Microsoft, Amazon, Anthropic, Zoom, LEGO Education, Minecraft, Pearson, ISTE, Common Sense Media, American Federation of Teachers (AFT), National Education Association (NEA), and Scratch Foundation, the Hour of AI is designed to bring AI education into the mainstream. New this year, the National Parents Union joins Code.org and CSforALL as a partner to emphasize that AI literacy is not only a student priority but a parent imperative."

The announcement of the tech-backed K-12 CS education nonprofit's mission shift into AI literacy comes just days after Code.org's co-founders took umbrage with a NY Times podcast that discussed "how some of the same tech companies that pushed for computer science are now pivoting from coding to pushing for AI education and AI tools in schools" and advancing the narrative that "the country needs more skilled AI workers to stay competitive, and kids who learn to use AI will get better job opportunities."

AI

NYT Podcast On Job Market For Recent CS Grads Raises Ire of Code.org (geekwire.com) 71

Longtime Slashdot reader theodp writes: "Big Tech Told Kids to Code. The Jobs Didn't Follow," a New York Times podcast episode discussing how the promise of a six-figure salary for those who study computer science is turning out to be an empty one for recent grads in the age of AI, drew the ire of the co-founders of nonprofit Code.org, which -- ironically -- is pivoting to AI itself with the encouragement of, and millions from, its tech-giant backers.

In a LinkedIn post, Code.org CEO and co-founder Hadi Partovi said the paper and its Monday episode of "The Daily" podcast were cherrypicking anecdotes "to stoke populist fears about tech corporations and AI." He also took to X, tweeting: "Today the NYTimes (falsely) claimed CS majors can't find work. The data tells the opposite story: CS grads have the highest median wage and the fifth-lowest underemployment across all majors. [...] Journalism is broken. Do better NYTimes." To which Code.org co-founder Ali Partovi (Hadi's twin), replied: "I agree 100%. That NYTimes Daily piece was deplorable -- an embarrassment for journalism."
