AI

What If Vibe Coding Creates More Programming Jobs? (msn.com) 81

Vibe coding tools "are transforming the job experience for many tech workers," writes the Los Angeles Times. But Gartner analyst Philip Walsh said the research firm's position is that AI won't replace software engineers and will actually create a need for more. "There's so much software that isn't created today because we can't prioritize it," Walsh said. "So it's going to drive demand for more software creation, and that's going to drive demand for highly skilled software engineers who can do it..." The idea that non-technical people in an organization can "vibe-code" business-ready software is a misunderstanding [Walsh said]... "That's simply not happening. The quality is not there. The robustness is not there. The scalability and security of the code is not there," Walsh said. "These tools reward highly skilled technical professionals who already know what 'good' looks like."
"Economists, however, are also beginning to worry that AI is taking jobs that would otherwise have gone to young or entry-level workers," the article points out. "In a report last month, researchers at Stanford University found "substantial declines in employment for early-career workers'' — ages 22-25 — in fields most exposed to AI. Stanford researchers also found that AI tools by 2024 were able to solve nearly 72% of coding problems, up from just over 4% a year earlier."

And yet Cat Wu, project manager of Anthropic's Claude Code, doesn't even use the term vibe coding. "We definitely want to make it very clear that the responsibility, at the end of the day, is in the hands of the engineers." Wu said she's told her younger sister, who's still in college, that software engineering is still a great career and worth studying. "When I talk with her about this, I tell her AI will make you a lot faster, but it's still really important to understand the building blocks because the AI doesn't always make the right decisions," Wu said. "A lot of times the human intuition is really important."
Programming

Are Software Registries Inherently Insecure? (linuxsecurity.com) 41

"Recent attacks show that hackers keep using the same tricks to sneak bad code into popular software registries," writes long-time Slashdot reader selinux geek, suggesting that "the real problem is how these registries are built, making these attacks likely to keep happening." After all, npm wasn't the only software library hit by a supply chain attack, argues the Linux Security blog. "PyPI and Docker Hub both faced their own compromises in 2025, and the overlaps are impossible to ignore." Phishing has always been the low-hanging fruit. In 2025, it wasn't just effective once — it was the entry point for multiple registry breaches, all occurring close together in different ecosystems... The real problem isn't that phishing happened. It's that there weren't enough safeguards to blunt the impact. One stolen password shouldn't be all it takes to poison an entire ecosystem. Yet in 2025, that's exactly how it played out...

Even if every maintainer spotted every lure, registries left gaps that attackers could walk through without much effort. The problem wasn't social engineering this time. It was how little verification stood between an attacker and the "publish" button. Weak authentication and missing provenance were the quiet enablers in 2025... Sometimes the registry itself offers the path in. When the failure is at the registry level, admins don't get an alert, a log entry, or any hint that something went wrong. That's what makes it so dangerous. The compromise appears to be a normal update until it reaches the downstream system... It shifts the risk from human error to systemic design.

And once that weakly authenticated code gets in, it doesn't always go away quickly, which leads straight into the persistence problem... Once an artifact is published, it spreads into mirrors, caches, and derivative builds. Removing the original upload doesn't erase all the copies... From our perspective at LinuxSecurity, this isn't about slow cleanup; it's about architecture. Registries have no universally reliable kill switch once trust is broken. Even after removal, poisoned base images replicate across mirrors, caches, and derivative builds, meaning developers may keep pulling them in long after the registry itself is "clean."

The article concludes: "To us at LinuxSecurity, the real vulnerability isn't phishing emails or stolen tokens — it's the way registries are built. They distribute code without embedding security guarantees. That design ensures supply chain attacks won't be rare anomalies, but recurring events."
So in a world where "the only safe assumption is that the code you consume may already be compromised," they argue, developers should look to controls they can enforce themselves:
  • Verify artifacts with signatures or provenance tools.
  • Pin dependencies to specific, trusted versions.
  • Generate and track SBOMs so you know exactly what's in your stack.
  • Scan continuously, not just at the point of install.
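The first two controls above — verifying artifacts and pinning dependencies — reduce in practice to one habit: record a trusted digest when a dependency is first vetted, and refuse to use any copy whose bytes no longer match. A minimal sketch in Python (the artifact file name and contents here are hypothetical, and real tooling like Sigstore or lockfiles adds signing and key distribution on top of this):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, pinned_digest: str) -> None:
    """Refuse to proceed unless the artifact matches its pinned digest."""
    actual = sha256_of(path)
    if actual != pinned_digest:
        raise ValueError(f"{path.name}: digest mismatch ({actual} != {pinned_digest})")

if __name__ == "__main__":
    artifact = Path("dep-1.0.0.tgz")            # hypothetical dependency tarball
    artifact.write_bytes(b"original package contents")
    pin = sha256_of(artifact)                    # recorded once, at vetting time

    verify_artifact(artifact, pin)               # passes: bytes unchanged
    artifact.write_bytes(b"tampered contents")   # simulate a poisoned re-upload
    try:
        verify_artifact(artifact, pin)
    except ValueError as e:
        print("blocked:", e)
```

The same idea underlies `pip install --require-hashes` and lockfile-based installs: one stolen publishing password can replace the bytes in the registry, but it cannot make them match a digest pinned on the consumer's side.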

Programming

Google's Jules Enters Developers' Toolchains As AI Coding Agent Competition Heats Up 2

An anonymous reader quotes a report from TechCrunch: Google is bringing its AI coding agent Jules deeper into developer workflows with a new command-line interface and public API, allowing it to plug into terminals, CI/CD systems, and tools like Slack -- as competition intensifies among tech companies to own the future of software development and make coding more of an AI-assisted task.

Until now, Jules -- Google's asynchronous coding agent -- was only accessible via its website and GitHub. On Thursday, the company introduced Jules Tools, a command-line interface that brings Jules directly into the developer's terminal. The CLI lets developers interact with the agent using commands, streamlining workflows by eliminating the need to switch between the web interface and GitHub. It allows them to stay within their environment while delegating coding tasks and validating results.
"We want to reduce context switching for developers as much as possible," Kathy Korevec, director of product at Google Labs, told TechCrunch.

Jules differs from Gemini CLI in that it focuses on "scoped," independent tasks rather than requiring iterative collaboration. Once a user approves a plan, Jules executes it autonomously, while the CLI needs more step-by-step guidance. Jules also has a public API for workflow and IDE integration, plus features like memory, a stacked diff viewer, PR comment handling, and image uploads -- capabilities not present in the CLI. Gemini CLI is limited to terminals and CI/CD pipelines and is better suited for exploratory, highly interactive use.
Android

Google Confirms Android Dev Verification Will Have Free and Paid Tiers, No Public List of Devs (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: As we careen toward a future in which Google has final say over what apps you can run, the company has sought to assuage the community's fears with a blog post and a casual "backstage" video. Google has said again and again since announcing the change that sideloading isn't going anywhere, but it's definitely not going to be as easy. The new information confirms app installs will be more reliant on the cloud, and devs can expect new fees, but there will be an escape hatch for hobbyists.

Confirming app verification status will be the job of a new system component called the Android Developer Verifier, which will be rolled out to devices in the next major release of Android 16. Google explains that phones must ensure each app has a package name and signing keys that have been registered with Google at the time of installation. This process may break the popular FOSS storefront F-Droid. It would be impossible for your phone to carry a database of all verified apps, so this process may require Internet access. Google plans to have a local cache of the most common sideloaded apps on devices, but for anything else, an Internet connection is required. Google suggests alternative app stores will be able to use a pre-auth token to bypass network calls, but it's still deciding how that will work.

The financial arrangement has been murky since the initial announcement, but it's getting clearer. Even though Google's largely automated verification process has been described as simple, it's still going to cost developers money. The verification process will mirror the current Google Play registration fee of $25, which Google claims will go to cover administrative costs. So anyone wishing to distribute an app on Android outside of Google's ecosystem has to pay Google to do so. What if you don't need to distribute apps widely? This is the one piece of good news as developer verification takes shape. Google will let hobbyists and students sign up with only an email for a lesser tier of verification. This won't cost anything, but there will be an unclear limit on how many times these apps can be installed. The team in the video strongly encourages everyone to go through the full verification process (and pay Google for the privilege). We've asked Google for more specifics here.

AI

Tech Companies To K-12 Schoolchildren: Learn To AI Is the New Learn To Code 43

theodp writes: From Thursday's Code.org press release announcing the replacement of the annual Hour of Code for K-12 schoolkids with the new Hour of AI: "A decade ago, the Hour of Code ignited a global movement that introduced millions of students to computer science, inspiring a generation of creators. Today, Code.org announced the next chapter: the Hour of AI, a global initiative developed in collaboration with CSforALL and supported by dozens of leading organizations. [...] As artificial intelligence rapidly transforms how we live, work, and learn, the Hour of AI reflects an evolution in Code.org's mission: expanding from computer science education into AI literacy. This shift signals how the education and technology fields are adapting to the times, ensuring that students are prepared for the future unfolding now."

"Just as the Hour of Code showed students they could be creators of technology, the Hour of AI will help them imagine their place in an AI-powered world," said Hadi Partovi, CEO and co-founder of Code.org. "Every student deserves to feel confident in their understanding of the technology shaping their future. And every parent deserves the confidence that their child is prepared for it."

"Backed by top organizations such as Microsoft, Amazon, Anthropic, Zoom, LEGO Education, Minecraft, Pearson, ISTE, Common Sense Media, American Federation of Teachers (AFT), National Education Association (NEA), and Scratch Foundation, the Hour of AI is designed to bring AI education into the mainstream. New this year, the National Parents Union joins Code.org and CSforALL as a partner to emphasize that AI literacy is not only a student priority but a parent imperative."

The announcement of the tech-backed K-12 CS education nonprofit's mission shift into AI literacy comes just days after Code.org's co-founders took umbrage with a NY Times podcast that discussed "how some of the same tech companies that pushed for computer science are now pivoting from coding to pushing for AI education and AI tools in schools" and advancing the narrative that "the country needs more skilled AI workers to stay competitive, and kids who learn to use AI will get better job opportunities."
AI

NYT Podcast On Job Market For Recent CS Grads Raises Ire of Code.org (geekwire.com) 71

Longtime Slashdot reader theodp writes: Big Tech Told Kids to Code. The Jobs Didn't Follow, a New York Times podcast episode discussing how the promise of a six-figure salary for those who study computer science is turning out to be an empty one for recent grads in the age of AI, drew the ire of the co-founders of nonprofit Code.org, which -- ironically -- is pivoting to AI itself with the encouragement of, and millions from, its tech-giant backers.

In a LinkedIn post, Code.org CEO and co-founder Hadi Partovi said the paper and its Monday episode of "The Daily" podcast were cherrypicking anecdotes "to stoke populist fears about tech corporations and AI." He also took to X, tweeting: "Today the NYTimes (falsely) claimed CS majors can't find work. The data tells the opposite story: CS grads have the highest median wage and the fifth-lowest underemployment across all majors. [...] Journalism is broken. Do better NYTimes." To which Code.org co-founder Ali Partovi (Hadi's twin), replied: "I agree 100%. That NYTimes Daily piece was deplorable -- an embarrassment for journalism."

Programming

New Claude Model Runs 30-Hour Marathon To Create 11,000-Line Slack Clone (theverge.com) 61

Anthropic's Claude Sonnet 4.5 ran autonomously for 30 hours to build a chat application similar to Slack or Teams, generating approximately 11,000 lines of code before stopping upon task completion. The model, announced today, marks a significant leap from the company's Opus 4 model, which ran for seven hours in May.

Claude Sonnet 4.5 performs three times better at browser navigation and computer use than Anthropic's October technology. Beta-tester Canva deployed the model for complex engineering tasks in its codebase and product features. Anthropic paired the release with virtual machines, memory, context management, and multi-agent support tools, enabling developers to build their own AI agents using the same building blocks that power Claude Code.
AI

Professor Warns CS Graduates are Struggling to Find Jobs (yahoo.com) 77

"Computer science went from a future-proof career to an industry in upheaval in a shockingly small amount of time," writes Business Insider, citing remarks from UC Berkeley professor Hany Farid said during a recent episode of Nova's "Particles of Thought" podcast.

"Our students typically had five internship offers throughout their first four years of college," Farid said. "They would graduate with exceedingly high salaries, multiple offers. They had the run of the place. That is not happening today. They're happy to get one job offer...." It's too easy to just blame AI, though, Farid said. "Something is happening in the industry," he said. "I think it's a confluence of many things. I think AI is part of it. I think there's a thinning of the ranks that's happening, that's part of it, but something is brewing..."

Farid, one of the world's experts on deepfake videos, said he is often asked for advice. He said what he tells students has changed... "Now, I think I'm telling people to be good at a lot of different things because we don't know what the future holds."

Like many in the AI space, Farid said that those who use breakthrough technologies will outlast those who don't. "I don't think AI is going to put lawyers out of business, but I think lawyers who use AI will put those who don't use AI out of business," he said. "And I think you can say that about every profession."

Programming

Will AI Bring an End to Top Programming Language Rankings? (ieee.org) 51

IEEE Spectrum ranks the popularity of programming languages — but is there a problem? Programmers "are turning away from many of these public expressions of interest. Rather than page through a book or search a website like Stack Exchange for answers to their questions, they'll chat with an LLM like Claude or ChatGPT in a private conversation." And with an AI assistant like Cursor helping to write code, the need to pose questions in the first place is significantly decreased. For example, across the total set of languages evaluated in the Top Programming Languages, the number of questions we saw posted per week on Stack Exchange in 2025 was just 22% of what it was in 2024...

However, an even more fundamental problem is looming in the wings... In the same way most developers today don't pay much attention to the instruction sets and other hardware idiosyncrasies of the CPUs that their code runs on, which language a program is vibe coded in ultimately becomes a minor detail... [T]he popularity of different computer languages could become as obscure a topic as the relative popularity of railway track gauges... But if an AI is soothing our irritations with today's languages, will any new ones ever reach the kind of critical mass needed to make an impact? Will the popularity of today's languages remain frozen in time?

That's ultimately the larger question: "How much abstraction and anti-foot-shooting structure will a sufficiently-advanced coding AI really need...?" [C]ould we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future? True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks. And instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh.

What's the role of the programmer in a future without source code? Architecture design and algorithm selection would remain vital skills... How should a piece of software be interfaced with a larger system? How should new hardware be exploited? In this scenario, computer science degrees, with their emphasis on fundamentals over the details of programming languages, rise in value over coding boot camps.

Will there be a Top Programming Language in 2026? Right now, programming is going through the biggest transformation since compilers broke onto the scene in the early 1950s. Even if the predictions that much of AI is a bubble about to burst come true, the thing about tech bubbles is that there's always some residual technology that survives. It's likely that using LLMs to write and assist with code is something that's going to stick. So we're going to be spending the next 12 months figuring out what popularity means in this new age, and what metrics might be useful to measure.

Having said that, IEEE Spectrum still ranks programming language popularity three ways — based on use among working programmers, demand from employers, and "trending" in the zeitgeist — using seven different metrics.

Their results? Among programmers, "we see that once again Python has the top spot, with the biggest change in the top five being JavaScript's drop from third place last year to sixth place this year. As JavaScript is often used to create web pages, and vibe coding is often used to create websites, this drop in the apparent popularity may be due to the effects of AI... In the 'Jobs' ranking, which looks exclusively at what skills employers are looking for, we see that Python has also taken 1st place, up from second place last year, though SQL expertise remains an incredibly valuable skill to have on your resume."
Programming

Bundler's Lead Maintainer Asserts Trademark in Ongoing Struggle with Ruby Central (arko.net) 7

After the nonprofit Ruby Central removed all RubyGems' maintainers from its GitHub repository, André Arko — who helped build Bundler — wrote a new blog post on Thursday "detailing Bundler's relationship with Ruby Central," according to this update from The New Stack. "In the last few weeks, Ruby Central has suddenly asserted that they alone own Bundler," he wrote. "That simply isn't true. In order to defend the reputation of the team of maintainers who have given so much time and energy to the project, I have registered my existing trademark on the Bundler project."

He adds that trademarks do not affect copyright, which stays with the original contributors unchanged. "Trademarks only impact one thing: Who is allowed to say that what they make is named 'Bundler,'" he wrote. "Ruby Central is welcome to the code, just like everyone else. They are not welcome to the project name that the Bundler maintainers have painstakingly created over the last 15 years."

He is, however, not seeking the trademark for himself, noting that the "idea of Bundler belongs to the Ruby community." "Once there is a Ruby organization that is accountable to the maintainers, and accountable to the community, with openly and democratically elected board members, I commit to transfer my trademark to that organization," he said. "I will not license the trademark, and will instead transfer ownership entirely. Bundler should belong to the community, and I want to make sure that is true for as long as Bundler exists."

The blog It's FOSS also has an update on Spinel, the new worker-owned collective founded by Arko, Samuel Giddins (who led RubyGems security efforts), and Kasper Timm Hansen (who served on the Rails core team from 2016 to 2022 and was one of its top contributors): These guys aren't newcomers but some of the architects behind Ruby's foundational infrastructure. Their flagship offering is rv ["the Ruby swiss army knife"], a tool that aims to replace the fragmented Ruby tooling ecosystem. It promises to [in the future] handle everything from rvm, rbenv, chruby, bundler, rubygems, and others — all at once while redefining how Ruby development tools should work... Spinel operates on retainer agreements with companies needing Ruby expertise instead of depending on sponsors who can withdraw support or demand control. This model maintains independence while ensuring sustainability for the maintainers.
The Register had reported Thursday: Spinel's 'rv' project aims to supplant elements of RubyGems and Bundler with a more modular, version-aware manager. Some in the Ruby community have already accused core Rails figures of positioning Spinel as a threat. For example, Rafael França of Shopify commented that admins of the new project should not be trusted to avoid "sabotaging rubygems or bundler."
Oracle

Disastrous Oracle Implementation At Europe's Largest City Council (theregister.com) 133

Longtime Slashdot reader whoever57 writes: Birmingham City Council, the largest such entity in Europe, has been declared effectively bankrupt. There are a couple of reasons for this, but one of them is a disastrous project to replace the city's income management system using Oracle. The cost of this has risen to $230 million, while the initial estimate was $24 million. There was a failed rollout of the new system earlier this year. "Original plans for the replacement of SAP with Oracle Fusion set aside a £19.965 million budget for three years' implementation until the end of the 2021 financial year," reports The Register. "Go-live date was later put back until April 2022 and the budget increased to £40 million. After the council realized it would need to reimplement all of Oracle, the budget for running the old system and introducing the new one increased to £131 million."

"In a hastily convened Audit Committee meeting this week, councilor heard how that date has now been put back until November, expressing their anger that the news hit the media before they were told." Testing failed with only a 73.3% pass rate and 10 severe deficits, "below the acceptance criteria of a 95 percent pass rate and zero severe deficits.
AI

OpenAI, Oracle, SoftBank Plan Five New AI Data Centers For $500 Billion Stargate Project (reuters.com) 27

An anonymous reader quotes a report from Reuters: OpenAI, Oracle, and SoftBank on Tuesday announced plans for five new artificial intelligence data centers in the United States to build out their ambitious Stargate project. [...] ChatGPT-maker OpenAI said on Tuesday it will open three new sites with Oracle in Shackelford County, Texas, Doña Ana County, New Mexico and an undisclosed site in the Midwest. Two more data center sites will be built in Lordstown, Ohio and Milam County, Texas by OpenAI, Japan's SoftBank and a SoftBank affiliate.

The new sites, the Oracle-OpenAI site expansion in Abilene, Texas, and the ongoing projects with CoreWeave will bring Stargate's total data center capacity to nearly 7 gigawatts and more than $400 billion in investment over the next three years, OpenAI said. The $500 billion project was intended to generate 10 gigawatts in total data center capacity. "AI can only fulfill its promise if we build the compute to power it," OpenAI CEO Sam Altman said in a statement. Tuesday's announcement, expected to create 25,000 onsite jobs, follows Nvidia saying on Monday that it will invest up to $100 billion in OpenAI and supply data center chips. OpenAI and partners plan to use debt financing to lease chips for the Stargate project, people familiar with the matter said.

Programming

Dedicated Mobile Apps For Vibe Coding Have So Far Failed To Gain Traction (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: While many vibe-coding startups have become unicorns, with valuations in the billions, one area where AI-assisted coding has not yet taken off is on mobile devices. Despite the numerous apps now available that offer vibe-coding tools on mobile platforms, none are gaining noticeable downloads, and few are generating any revenue at all. According to an analysis of global app store trends by the app intelligence provider Appfigures, only a small handful of mobile apps offering vibe-coding tools have seen any downloads, let alone generated revenue.

The largest of these is Instance: AI App Builder, which has seen only 16,000 downloads and $1,000 in consumer spending. The next largest app, Vibe Studio, has pulled in just 4,000 downloads but has made no money. This situation could still change, of course. The market is young, and vibe-coding apps continue to improve and work out the bugs. New apps in this space are arriving all the time, too. This year, a startup called Vibecode launched with $9.4 million in seed funding from Reddit co-founder Alexis Ohanian's Seven Seven Six. The company's service allows users to create mobile apps using AI within its own iOS app. Vibecode is so new, Appfigures doesn't yet have data on it. For now, most people who want to toy around with vibe-coding technology are doing so on the desktop.

Education

Why One Computer Science Professor is 'Feeling Cranky About AI' in Education (acm.org) 64

Long-time Slashdot reader theodp writes: Over at the Communications of the ACM, Bard College CS Prof Valerie Barr explains why she's Feeling Cranky About AI and CS Education. Having seen CS education go through a number of we-have-to-teach-this moments over the decades — introductory programming languages, the Web, Data Science, etc. — Barr turns her attention to the next hand-wringing "what will we do" CS education moment with AI.

"We're jumping through hoops without stopping first to question the run-away train," Barr writes...

Barr calls for stepping back from "the industry assertion that the ship has sailed, every student needs to use AI early and often, and there is no future application that isn't going to use AI in some way" and instead thoughtfully "articulate what sort of future problem solvers and software developers we want to graduate from our programs, and determine ways in which the incorporation of AI can help us get there."

From the article: In much discussion about CS education:

a.) There's little interest in interrogating the downsides of generative AI, such as the environmental impact, the data theft impact, the treatment and exploitation of data workers.

b.) There's little interest in considering the extent to which, by incorporating generative AI into our teaching, we end up supporting a handful of companies that are burning billions in a vain attempt to each achieve performance that is a scintilla better than everyone else's.

c.) There's little interest in thinking about what's going to happen when the LLM companies decide that they have plateaued, that there's no more money to burn/spend, and a bunch of them fold—but we've perturbed education to such an extent that our students can no longer function without their AI helpers.

Programming

Secure Software Supply Chains, Urges Former Go Lead Russ Cox (acm.org) 19

Writing in Communications of the ACM, former Go tech lead Russ Cox warns we need to keep improving defenses of software supply chains, highlighting "promising approaches that should be more widely used" and "areas where more work is needed." There are important steps we can take today, such as adopting software signatures in some form, making sure to scan for known vulnerabilities regularly, and being ready to update and redeploy software when critical new vulnerabilities are found. More development should be shifted to safer languages that make vulnerabilities and attacks less likely. We also need to find ways to fund open source development to make it less susceptible to takeover by the mere offer of free help. Relatively small investments in OpenSSL and XZ development could have prevented both the Heartbleed vulnerability and the XZ attack.
Some highlights from the 5,000-word article:
  • Make Builds Reproducible. "The Reproducible Builds project aims to raise awareness of reproducible builds generally, as well as building tools to help progress toward complete reproducibility for all Linux software. The Go project recently arranged for Go itself to be completely reproducible given only the source code... A build for a given target produces the same distribution bits whether you build on Linux or Windows or Mac, whether the build host is X86 or ARM, and so on. Strong reproducibility makes it possible for others to easily verify that the binaries posted for download match the source code..."
  • Prevent Vulnerabilities. "The most secure software dependencies are the ones not used in the first place: Every dependency adds risk... Another good way to prevent vulnerabilities is to use safer programming languages that remove error-prone language features or make them needed less often..."
  • Authenticate Software. ("Cryptographic signatures make it impossible to nefariously alter code between signing and verifying. The only problem left is key distribution...") "The Go checksum database is a real-world example of this approach that protects millions of Go developers. The database holds the SHA256 checksum of every version of every public Go module..."
  • Fund Open Source. [Cox first cites the XKCD cartoon "Dependencies," calling it "a disturbingly accurate assessment of the situation..."] "The XZ attack is the clearest possible demonstration that the problem is not fixed. It was enabled as much by underfunding of open source as by any technical detail."
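The checksum-database idea Cox highlights can be sketched in miniature: the first digest recorded for a given module@version becomes canonical, so a later, silently replaced upload of the "same" version is detectable by every client. This is an illustrative toy in Python, not the Go implementation (which uses a transparency log); the module name and payloads are hypothetical:

```python
import hashlib

class ChecksumDB:
    """Toy stand-in for a checksum database: remembers the first digest seen
    for each module@version and flags any later mismatch."""

    def __init__(self) -> None:
        self._digests: dict[str, str] = {}

    def record(self, module: str, version: str, payload: bytes) -> str:
        """Store the canonical digest on first sighting; never overwrite it."""
        key = f"{module}@{version}"
        digest = hashlib.sha256(payload).hexdigest()
        self._digests.setdefault(key, digest)
        return self._digests[key]

    def verify(self, module: str, version: str, payload: bytes) -> bool:
        """True only if this exact payload matches the recorded digest."""
        key = f"{module}@{version}"
        expected = self._digests.get(key)
        return expected == hashlib.sha256(payload).hexdigest()

if __name__ == "__main__":
    db = ChecksumDB()
    db.record("example.com/util", "v1.2.3", b"good release bytes")
    # A silently replaced upload of the *same* version is caught:
    print(db.verify("example.com/util", "v1.2.3", b"good release bytes"))  # True
    print(db.verify("example.com/util", "v1.2.3", b"attacker's bytes"))    # False
```

The real Go checksum database adds an append-only, cryptographically auditable log on top of this, so clients can also detect a database that lies about what the first sighting was.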

The article also emphasized the importance of finding and fixing vulnerabilities quickly, arguing that software attacks must be made more difficult and expensive.

"We use source code downloaded from strangers on the Internet in our most critical applications; almost no one is checking the code.... We all have more work to do."

