'Vibe Coding Kills Open Source' (arxiv.org)
Four economists across Central European University, Bielefeld University and the Kiel Institute have built a general equilibrium model of the open-source software ecosystem and concluded that vibe coding -- the increasingly common practice of letting AI agents select, assemble and modify packages on a developer's behalf -- erodes the very funding mechanism that keeps open-source projects alive.
The core problem is a decoupling of usage from engagement. Tailwind CSS's npm downloads have climbed steadily, but its creator says documentation traffic is down about 40% since early 2023 and revenue has dropped close to 80%. Stack Overflow activity fell roughly 25% within six months of ChatGPT's launch. Open-source maintainers monetize through documentation visits, bug reports, and community interaction. AI agents skip all of that.
The model finds that feedback loops once responsible for open source's explosive growth now run in reverse. Fewer maintainers can justify sharing code, variety shrinks, and average quality falls -- even as total usage rises. One proposed fix is a "Spotify for open source" model where AI platforms redistribute subscription revenue to maintainers based on package usage. For the ecosystem to remain sustainable, vibe-coding users need to contribute at least 84% of what direct users generate, or roughly 84% of all revenue must come from sources independent of how users access the software.
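That 84% threshold can be illustrated with a back-of-envelope sketch. All per-user revenue figures below are hypothetical; only the 0.84 fraction comes from the paper's model:

```python
# Hypothetical sketch: total maintainer revenue when some users reach a
# project only through an AI agent and contribute a fraction of a direct
# user's engagement revenue. Only the 0.84 fraction is from the paper.
def ecosystem_revenue(n_direct, n_vibe, per_user=1.0, vibe_fraction=0.84):
    """Direct users generate full engagement revenue, vibe users a fraction."""
    return n_direct * per_user + n_vibe * per_user * vibe_fraction

baseline = ecosystem_revenue(100, 0)                      # everyone direct
shifted = ecosystem_revenue(50, 50)                       # half move to agents
collapsed = ecosystem_revenue(50, 50, vibe_fraction=0.2)  # agents skip engagement

print(baseline, shifted, collapsed)
```

Total usage is identical in all three cases; revenue falls as engagement per agent-mediated user falls, which is exactly the decoupling the paper models.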
Not the Only Model (Score:5, Insightful)
Re: (Score:3)
Re: (Score:3)
This is fair but the pitch/summary still seems very misleading, which I think is the above person's real point.
It's not the funding that keeps open source alive, it's the funding that keeps some smaller open source alive.
The summary acts like this is some existential threat, it's not. I mean I'm typing this into Firefox, on GNOME, on Fedora, on Linux.
Virtually no part of that stack is impacted by what they're talking about, but it's phrased like all of those things are about to implode. They're not.
Not Vibe coding (Score:3)
It is a multi-year decline in the number of people, from high school through college to newly graduated degree holders, focusing on computer science, full-stack development, and what were the mainstream computer jobs before the current AI round; they are focusing on AI instead.
The largest loss is that there will not be anywhere near the number of blog posts, Stack Overflow questions, etc. for the more recent open source technologies, since somewhere from 5% to 50% (pulling numbers out of the air) of the rookie and mid-level experience questio
Re: (Score:3)
Re: (Score:2)
What else are you using other than Firefox, GNOME, and Linux? Note I didn't say "Fedora" because Fedora is a bundle of systems.
To give you an idea of why this is, actually, an existential threat, here's the relevant xkcd: https://xkcd.com/2347/ [xkcd.com]
I've been testing the latest Debian FWIW. Right now Firefox doesn't display video on most websites (YouTube excepted). Why? Because it relies on the ffmpeg libraries, and they're not currently bundled in a form on Forky that Firefox can link to.
You think GNOME, Firef
Re: (Score:3)
I suspect a large majority of the money spent towards open source is in the form of support contracts, yes, but large contracts paid by large companies to large projects. The problem is that a majority of the *projects* are small, often single person, and *those* do not have a good way of funding their work. There is no web of small companies paying small projects keeping the greater open source community healthy, and so smaller projects have to look to other ways to fund work.
Re: (Score:2)
fund fund fund... or just volunteer, especially if it's just something small.
Source-available commercial software (Score:4, Insightful)
Yes. Source-available commercial software can even be fully publicly forkable [devwheels.com].
For most software and users the libre aspect of open source is more important than the gratis aspect.
Re: (Score:3)
I feel like your model of open source is not really open source. You're describing companies or individual programmers whose aim is commercial software, but who decided that they would flood the "market" with free samples to try and hook users into paying for upgrades later. It's the old shareware model in sheep's clothing.
My model of open source is someone (or a group of people) writing code for themselves, and being generally ok with other people benefiting too. There's no reason to ask for money/donati
Equilibrium will be found (Score:5, Interesting)
AI is dumb, it cannot innovate. Without humans creating new training data, it will fall behind. AI must find a way to avoid being fatal to the host it feeds on.
Re: Equilibrium will be found (Score:2)
Re: (Score:2)
Does it degrade or stay stagnant?
Can do both, either, or also neither.
It's a complication that has to be accounted for, though.
It can reduce perplexity, but that can very much be a not good thing.
What I can say, is that OSS survived just fine before bullshit like enshittified revenue models become popular, and it will survive the AIpocalypse.
Will OSS change? Yes.
Open source used to be fun (Score:2)
The reason people made open source projects and devoted time to maintaining them was purely because it was enjoyable to do so. In the age of AI, that is gone. I no longer want to release my work for the benefit of others, because now it is only feeding this infernal bullshit machine, which will steal my work and sell it as its own. And if that wasn't bad enough, it will send a deluge of slop bug reports and phishing attacks.
AI needs to die. The bubble can't burst soon enough.
Re: (Score:1, Flamebait)
In the age of AI, that is gone.
No, it's not.
I no longer want to release my work for the benefit of others, because now it is only feeding this infernal bullshit machine, which will steal my work and sell it as its own.
lol. And that's how I know you were not involved in open source.
Get the fuck out of here, poser.
Re: (Score:1)
If you had demonstrated knowledge of the syntactical bit of genius called the "comma", maybe. But as it was? No.
Re: (Score:2)
You could always try to do something meaningful and original with your work yourself.
Re: (Score:2)
Decompilers are not great for humans to use; however, an AI which already doesn't understand context could probably do about as well if trained on decompiled code - it will figure out your binary and not need your source. It would possibly do as well with that as it does with source code. Sure, it won't work as reliably, but good enough...
There are many binaries to train them on, more than source...
Re:Equilibrium will be found (Score:5, Informative)
Your post denies the status quo. That isn't how any of this works at all.
It's a distillation process. AI absolutely can (and is) being trained on AI-generated, AI-augmented, AI-processed, and AI-sintered data.
I'd suggest you familiarize yourself with where things are today (as opposed to 6 months ago, or 2 years ago). If you haven't reevaluated state of the art in the past 2 months with any depth, you're gravely behind.
Re: Equilibrium will be found (Score:3)
Looking at AI through rose-colored glasses isn't helpful either. I find the experience and results mixed in terms of productivity. Depending on what metrics are used for productivity and what problem is being solved.
It may feel like you're busy when you're engaged in writing prompts and pasting results and getting updates back quickly.
But measuring over the long term, it's been hard to see much improvement for us. We're not getting projects done sooner. Our team is not able to work on more projects at once. Bug
Re: (Score:2)
I need to only look at my git commit and feature velocity over the past 12 months to know precisely how useful it's been. Proof is in the pudding. I'm not fooling myself by copypasta'ing things. (I very rarely copypasta things. I design things and my prompts take 20+ minutes to complete.)
I'm shipping actual production product at an increasing rate, and the code and architectural quality is better than most of the people I've worked with in my career (because I'm the gatekeeper).
AI isn't getting more expens
Re: (Score:2)
Re: (Score:2)
I'm basing this off my own adoption and experience. I'm generally an extremely cautious late-to-adopt person, which makes this all the more hilarious to me.
The Akira License (Score:5, Insightful)
Re:The Akira License (Score:5, Funny)
Does the Akira License involve biker gangs and racing around on a motorcycle?
Re:The Akira License (Score:5, Funny)
Re: The Akira License (Score:2)
Re: The Akira License (Score:3, Funny)
I was picturing more giant mounds of pulsing formless protoplasm.
This is humour beyond the creative capabilities of modern AI!
One of these things is not like the other (Score:5, Insightful)
This has nothing to do with "vibe coding".
I'm also unclear on what "documentation traffic" and "bug reports" has to do with a project earning money. Is this about seeing advertising? Because I'm not going to contribute to a project if that requires me to look at advertising.
Re: (Score:3)
If the project is funded through donations and the donation link is never seen by human eyeballs, the donation link never gets clicked.
The given example, Tailwind, is monetized by having the base framework be free but licensing a collection of tools, including a Component Library. So if an LLM is pumping out examples rather than pointing users at the docs, the user isn't going to see the value-add subscription.
If a project has advertising and you see it, then you are already contributing. But if the LLM has to be "
Re: (Score:2)
The problem is that the value that is added is simply removed by the LLM.
There's little reason to pay for the Component Library if an LLM can build them without doing so.
What they're suffering is the equivalent of having a Ford Production Line paradigm shift in the number of "experts" that no longer need to pay them.
Re: (Score:2)
Re: (Score:2)
And it also has not much to do with open source, given that the code behind the Stack Overflow website is closed-source.
Re: (Score:2)
What? (Score:5, Insightful)
"Open-source maintainers monetize through documentation visits"
Open source is monetized through documentation visits? Not only do most open source programmers know how to use an adblocker, but I have also seen very few (thank god!) documentation pages with ads. What are these people talking about?
Re: (Score:3)
You go to the documentation site and somewhere is a "contribute" button that asks for a donation. Sometimes people realize that they appreciate the project and donate money. AI does no such thing.
Oh yes, I remember Stack Overflow (Score:2, Interesting)
That was the website where I couldn't answer questions in an area where I am an expert unless I had a certain amount of "reputation". Rest in peace.
Re:Oh yes, I remember Stack Overflow (Score:5, Informative)
Bullshit. Stack Overflow does not have, and never had, a minimum reputation requirement for answering questions. You can create an account and immediately answer a question with 1 reputation (the starting default).
There is an exception for answering protected questions [stackoverflow.com] (of which there are an exceedingly small percentage). To answer a protected question you need 10 reputation, which corresponds to a single upvote.
Re: (Score:2)
Bullshit
And this is exactly the sort of antagonistic and aggressive (and unnecessary) response that turned me off trying to contribute to Stack Overflow
Re: (Score:1)
And this is exactly the sort of antagonistic and aggressive (and unnecessary) response that turned me off trying to contribute to Stack Overflow
Bullshit, because writing "bullshit" on Stack Overflow would get flagged as "rude or abusive", and removed almost immediately.
Besides the opening statement that you seem to take great offense with, I'm just presenting objective facts, and the facts happen to completely contradict the gaslighting presented in the post I was replying to.
For the rest, I have no stake in the game.
Re: (Score:2)
But it is still full of passive-aggressive responses, senior users who use it to denigrate newbies and unhelpful "already answered" replies that do nothing to make it feel welcoming or supportive.
Re: (Score:2)
I think you make a fair point. It was somewhat the victim of its own success, in the sense that
1) its stated goal (providing a Q&A site with lasting value) conflicted with its perceived goal (providing help to anyone able to differentiate a mouse from a keyboard); and
2) after a while, most of the questions had already been answered, and would routinely turn up in search results.
Regular users, conditioned by the fact that, for a decade, useful Stack Overflow answers would pop up for pretty much any progr
Re: (Score:2)
Call it bullshit all you want. I'm sure I'm completely remembering it wrong when I recall writing out an answer and being told I couldn't post it because I didn't have sufficient reputation. And I'm sure I'm completely remembering it wrong that I decided to try posting in other forums there where I could collect some reputation. I'm sure I did that for just no reason and my recollection is totally wrong and you're a fucking genius.
Where does innovation come from? (Score:5, Insightful)
If you think about most successful open source projects, including Linux itself, they usually start small, with people seeing them as a potential future solution to a problem they are working on or just as something interesting to play with. They start slowly and grow as usability improves and interest increases. Eventually they become something truly useful and usage becomes widespread.
AI and vibe coding break that process at the early stages because there are no longer humans looking at new things and taking a chance on something new and unfinished. Open source relies on people looking forward, but AI can only look backwards. A future driven by vibe coding looks like it will be free of innovation. Sounds boring to me.
Re:Where does innovation come from? (Score:4, Interesting)
To say that open source is being killed by vibe coding is just... crazy. It's simply wrong.
There are numerous vibe coding apps now which were written entirely with vibe coding. An entirely new paradigm of development exists today which didn't exist even a year ago - claude code, opencode (omo/slim), codex, cursor, and on and on. Then you've got the agentic stacks, and everything else that's largely open and free. "Come use this vibe coded thing I built this weekend! It's live in production, you can see it working - and here's the git repo!"
Coding skills are no longer a barrier to iteration and improvement. There are so many cool projects out there now being done by people who have an idea and see a business case and want to fill it.
Aside from the projects, I've already taken 2 libraries myself and forked them to change (and improve) functionality for my specific use cases. I'm assuming they don't want my changes, but they can always pull them back if they want. They can see I've got a fork. My willingness to deal with "well they may not accept my changes" + slowing my own velocity is low. The repo is public, the commit comments are better than anything I've personally done in the past.
"AI and vibe coding breaks that process at the early stages because there is no longer the humans looking at new things and taking a chance on something new and unfinished"
Um... have you even tried vibe coding? You can one-shot a project in 20 minutes. I've done it numerous times - an old project I spent weeks writing specifications for, boom, done. I also now have a very useful data indexer which integrates smb shares with MacOS finder. Any sort of idea can now quickly come to fruition in a couple hours with a good set of prompts. Want to make an antiquated database format convertible to a newer platform, and reimplement the frontend? I once had to take a 15-year old physical SCO system running a proprietary database over to a virtual environment, 10 years back. It was a painful process, because SCO and failing hardware. But today? Once I got to that point I could've reimplemented it anew in a couple days, allowing those companies to expand the software capabilities they paid hundreds of thousands for at the time, to something which suited their current business needs (which were a paper and spreadsheet process).
A mildly capable office tech could take an existing git repo of their project tree and maintain it/add features and maintain the product well enough using vibe coding, instead of languishing for a decade, like they had to previously.
You seem to be missing the fact that LLMs have vastly exceeded prior functionality. Today, the frontier models are easily 2x what they were in October. October was easily 2x what they were in May of last year. May of last year? 2x as capable as they were the year prior. We're approaching exponential improvements, and models have been solving previously-unsolved NP hard problems: that's innovation.
If you have an idea, it can be done with vibe coding today if you have the intelligence and creativity to do it. Simple as. If you don't, you can't - and won't.
Re: (Score:2)
So says my *.ai
Re: (Score:2)
Re: (Score:2)
An LLM can only give answers based on what it was trained on, i.e. the past. It creates nothing new; instead it rapidly pulls together solutions from existing knowledge.
AI has learned the language from those code examples and repositories. What it does with the language is often (not always, mind you) original.
LLMs have learned English (and other languages) from vast amounts of written text. It is easy to use LLMs to create work that is original (say, prompt it to create a poem in Shakespearean style about AI utilizing tennis racket to paint a house - or whatever). Similarly with software development, AI has learned the syntax and coding styles for different programming la
Re: (Score:2)
AI has learned the language
AI has learned the pattern of the language. It's a small but very meaningful difference.
Re: (Score:2)
AI has learned the language
AI has learned the pattern of the language. It's a small but very meaningful difference.
Not unlike many people. https://blog.thelinguist.com/p... [thelinguist.com]
Re: (Score:2)
Good point.
Re:Where does innovation come from? (Score:5, Insightful)
Um... have you even tried vibe coding? You can one-shot a project in 20 minutes. I've done it numerous times - an old project I spent weeks writing specifications for, boom, done.
Have you even spent 5 seconds thinking about what you wrote here? If you need to spend weeks writing a specification and "vibe coding" it takes 20 minutes, it means you are either completely incompetent at writing specifications or your vibe-coded project simply doesn't do what you spent weeks specifying. It's as simple as that.
Re: (Score:2)
It wasn't a concerted "weeks" of effort, it was ideation over the period of weeks until I had a coherent and complete architecture. But thanks for your concern.
You realize that architecting software correctly takes time, right? Often much more than weeks, for any significant effort. You can't just use react and node for everything.
Re: (Score:2)
>> If you have an idea, it can be done with vibe coding today if you have the intelligence and creativity to do it. Simple as. If you don't, you can't - and won't.
I think there are a couple of definitions of 'vibe coding'. One is where a person who is not very familiar with writing software gets an app of some kind created by just going through an exploratory sequence of pretty vague prompts. It's very cool that people can do that now.
Another definition is where an experienced programmer can get
Re: (Score:2)
Yep. I've got an "ideas" folder in Apple Notes I've been curating for years - brief ideas I've had while falling asleep and need to jot down, ideas that came to me on the bus or the toilet, something I wish existed, etc.
I've been slowly working through that folder and implementing them. It's been amazing.
Re: (Score:2)
Coding skills are no longer a barrier to iteration and improvement. There are so many cool projects out there now being done by people who have an idea and see a business case and want to fill it.
This landed for me because I’m that person, just without the “coder” label.
I’m a sysadmin. If you widen the error bars enough to include shell scripting, sure, I “code,” but I’ve never had the patience or focus to be a real developer. Historically, that meant a lot of ideas stayed in the “would be nice” bucket unless they were directly tied to work and justified the learning curve.
Then I decided I wanted a tiny app to rearrange my desktop icons into a ci
Re: (Score:2)
Reasoning is merely the exercise of logical rules. The agents instantiate those rules for the transformer models and orchestrate - nothing more.
I don't understand your argument. It's irrational.
Re: (Score:1)
There is nothing irrational in pointing out that an LLM alone can not produce code.
You need more than that.
And it is a bit stupid and irrational to point out: "Reasoning is merely the exercise of logical rules. The agents instantiate those rules for the transformer models and orchestrate", because that is exactly what you do with your brain. You are just better at it than agents are. But: they are 100 times faster.
Re: (Score:2)
So the stupid arguments, that LLMs can not reason: are just stupid.
Because it is not the LLM that is reasoning, it is the family of agents on top of it.
Your words, not mine.
Re: (Score:1)
Correct.
And what is your point? You forget to make one.
Re: (Score:2)
Depends upon the domain in which the reasoning is requested. Reasoning out optimizations, resource usage, parallel planning, and similar known-heuristic designs for creating software? Yes, it definitely can reason. Add in the evolutionary algorithms and the theorem provers, and, yes, it can even innovate. Gemini DeepAlpha has some neat breakthroughs, for example. It's early days, but we're past the point of "it cannot reason at all." The Vision-Language-Action (VLA) models do fairly sophisticated planning f
Re: (Score:2)
Reasoning is working through a system of logical rules and priorities based on first principles.
Machines absolutely can and do reason, now. Far better than most people, and certainly faster.
Vibe coding is a lagging indicator (Score:4, Insightful)
I don't know how the next Python is going to get any traction, if table stakes for adoption is "language is understood by LLMs".
The current generation of coders won't use it if their LLM of choice doesn't understand it.
LLMs won't understand it if there's no training data, which comes from users.
I've always told people that coding would be automated last, if ever.
Apparently I was wrong, and will have to settle for being part of the last generation of coders that can actually read and understand code without LLM support.
Re: (Score:1)
Re: (Score:2)
I don't know how the next Python is going to get any traction, if table stakes for adoption is "language is understood by LLMs".
That’s a real constraint, but it’s not a new one, and it’s not uniquely LLM-shaped. Every language that ever got traction had to clear table stakes that weren’t technical purity: documentation quality, tutorials, books, community examples, tooling, package ecosystem, and the ability for a new user to get from zero to “it runs” without burning a week. LLMs just become another on-ramp, not the whole highway. The paper actually argues that vibe coding isn't a "lagging indic
Re: (Score:2)
The real risk isn’t that a generation can’t read code. The risk is that we stop expecting them to. If we treat LLMs as training wheels instead of prosthetic eyesight, we get a generation that ships faster and understands deeper. If we treat them as a replacement for learning, we get brittle systems and brittle people. That’s not a technology outcome. That’s a cultural and educational choice.
The ability to read and write code without support was in decline long before LLMs - FizzBuzz
Re: (Score:2)
The real risk isn’t that a generation can’t read code. The risk is that we stop expecting them to. If we treat LLMs as training wheels instead of prosthetic eyesight, we get a generation that ships faster and understands deeper. If we treat them as a replacement for learning, we get brittle systems and brittle people. That’s not a technology outcome. That’s a cultural and educational choice.
The ability to read and write code without support was in decline long before LLMs - FizzBuzz as a low bar dates from Spolsky in 2005.
If we're hoping for the right cultural and educational choices to save us ... we're screwed.
I don’t disagree with your pessimism regarding the low bar, but I think you are missing a key distinction between developers and coders. When I was hired as a sysadmin by Raytheon three decades ago, the interview wasn't a test of whether I could follow a manual; my maths-heavy CS degree already checked the regurgitation box. The interview was about the size of Windows' symbol table, the differences between Windows and Unix thread management, and a healthy dose of formal logic - the "minimum numb
Re: (Score:1)
The innovation comes from the guy/girl telling the vibe coding platform what to do.
That is a no brainer or not?
The Agents produce code you would otherwise write by hand.
What is the farking difference if the next for loop is spit out by an AI, I use Eclipse auto complete, or write it in vi by hand?
None, nothing, nada. It is the exact same sequence of characters.
Re: (Score:2)
But how does vibe coding contribute back to improving
Re: (Score:1)
The user of an LLM / coding Agent has to share his work.
Just like he did (or did not?) when he was not using an LLM / Coding Agent.
P.S. you do not use LLMs for coding. You use a coding agent. Very big difference.
fix for what? (Score:1)
"One proposed fix is a "Spotify for open source" model where AI platforms redistribute subscription revenue to maintainers based on package usage."
Proposed fix for what? And what incentive would motivate "AI platform" developers to involve themselves with "subscription revenue to maintainers"?
Open source, in particular GPL, is communistic, do we assume AI is as well? Because it sure doesn't seem like it. And why does open source funding need to be fixed? RMS created the GPL to compel others to give sour
Re: (Score:2)
I wonder if they know how "fair" Spotify distributes the money.
I'd rather say everyone who can afford it should have a day per year where they consider which open source projects they valued most in the past year, and then distribute whatever money they can afford between those projects. That also allows you to reward projects that treated you nicely, and to show the projects that treated you badly that you remember.
Change the Paradigm (Score:5, Interesting)
It's more accurate to say that the entire concept of libraries and frameworks is now obsolete. Why bother building and maintaining libraries of tested code when you can just generate it from scratch every time? The only reason we used libraries was to keep things maintainable and reusable. If you can just get an LLM to generate bespoke code on demand, and have it do exactly what you want and nothing else, then every piece of software can be a snowflake... unique and fragile, but infinitely replaceable. It's a paradigm shift, that's for sure.
Re: (Score:2)
If it comes true. I doubt it will. The reason we maintain libraries is because complexity grows nonlinearly with the size of a system. Keeping the systems small with well-defined interfaces manages that problem.
AI isn't magic. A computer might have greater capacity for tracking down complex behaviour than a human does but it isn't infinite. And the current systems, just like humans, do a lot better when they have good libraries to stick together than they do if you ask them for a big bare metal monolith.
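The nonlinear-complexity point above is often illustrated by counting potential pairwise interactions between components: parts grow linearly, but the interactions a maintainer (human or AI) must track grow quadratically. A minimal sketch of that standard illustration (not from the post itself):

```python
def pairwise_interactions(n):
    # Potential component-to-component interactions in a fully
    # coupled system of n parts: n choose 2.
    return n * (n - 1) // 2

# A 10-module system behind well-defined interfaces vs. a 100-part monolith.
print(pairwise_interactions(10))   # 45
print(pairwise_interactions(100))  # 4950
```

Libraries with narrow interfaces cut the coupled surface each change has to consider, which is why they help LLMs as much as humans.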
Re: (Score:3)
Re: (Score:3)
If you can just get an LLM to generate bespoke code on demand, and have it do exactly what you want and nothing else, then every piece of software can be a snowflake... unique and fragile, but infinitely replaceable.
I do not think you fully understand why things like libraries exist. Each byte stored takes "disk" space. You have thousands of programs at your beck and call. If every single program recreated the portions of ntdll.dll that it needed, the amount of space required to store it all would balloon rapidly. (sorry for the Microsoft example, but it will be more universally understood)
In other words, one of the reasons there is shared code is because we live in a real world and need to store things physically.
But,
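The storage argument can be made concrete with some arithmetic (all sizes below are hypothetical, chosen only to show the scaling):

```python
# Illustrative only: one shared copy of a library on disk vs. every
# program statically embedding the portion of it that it uses.
lib_size_mb = 2.0        # hypothetical shared library size
n_programs = 1000        # programs linking against it
embedded_share = 0.5     # fraction each program would duplicate

shared_total_mb = lib_size_mb                       # one copy, loaded by all
duplicated_total_mb = n_programs * lib_size_mb * embedded_share

print(shared_total_mb, duplicated_total_mb)  # 2.0 1000.0
```

The same multiplication applies to memory at runtime and to patching: one shared copy gets one security fix, a thousand bespoke copies need a thousand.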
AI does not kill open source--it steals from it (Score:2)
Not really (Score:2)
1) We have needed a more sustainable financing model for FOSS that matters for quite a while now.
2) "Vibe coding" will not matter in the long run. Its results are just too abysmally bad. People still need a few years to understand that, as those of low or no skill are always late in understanding the blatantly obvious. (Dunning & Kruger have an explanation for that ...)
Re: (Score:1)
It does not matter whether the shiny web site - which works perfectly fine but is an abomination of spaghetti HTML/CSS/JS - was coded by an incompetent web developer, or was generated by an AI agent based on an LLM fed with abomination code from incompetent web developers.
The result is the same.
The first case requires you to communicate with a human, and you never know how bad his code is.
The second case requires you to "communicate" with an AI agent, and you do not care how bad its code is.
Way number two is 10 to 100 t
Vibe is killing itself? (Score:2)
I don't know of a single company sharing proprietary code. So where will vibe be if there is no code to scan?
On another note. A while back people were worried that open source licensed code would make it in commercial licensed code and would create some legal issues. Isn't this also an issue with vibe generated code?
To me it seems that it's more of an issue that vibe coding creates a problem for proprietary code. GPL to vibe makes vibe code into GPL. Share and share alike.
OTOH (Score:1)
I love pulling a repo and having Claude explain it and figure out how to run it.
It used to take forever to get an unfamiliar repo to work because Linux.
This is actually one of the best uses of LLMs, getting someone else's software to work on my machine.
The fear of getting knee deep in the weeds because I don't understand the repo completely, is now gone.
AI coding (Score:4, Insightful)
I think at the rate this is happening, no source should be closed, or proprietary. All "closed source" companies should be REQUIRED to open up ALL their source code so AI can "index it" and anyone can use it. Else all AI companies should not be allowed to "index" any source code that is not in the public domain or under a license that is very lax.
Re: (Score:3)
Re: (Score:2)
That's not how the licenses are written. The licenses state that if you change something, you must give back. AI learning from open-sourced code is no different from a human doing the same, other than at a faster rate. If you GPLed your code, you can't expect anything back from anyone who uses AI to write code that is not a modification of your code. MIT licenses say do what you want with this code, so the rule applies even more to them.
Re: (Score:2)
No, GPL states that if you distribute anything, you must also provide the source code. Presence or absence of change is irrelevant.
See section 4, "Conveying Verbatim Copies", section 5, "Conveying Modified Source Versions", and section 6, "Conveying Non-Source Forms".
If a person doesn't accept the terms of the GPL, they may instead treat the software as if the authors said "All Rights Reserved", which means no distribution.
Re: (Score:3)
Given that the AI is just using code scraped from public sources, including public GitHub, GitLab, etc. repositories, how are any copyright licenses being handled, I wonder?
Stop right there, this basic premise is false. AI has learned the language from those code examples and repositories. What it does with the language is often (not always, mind you) original.
LLMs have learned English (and other languages') syntax from examples; it is easy to use them to create work that is original (say, prompt one to create a poem in Shakespearean style about an AI using a tennis racket to paint a house, or whatever). Similarly, AI has learned the syntax and coding styles for different PROGR
Forks are killing (open source) software (Score:1)
Re: (Score:2)
I fork projects to keep my own copy because I forget where I find things. If you don't want your project forked, don't put it somewhere where it can get forked.
AI doesn't kill open source it enhances it (Score:2)
TERRIBLE Conclusions Drawn Here (Score:3)
This is a textbook example of taking anomalies out of context and drawing some giant (false) conclusion.
First, Stack Overflow is unique: you can't compare it to any other site or project. Their decline started long before AI (with policies that encouraged chasing newcomers off the site), and AI just hastened it.
Second, Tailwind had a uniquely bad product/business model. Other OSS projects are doing fine, because they have products (eg. support contracts) people actually want. In contrast, better and free (open source) Tailwind UI libraries exist; the only reason people used Tailwind's (worse) library was because they found it on the docs page.
Tailwind's case has NOTHING to do with any other project ... unless they too are financially dependent on a terrible product that people only barely want, and will only buy if they see it mentioned on a docs page.
I would say AI will transform open source. (Score:2)
As someone who has been a software developer for 30+ years, and who now uses Claude to do 90% of their work, I disagree that it is killing open source. Not once in 30 years have I had the time or the energy to make an open-source project. Since using Claude I have made three, all GPL2 licensed. They are small, not world-changing, but it has empowered me to give back somehow. Open-source projects used to be passion projects, until people went to monetize them.
Stack Overflow? Let's be honest, 75% of what was there was crap.
Theft Engine Steals Things, Film At Eleven (Score:2)
Why do we need Tailwind? (Score:2)
I think the best one was when I wrote a parametric CAD program recently (not a web project this time; C#), and I needed a specialized PDF export engine because the existing ones cost money or sucked. So, I told the LLM what I w
This is tragedy of the commons, AI style (Score:5, Interesting)
This paper reads like any one of dozens of papers I had to digest for game theory classes back in college. Granted, that was thirty years ago and “optimization theory” has replaced game theory in the course catalogs, but the bones are the same: Nash is still hiding under the floorboards, tapping out equilibria with a broom handle. What the authors are really doing here is describing a potential tragedy of the commons, and dressing it in modern clothing. In their setup, open source is the shared pasture: maintainers are the shepherds doing the unglamorous work of reseeding and mending fence lines, and users are the cows. Vibe coding adds a new kind of cow, one that grazes constantly and at scale while leaving fewer of the footprints that normally pay the shepherds back: attention, bug reports with reproduction steps, patches, docs corrections, donations, consulting leads, the whole informal economy that kept a lot of projects alive. If that return channel dries up, the equilibrium shifts: fewer shepherds bother staying out in the rain, the pasture degrades, and everyone ends up worse off even though the short-term output looks amazing.
Nothing about that is conceptually novel. What’s novel is the pressure profile. I watched Red Hat go from an interesting way to monetize Linux in 1994 to a $34B IBM acquisition a quarter century later, which tells you there’s real money in selling stability, support, and risk management around a free codebase. But this paper is pointing at a different failure mode: not “open source can’t be monetized,” but “open source can be consumed so efficiently that the incentives to maintain it get vacuumed away.” The paper’s real kicker is what they call the software-begets-software effect. We’ve all seen this: a healthy ecosystem of libraries makes building the next tool trivial. That’s a virtuous cycle that helped FOSS explode. But the authors’ math shows this loop has a reverse gear. If vibe coding starves maintainers of the attention currency they need to keep the lights on, the ecosystem doesn't just stagnate—it contracts. Entry falls, variety shrinks, and the cost of building new software starts to climb because the foundation is rotting. We’re essentially using AI to strip-mine the very topsoil we need for the next harvest.
The models in the paper may be a bit too narrowly tuned to represent all of FOSS, sure. But where they’re right, they’re right in a way you can’t really argue against. If vibe coding siphons off funding, leaving some critical cluster of FOSS coders unwatered long enough, FOSS could be on a fast track to that tragedy of the commons.
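The reverse feedback loop described above can be sketched as a toy simulation. To be clear, the dynamics, constants, and the `simulate` function here are all my own made-up assumptions for illustration, not the paper's actual equilibrium model: maintainer "upkeep" is funded by per-use attention revenue, and surviving variety feeds back into next period's entry.

```python
# Toy sketch of the attention-revenue feedback loop. All numbers and
# functional forms are invented for illustration, not taken from the paper.

def simulate(steps, attention_per_use, usage_growth=1.05):
    variety = 100.0   # viable packages in the ecosystem (hypothetical units)
    usage = 1000.0    # total package pulls per period
    history = []
    for _ in range(steps):
        # Engaged users generate attention revenue (docs visits, donations,
        # consulting leads); vibe-coded users generate far less per use.
        revenue = usage * attention_per_use
        # Maintainers stay only if revenue covers a fixed upkeep cost.
        sustained = min(variety, revenue / 5.0)
        # Software-begets-software: surviving variety lowers entry cost,
        # so next period's variety grows slightly from what survived.
        variety = 0.9 * sustained + 0.1 * sustained * 1.2
        usage *= usage_growth
        history.append(variety)
    return history

direct = simulate(20, attention_per_use=1.0)   # users visit docs, file bugs
vibed = simulate(20, attention_per_use=0.1)    # AI agents skip engagement

print(f"direct access: variety ends at {direct[-1]:.0f}")
print(f"vibe coding:   variety ends at {vibed[-1]:.0f}")
```

Even with total usage rising every period in both runs, the low-attention run collapses early and never recovers its starting variety, which is the qualitative contraction the comment describes.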
This is not just about Open Source (Score:2)
If you realize that, for the traditional software industry, it is a little programming, then huge marketing, and after that an endless cash-in, that model wil
The real reason AI threatens Open Source (Score:2)
Tailwind CSS is a particularly hard case (Score:2)
I think the problems Tailwind is facing are a perfect confluence of factors. CSS in general has gotten a lot better over the last several years; you need way less detailed knowledge to implement a visual design than you used to. The CSS spec is also very well-documented, and there's basically infinity CSS out there for models to train on.
Tailwind's value proposition is that they make it easier to implement a consistent-looking visual style without writing a bunch of CSS; in particular, they handle the trick
GPL allows scraping, but not extraction (Score:2)
I had a fairly negative knee-jerk reaction to all the “AI scraping FOSS for training data is bad” comments surfacing in this thread. I realize that was because my FOSS instincts are still basically Stallman-era: public code is the point, reuse is the point, and the GPL exists to make sharing legally certain. If you put code out there under an open license, people reading it, learning from it, and building on it is not a bug. It’s the whole design. That should (and maybe legally does) include u