Programming

Stack Overflow Went From 200,000 Monthly Questions To Nearly Zero (stackexchange.com) 124

Stack Overflow's monthly question volume has collapsed to about 300 -- levels not seen since the site launched in 2009, according to data from the Stack Overflow Data Explorer that tracks the platform's activity over its sixteen-year history.

Questions peaked around 2014 at roughly 200,000 per month, then began a gradual decline that accelerated dramatically after ChatGPT's November 2022 launch. By May 2025, monthly questions had fallen to early-2009 levels, and the latest data through early 2026 shows the collapse has only continued -- the line now sits near the bottom of the chart, barely registering.
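
One rough way to spot-check those counts outside the Data Explorer is the public Stack Exchange API. The sketch below is illustrative only: it assumes the documented /questions endpoint and its built-in "total" filter behave as described in the API docs, and that the requests package is available.

    import requests
    from datetime import datetime, timezone

    API = "https://api.stackexchange.com/2.3/questions"

    def monthly_question_count(year: int, month: int) -> int:
        """Count Stack Overflow questions created in a given calendar month (UTC)."""
        start = datetime(year, month, 1, tzinfo=timezone.utc)
        end = (datetime(year + 1, 1, 1, tzinfo=timezone.utc) if month == 12
               else datetime(year, month + 1, 1, tzinfo=timezone.utc))
        resp = requests.get(API, params={
            "site": "stackoverflow",
            "filter": "total",                   # return only the total count
            "fromdate": int(start.timestamp()),
            "todate": int(end.timestamp()),
        }, timeout=30)
        resp.raise_for_status()
        return resp.json()["total"]

    print(monthly_question_count(2014, 3))   # near the ~200,000/month peak
    print(monthly_question_count(2025, 5))   # back down to early-2009 territory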

The decline predates LLMs. Questions began dropping around 2014 when Stack Overflow improved moderator efficiency and closed questions more aggressively. In mid-2021, Prosus acquired Stack Overflow for $1.8 billion. The founders, Jeff Atwood and Joel Spolsky, exited before the terminal decline became apparent. ChatGPT accelerated what was already underway. The chatbot answers programming questions faster, draws on Stack Overflow's own corpus for training data, and doesn't close questions for being duplicates.

Comments Filter:
  • by shm ( 235766 ) on Monday January 05, 2026 @09:48AM (#65902907)

    I attempted to answer some questions where I happened to have some domain knowledge, but didn't have the time or the inclination to meet their requirements.

    In many ways, they were like Wikipedia.

    • by CAFED00D ( 1337179 ) on Monday January 05, 2026 @09:53AM (#65902923)
      I had the same experience. They have a serious gatekeeping issue there. I got downvoted and I was the ONLY person answering!
      • Re: (Score:3, Interesting)

        by cellocgw ( 617879 )

        Did you even stop to consider that your (only) answer might have been wrong?
        We don't vote on an answer being better or worse than others: we vote on its objective accuracy.

        • by thegarbz ( 1787294 ) on Monday January 05, 2026 @12:35PM (#65903463)

          That's the problem with blind moderation. How is he to know? Contributing is more meaningful than moderating. The value is in the discussion and information exchange, not in the number.

          I actually think Slashdot has this backwards. Moderation is undone by posting. I think posting is an essential part of moderation.

          • by flink ( 18449 ) on Monday January 05, 2026 @01:43PM (#65903663)

            I always thought that it should give you a choice: if you post to a discussion you've moderated, then your moderations get de-anonymized or your moderation gets undone. You also shouldn't be able to moderate in a thread you've posted to.

            • Slashdot shouldn't have anonymous moderation to begin with. It's a bad idea. This ain't voting.

            • Why not though? Moderating as part of a discussion should be perfectly normal. If I think you're an idiot I should be able to mark you as one while also explaining to the world (and you) why that is the case.

              • Why not though? Moderating as part of a discussion should be perfectly normal. If I think you're an idiot I should be able to mark you as one while also explaining to the world (and you) why that is the case.

                Because many (most?) people use moderating just to bury posts they disagree with. Even if they're factually relevant. Which is generally what creates silos. That encourages an environment that is the exact opposite of "discussion". More "soapboxing", less critical thought.

              • explaining to the world (and you) why that is the case.

                I wish I could explain my moderation. I'm pretty light, usually using about 5 of my 15 allocated. But if I comment and moderate, you know the rules.

                I'm sure I moderate rsilvergun the most and have probably used all the categories, occasionally even ones that begin with the letter "i".

                By the way, when I have mod points, if someone starts their post with "mod me flamebait" or similar, I don't even read the rest of the post. I simply grant their wish and move on.

          • It's not a discussion site, it's a q&a site. Bad answers waste people's time if they're not flagged.
            • Bad answers waste people's time if they're not flagged.

              And then get repeated at the next Q&A if a discussion doesn't take place to explain why the answer is bad, wasting more people's time in the future. This is even MORE IMPORTANT for a Q&A site. There's no learning or benefit of understanding when the answer is "Bzzzzt WRONG! - deleted"

        • by AmiMoJo ( 196126 )

          From the user's point of view they get down votes and no hint as to why. If they make the mistake of asking, it just attracts more down votes.

          Down votes are unrecoverable. Even if the answer is fixed, very few people come back to undo their vote.

          Many highly up voted questions are closed due to being "off topic" or "duplicates" (they aren't). Many up voted answers are terrible or flat out wrong.

        • Have you stopped to consider the condescending prick who moderated the answer out of existence could have been wrong?

          I'm so glad I learned C programming well before Stack Overflow existed.

      • Gatekeeping (quality assurance) is their only value-add. It is the essential problem with crowdsourcing (if you manage to draw a crowd in the first place).
        • by jsonn ( 792303 )
          Except on most topics, the QA abysmally failed on StackOverflow. Speed of reply mostly matters more than correctness. The number of programming questions where the top answers are plainly wrong is legendary.
    • They're worse than Wikipedia because if someone has some information to add to a Wikipedia article which doesn't have some demon editor squatting atop it like the throne of Nyarlathotep, they can just go ahead and add it right away without having to ask anyone or jump through any stupid hoops.

      I'm not going to lurk and troll their stupid shitty slow site in order to find stuff that I can meet their points requirement with.

    • concur. there was some good information there, but asking and answering was a really onerous process to figure out, and i never did quite get the hang of answer vs. comment.

    • by abulafia ( 7826 )
      Yes, the Wikipedia similarity is because of the similarity of the contribution models. The core problem is moderation - you need rules tight enough to keep overall quality high enough that people trust it enough to want to visit. But doing that raises the cost for all contributors high enough that a lot of folks don't want to put up with it.

      Kind of an idle question, but I wonder if Spolsky saw it coming or got lucky.

    • I have a nonzero reputation [stackoverflow.com] on Stack Overflow, but I haven't written an Answer since May 2024 and the last of my Answers that has more than one upvote is from 2020. I for one hope it does weather the storm, but the high bar to making a "good" contribution is a challenge.

      And the comparison to Wikipedia is apt in some ways; but Wikipedia also has a lot of content on history, politics, and religion, and in the past had (perhaps still has) some editors who wanted to make sure certain modern social causes were

      • I receive regular upvotes for answers from years ago. I also receive upvotes for recent answers, but fewer than in the past.

        I think mostly what has happened is the questions that meet the standards of Stack Overflow have largely been asked and answered. There's just less stuff that you can't already find answers for.

        New languages and technology will probably see more questions and answers, but no one needs to post a new question about JQuery (which surprisingly many people still use and discuss on Stack Overflow).

    What's even more puzzling to me is the level of incompetence of their leadership. People have been complaining loudly about what a miserable experience SO was to use for over a decade and company leadership was happy to sit back and do nothing about it. They absolutely deserve what they're getting.

      Atwood and Spolsky got incredibly lucky cashing out at the top, given what a terrible job they did running the site.

    • Worse than Wikipedia, because anyone can post to Wikipedia, even if it's reverted 5 minutes later. Not on these sites -- you need brownie points first.

    • by labnet ( 457441 )

      This seems to happen in all orgs.
      The creative producers are the 10% who trailblaze.
      Then the usually well-intentioned bureaucrats arrive with their clipboards to keep order, ultimately destroying the thing they think they are helping.

      Be thankful that with tech another alternative can spring up with much less friction, unlike government, where the rot hangs around until it metastasises through the institution.

    • I didn't answer many questions, but when I did, they were upvoted, giving me the ability to moderate. In my experience (and I can't speak about your posts specifically) most down-voted answers were appropriately down-voted.

      • Yeah, I'm with you. I can't remember coming across objectively bad answers that were upvoted or many good answers that were downvoted. Sometimes answers can be dated with lots of upvotes, then you can come in with an updated answer and it won't see tons of votes, but it will get a few to raise it up to the top 3 or 4 answers.

  • by mlheur ( 212082 ) on Monday January 05, 2026 @09:57AM (#65902947)

    I used to give free IT Support in yahoo chat groups in the 1998-2003 era. People stopped asking for help in those html-based real-time chat forums sometime around when Stack Overflow was getting started.

    I think this news is just evolution that has no "right or wrong", only "kill or be killed, eat or be eaten".

    • by karmawarrior ( 311177 ) on Monday January 05, 2026 @10:52AM (#65903147) Journal

      My concern is that the replacement really isn't sustainable. ChatGPT is not going to get better, the concept is ultimately technically limited to the level of reliability we see today, and without people publishing answers to more current questions, it's going to get stale and lose its usefulness pretty quickly.

      I still find it hard to understand why that's so difficult for generative AI boosters to grasp. How will it be more accurate in 2035 if most of the information it bases its output on comes from 2025?

      And if, like some, you're all going to pretend that people will still post anything but political rants and other non-technical stuff to the Internet in 2035, explain why StackOverflow is dead.

      Congrats Altman, Nadella, Musk, and Pichai - we had a good thing going, you killed it. Assholes.

      • by mlheur ( 212082 )

        Totally agree.

        Also, where will AI gain new knowledge after authors & content-generators stop creating "novel" content, and stop applying the real intelligence that feeds the artificial one?

        • That is the question few ask (and fewer try to answer). Even if human authors continue to publish new content, it is increasingly diluted by all the AI generated content out there (which becomes the training data for future versions of the AI models; rinse & repeat). Unless new AI models use a totally different way of obtaining/generating training data, they will never be any smarter than they are right now (and will likely get dumber).

      • Man I long for the usenet of yore. Reddit is as good as it gets, and it is sometimes an adequate replacement.

        Incidentally, AI *could* be awesome for technical questions if there were decent documentation out there for systems (plus source code for software systems) and the LLM had access to it.

      • I still find it hard to understand why that's so difficult for generative AI boosters to grasp. How will it be more accurate in 2035 if most of the information it bases its output on comes from 2025?

        This is not hard. What do you do when Google doesn't get you what you need? How does Google get more accurate if everybody just googles stuff?

        You go to official documentation pages, to official user community forums, and then to unofficial community sites like SO, github, reddit, slack, discord, etc. Future accuracy will depend on availability of those sources, same problem search engines have today. Something might be provided on an unindexed corporate slack channel, then it migrates to some blog. Because

      • Every library/tool writer:

        ChatGPT, write concise useful documentation for this tool/library.

        Writing any docs (good or bad) used to take time. Writing any docs no longer really does. And if it's not already good at it, it'll probably continue to get better.

      • Actually, ChatGPT (and others) are *not* stuck in the present. For example, they can read documentation for a new version of a framework or language, and write code based on that. Even if people aren't asking new questions on SO, people *will* be writing new docs to describe their things. And AI will be trained on those. And with time, people will learn to write their documents in a way that is specifically targeted at LLMs.

    • by evslin ( 612024 )

      I recall using Experts Exchange before Stack Overflow became a thing.

      And by "using" Experts Exchange, I mean a link to a post on their site would come up when I would search for something online. The answer was ostensibly paywalled, but all you had to do was view source and the answer was right there, buried in the HTML. You just had to search the document source for "accepted answer" or whatever
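
      For what it's worth, that trick really was just "fetch the raw page and search it for a marker string." A rough sketch of the idea (the URL and the marker here are placeholder guesses, not the site's real markup):

        import requests

        def find_buried_answer(url: str, marker: str = "accepted answer") -> str:
            # Pull the raw HTML, the same thing "view source" showed you.
            html = requests.get(url, timeout=10).text
            i = html.lower().find(marker.lower())
            # Return a chunk of raw markup around the marker -- ugly, but readable.
            return html[i:i + 2000] if i != -1 else "marker not found"

        print(find_buried_answer("https://example.com/some-old-experts-exchange-question"))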

  • by RobinH ( 124750 ) on Monday January 05, 2026 @10:00AM (#65902963) Homepage
    I was involved with Stackoverflow in the early days and it really was a gamechanger. It was so much better than trying to find your answer by slogging through forums that it really took off. And gamifying the answering of questions really helped it grow fast. But there's always this point where the community gets swamped with rules-obsessed bureaucrats and drives the original contributors away, and that's what happened with me. The mods all became power-trippers obsessed with closing questions and keeping people in line, rather than the original purpose of helping people. And while Stackoverflow was in decline, LLMs are a massive nail in its coffin. Why would you ask a question on Stackoverflow, only to be treated like garbage, when you can ask a chatbot the same question and have it confidently tell you a subtly wrong answer, but politely?
    • If you keep asking, will the LLM keep trying to get it right, until it does?

      • by wagnerer ( 53943 ) on Monday January 05, 2026 @11:57AM (#65903347)

        For complex issues I've found it to start circling the drain.

        First answer: obviously wrong. I reply that it doesn't seem correct, are you sure? The LLM replies with kiss-ups and says I'm so smart for checking, it's obviously wrong, and this is the correct answer.

        The second answer looks self-consistent and seems to solve the case. But after moving forward with it, I see that it's an algorithm so fragile to edge cases that it blows up. I point out the cases, and instead of fixing the core issue it comes up with solutions for each unique edge case I present where it fails, producing a massive amount of spaghetti code.

        Third push and I get the first answer back again.

        The biggest value I see is that I'm very well informed about the issue now and code the thing myself in a manner that is efficient and resilient to edge cases.

        Honestly it reminds me of grad school when I had to grade undergraduate papers. I spent the bulk of my time trying to figure out what rationale the student used to come up with the wrong answers in order to figure out how much partial credit to award. It gave me a very thorough understanding of the core concepts. Still remember one hard mid-term where every student's answer was different to a problem that only had one solution. Had to dig deep into all the concepts of that question only to realize that the subtle calculations of production efficiencies were a moot point since the parameters specified in the question would have caused a massive explosion in real life obliterating all the products.

      • Well, it will keep trying, for sure. Each time you ask, it will give you a little different answer, especially if you qualify and rephrase your question. It doesn't get annoyed with your follow-up questions and cut you off, unlike what happened on SO.

    • """The mods all became power-trippers obsessed with closing questions and keeping people in line, rather than the original purpose of helping people"""

      This..

      I've been on the hunt for an answer to some obscure pickle I was in and found exactly the question I needed answered showing up in my search results. I go clicking on it to see what the responses were, only to see that there are no responses and a mod said "this does not meet the requirement for bla bla"

      Most recent annoying one - someone was asking on StackOverflow

    • "The mods all became power-trippers obsessed with closing questions and keeping people in line, rather than the original purpose of helping people."
      This. (Not sure about all the mods, but at least some.)

    My experience with SO reflects yours as well. Further, I'd say that the percentage of correct answers from LLMs is roughly the same as correct answers from SO. BUT AI will helpfully customize the answer to fit my exact code, saving me time because I don't have to go rename a bunch of stuff to try it out.

      • by RobinH ( 124750 )
        I have to admit that there have been a couple times when a query punched into Google provided me with an AI answer (typically a one-line... "how do I format this date in a certain format") that was slightly faster than following the next link down to a StackOverflow answer and reading that one, but I really have to wonder if the unbelievable amount of human effort and capital that's been poured into this technology is worth it, when it doesn't really solve a problem that I had. It solves a problem that Goo
        • Well, since Google is the one paying for the time and effort, it only has to be worth their money for *their* purposes.

          Oh have no fear, they will absolutely find ways to monetize this new platform. And sites will find ways to get people to visit. Frankly, if sites provided content solely to get advertising revenue, I feel very little compassion for them. They can die, as far as I'm concerned.

  • From my experience, I mostly find answers in several-year-old postings, and even then not consistently enough. There are also a lot of bad answers. Not that LLM answers are any better. There is quite a bit of slop produced both by humans and machines.

    • Tend to agree. I never asked a q on stackoverflow, but searches have ended up there. Often the question matches what I am looking for, the answers range from decent info to gibberish. I expect part of the downturn prior to AI was there were so many questions already answered, why ask again? The information repository is 90% complete, so it becomes more of a read only site. I'll still check them for answers. I'd actually put stack above AI in terms of answer quality. And as with anything online, answers are
  • ...now that stackoverflow and other types of forums are soon to be gone? I think at some point in the future, when answers to questions cannot be found online, AI will not be able to find a solution.

    • I see a corollary where corporations enact mass layoffs in name of profits and efficiency, only to wonder why their customers no longer have the disposable income to buy their products.

      • Recall Henry Ford was just about the only CEO who went the other way. He dropped the price and gave his workers significant raises so they could afford to buy the product he was selling.

        Then some assholes in SCOTUS handed the keys to the Dodge Brothers.

    • For one thing, AI can train directly on the documents that come with a product, or describe a programming language or framework. They don't *just* train on human conversations like SO. You can point AI to a language document and ask it to write code based on what it finds there, and it will do it. Over time, I expect documentation writers will learn to write documents specifically designed for AI to train on.

  • by Anonymous Coward

    Oh well, I guess we're back to sending all our questions to Expert Sex Change.

  • by Mahalalel ( 1503055 ) on Monday January 05, 2026 @10:13AM (#65903009)
    I've been on both sides of Stackoverflow over the years. After a while I got tired of responses consisting of "why would anyone want to do that?" Thanks for nothing.

    I tried contributing. After several times of the mods deleting my detailed answers because I didn't add some tag or format something following the 37 requirements for a post, I gave up. I felt like everyone was missing the point of being helpful and just wanted to follow some rules for the sake of following rules.
    • by dmomo ( 256005 )

      I had a similar issue here. Another problem I found was the notion of the "accepted" answer. This is great in theory, but only works as a snapshot in time because languages evolve. There was no simple way to post the same question again, or pin the "accepted" answer to a language or software package version.

      I often went down the path of implementation suggested only to find that the problem was solved in a more modern way after a version update. The community just couldn't keep up with the evolution

    • There are no tags on answers …

    • Oh yes, the good old "why do you want to do that" reply, so you explain why you want to do that only to be told by someone else that you'll be better off buying a new PC or reinstalling the OS or something completely different. In the end you end up going elsewhere like Reddit etc. and getting told how to do what you want to do - Great!!!

      Now, you just ask one of the suitable AI chatbots and you get your answer right back, along with 99% working code

  • by Carcass666 ( 539381 ) on Monday January 05, 2026 @10:13AM (#65903011)

    While I am not a "vibe" coder, I find myself increasingly using AI to get answers to questions that I used to post on SO, and getting pretty good results. Asking a question on StackOverflow has always been a last resort, because that is the way they want it. There is often as much energy expended on why a question shouldn't be answered as on answering it. The mechanisms to search for an existing answer are not good enough to keep you from DEITY FORBID posting a duplicate. Also, the site is not good at dealing with stale content. The "accepted" answer is often no longer the currently correct answer. The level of snark and condescension is significant, and while I don't mind a bit of sarcasm now and then, I don't want to get in continual debates with someone on whether what I'm trying to do is worth doing.

    Contrast the experience with AI tools. They do not yell at me for asking a question that could have been found via web search. I do not have to wait, I get an immediate answer. Yes, these tools are overly sycophantic, and sometimes they are wrong. But 90% of the time I will get a workable response and I can choose to have my code updated based upon the answer. In those cases where I don't get a workable result, I either post an issue on the repo (open source) or contact the vendor (closed), which isn't all that different from what I do with SO

    Peer-based discussions are useful, and if sites like Stack Overflow disappear, we've lost something. SO can perhaps remain relevant by being better at facilitating deeper design and architecture discussions and leaving the simple "how do I" questions to AI.

  • It was just a majority of low-skilled people asking simple questions and a minority of moderately knowledgeable people trying to make themselves look like gurus by answering simple questions. Real experts were few and far between.

    At best, it was a source of inspiration as to what might be wrong with your code. You still had to take that inspiration and write the code yourself if you wanted anything good since most answers were either irrelevant, incorrect or highly specific to someone's use case.

    It's sort o

  • i gave up on the site pretty quickly as most of the “answers” were either non-answers or attitude.

  • by cathector ( 972646 ) on Monday January 05, 2026 @10:36AM (#65903097)

    IDK,
    i bumped in to the occasional overly-rigid process hiccup at SO but by and large i found it to be a tremendously effective tool, both for asking and answering Qs. imo, the last fifteen years or so of software development owe an awful lot to SO. the rigid rules people are complaining about here (eg closing questions as dupes) definitely contributed to SO's effectiveness.

    that said, i'm using past-tense for SO. as much as i owe SO, the many AI helpers are easier and usually better.

    it does beg the obvious question of how the AIs are going to get training material for future technologies.

    • I'm sure your post would be deleted on SO for lack of capital letters.

    • it does beg the obvious question of how the AIs are going to get training material for future technologies.

      Prior to Stack Overflow you had to be really good at reading docs, reading code, know the right forums / IRC servers, occasionally just reach out to someone, etc. -- it doesn't even seem possible to quantify the impact it's had as a fulcrum for organized, vetted, expert dev knowledge. Stack Overflow is dead now, and I don't think AI will ever give us a full replacement for what it provides.

  • by dcollins ( 135727 ) on Monday January 05, 2026 @10:37AM (#65903099) Homepage

    The "monthly question volume has collapsed [to] about 300" appears to be an artifact of the last category being the current month, January 2026, even though we're only in the 5th day as of this posting. (Specifically, 321 as of Jan-5 @ 10:30 AM EST). I expect the number will be in the 2,000+ range by the end of the month, matching both recent trends and a simple extrapolation from the first few days. (Which would still be the worst full month on record.)
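
    A back-of-the-envelope version of that extrapolation, using only the numbers in the comment above (the elapsed-time figure is approximate, so the projection is rough):

      questions_so_far = 321                 # reported total as of Jan 5, ~10:30 AM
      days_elapsed = 4 + 10.5 / 24           # Jan 1 through mid-morning Jan 5, roughly
      rate_per_day = questions_so_far / days_elapsed   # about 72 questions per day
      projected_january = rate_per_day * 31            # about 2,200 for the full month
      print(round(rate_per_day), round(projected_january))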

  • by rsilvergun ( 571051 ) on Monday January 05, 2026 @10:39AM (#65903107)
    I'm not sure what's going to happen. I know that the billionaires who built out these AI coding tools aren't just going to let the whole system collapse so it makes me wonder seriously how they're going to train their AI in the absence of tons and tons of free access to programmers asking questions and programmers answering questions.

    It doesn't really matter right now because it'll take a few years at least for that to be a problem but still.
    • You find a way to insert the word "billionaires" into every post, don't you!

      Well, to answer the question, AI doesn't just train on human Q&A sites. It also trains just fine on original documentation, like documents that describe the syntax of a language or framework.

      • So your thinking is too childish and simplistic for me to use the phrase ruling class. So I have to fall back on billionaires.

        If you had graduated the 6th grade without getting a participation trophy I wouldn't have to do that but well, here we are. Imagine how much better human civilization would be if they had just held you back a couple years
        • Whether we call them billionaires or the ruling class, the subject here is AI training, which makes the discussion of "the rich" off-topic.

  • I usually only go to Stack Overflow when I have legit difficult questions I've put a lot of time into. Last summer, I had some serious Python-Qt6 issues. I tried to be as detailed as possible and quickly got downvoted and the question was closed. I created a very clean, slim minimal example and the 2nd question got downvoted, with comments saying it wasn't minimal enough. What's worse, the closed questions get removed and not even the original poster can see them in their history!!! I had to fish them out of my e-mail.
    • I think more people are doing hints-in-comments these days because they don't want their responses sucked up by the LLMs

      All that does is hurt classic search and help LLMs.

  • Aggressive moderation is responsible for declining engagement over the past decade. Today, well, nobody would have gone there in the first place.

    Previous Decline: When you're knowledgeable enough to properly ask the question and not be shut down by moderators you are already capable of figuring it out for yourself, so asking it is worthless unless you're just looking for likes or whatever. I guess I'm jaded because I have a history of initial configuration issues with weird conflicts breaking everything bef

    • by La Gris ( 531858 )

      Aggressive moderation was what made it worthwhile compared to web forum posts that contained very low quality, unmoderated, uncurated, and too often broken/deleted and scattered content.

      In the late 80's early 90's, the main source of knowledge was Usenet NNTP forums.
      In the late 90's to early 2000's it started to shift to web forums; problem was, forums were low quality, did not last

      When StackOverflow arrived, it was refreshing: curated content and limited duplicates made it a quality source. And yes I was also answere

  • "draws on Stack Overflow's own corpus for training data" -- and just where is AI going to get its training data on future issues that come up? You can't exactly feed it a manual and hope it will understand and be able to extrapolate answers; it just regurgitates what's already been posted with a certain degree of "confidence". So if SO goes, because everyone is relying on LLMs, it becomes an ouroboros situation.

  • by Wookie Monster ( 605020 ) on Monday January 05, 2026 @10:54AM (#65903157)
    Have LLMs killed the goose that laid the golden egg? They need to be trained on data from sites like SO, but if no one is asking new questions, there won't be any new answers, and so no new training data will become available. And if all the traffic to SO is training bots, they won't make any money from ad revenue, and so they're toast.
    • I think this might be one of the reasons why microsoft et al want full snooping of your machine. Because they can use as training data the whole loop: your questions to the AI, and what you do with the answers.
    • Have LLMs killed the goose that laid the golden egg? They need to be trained on data from sites like SO, but if no one is asking new questions, there won't be any new answers, and so no new training data will become available. And if all the traffic to SO is training bots, they won't make any money from ad revenue, and so they're toast.

      Why would people stop asking new questions? Either LLMs have answers for everything, or they don't and people will ask questions somewhere else. You can't have it both ways.

      SO isn't the last place on the internet to find an audience that might have an answer to a question you can't get answered any other way. If LLMs answer ENOUGH questions that running a dedicated Q&A site is no longer profitable, people will ask hard questions on special interest sites, blog comments, GitHub, slack, discor

    • LLMs can also be trained on official documentation, it isn't just human forum posts.

  • Stack Overflow was awesome, but now I have an API and documentation expert on _everything_ about software development sitting in a chatroom giving me expert answers and advice right when I ask for it. It's like Linus Torvalds and an army of software project leads are just sitting there waiting to answer when I come with my problem. I don't even search anymore, I ask the question, add layers and "if not, then how" and "how would you do this specific thing" and buddy AI spits out an essay on how to do it and

  • was from imposing so many constraints on question asking that it drove people away. Many SO users and mods are so pretentious that they will drive any new users away. In the early days it was not so, you didn't have to be an expert or have an in-depth knowledge of how SO works to post a question. Now users that post questions are also shut down if they aren't great questions, and not many help users improve questions. I understand that there needs to be some kind of quality maintained, but I am an active us

  • Anecdotally, Slashdot comment counts are also down from what I remember them being. Same on other forums like hackernews. Stackoverflow had its problems, but AI slop lowered the incentive for anyone to contribute online.
  • by Curlsman ( 1041022 ) on Monday January 05, 2026 @01:14PM (#65903587)

    I learned to read as a child, so by college I would read the vendor documentation, programmer notes, and application guides, the existence of which drove a significant purchase recommendation around the Win 2 & 3 migration from DOS: don’t buy something that doesn’t have a printed manual. Acrobat and PDF replaced printing, but it was always the content I used.

    Now I deal with (mostly) managers who want to know what numbers add up to 2, what does “add up to” mean (because they overheard the question in a Teams meeting they weren’t involved in), and can it run on Linux by someone else such that they can continue to do as close to nothing as possible (really asking for telepathy so the response is returned effortlessly).

    Asking for any such documentation of internal projects now leads to requests for time travel back to the last upgrade rushed into production, because using a calendar to schedule projects in advance is not a manager job requirement.

    The best thing I imagine an AI could say is to pull your head out of your ass. Across my 40 year career, there are several times I should have been told that and every one would have been better off.

  • "...Stack Overflow improved moderator efficiency and closed questions more aggressively." I don't know about moderator efficiency, but it got to the point where I could hardly ask a question that wasn't closed by a moderator. I pushed back on one and got it re-opened, but the work to do so left me not wanting to ask another question.

  • "The decline predates LLMs. Questions began dropping around 2014 when Stack Overflow improved moderator efficiency and closed questions more aggressively"

    By making it hostile to the people it is intended for.

  • I think Google has more to do with this than people think. Google has been heavily promoting Reddit for everything now that they get to scrape their data to train their AI. Stack Overflow results almost always come below Reddit results now. And I definitely don't believe for a second that Reddit is a higher quality result for the shit I'm looking for.

  • With so few questions being asked, they must be getting really high quality answers and quickly. Now's the time to be asking questions! Lots of volunteers standing by awaiting your questions, wanting to stand out from the hundreds of other board volunteers.

  • 1) A piece of code segfaults. I don't understand why. I post it as Stackoverflow question to make sure there is no obvious mistake with the code itself.
    2) First responder wants complete compileable example.
    3) I answer them that the code itself doesn't crash always and it's part of a large project so I cannot post the full code.
    4) First responder makes sarcastic response involving crystal balls.
    5) 4 downvotes. Ask for topic closure.
    6) Someone finds the actual error with the code and posts the correct solution.

  • Like Wikipedia, they were great in the early days. Then the site was taken over by pedantic idiots, and it was no longer worth the effort to contribute.