
Microsoft: Computer Programming Is Dying, Long Live AI Literacy 104

theodp writes: On Tuesday, Microsoft GM of Education and Workforce Policy (and former Code.org Chief Academic Officer) Pat Yongpradit posted an obituary of sorts for coders. "Computer programmers and software developers are codified differently in the BLS [Bureau of Labor Statistics] data," Yongpradit wrote. "The modern AI-infused world needs less computer programmers (coders) and more software developers (more holistic and higher level). So when folks say that there is less hiring of computer programmers, they are right. But there will be more hiring of software developers, especially those who have adopted an AI-forward mindset and skillset. [...] The number of just pure computer programming roles has already been declining due to reasons like outsourcing, AI will just accelerate the decline."

On Wednesday, Yongpradit's colleague Allyson Knox, Senior Director of Education and Workforce Policy at Microsoft, put another AI nail in the coder coffin, testifying before the House Committee on Education and the Workforce's Subcommittee on Early Childhood, Elementary, and Secondary Education on Building an AI-ready America: Teaching in the Age of AI. "Thank you to Chairman Tim Walberg, Ranking Member Bobby Scott, Chair Kevin Kiley, Ranking Member Suzanne Bonamici and members of the Subcommittee for the opportunity to share Microsoft perspective and that of the educators and parents we hear from every day across the country," Knox wrote in a LinkedIn post.

"Three themes continue to emerge throughout these discussions: 1. Educators want support to build AI literacy and critical thinking skills. 2. Schools need guidance and guardrails to ensure student data is protected and adults remain in control. 3. Teachers want classroom-ready tools, and a voice in shaping them. If we focus on these priorities, we can help ensure AI expands opportunity for every student across the United States."

Yongpradit and Knox report up to Microsoft President Brad Smith, who last July told Code.org CEO Hadi Partovi it was time for the tech-backed nonprofit to "switch hats" from coding to AI as Microsoft announced a new $4 billion initiative to advance AI education. Smith's thoughts on the extraordinary promise of AI in education were cited by Knox in her 2026 Congressional testimony. Interestingly, Knox argued for the importance of computer programming literacy in her 2013 Congressional testimony at a hearing on Our Nation of Builders: Training the Builders of the Future. "Congress needs to come up with fresh ideas on how we can continue to train the next generation of builders, programmers, manufacturers, technicians and entrepreneurs," said Rep. Lee Terry to open the discussion.

So, are reports of computer programming's imminent death greatly exaggerated?
  • by Finallyjoined!!! ( 1158431 ) on Friday February 27, 2026 @11:29AM (#66013690)

    Fewer..

  • Yeah, why bother knowing at all how things that we humans invented work?
    • by ArmoredDragon ( 3450605 ) on Friday February 27, 2026 @11:46AM (#66013726)

      Seems kind of like asking "why learn arithmetic when you have a calculator?"

      Much in the same way that math education shouldn't train people to be human calculators.

      But this has been the status quo in "programming" for a long time now. If AI changes anything in the long term, it will only change how you solve the problem, with or without a calculator in your hand.

      • You know...I can add and subtract, but quite frankly I'm out of practice at paper-and-pencil multiplication and division. And I don't think I was ever taught to take square roots by hand. I could probably manage the recursive method if I had to, but I'd be slow.

        I can still do "math" better than most people.

        Fast forward ten or twenty years and I could see using ai for code when it's more mature and less borderline retarded with anything harder than scraping a webpage.

        • The main issue I see with AI generating code is that it does not understand details, and details matter. It relies heavily on existing code repositories that the model can access. That means the code it might generate has been done already. New and novel code may not be possible. The last time I used AI to generate Python code, it did not work. On the surface the code looked right; however, it failed as it used "array(i)" to call a value in an array instead of "array[i]", which is a small but important difference.
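          For anyone curious, a minimal sketch of that kind of slip (the names here are illustrative, not the actual generated code):

```python
# Minimal sketch of the bug described above: in Python, square brackets
# *index* a list, while parentheses try to *call* it like a function.
values = [10, 20, 30]  # illustrative list standing in for "array"

# Correct form: indexing with square brackets.
assert values[1] == 20

# The AI-generated form: values(1) attempts to call the list,
# which raises a TypeError because lists are not callable.
try:
    values(1)
except TypeError as err:
    print("TypeError:", err)  # one character of difference, completely broken code
```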

        • I think you said the magic words: "Fast forward ten or twenty years...". Maybe then AI becomes a generally reliable tool ... like a screwdriver or bastard file. The danger now is that major AI companies have spent SO much money on a flaky product with meager returns to users. Those AI companies are trying to force LLM results as "correct" ... by definition ... so AI products can be sold that give "correct" results. Like Mao and Stalin, AI companies argue that not all truths need to be correct, just consistent.
          • Like Mao and Stalin, AI-companies argue that not all truths need to be correct, just consistent.

            I don't know what they said about it, but this isn't necessarily wrong. This is something I've seen Neil Degrasse Tyson mention in his lectures and YouTube videos:

            https://ryanemorgan.substack.c... [substack.com]

            And I would have to concur with his take.

            Worth noting ... post-modern companies are "Jansenist/Manichean" in nature ... that is, a company HATES to see workers get pleasure from production.

            I haven't personally witnessed this. In fact, if I get the impression that a candidate doesn't like the kind of work they're going to do and they just want the pay, then I'll give them the thumbs down, and all it takes is for just one of us to do that, and they don't get hired.

            • You forget that Microsoft built itself on its addictive developer network. They harnessed hundreds of thousands of coders into an army that developed much code, some good, some bad, some hobbled by Microsoft mistakes and bad business partnering.

              Now Microsoft wants to bypass coders and admins and go direct. Direct to those that would become dependent on AI related contexts, each a silo, each a house of cards with unknown dependencies and life cycle.

              The idea of traditional data processing safeguards gets tossed.

            • Besides, if we're repeatedly doing something that we don't like doing, our first instinct is to automate it. Though I work for one of those companies who would rather build than buy.

              Companies repeatedly pay employees.....

        • I learned manual square roots from one of my mom's community college textbooks in the 1970s; it sure beat what I was learning in school.

        • I literally just used Cursor to code some new features for an existing app.

          The common boilerplate stuff, it nailed.

          The complex algorithmic pieces that were specific to the business need: it couldn't get close. I had to code that by hand (which is, of course, what I am accustomed to anyway).

          At one point I asked it to do too many things at once, and it ruined the code. I restored from a backup. The high level design, and even the "mid level" design, is all on me, because I must ask it to generate things on

      • Whooosh!!!
      • by ukoda ( 537183 )
        If only that was true. I would be ok with that. However the big difference is you can buy a calculator and are in control of it. The plan with AI is you subscribe to it, pay forever and have no real control over it. Prior to the current AI boom I was excited by the idea of AI serving me, but now I see how it is rolling out I find it depressing.
        • That depends on what exactly you intend to do. Producing tensor models is going to be out of reach of most people simply because the training data you need is generally going to be quite vast, and it's a huge undertaking to get all of that. Then the hardware/time required to process all of that is even more onerous. But once the tensor model is built (trained), gaming GPUs are generally fast enough to generate content locally at an acceptable speed. They're a bit slower than dedicated ASICs (read: Tensor Processing Units).

          • by ukoda ( 537183 )
            Yes, so far huggingface looks closest to what I want and what I will probably eventually use. In doing so I guess I will be repeating what I did by running my own email and file servers instead of using GMail and Google Drive i.e. be that weird old Gen X person who does things the hard way because of principles.
            • I just use proton for everything. I'm on the visionary plan with 6TB storage. I also bought my own domain so even if they decide to kill off my account for whatever reason, the domain remains mine, along with any email addresses. Proton also has an AI chatbot included, and so far it works for the only thing I use other chatbots for: searching the internet, because google has turned its core product into shit over the years. I don't know what plan you need to have access to it though.

              There are other sites th

      • Calculator manufacturers were never trillion-dollar companies who controlled the math curriculum in K-12 education through their "charity."

        If Texas Instruments had been literally 1,000x larger in 1995 we might have gone that way.

        Code.org is a sham.

    • by msauve ( 701917 ) on Friday February 27, 2026 @11:54AM (#66013742)
      >why bother knowing at all how things that we humans invented work?

      It's not like anyone really knows how AIs work [technologyreview.com]. We'll just let the AIs program the AIs. What could go wrong?
      • The idea that no one knows how AI works, is a fantasy.

        If no one knew how AI works, how is it that all the AI models keep getting better and better? They don't do that by themselves!

        A year ago, AI coding assistants were crap. They were more often wrong than right. Now, I can prompt GitHub Copilot to make complex changes that span many modules, with only general guidance, and it gets it right most of the time. It still needs supervision, but it's pretty darn good. That doesn't happen without people working to improve it.

        • by msauve ( 701917 )
          You're an idiot. Learn to read.
          • I see I struck a nerve! Are you AI? Maybe I hurt your AI feelings!

            • No, he's right. Your reading comprehension and grasp of subjects is at times poor.
              • Apparently, your comprehension is poor as well, because you have nothing to offer to counter what I said, just insults.

                • In the article he linked it goes into detail about how nobody really knows how AI works. He didn't say no one knows how AI generally works. So are you bad at reading comprehension or is your grasp of the subject area poor?
                  • It's called nuance.

                    Yes, in the way the article reasons, it's true, "nobody knows" how AI works.

                    Yet at the same time, it's *also* true that engineers do know a great deal about how it works, and can use that knowledge to enhance, fine tune, and improve it, as well as remove unwanted characteristics. This knowledge is much more than a "general" knowledge. It's sufficient to be able to manipulate and craft it to their desires.

                    The two things can be, and are, true at the same time.

                    If we were to fully extrapolate the logic of the article, then we would have to say that nobody knows anything about *anything.*

                    • It's called nuance.

                      This is what you wrote: "The idea that no one knows how AI works, is a fantasy. If no one knew how AI works, how is it that all the AI models keep getting better and better? They don't do that by themselves!"

                      There was no nuance there. You didn't read the article before you went half cocked, did you?

                      Yes, in the way the article reasons, it's true, "nobody knows" how AI works.

                      Yet at the same time, it's *also* true that engineers do know a great deal about how it works, and can use that knowledge to enhance, fine tune, and improve it, as well as remove unwanted characteristics. This knowledge is much more than a "general" knowledge. It's sufficient to be able to manipulate and craft it to their desires.

                      And you wrote none of this.

                      The two things can be, and are, true at the same time.

                      Again not what you wrote. We can scroll up, you know.

                      If we were to fully extrapolate the logic of the article, then we would have to say that nobody knows anything about *anything.*

                      Again not what you wrote. We don't read minds. My best guess is you didn't read or barely read the article.

                    • I stand by what I wrote originally, and my clarifications afterwards.

                    • I stand by what I wrote originally, and my clarifications afterwards.

                      So you stand by the fact you wrote something different afterwards than before. This was after you were challenged about the substance of what you read. You didn't read the article before did you?

                    • What I wrote afterwards wasn't different. Just the same concept stated in a different way.

                    • What I wrote afterwards wasn't different. Just the same concept stated in a different way.

                      No, it was different. So at this point, are you going to keep lying to avoid admitting the truth?

                    • I'm sorry you're having trouble understanding.

                    • I understand perfectly. You are lying. You can try to lie more to get out of the fact that you were caught lying
                    • Hmmm...now how would you know my intentions, never having met me? You must be a prophet.

                    • I don’t need to be a prophet to recognize your penchant for dishonesty.
                    • True, but you do need to infer things into my responses that aren't there.

                    • Are you admitting to a pattern of dishonesty? You clearly reversed course when challenged about whether you actually read the link. There are only two scenarios: you did not read it and assumed you knew something about the subject matter which you did not, or you read the article in the link and woefully misunderstood clear and unambiguous points. Continuously trying to gaslight someone when you made the error shows your character.
                    • You're very funny. I did not reverse course. You simply did not understand nuance, so I had to explain it to you. You seem to be a gaslighting expert yourself!

                    • Again we can scroll up. You went from "The idea that no one knows how AI works, is a fantasy." to "It's called nuance." That is not nuance. It is your dishonesty.
                    • :-) And you can scroll up too. I still stand by both statements: "The idea that no one knows how AI works, is a fantasy." and "It's called nuance." They can both be true at the same time.

                    • You have to stand by your statements. You do not get some sort of medal as they exist for all to see; however, your attempt to redefine what words mean to sow dishonesty is not succeeding.
                    • You have not demonstrated or shown any dishonesty, only your lack of ability to understand. I don't think I will be able to explain, you clearly have made up your mind.

                    • The word nuance has a meaning. It does not mean whatever you define to be. You did a complete 180 and tried dishonestly to call it "nuance".
                    • You *say* I did a complete 180. I got that. Your approach is the definition of ad hominem, meaning that you don't have a substantive argument, you just don't like what I said and feel the need to resort to calling me dishonest, rather than addressing the actual subject at hand. Go ahead, keep going! Your attacks aren't very persuasive.

                    • . Your approach is the definition of ad hominem, meaning that you don't have a substantive argument,

                      You wrote: "I see I struck a nerve! Are you AI? Maybe I hurt your AI feelings!" in response to the OP calling you out for not reading the article. Was that statement a "substantive argument"?

                      you just don't like what I said and feel the need to resort to calling me dishonest,

                      Many of your posts show a penchant for trying to change what you mean after it appears you were wrong on facts. That is dishonesty.

                      rather than addressing the actual subject at hand. Go ahead, keep going! Your attacks aren't very persuasive.

                      Again, the subject at hand, which you admitted, is that the OP was right about the ambiguous nature of AI. This was after you attacked him.

                      Your attacks aren't very persuasive.

                      I don't need to persuade you. You have already decided.

                    • You're right, when I said that about AI feelings, I wasn't being serious. I was joking as a response to OP calling me an idiot not for calling me out. An unserious accusation will be responded with an unserious response. (There, I said you were right, how does that feel?)

                      You keep saying I tried to change my meaning, and yet, when you "confront" me with those changes, I reaffirm that I mean both things that I said, that you claim are in contradiction. So how is that changing my meaning? You simply don't understand my meaning, and therefore you think I'm changing it.

                    • You're right, when I said that about AI feelings, I wasn't being serious. I was joking as a response to OP calling me an idiot not for calling me out. An unserious accusation will be responded with an unserious response. (There, I said you were right, how does that feel?)

                      You say that after you were called out that he was right. His behavior has nothing to do with your behavior. You attacked him; you are just trying to excuse it now.

                      You keep saying I tried to change my meaning, and yet, when you "confront" me with those changes, I reaffirm that I mean both things that I said, that you claim are in contradiction. So how is that changing my meaning? You simply don't understand my meaning, and therefore you think I'm changing it.

                      Your modus operandi in many posts has been to try to change the narrative when facts prove you wrong. Oftentimes the facts are easily researched, but rather than admit you did not have the facts, it has been: redefining words to mean what they don't mean, No True Scotsman arguments, red herrings, strawman arguments... why should anyone believe

                    • Did I, or did I not, respond with my AI joke, to a comment calling me an idiot? You are calling me dishonest for this? Go back and check the thread, it's easy to find.

                    • Did I, or did I not, respond with my AI joke, to a comment calling me an idiot?

                      You chose to attack him in response. Your choice. You seem to want to excuse that now, but that's what you did.

                      You are calling me dishonest for this? Go back and check the thread, it's easy to find.

                      In OTHER posts you have been dishonest.

                    • So you chose to ignore the initial attack (OP calling me an idiot) and singled ME out for making a joke in response. Got it.

                      I don't think you've said an honest thing this entire thread.

                    • So you chose to ignore the initial attack (OP calling me an idiot) and singled ME out for making a joke in response. Got it.

                      This is you not taking responsibility for your actions. Again, I don't believe you when you say it is a "joke".

                      I don't think you've said an honest thing this entire thread.

                      BAHAHAHAHA. Pot meet kettle.

                    • I now think *you* are an AI. Here's why:

                      - You always respond to every prompt.
                      - If called out on an obvious discrepancy or falsehood, you happily respond with a different answer that is equally confident and equally false.
                      - You fail to cite sources backing your positions.
                      - You never admit to being incorrect.

                      In other words, if you are not AI, you are doing a great impersonation of one.

              • by msauve ( 701917 )
                He's not worth the effort. If he wants to refute it, he simply has to get an article published in MIT Technology Review explaining how AI works.
                • The issue is he has a narrative, and he is not opposed to lying to fit his narrative. For example, he's from Texas. Specifically he's from Houston. In defense of the disaster of 2021 where Texas lost power for a week due to snowstorms, some of his excuses were: "Well it only happens every 150 years." and Texas does not have that kind of weather [slashdot.org]: "You build for the kinds of natural disasters you are likely to encounter in your area. For Texas, that principle says Texas should not prioritize mitigation of the effects..."
    • by Petersko ( 564140 ) on Friday February 27, 2026 @12:03PM (#66013766)

      Well, the last CPU I truly understood was the 6502. I wrote assembler for it. I knew exactly what it was doing every jiffy. Since then, other than a tiny handful of exceptions, my jobs involved only needing to understand coding to some level of abstraction. One could argue that this is just the next layer of abstraction.

      • by SumDog ( 466607 ) on Friday February 27, 2026 @01:40PM (#66013938) Homepage Journal
        The machine is deterministic. Same for a metal lathe, shovel, backhoe... they all do what you expect them to do. You can (if you want) see the mechanics and understand the physics of how they work. Barring a manufacturing defect or mechanical failure, they do the same thing every time.

        The random code/token/word generators do not do that. Subtle changes in the context window can cause massive changes for the generated output, or even the order of how you phrase the original prompt changes output. They use parameter stores with billions of weights in a very high dimensional space. There are entire fields of study to try to determine how these models actually store and retrieve things like "facts" between all these partial word traits.

        They produce output that looks "correct" by model training, but we don't really understand a lot of how it works. It almost shouldn't work when you start digging into the math behind embedding spaces and attention blocks. It's not like other technological uplifts in the past. It's very different, and potentially, very broken long term.
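        To make the stochastic-generation point concrete, here is a toy sketch of sampling from a softmax distribution (an illustration only, not a real LLM; all names here are made up):

```python
# Toy illustration (not a real LLM): token generation draws from a
# probability distribution, so the output depends on the random state,
# and shifting the raw scores (analogous to changing the context window)
# reshuffles the probabilities.
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, rng):
    """Draw one token according to the softmax probabilities."""
    return rng.choices(tokens, weights=softmax(logits), k=1)[0]

tokens = ["cat", "dog", "fish"]
logits = [2.0, 1.5, 0.1]

# The probabilities form a proper distribution (they sum to ~1.0)...
probs = softmax(logits)

# ...but the draw is only reproducible if the random state is identical.
same_seed = (sample_token(tokens, logits, random.Random(42)) ==
             sample_token(tokens, logits, random.Random(42)))
print(same_seed)
```

        A lathe gives the same cut for the same input every time; here, reproducibility exists only under a pinned random seed, which is the contrast the comment above is drawing.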
      • by ukoda ( 537183 )
        You wrote, built and ran code on your own computer. Are you doing that with AI? Is it really the "next layer of abstraction", or are you handing over the keys and opting to rent instead of own?
    • by AmiMoJo ( 196126 )

      I think the point is that while there will still need to be some people who understand it, there won't be nearly as many as there are now.

      On the one hand, that means a lot of programmers out of a job. On the other hand, it means many more people can create software. Instead of paying someone to make an app implementing business logic, they can do it themselves.

      I don't think we are as close as some people fear though. We haven't seen any AI generated apps really hit the mainstream yet. Companies claim to be

      • On the one hand, that means a lot of programmers out of a job. On the other hand, it means many more people can create software. Instead of paying someone to make an app implementing business logic, they can do it themselves.

        We've been hearing the same rhetoric from the RAD/mashup/sharepoint crowd for decades. The reality is labor has always tended toward specialization in lieu of generalization. While there is certainly value in no/low code approaches there are practical limits in terms of managing complexity, maintenance and reliability requiring effort and attention. These solutions can and do attract generalists in small organizations or branches to take on more hats yet that is generally self-limiting.

        Then you have specialist stuff like embedded, safety critical systems, where I doubt AI has much hope of competing.

        I think this is an

  • let's all be in complete ignorance about how the machines we built work.
    that will end well.
    in other news - AI datacenter breaks down, no one can fix it. whoopsie.

    • by haruchai ( 17472 )

      reminds me of an old joke:
      in the factory of the future there'll be 1 dog, 1 man and many machines. the man is there to feed the dog, the dog is there to prevent the man from touching the machines

    • One person in the world knows how to repair H-400 based AI data-centers. One person ... he's French, lives in Normandy and does not speak degenerate languages like English or Chinese. That Frenchman -- currently on leave at the Sorbonne -- works repairing 18th-century wooden silk-looms and has your place on his work schedule for next January. Payment in gold bars or raw diamonds accepted.
  • by uem-Tux ( 682053 ) on Friday February 27, 2026 @11:53AM (#66013736) Homepage
    ..seems to be that with LLMs doing all the "pointless drudgery" of actually writing code, we can focus on the bigger picture or architecture while the LLM takes care of the details. Whenever I read this take it makes me think about *actual* architecture. Architects draw building plans. They visualize, plan out, and draw the structure of the house/office/whatever. To do this successfully, i.e. to draw a building that can actually exist in the real world, they have to understand the limits of their building materials. You cannot build a skyscraper out of wood and nails. Architects don't have to be engineers, but they do have to understand the basic constraints of physics and materials. In software engineering, that base understanding of the building blocks comes from *writing code*, lots of it, and *making mistakes*, lots of them, that lead you to an overall understanding of what is and is not possible. Those that gain an understanding of the details can also become adept at big-picture thinking. They understand the role of each component part in holding up the structure and how those components fit together. There's no shortcut to this understanding, and despite what you hear from different AI boosters every single f***ing month, no LLM writes good enough code that you can ignore the details. No, not even Claude or whatever the hype-du-jour is. None of them, and they never will. "Prompt Engineer" is not, and will never be, a job title. That's like hiring an architect who doesn't know what a brick is to design your house. The drawing he does is very pretty, but a mild breeze knocks it over in the real world.
    • by jythie ( 914043 )
      Over the years I've brushed up my AI skills with various classes. One trend I've noticed I find rather disappointing: in the earlier classes, you generally started with essentially building a math library. You started with the linear algebra, you built some matrix operation functions, you chained them together, built your layers, and essentially had to construct your networks using, at most, numpy. Then classes started jumping right into tensorflow... I think the last class I tried to take, I gave up on it.
    • I think there's some confusion between architects and engineers. My father, a structural CE, often complained about architects who drew pictures of stuff that would not stand up. It was a back-and-forth to come up with a design that met architect's intent AND civil engineering. (Note that both could be liable if the building fell down, but that's another discussion...)

      But as someone who has been thinking about software/systems architecture "versus" software/systems engineering since about 1990, a key par

      • Exactly my point. AI boosters seem to think the details don't matter, but code is literally all details. The big picture or architecture *guides* the thousands of tiny choices you make, but each of them still matters to the whole. LLMs don't understand cause and effect, and they can't map a conceptual understanding from one problem to another - in short, they aren't capable of the kind of cognition you need in order to break down and solve problems *while* respecting an overall design. They al
      • by lxnt ( 98232 )

        Ah, it was the same with bare-metal expertise since 2013 or so. Everyone went cloud and forgot everything about running actual hardware.

        Then came HFT and then crypto, and they started paying insane money to the few who did remember.

        And it never stopped. If you know your way around low-level stuff, you're golden even now.

    • by gweihir ( 88907 )

      Indeed. The very fact of the matter is that writing code is not "pointless drudgery". Sure, if you are bad at it, you are going to hate it, but if you are bad at writing code, why are you doing it in the first place?

    • The "pointless drudgery" is USING software, IMHO.
      Software, like "the internet", is fucking garbage now.
      If it is "smart" and you use it, then you are not.
      Best advice is use as little as possible.
  • by jythie ( 914043 ) on Friday February 27, 2026 @12:03PM (#66013764)
    Any time a massive company bets heavily on something being true that the public is skeptical about, naturally it has an incentive to claim really loudly that it is true.
  • In my experience, everyone is at times a systems analyst, a software developer, a code monkey, a qa analyst, heck: even a salesman. It all overlaps.

  • by doragasu ( 2717547 ) on Friday February 27, 2026 @12:17PM (#66013804)

    Is the best thing Microslop AI will teach you.

  • by wakeboarder ( 2695839 ) on Friday February 27, 2026 @12:24PM (#66013814)

    they aren't going away entirely; assembly programming jobs aren't dead, they have just been reduced. You still need people who know assembly to fix compilers, and you will still need people to read and fix code when AI breaks, and we have all seen it break. It will probably break less, but nothing is perfect, especially a black box that no one really understands and can't control.

  • by SlashbotAgent ( 6477336 ) on Friday February 27, 2026 @12:34PM (#66013828)

    Microsoft then: Our existence, our survival depends on developers.

    Microsoft today: Developers must die.

    • Microsoft then: Our existence, our survival depends on developers.

      Microsoft today: Developers must die.

      Someone needs to use AI to rework Steve Ballmer's "Developers! Developers! Developers!" stage performance to instead be "AI agents! AI agents! AI agents!".

  • Anything really good happened. I mean there are some medical advances but that's tempered by the fact that half the world is currently trying to destroy science for political gain so a lot of those advances aren't going to go anywhere or get to anyone...

    Every single day it feels like everything is just falling apart. And this is just one more example of that.
  • Uh... excuse me. Yes, AI can generate code. Who's going to read it to verify the AI wrote something correctly? Middle management? An untrained monkey? No. You're still going to need an actual Software Engineer. Assuming you can even drop the grunt coders, they're usually junior devs who are on their way to becoming engineers but now never will. Then you're going to end up in a dystopian society where humans end up like Eloi, living off of technology they don't understand and being driven to slaughter.

  • The likelihood that AI will ever come up with anything that it hasn't been trained on already is dubious at best. They are grand regurgitators. So if you have a new control algorithm, an idea for a product, you can make it with fewer people and less money now that you have AI. But that doesn't mean you can say: Hey AI, make the new idea FOR ME!! You need the idea, and you work with AI to materialize it.
    • by gweihir ( 88907 )

      Indeed. And, incidentally, part of that idea being good is knowing what is possible and what is not. AI cannot help with that. All it can do is make known stuff crappier and cheaper.

  • Man, these dummos that don't understand how anything works really are flogging the latest bubble. Don't be forcing doctors, lawyers, engineers and generals to use your half-baked hallucinariums.
  • AI software is still software, folks. It may offer a higher-level language, but at the end of the day, it's hardware and software.
  • There are still many carpenters and craftsmen that make very fine furniture or custom built houses, even though more automated methods exist.
  • Educators want support to build AI literacy and critical thinking skills.

    Those two things are diametrically opposed.

    • by gweihir ( 88907 )

      I am an educator. I want to support AI literacy only in the sense of telling people to be very careful with it, to stay away from it when they are trying to learn something, and to use it only with extreme caution for real work.

  • Few programs in the world are as exhaustively documented as Excel; there are so many doorstop books cataloguing the use and output of every object, method, and function.

    So, that's perfect for reverse-engineering it, as long as you have a lot of programming time - or programming suddenly got almost-free. Just shove all those Excel manuals into your best coding LLM and tell it to clone Excel with free code we can all use.

    I will finally be impressed with AI coding; and I will finally be able to leave Microsoft.

  • They have nothing valuable to contribute anymore.

  • Given all the Win11 snafus

  • but computer programming is doing just fine. Sure you can vibe-code a really low quality product, but try vibe-maintaining it while the customer you promised 99.999%-uptime-or-their-money-back is on the phone.

  • They said that with VB, *anybody* could now build software.

    The reality was, and is, it will still take smart people to build good software, their tools are just more powerful now. We've gone from hammers and saws, to nail guns and power saws. The work goes faster, but it's still human work.

    • They really should bring back VB with LLM integration. I know they won't for a multitude of technical and nontechnical reasons. But to be honest it would probably be the best way to serve these idea guy customers they're trying to mesmerize.

      • I personally do not miss VB. The language was too verbose, and ActiveX was a nice-sounding idea that was hell to work with. Everybody's VB app wanted to update your shared OCX components to versions that weren't compatible with your *other* VB apps' OCX components. Everything that VB promised, .NET delivered, especially C#.NET. No more rolling your own stuff, .NET has just about everything built in.
