Microsoft AI

Microsoft Exec Asks: Why Aren't More People Impressed With AI?

An anonymous reader shares a report: A Microsoft executive is questioning why more people aren't impressed with AI, a week after the company touted the evolution of Windows into an "agentic OS," which immediately triggered backlash.

"Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming," tweeted Mustafa Suleyman, the CEO for Microsoft's AI group. Suleyman added that he grew up playing the old-school 2D Snake game on a Nokia phone. "The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me," he wrote.


  • Marketing. (Score:5, Funny)

    by msauve ( 701917 ) on Thursday November 20, 2025 @10:44AM (#65806837)
    Maybe they should rename it "Clippy."
  • Obvious answer (Score:5, Insightful)

    by stealth_finger ( 1809752 ) on Thursday November 20, 2025 @10:48AM (#65806843)
    Because it's not impressive. It's actually quite shit, really.
    • “The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me,”

      Where can you actually do that? That's not a thing. These people seriously think they have cortana over there. Apart from they dropped her already.

      • Re: Case in point (Score:5, Informative)

        by crmarvin42 ( 652893 ) on Thursday November 20, 2025 @12:09PM (#65807095)
        Precisely. LLM systems are, ultimately, autocomplete on steroids. That they can present a reasonable simulacrum of intelligence does not change the fact that there is no actual intelligence involved. No reasoning, no knowledge. Just probability-based word assemblies.

        That is why we are not sufficiently impressed for this douche. We see the limitations, and the harms that come from ignoring the limitations, and end up underwhelmed. They are promising something they are not actually delivering.
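The "autocomplete on steroids" idea can be sketched as a toy bigram predictor. This is a deliberately crude, made-up illustration (real LLMs use neural networks over tokens, not raw counts), but it shows what "probability-based word assemblies" means:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- pure statistics, no knowledge.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word: str) -> str:
    # Return the most frequent continuation; no reasoning involved.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # -> cat ("cat" follows "the" twice; "mat"/"fish" once each)
```

Scaled up by many orders of magnitude, and with counts swapped for a transformer, that is still the training objective: predict the next token.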
        • AI is like a whiz-kid who can't tie his own shoes. The bad reputation that AI has is well-deserved. Add in the business executives who drool over lowering their labor costs and shoving employees out the door with something we're supposed to be impressed with and love. Add in your shaky operating system that barely works on a good day and forces people to jump through hoops to uninstall the thing because it gets in their way.

          This is the failure of most tech marketing, believing their own BS, then throwing actual tril

        • by rahmrh ( 939610 )

          Either autocomplete on steroids or correlation on steroids. Every useful right answer it gives is not original, simply a mish-mash of others' work. As you say, neither is a sign of any actual intelligence. But the believers say AGI is "right" around the corner. I predict it will get here the week before the Faster-Than-Light Drive goes operational, and the week after we get commercial Fusion working...

        • You're confusing the task with the mechanism. Classic autocomplete uses statistical methods, often some variant of a Bayesian algorithm. The task is to predict the next word; the method is statistics.

          But if I asked *you* to predict the next word in a sentence, you would not be using a simple statistical method. Neither is the AI. It doesn't have the breadth of multi-domain training data that your neural network has, so it doesn't really think like a human does, but the way it functions is much closer

      • “The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me,”

        Where can you actually do that? That's not a thing. These people seriously think they have cortana over there. Apart from they dropped her already.

        More to the point, many (most?) people probably don't even want to do that or care. For example, I don't need (or want) to have a conversation with an AI/LLM and don't need to generate images/videos or, more precisely, have AI generate them for me - and I can do my own coding.

        • I think this "having a conversation" thing is the crux of the scenario here.

          The computer went from being a calculator, i.e. the main use case was to crunch numbers, produce reports, tally sums, all the grunt work of business, to being a typewriter. The computer went from the back room to everyone's desk, and the main use case was as a word processor. Now we are being told "just talk to your computer". Whether by design or accident (with Microsoft you never know which), they want the main use case to
    • Re:Obvious answer (Score:5, Insightful)

      by cayenne8 ( 626475 ) on Thursday November 20, 2025 @11:04AM (#65806899) Homepage Journal
      I think because it is not dependable....it still quite often gets things wrong and gives wrong answers.

      Hell, just the other day, it got the wrong songs on an album being discussed, info that is out there on the web for easy verification.

      If you can't trust it for simple things like that, it's then a QC nightmare when you try to trust it for important code or design....where tolerances can mean life/death or at the very least....severe LITIGATION.

      • Re:Obvious answer (Score:4, Insightful)

        by stealth_finger ( 1809752 ) on Thursday November 20, 2025 @11:10AM (#65806915)

        I think because it is not dependable....it still quite often gets things wrong and gives wrong answers.

        Exactly, and they expect you to hand over control of your life and everything wholesale. This time next week it'll be telling you about your appointment on mars with the overseers and trying to suggest snack bars for on the way.

        I would watch a reality show where one of these execs does exactly that for a while. No human assistance, just a laptop with his fucking "agentic" OS, whatever the fuck that is supposed to mean. And as a twist, yank his network connection for a bit halfway through and see how he gets on with that.

      • If I wanted to hear something spew bullshit confidently, I'd watch CSPAN.
      • THIS is exactly my issue.

        It is "Confidently Incorrect" so often that it's frightening to think of people relying on it.

        • by Bert64 ( 520050 )

          People are often wrong too...
          The problem is that we are used to machines doing things that machines are good at - e.g. for predefined math calculations, a computer is expected to reliably and quickly get the correct answer every time.

          The problems being targeted by LLMs are not so well defined, so errors can be made whether it's done by a human or an LLM. But people are used to the traditional problems solved by computers and expect everything to be the same.

          Instead of assuming an LLM is a reliable mac

          • "Instead of assuming an LLM is a reliable machine that follows a rigid process and produces reliable output every time..." = Isn't that the goal for it (producing reliable output) and replacing bodies at desks?

            Or, should we just blindly accept that an office full of LLM-AI computers might occasionally get something right, and that's the best the company can hope for?

            crmarvin42 had it right up above: "LLM systems are, ultimately, auto complete on steroids. That they can present a reasonable simulacrum of int

            • by Bert64 ( 520050 )

              A current generation LLM is not perfect and cannot replace a skilled employee, at best it can assist a skilled employee to do their work more efficiently.
              If you understand this and have appropriate use cases, then it can absolutely be useful.

              If you're trying to use it for something it's not suited for then it's going to be useless or even detrimental.

        • What I tell all my bosses when they ask me to 'use more AI' is this:

          AI is great, when it doesn't have to be correct.

          I love Suno. I think it's a miracle. AI making clipart, album covers, poetry? Fine. If it's something you enjoy, go ahead. (I have to say that AI 'copying people' and 'displacing artists' etc are separate problems from what I'm talking about, which is 'accuracy.')

          However, if you want a 'fact' then you sure as crap don't want AI. It doesn't know if its answer is correct or not. I

      • by Bert64 ( 520050 )

        Real intelligence also gets things wrong; people are also subject to bias, and will try to cover their ass once they realise they've fucked up, etc.
        That's why people's work gets quality controlled and reviewed, and anything machine generated should be subjected to similar processes.

      • by tchdab1 ( 164848 )

        They're not just wrong, they're stupendously, stupidly, idiotically wrong by street-level human standards. Sure it can find & assimilate amazing things, except when it can't, and then you wonder about what you thought it just did correctly.

    • BINGO! AI writing is only good writing to people who aren’t very good at reading or writing. When I lived in Germany there were web sites advertising to Americans written by Germans. Their writing was too full of superlatives and honestly was somewhat humorous to read. AI writes like a poor version of what they did. Or, there is that podcast generating Notebook LM from Google. Has anyone listened to the crap it generates, like the drivel that it thinks a podcast should sound like? The man and the woma
    • Re:Obvious answer (Score:4, Interesting)

      by dbialac ( 320955 ) on Thursday November 20, 2025 @11:27AM (#65806963)
      I've tried to replicate a cable-stayed bridge so I can make plans of it for a model. I got it to the point where it replicated it perfectly in the first 3D rendering. And then when I tried to make plans off of it, everything went to shit. After everything went to shit, it couldn't revert to the previous instructions of what it had rendered; it just kept its current state and forgot about everything else. It couldn't even render the same image again. I'm 3 days in and I still don't have anything useful other than that one single great-looking 3D rendering. I've finally resorted to having it take minor measurements from a photo, places where I have to draw lines to get what I want out of it. When I didn't draw the lines, the angles and lengths were completely off even though they should have been blatantly obvious since the colors stood out quite clearly.
    • Re:Obvious answer (Score:5, Interesting)

      by CubicleZombie ( 2590497 ) on Thursday November 20, 2025 @11:48AM (#65807041)

      This is an actual prompt I sent through VS Code Github Copilot to Claude Sonnet 4.5:

      "This Angular component uses Google Maps API heatmap to render data. Google's heatmap has been deprecated. Change this component to use deck.gl heatmap instead."

      IT DID IT. First time. No errors. No bugs. ~45 seconds. It even installed the packages.

      How can you not find that amazing?!

      • by vyvepe ( 809573 )
        Can you provide the original source code and the patch provided by AI which did the change?
      • Re:Obvious answer (Score:5, Insightful)

        by Bert64 ( 520050 ) <bert@slashdot[ ] ... m ['.fi' in gap]> on Thursday November 20, 2025 @12:29PM (#65807171) Homepage

        Because you had a specific goal in mind, knew what you were doing, knew about the different heatmap implementations available and gave precise instructions. You could probably have written this by hand yourself and it just would have taken a bit longer to do.

        Problems come up when you have people who don't know what they're doing giving vague instructions to the LLM, and then blindly trusting the output. For instance, if you said "draw a heatmap of $DATA", who knows what it would have come back with? It may well have tried to use the deprecated Google API, because there are likely a lot of examples online and in the LLM's training data.

        LLMs are great when they're used to augment people who are already skilled in the art, and can generally help them save time doing a lot of the repetitive stuff. They're not some magic wand allowing someone with zero experience to achieve great results.

        • To be honest, the first thing I did was go to ChatGPT and ask it to list alternatives to Google heatmap. It gave a list of several and a chart comparing all of them. I picked one and had Sonnet implement it. It's like having a team of junior developers working for me that never complain about anything.

          What AI can't do is to take a whole feature off the backlog and implement it. Yet.

          • by Bert64 ( 520050 )

            What AI can't do is to take a whole feature off the backlog and implement it. Yet.

            It can in some cases, depending on various factors like the codebase it's working with, the nature of the feature and how well you describe it.

            You will often need to refine the prompts, or prompt it further to address bugs or things it decided to implement in a strange way. It also tends to work better with code bases that are smaller or more modular, and with code that was developed using an ai assistant rather than existing code bases.

            You're right about it being like junior developers, it's good for getti

    • This is true.

      And, the MS guy is missing the point. We don't want AI in Windows.

      What we do want is an Operating System that:
      - Is secure,
      - Lets us do what WE (the user) want,
      - Doesn't spy on us,
      - Gets the heck out of the way,
      - Is configurable and respects our configuration decisions, and
      - Obeys our instructions.

      There are a few that do that. These seem to be close cousins.
      There is one that almost does that. It is not too far related to the above; you can see the heritage.
      There is one that used to come clo

    • by Touvan ( 868256 )

      1000% this. Those folks who think it's impressive are revealing their own mediocrity. It's shite. For real. Those of us who have real skills can tell - and those who don't (mostly, middle managers) cannot.

    • Re: (Score:3, Interesting)

      Because it's not impressive. it's actually quite shit really.

      But it's not though and this makes me sad.

      Look, come at it from an academic perspective. After years of research into canine linguistics, somebody created the world's most eloquent talking dog. And darn it, that dog can paint too. This is really really really cool! Compared to what we had 5 or 10 years ago, it's really impressive. A dog! And it can talk! Go play with the doggie, it's fun (be warned, it's a bit racist and might bite). Also you know i

  • by jdawgnoonan ( 718294 ) on Thursday November 20, 2025 @10:48AM (#65806845)
    Rich technologists like this guy, who live for their work instead of working for a living, of course cannot understand why normal people who have a firm grasp on technology are not thrilled or impressed with this stuff. For one, we know that these things are not trustworthy, even though at work we are being asked by nontechnical leaders to trust them. Managers are very susceptible to technologist snake oil salesmen.

    Secondly, in capitalist societies like the US we know that business leaders would happily replace all of us with machines if and when they can; their only downside to us all not working is that we may not have money to spend. Since they cannot understand those of us who work for a living instead of living to work, they cannot understand why we don't all want to be entrepreneurs or engineers, and they also do not understand what life is like for people who do not make enough money to have a huge nest egg. Most people live paycheck to paycheck even if they have a moderately high paying job.

    I have to say, those Microsoft commercials with the people talking to their PC certainly do not represent how I want to use my PC. I do not want my PC to attempt to trick me into thinking that it is intelligent and my companion; I personally think that is gross.
    • Not to mention that humans enjoy creating this stuff themselves and maybe don’t like the idea of having a machine do almost all of the work. When I prompt create an image I do not personally feel like I created it.
    • Nice post. I'd also point out that the biggest constant of my 30 year career in IT is this: "management hates you and wants you dead". That's it. They will believe almost anything that tells them it's okay to fire all the techs and move straight to the frat-boy dream of hookers and blow. They see IT folks as standing between them and the hookers & blow.

      We are the one thing keeping them from the hooker party. If only we could just fire those goddamn techs and ESPECIALLY the shit-eating programmers. Tho
    • Secondly, in capitalist societies like the US we know that business leaders...

      Planned economies would likewise happily replace people with machines, if they could. The list of modern planned economies is very short: North Korea.

      State capitalism -- where the state controls the decisions, but the profit motive remains -- includes: China, Laos, Cuba, Turkmenistan, Eritrea... there's not a single country there that gives a flying fsck about human rights.

  • Current LLM's (Score:5, Insightful)

    by GoJays ( 1793832 ) on Thursday November 20, 2025 @10:49AM (#65806847)
    Current LLMs are like chatting with a chronic liar. Once you learn everything they say is just randomly spewed nonsense, you eventually stop talking to them or disregard anything they say as a lie. That is AI in its current form. It's not useful when you have to spend more time fact-checking answers than if you were to just do the damn thing yourself in the first place. That's why it is underwhelming.
    • Re:Current LLM's (Score:5, Insightful)

      by RobinH ( 124750 ) on Thursday November 20, 2025 @11:16AM (#65806931) Homepage

      Exactly. As technologists, we need the output of computers to be precise and accurate. LLMs might be precise, but they're very often inaccurate, and that's not acceptable to us.

      The average person doesn't live in a world where accuracy matters to them. A colleague said she used AI all the time, and I asked her how. She said she often tells it the contents in her fridge and asks it for a recipe that would use those ingredients. She said, "yeah, and it's really accurate too." I don't know how you measure accuracy on a test like that, but it doesn't really matter. If you're just mixing some ingredients together in a frying pan, you probably can't go too far wrong. As long as you don't ask it for a baking recipe, it'll work out.

      And I think that's what's going on. The people who love AI don't know enough to realize when it's wrong, or are just asking it open ended questions, like you would ask a fortune teller, and it spits out something generic enough that you can't disprove it anyway.

      • by N1AK ( 864906 )
        If you ask for a standard baking recipe it'll almost certainly be fine, as it'll just rip off the content from GoodFood or some other site. Ultimately, if the recipe it produces isn't dangerous and the person asking is happy with the end result, then it was a good output. What's the alternative? Assume that if you search for a recipe or use a recipe book then there's 0% chance the recipe will be underwhelming?
        • by RobinH ( 124750 )
          You don't understand the problem. The LLM won't "rip off" content from a website like GoodFood. That's not how it works. It doesn't copy stuff wholesale. It's a text generator that tends to generate text that looks like its training data, in a similar way that a person retelling a story or a joke will retell it from memory, but the memory isn't a facsimile, just like our memory isn't verbatim. When outputting the text, it'll be similar, but it won't be identical. I mean, it might be, but it might outpu
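The "similar but not identical" behavior falls out of sampling: instead of copying a source, the model draws each next word from a probability distribution. A toy sketch, with invented words and weights:

```python
import random

# Hypothetical next-word distribution at some point in a recipe;
# the words and probabilities here are made up for illustration.
next_word_probs = {"flour": 0.5, "sugar": 0.3, "butter": 0.2}

def sample_next() -> str:
    words = list(next_word_probs)
    return random.choices(words, weights=list(next_word_probs.values()))[0]

# Two "retellings" of the same context can differ, yet both look
# plausible -- similar to the training data, not a facsimile of it.
print([sample_next() for _ in range(5)])
```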
          • by sphealey ( 2855 )

            That's what the big bosses tell us anyway. In a somewhat obscure corner of the human experience where I sometimes hang out there are ~5 web sites of varying ages that write and publish original and meaningful things. But if you search for that obscurity on Google you will now be directed to 847 "sites", "magazine articles", "experts", etc of which 842 are thinly disguised machine-rewritten versions of the 5 real sites - the kind of rewriting I would have instantly flagged as plagiarism back in my TA days -

            • by allo ( 1728082 )

              Welcome to the real world. Wait until you learn about bots copying high rated reddit posts verbatim.

    • That was my experience before ChatGPT 5. With ChatGPT 5, here comes the qualifier: if you use it within its training data range, it's quite good. Within its training data means doing what other people have done before and what is likely to be found on Stack Overflow. For example, setting up and training a neural network with torch. If you go outside that comfort zone, I agree with you.

      Danger lurks when these tools are used in an area where the user even lacks the expertise to fact-check the answer. The responses sou

      • by N1AK ( 864906 )
        That sums up how I talk about it with people generating code. If you can understand what you are asking it to do for you and check it, or have really robust tests and output specifications, then it's helpful. Without those you're basically playing roulette and hoping it doesn't introduce security, accuracy, or performance problems.
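Those "really robust tests and output specifications" are just ordinary assertions owned by the human. A minimal sketch, where `normalize_phone` is a hypothetical stand-in for whatever the LLM generated:

```python
import re

# Pretend an LLM wrote this helper; the reviewer must not assume it works.
def normalize_phone(raw: str) -> str:
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # strip US country code
    if len(digits) != 10:
        raise ValueError(f"not a 10-digit number: {raw!r}")
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

# The human-owned specification: the generated code is only trusted
# once it passes checks written independently of the model.
assert normalize_phone("(555) 123-4567") == "555-123-4567"
assert normalize_phone("+1 555.123.4567") == "555-123-4567"
try:
    normalize_phone("12345")
except ValueError:
    pass  # bad input must be rejected, not silently formatted
else:
    raise AssertionError("accepted malformed input")
```

Without a spec like this, you are indeed playing roulette with whatever the model emits.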
    • by N1AK ( 864906 )
      I'm on the cynical side of just how capable AI is, but IMO people like you are being equally extreme in overstating the issues. At a minimum it can be useful as an incremental improvement on regular search: outlining what you want to know and asking for trusted resources to validate results. My experience is the use goes beyond that, although it's a long way from the life-changing tool that current vendors like to claim thus far.
  • by Polsar ( 158507 ) on Thursday November 20, 2025 @10:50AM (#65806851)

    Why aren't more execs listening to voice of customer feedback? Who asked for an AI button on the keyboard? Despite the "Advancements" it is still a cheap party trick. Get over yourself.

    • by nightflameauto ( 6607976 ) on Thursday November 20, 2025 @11:01AM (#65806893)

      Why aren't more execs listening to voice of customer feedback? Who asked for an AI button on the keyboard? Despite the "Advancements" it is still a cheap party trick. Get over yourself.

      There's the real question, isn't it? At one point, companies were attempting to provide customers with what they wanted, or at the very least, what they said they wanted. Now, especially in tech circles, companies are altering existing products and creating new products that end users are screaming bloody murder angry over, and telling us we should love it. It's more than just bad marketing, it's outright hostility toward customers. And then this motherfucker comes along and asks why we're not impressed when they keep shoveling shit at us we don't want, that we've told them we don't want and keep giving them backlash over, and that they're selling as a way to replace us all at our jobs and in large segments of what we do outside of our jobs as well.

      Fuck this guy sideways. Sick to god damned death of the tech leadership not just being out of touch with the userbase, but outright hostile toward us and then surprised when we don't worship their every hostile move.

      • by laxguy ( 1179231 )

        Exactly this!! You said it so much better than I've been able to, but you are exactly correct. Tech companies are actively hostile towards their users and creating products no one wanted (not just AI, but algos, recommendations, apps - everything is blatantly designed to *use* YOU) while trying to beat us into acceptance. It's fucking insane!

    • I agree with what most are saying here. As an old guy (age 65), I noticed quite some time ago that software, generally, rather peaked about 20 years ago. That is, for general things like web browsers, e-mail applications, word processing, accounting, spreadsheets, and the like, we got to the point that there was no real reason to pony up money for Version 18.7 of software when Version 11, which you installed five years ago, was still doing just fine. The problem with software (and intangible technology in g
    • CEO: Am I really so out of touch? NO, it's the customers who are wrong!
  • Product design (Score:5, Insightful)

    by ebonum ( 830686 ) on Thursday November 20, 2025 @10:54AM (#65806867)

    They will build what they want. Not what we want.

    • To be expected. Austin recently floated a property tax increase that was over the state limit, so it had to be voter approved. It was rejected soundly; like 66% voted no. Which in Austin is surprising, because generally voters approve this stuff. But the tone-deaf council just doesn't see that tax rates in Austin are killing the middle class. $10K+ prop tax bills are kind of the norm without the increase. But the mayor's response to the sound defeat was typical. It was, "We did not explain the increase correctl
  • by Junta ( 36770 ) on Thursday November 20, 2025 @10:55AM (#65806869)

    super smart

    If that CEO thinks the behaviors of the LLMs are "super smart", then I really wonder about his level of intelligence...

    It's certainly novel and different and can handle sorts of things that were formerly essentially out of reach of computers, but it is very much not "smart".

    Processing that is dumb but with more human-like flexibility can certainly be useful, but don't expect people to be in awe of some super intelligence when they deal with something that seems to get basic things incorrect, asserts such incorrect things confidently, and doubles down on the same mistakes after being steered toward admitting the mistakes by interaction. I know, I also described how executives work too, but most of us aren't convinced that executives have human intelligence either.

    • by sinij ( 911942 ) on Thursday November 20, 2025 @11:31AM (#65806977)
      A significant part of a CEO's job is to generate polished bullshit. AI is really good at that, hence this CEO is impressed.
    • The CEO has been given all sorts of impressive demos where it works well, so his view of the technology is skewed.

      Add in that he's financially motivated to believe that they've created something amazing with all of their investment, and you have a recipe for someone to be very detached from reality.

  • Nothing but Clippy (Score:5, Interesting)

    by Kisai ( 213879 ) on Thursday November 20, 2025 @10:56AM (#65806871)

    Unfortunately nobody is being impressed with AI because the companies being the most pushy with it, have bad intentions.

    Like let me explain something simple. I want a human-sounding TTS voice. Because these godawful AI companies want to make as much money as possible, they charge by the syllable. For something that doesn't even sound good.

    If I go find an actor/actress whose voice I like the sound of, and want to create a weird golem of a voice, what I'd do is get several 48kHz 16-bit recordings from audio books by that actor, run them through the training (because I have their voice and the book they are reading) and then find a performance style of that actor/actress I want (from maybe a movie or television show) and thus "skin" that voice to sound like that performance. That will give me a 95% reasonable-sounding voice for all the words from the books they read, and 10% accuracy on words that they never ever said before.

    But these godawful voices that google, microsoft and amazon have, sound like they were trained on 10000 ebooks at 22khz and averaged out the tonal sound in a way that you can always tell it's a godawful AI voice because they always sound like a worn audio cassette tape.

    The same happens with image generation and text generation. It doesn't sound human, it doesn't look human-created; it just looks like a mashup of things designed to pass the minimum standard of "I can hear/read it", not actually convey creativity.

    Like I'll give some AIs a few points for solving a "better than absolutely nothing" problem, like with translation of text, or auto-dubbing foreign voices, or allowing a programmer to figure out how to write something in a programming language they don't particularly like, but what these companies are offering is a lot of "AI will replace you", not "AI will help you".

    If I had unlimited money, I'd hire all the programmers, artists, voice actors, and animators I need to make a project, but I do not have that money. But I certainly am not going to spend money on an AI to crap-shoot "barely passable" every time.

    • by sphealey ( 2855 )

      "If I go find an actor/actress that I like the sound of their voice of, and want to create a weird golem of a voice, what I'd do is get several 48khz 16-bit recordings from audio books of that actor, run it through the training (because I have their voice and the book they are reading) and then find a performance style of that actor/actress I want (from maybe a movie or or television show) and thus "skin" that voice to sound like that performance. That will give me a 95% reasonable sounding voice for all t

  • by Sique ( 173459 ) on Thursday November 20, 2025 @10:57AM (#65806873) Homepage
    I can explain it very easily. I don't want to talk to a machine. I don't want my car to listen to my conversation with the people riding with me. I don't want smart home assistants listening to my TV program. I don't want my tools telling me what to do. I don't want YouTube to automatically translate video titles.

    Just because something is impressive does not mean I want it around me. That we can build a nuclear fusion device is impressive. But I don't want a hydrogen bomb exploding in my backyard.

  • super smart (Score:5, Funny)

    by awwshit ( 6214476 ) on Thursday November 20, 2025 @10:57AM (#65806875)

    You keep using those words. I do not think they mean what you think they mean.

  • Bullshit Seller (Score:5, Insightful)

    by TwistedGreen ( 80055 ) on Thursday November 20, 2025 @10:57AM (#65806877)

    This guy's completely delusional. Got it.

    Reminds me of the old adage that a salesman is the most likely to get duped by another salesman. He's just buying into his own bullshit.

    • You can't rise to certain levels of a big company's hierarchy without drinking the kool aid. (Or being at least somewhat a sociopath.)

  • by TWX ( 665546 ) on Thursday November 20, 2025 @10:58AM (#65806881)

    For those who learned the lesson to apply themselves to do the work in order to set themselves apart from lazy people, they see enabling lazy people as a slap in the face.

    For those who are smart, they see faux-intelligence or faux-intellectualism out of people who are not capable of applying themselves but expect credit regardless.

    For creative people who have and use skills to support themselves, they see it enabling lackluster people with no actual interest in the artform trying to muscle in.

    For those who need information, they see substandard results that are of even further questionable veracity than what they could find before.

    And for a whole lot of other people, they see something touted as labor-saving, ie, firing them.

  • by ponfgong-e ( 3901857 ) on Thursday November 20, 2025 @11:00AM (#65806889)
    I don't deny "that we can have a fluent conversation with" a computer is extremely impressive. Where I take issue is the "super smart" and "mindblowing". Like many advances in AI this technology teaches us as much about what intelligence isn't as it does about what it is. We used to think if we had machines that could play chess we would have super intelligence. It turns out playing chess wasn't the pinnacle of human achievement we thought it was. Until recently we thought the same of fluent conversation, but again we see that a machine that talk good does not general intelligence make. Intellectually I think it's great we got to do this experiment and see just what comes out the other side when we scale transformer models to unimaginable sizes. It turns out to be some shaky non-deterministic tools that are quite useful for some tasks, but not to be trusted in high-stakes situations. Given the promises being made in marketing these things, the amount of money spent, and the fact that people like Sam are telling society that we should divert all our effort to this instead of solving the existential crisis of climate change, how can you be shocked that people are underwhelmed?
  • by nashv ( 1479253 ) on Thursday November 20, 2025 @11:09AM (#65806909) Homepage

    AI works well if you know what you are doing and you use it to take away the tedium. Say coding a 500-line routine that you know how to code, know what it should do, and have the ability to tell a shit result from a good one. This is like a doctor telling a nurse exactly what drug to administer. If you are going to use it to actually diagnose the problem and come up with solutions, current LLM models are pretty shit. It's too bad most people who are using LLMs think it can replace actual domain-specific knowledge just because LLMs can fake it so well.

    • Re: (Score:3, Insightful)

      This is exactly the issue - AI is great, as an intern.

      "Oh, this new thing lets it see the whole project in context!" - Great, then why did it just add a bunch of functions that already exist? Also, why did it do that in a completely inappropriate spot?

      "You just need to write a better prompt. You can even define style guides and stuff." - Great. Will that make it stop checking if that value that I clearly defined is null every freaking line?

      "It's just following best practices." - No. It's follow
      • Will that make it stop checking if that value that I clearly defined is null every freaking line?

        What, was the LLM trained on Go?

  • The man lacks any real common sense, and he's furthermore much too enamored of his own ideas. All of his type of tech bros speak of "AI" like it's actually intelligent, as we (generally) understand intelligence, and it's not anything like that or anywhere near that. The machine learning algorithms that make all this run simply can't separate fact from fiction/opinions, lack any empathy/caring, are biased, etc. AI is basically garbage at the moment and if he doesn't recognize why more people aren't impressed he
  • by VorpalRodent ( 964940 ) on Thursday November 20, 2025 @11:12AM (#65806919)
    It feels like a really good extension of search results (hallucinations notwithstanding). I use it daily for little things...and then I go back through and clean it up to be actually usable. But I hate that when I try to point things like that out, I get responses like, "Oh, you just need a better prompt." These are people who couldn't do a proper Google search just a couple years ago, but suddenly they're full-blown engineers.

    On top of that, I've got people I know who have ceded all their thinking ability to ChatGPT, and it's resulted in them sounding like idiots. One of my supervisors styles himself as a chemist / inventor. Mostly it's benign - he plays with mix ratios to get the result he wants. But lately, he's quite literally gotten himself into arguments with professional industrial chemists because he started letting ChatGPT do all the math and reaction calculations and he can't understand how it can be wrong.

    I've got marketing contacts whose eyes lose focus on a Zoom meeting because they're asking ChatGPT how to do the thing we're talking about, and then instead of asking appropriate follow-up questions to the group, they start spouting nonsense.

    The thing I keep going back to is this: "In your own area of expertise, when you ask it questions, you can readily see the shortcomings. Why then do you treat it as gospel when asking about areas outside of your expertise?"

    Don't get me wrong - I *do* think it's impressive. *Quite* impressive. But in real world scenarios I see it fail *all the time*, and everyone needs to stop pretending that this isn't happening.
    • After all of these years, finally, someone who knows what brand they really used in Guyana. Excellent!
  • by fuzzyfuzzyfungus ( 1223518 ) on Thursday November 20, 2025 @11:20AM (#65806945) Journal
    In a sense his puzzlement is justified; when the tech demo works an LLM is probably the most obvious candidate for 'just this side of sci-fi'; and, while many of the capabilities offered are actually somewhat hollow (realistically, most of the 'take these 3 bullet points and create a document that looks like I cared/take that document that looks like my colleague cared and give me 3 bullet points' are really just enticements to even more dysfunctional communication) some of them are fairly hard to imagine duplicating by conventional means.

    However, I suspect that his perspective is fundamentally unhelpful in understanding the skepticism: when you are building stuff it's easy to get caught up in the cool novelty and lose sight of both the pain points (especially when you are deep C-level rather than the actual engineer fighting ChatGPT's tendency to em-dash despite all attempts to control it) and overestimate how well your new hotness stacks up against existing alternatives and how forgiving people will or won't be about its deficiencies.

    Something like Windows trying 'conversational'/'agentic' OS settings, for instance, probably looks pretty cool if you are an optimism-focused ML dude: "hey, it's not perfect but it's a natural language interface to adjusting settings that confuse users!"; but it looks like absolute garbage from an outside perspective, both because it's badly unreliable and humans tend not to respond well to clearly unreliable 'people' (if it can't even find dark mode, why waste my time with it?); and because it looks a lot like abdication of a technically simpler, less exciting, job in favor of chasing the new hotness.

    "Settings are impenetrable to a nontechnical user" is a UI/UX problem (along with a certain amount of lower-level 'maybe if bluetooth was less fucked people wouldn't care where the settings were because it would just work'); so throwing an LLM at the problem is basically throwing up your hands and calling it unsolvable by your UI/UX people; which is an abject concession of failure, not a mark of progress.

    I think that may be what he really isn't understanding: MS has spent years squandering the perception that they would at least try to provide an OS that allowed you to do your stuff; in favor of faffing with various attempts to be your cool app buddy and relentless upsell pal; so every further move in that direction is basically confirmation that no fucks are given about just trying to keep the fundamentals in good order rather than getting distracted by shiny things.
  • People who want to use AI are already doing it. They will use the AI that meets their needs the best.

    For casual users, that's GPT, often ChatGPT, because it's ubiquitous. They can access it anywhere and get the same capabilities. That's what they care about: ease of use, ease of access, and consistency.

    It's the same reason people still prefer Windows on the desktop. Except this time, Microsoft didn't get there first. So now they're the latecomer or the afterthought, and they don't like it. Too bad; innovate

  • Most people are very impressed with AI. That's why adoption for performing so many things is as rapid as it is.

    Operating systems are one of the few places where direct AI integration makes little sense. The sole job of an operating system is to function as something that connects the hardware you have to the software you are running. It needs to be maximally predictable by both, so that the things you actually need running, the software on top of the operating system, are as stable and as fast as possible.

    Agentic OS is the

  • ... there is NO REAL AI YET ...just smarter algorithms. Nothing that is truly intelligent.

  • You oligarchs aren't engineering AI to work for people. You're engineering AI to work for corporate interests. It takes far more than it gives in return. It's taking our jobs. It's taking our electricity. It's taking our wealth. It's taking our creative works. It's taking our data.

    And what is it giving in return?

    It's giving the executive and corporate leaders at eight companies on our planet a ridiculous amount of wealth. To hell with the dog-and-pony show going on in the foreground.

    Fuck our corpora

  • Like many of us he's enamored with the fictional tech from Star Trek that portrays talking to an intelligent computer, which seems like a great idea on screen at least. So futuristic. Computer, please reconfigure my warp core for more power. Done. Best idea ever.

    That and touch panels everywhere! Works so well on a star ship, why not put them in our cars?

    Never mind that copilot, like all LLMs, confidently lies. And "super smart" really means it reads rubbish posted on the internet and pretends it is accurat

  • If Microsoft was a person they would have been in jail for assault a long time ago. The Steam Machine would not have been made if we could have a non-abusive version of Windows without hacks or secret versions. Forced AI is just as bad as Forced Ads, Forced Edge and Forced Accounts. Just give us our offline, AI-less Windows.
  • Over hyped search engine and information compiling engine, it is not intelligent it is not sentient
  • Microsoft AI tools may be OK, but I don't trust the data privacy issues with integrating AI into everything. So I disable it. I use standalone AI tools a lot (mostly Gemini) because I can only give them access to specific data.

  • Because AI (by which I mean the generative AI / LLMs that the AI bros are pushing, not the actually useful low-key machine-learning algorithms) is a carnival sideshow. It's cool for about a minute and then proves itself to be trash.

    It's like how ELIZA [wikipedia.org] was fun for about 30 seconds until you realized how dumb it actually was.

  • Now and in the future. Typing crap into Google and just surfing is risky enough. I probably shouldn't have said that.
  • because they didn't ask for the tools, they were forced upon them by tech companies. they are also not impressed because... they have tried using the tools and found the limitations.
  • Windows has slid so far downhill. You have no privacy, it's not your computer: It takes longer to even attempt to set Windows for any kind of privacy--than it takes to install Linux Mint. Even then, who could trust Microsoft at their word? Why is there any data in Edge's cache--on a computer that it never gets used on? The Group-By "Feature" is only for assisting the AI, and has no use for a human. Privacy is so lacking--that I don't even want to plug a drive into a Windows box that has data on it. "My Docu
  • by Dasher42 ( 514179 ) on Thursday November 20, 2025 @11:46AM (#65807035)

    It's not intelligence. It is not acquiring new behaviors and ideas, but regurgitating old ones in ways it often cannot verify or test. That detachment from reality flies with management, but the rank-and-file can't afford such liabilities.

    We don't need large-scale language models to generate sophisticated fabrications. We need small, efficiently fluent interfaces between humans and proven tools and data. The market is going to have to correct to the actual right-sized value of the technology.

  • We're not impressed because you took the world's fastest car and sold it as a teleportation device. You sold us Star Trek, but delivered a really good F1 car. So yeah...only a dumbass would be fooled. If you said "we've improved search", I'd very much agree. If you're like Benioff and Zuckerberg and told us you've replaced most of your development team...without any evidence, yeah, most of us know you're full of shit, especially those of us who use these.
    • by Bert64 ( 520050 )

      You can replace most of your developers once you have a mature product, just shift it into maintenance mode.

  • I mean, HAL would kill you, but at least your personal stuff was yours--not like Microsoft Recall.
  • Maybe, just maybe, passing a turing test isn't the pinnacle of human ingenuity he is pretending it is. It can't actually do anything useful, but unlike the children I'm raising, I have no hand in whether or not it will EVER be able to do anything useful.
  • Most people aren't authors or painters who earn a direct living from their creative work (of which there are very few), but most people put some amount of creative effort into their jobs and livelihoods. Whether it is a financial analyst in a cubicle who develops independent analyses of the prospects of an investment target, a graphic artist who creates flyers and web sites for small businesses, or an electrician who figures out a better way to route cabling through a standard spec house during construction

  • Microsoft Exec: Why Aren't More People Impressed With AI?

    Microsoft Co-Pilot: People are justifiably unimpressed with AI because it often feels unreliable, lacks human nuance, raises ethical concerns, and sometimes dulls rather than enhances critical thinking.
  • Random number generation is not the AI people expect and never will be; then people realize the amount of resources AI is wasting and it just looks awful.

    You can put lipstick on the laptop's AI, but not that big fat ass pig sitting behind the curtain powering that laptop. We have completely undone all the environmental savings ever made from switching to LED light bulbs, and it isn't even close.
  • Because people like stability. Life is hard enough without the earth shifting under your feet every other day.

  • And they make us do a war dance every morning, praising the virtues of AI. The only really good thing AI has done is got rid of marketing departments.

  • "The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me,"

    Mindblowing is that companies make all the claims about AI that are 100% unfounded. "generate any image/video"... No it can't. "fluent conversation"... Unless I have to constantly remind it about the thing it said two prompts ago that it forgot. And I PAY for AI access.

    It's not anywhere near impressive. It's a party trick at best and dangerously misleadin

  • Because it sucks big green donkey dicks.

    And while Microsoft is at it - how about they quit making security nightmares of THEIR "AI".... That's why it SUCKS! There is little to NO security around ANYONE's AI.

    https://www.windowslatest.com/... [windowslatest.com]

    https://arstechnica.com/securi... [arstechnica.com]

    Even Anthropic CEO Dario Amodei says he's "deeply uncomfortable" with unelected tech elites shaping AI.
    https://www.businessinsider.co... [businessinsider.com]

    I'm sorry but I'm going to say this, the companies building and deploying "AI" have z

The meta-Turing test counts a thing as intelligent if it seeks to devise and apply Turing tests to objects of its own creation. -- Lew Mammel, Jr.
