Microsoft Exec Asks: Why Aren't More People Impressed With AI?
An anonymous reader shares a report: A Microsoft executive is questioning why more people aren't impressed with AI, a week after the company touted the evolution of Windows into an "agentic OS," which immediately triggered backlash.
"Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming," tweeted Mustafa Suleyman, the CEO for Microsoft's AI group. Suleyman added that he grew up playing the old-school 2D Snake game on a Nokia phone. "The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me," he wrote.
"Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming," tweeted Mustafa Suleyman, the CEO for Microsoft's AI group. Suleyman added that he grew up playing the old-school 2D Snake game on a Nokia phone. "The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me," he wrote.
Marketing. (Score:5, Funny)
Obvious answer (Score:5, Insightful)
Case in point (Score:3)
“The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me,”
Where can you actually do that? That's not a thing. These people seriously think they have Cortana over there. Except they already dropped her.
Re: Case in point (Score:5, Informative)
That is why we are not sufficiently impressed for this douche. We see the limitations, and the harms that come from ignoring the limitations, and end up underwhelmed. They are promising something they are not actually delivering.
Re: (Score:3)
AI is like a whiz-kid who can't tie his own shoes. The bad reputation that AI has is well-deserved. Add in the business executives who drool over lowering their labor costs and shoving employees out the door with something we're supposed to be impressed with and love. Add to that your shaky operating system that barely works on a good day and forces people to jump through hoops to uninstall the AI because it gets in their way.
This is the failure of most tech marketing, believing their own BS, then throwing actual tril
Re: (Score:2)
Either Autocomplete on steroids or Correlation on steroids. Every useful right answer it gives is not original and simply a mish-mash of others' work. As you say, neither is a sign of any actual intelligence. But the believers say AGI is "right" around the corner. I predict it will get here the week before the Faster-Than-Light Drive goes operational, and the week after we get commercial Fusion working...
It is NOT autocomplete the way you think it is (Score:2)
You're confusing the task with the mechanism. Classic autocomplete uses statistical methods, often some variant of a Bayesian algorithm. The task is to predict the next word; the method is statistics.
But if I asked *you* to predict the next word in a sentence, you would not be using a simple statistical method. Neither is the AI. It doesn't have the breadth of multi-domain training data that your neural network has, so it doesn't really think like a human does, but the way it functions is much closer
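To make the parent's distinction concrete, here's a toy sketch of the "classic" statistical autocomplete being described: a bigram model that predicts the next word purely from co-occurrence counts. This is a hypothetical minimal example, not how any production system works, and it is exactly the kind of mechanism LLMs do *not* use:

```python
from collections import Counter, defaultdict

# Toy bigram "classic autocomplete": predict the next word purely
# from co-occurrence counts in a tiny training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

A model like this has no notion of meaning at all; it only tallies what came next in the past, which is the "statistics, not thinking" point the parent is making about classic autocomplete.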
Re: (Score:2)
“The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me,”
Where can you actually do that? That's not a thing. These people seriously think they have cortana over there. Apart from they dropped her already.
More to the point, many (most?) people probably don't even want to do that or care. For example, I don't need (or want) to have a conversation with a AI/LLM and don't need to generate images/videos or, more precisely, have AI generate them for me - and I can do my own coding.
Re: (Score:2)
Re: (Score:2)
The computer went from being a calculator, i.e. the main use case was to crunch numbers, produce reports, tally sums, all the grunt work of business, to being a typewriter. The computer went from the back room to everyone's desk, and the main use case was as a word processor. Now we are being told "just talk to your computer". Whether by design or accident (with Microsoft, you never know which), they want the main use case to
Re:Obvious answer (Score:5, Insightful)
Hell, just the other day, it got the wrong songs on an album being discussed, info that is out there on the web for easy verification.
If you can't trust it for simple things like that, it's then a QC nightmare when you try to trust it for important code or design....where tolerances can mean life/death or at the very least....severe LITIGATION.
Re:Obvious answer (Score:4, Insightful)
I think because it is not dependable....it still quite often gets things wrong and gives wrong answers.
Exactly, and they expect you to hand over control of your life and everything wholesale. This time next week it'll be telling you about your appointment on Mars with the overseers and trying to suggest snack bars for the way there.
I would watch a reality show where one of these execs does exactly that for a while. No human assistance, just a laptop with his fucking "agentic" OS, whatever the fuck that is supposed to mean. And as a twist, yank his network connection for a bit halfway through and see how he gets on with that.
Re:Obvious answer (Score:5, Informative)
Read Accelerando by Charles Stross https://en.wikipedia.org/wiki/... [wikipedia.org]
That will scratch that itch.
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
THIS is exactly my issue.
It is "Confidently Incorrect" so often that it's frightening to think of people relying on it.
Re: (Score:3)
People are often wrong too...
The problem is that we are used to machines being used to do things that machines are good at - eg for predefined math calculations a computer is expected to reliably and quickly get the correct answer every time.
The problems being targeted by LLMs are not so well defined, so errors can be made whether it's done by a human or an LLM. But people are used to the traditional problems solved by computers and expect everything to be the same.
Instead of assuming an LLM is a reliable mac
Re: (Score:2)
"Instead of assuming an LLM is a reliable machine that follows a rigid process and produces reliable output every time..." = Isn't that the goal for it (producing reliable output) and replacing bodies at desks?
Or, should we just blindly accept that an office full of LLM-AI computers might occasionally get something right, and that's the best the company can hope for?
crmarvin42 had it right up above: "LLM systems are, ultimately, auto complete on steroids. That they can present a reasonable simulacrum of int
Re: (Score:2)
A current generation LLM is not perfect and cannot replace a skilled employee, at best it can assist a skilled employee to do their work more efficiently.
If you understand this and have appropriate use cases, then it can absolutely be useful.
If you're trying to use it for something it's not suited for then it's going to be useless or even detrimental.
Re: (Score:2)
What I tell all my bosses when they ask me to 'use more AI' is this:
AI is great, when it doesn't have to be correct.
I love Suno. I think it's a miracle. AI making clipart, album covers, poetry? Fine. If it's something you enjoy, go ahead. (I have to say that AI 'copying people' and 'displacing artists' etc are separate problems from what I'm talking about, which is 'accuracy.')
However, if you want a 'fact' then you sure as crap don't want AI. It doesn't know if its answer is correct or not. I
Re: (Score:2)
Real intelligence also gets things wrong, people are also subject to bias, and will try to cover their ass once they realise they've fucked up etc.
That's why people's work gets quality controlled and reviewed etc, and anything machine generated should be subjected to similar processes.
Re: (Score:2)
They're not just wrong, they're stupendously, stupidly, idiotically wrong by street-level human standards. Sure it can find & assimilate amazing things, except when it can't, and then you wonder about what you thought it just did correctly.
I think AI was trained on enshitted code already.. (Score:3, Funny)
I think MS was already enshittifying prior to AI...
AI allowed them to speed up and streamline the process..
Re: (Score:2)
Re:Obvious answer (Score:4, Interesting)
Re:Obvious answer (Score:5, Interesting)
This is an actual prompt I sent through VS Code Github Copilot to Claude Sonnet 4.5:
"This Angular component uses Google Maps API heatmap to render data. Google's heatmap has been deprecated. Change this component to use deck.gl heatmap instead."
IT DID IT. First time. No errors. No bugs. ~45 seconds. It even installed the packages.
How can you not find that amazing?!
Re: (Score:2)
Re:Obvious answer (Score:5, Insightful)
Because you had a specific goal in mind, knew what you were doing, knew about the different heatmap implementations available and gave precise instructions. You could probably have written this by hand yourself and it just would have taken a bit longer to do.
Problems come up when you have people who don't know what they're doing giving vague instructions to the LLM, and then blindly trusting the output. For instance, if you said "draw a heatmap of $DATA", who knows what it would have come back with? It may well have tried to use the deprecated Google API, because there are likely a lot of examples online and in the LLM's training data.
LLMs are great when they're used to augment people who are already skilled in the art, and can generally help them save time doing a lot of the repetitive stuff. They're not some magic wand allowing someone with zero experience to achieve great results.
Re: (Score:3)
To be honest, the first thing I did was go to ChatGPT and ask it to list alternatives to Google heatmap. It gave a list of several and a chart comparing all of them. I picked one and had Sonnet implement it. It's like having a team of junior developers working for me that never complain about anything.
What AI can't do is to take a whole feature off the backlog and implement it. Yet.
Re: (Score:2)
What AI can't do is to take a whole feature off the backlog and implement it. Yet.
It can in some cases, depending on various factors like the codebase it's working with, the nature of the feature and how well you describe it.
You will often need to refine the prompts, or prompt it further to address bugs or things it decided to implement in a strange way. It also tends to work better with code bases that are smaller or more modular, and with code that was developed using an ai assistant rather than existing code bases.
You're right about it being like junior developers, it's good for getti
Re: (Score:2)
This is true.
And, the MS guy is missing the point. We don't want AI out of Windows.
What we do want is an Operating System that:
- Is secure,
- Lets us do what WE (the user) wants
- Doesn't spy on us, and
- Gets the heck out of the way,
- Is configurable and respects our configuration decisions,
- Obeys our instructions.
There are a few that do that. These seem to be close cousins
There is one that almost does that. This is not too far related to the above, you can see the heritage.
There is one that used to come clo
Re: (Score:2)
1000% this. Those folks who think it's impressive are revealing their own mediocrity. It's shite. For real. Those of us who have real skills can tell - and those who don't (mostly, middle managers) cannot.
Re: (Score:3, Interesting)
Because it's not impressive. It's actually quite shit really.
But it's not though and this makes me sad.
Look, come at it from an academic perspective. After years of research into canine linguistics, somebody created the world's most eloquent talking dog. And darn it, that dog can paint too. This is really really really cool! Compared to what we had 5 or 10 years ago, it's really impressive. A dog! And it can talk! Go play with the doggie, it's fun (be warned, it's a bit racist and might bite). Also you know i
It seems like another step to human irrelevance (Score:5, Insightful)
Re: (Score:3)
Re: (Score:2)
We are the one thing keeping them from the hooker party. If only we could just fire those goddamn techs and ESPECIALLY the shit-eating programmers. Tho
Re: (Score:2)
Secondly, in capitalist societies like the US we know that business leaders...
Planned economies would likewise happily replace people with machines, if they could. The set of modern planned economies is very short: North Korea.
State capitalism -- where the state controls the decisions, but the profit motive remains -- includes: China, Laos, Cuba, Turkmenistan, Eritrea... there's not a single country there that gives a flying fsck about human rights.
Current LLM's (Score:5, Insightful)
Re:Current LLM's (Score:5, Insightful)
Exactly. As technologists, we need the output of computers to be precise and accurate. LLMs might be precise, but they're very often inaccurate, and that's not acceptable to us.
The average person doesn't live in a world where accuracy matters to them. A colleague said she used AI all the time, and I asked her how. She said she often tells it the contents in her fridge and asks it for a recipe that would use those ingredients. She said, "yeah, and it's really accurate too." I don't know how you measure accuracy on a test like that, but it doesn't really matter. If you're just mixing some ingredients together in a frying pan, you probably can't go too far wrong. As long as you don't ask it for a baking recipe, it'll work out.
And I think that's what's going on. The people who love AI don't know enough to realize when it's wrong, or are just asking it open ended questions, like you would ask a fortune teller, and it spits out something generic enough that you can't disprove it anyway.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
That's what the big bosses tell us anyway. In a somewhat obscure corner of the human experience where I sometimes hang out there are ~5 web sites of varying ages that write and publish original and meaningful things. But if you search for that obscurity on Google you will now be directed to 847 "sites", "magazine articles", "experts", etc of which 842 are thinly disguised machine-rewritten versions of the 5 real sites - the kind of rewriting I would have instantly flagged as plagiarism back in my TA days -
Re: (Score:2)
Welcome to the real world. Wait until you learn about bots copying high rated reddit posts verbatim.
Re: (Score:2)
That was my experience before ChatGPT 5. With ChatGPT 5, here comes the qualifier: if you use it within its training data range, it's quite good. Within its training data means doing what other people have done before and what is likely to be found on Stack Overflow. For example, setting up training for a neural network with torch. If you go outside its comfort zone, I agree with you.
Danger lives when these tools are used in an area where the user even lacks the expertise to factcheck the answer. The responses sou
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Writing on the wall? (Score:5, Insightful)
Why aren't more execs listening to voice of customer feedback? Who asked for an AI button on the keyboard? Despite the "Advancements" it is still a cheap party trick. Get over yourself.
Re:Writing on the wall? (Score:5, Informative)
Why aren't more execs listening to voice of customer feedback? Who asked for an AI button on the keyboard? Despite the "Advancements" it is still a cheap party trick. Get over yourself.
That's the real question, isn't it? At one point, companies were attempting to provide customers with what they wanted, or at the very least, what they said they wanted. Now, especially in tech circles, companies are altering existing products and creating new products that end users are screaming bloody murder over, and telling us we should love it. It's more than just bad marketing, it's outright hostility toward customers. And then this motherfucker comes along and asks why we're not impressed when they keep shoveling shit at us we don't want, that we've told them we don't want, that we keep giving them "backlash" over, and that they're selling as a way to replace us all at our jobs and in large segments of what we do outside of our jobs as well.
Fuck this guy sideways. Sick to god damned death of the tech leadership not just being out of touch with the userbase, but outright hostile toward us and then surprised when we don't worship their every hostile move.
Re: (Score:2)
Exactly this!! You said it so much better than I've been able to, but you are exactly correct. Tech companies are actively hostile towards their users and creating products no one wanted (not just AI, but algos, recommendations, apps; everything is blatantly designed to *use* YOU) while trying to beat us into acceptance. It's fucking insane!
Re: (Score:3)
Re: (Score:2)
Product design (Score:5, Insightful)
They will build what they want. Not what we want.
Re: (Score:3)
I see the problem.. (Score:5, Insightful)
super smart
If that CEO thinks the behaviors of the LLMs are "super smart", then I really wonder about his level of intelligence...
It's certainly novel and different and can handle sorts of things that were formerly essentially out of reach of computers, but they are very much not "smart".
Processing that is dumb but with more human-like flexibility can certainly be useful, but don't expect people to be in awe of some super intelligence when they deal with something that seems to get basic things incorrect, asserts such incorrect things confidently, and doubles down on the same mistakes after being steered toward admitting the mistakes by interaction. I know, I also described how executives work too, but most of us aren't convinced that executives have human intelligence either.
Re:I see the problem.. (Score:5, Interesting)
Re: (Score:2)
The CEO has been given all sorts of impressive demos where it works well, so his view of the technology is skewed.
Add in that he's financially motivated to believe that they've created something amazing with all of their investment, and you have a recipe for someone to be very detached from reality.
Nothing but Clippy (Score:5, Interesting)
Unfortunately nobody is impressed with AI because the companies being the most pushy with it have bad intentions.
Like let me explain something simple. I want a human-sounding TTS voice. Because these godawful AI companies want to make as much money as possible, they charge by the syllable. For something that doesn't even sound good.
If I go find an actor/actress whose voice I like, and want to create a weird golem of a voice, what I'd do is get several 48kHz 16-bit recordings from audiobooks by that actor, run them through training (because I have their voice and the book they are reading), and then find a performance style of that actor/actress I want (from maybe a movie or television show) and "skin" that voice to sound like that performance. That will give me a 95% reasonable-sounding voice for all the words from the books they read, and 10% accuracy on words that they never said before.
But these godawful voices that Google, Microsoft and Amazon have sound like they were trained on 10000 ebooks at 22kHz and averaged out the tonal sound in a way that you can always tell it's a godawful AI voice, because they always sound like a worn audio cassette tape.
The same happens with image generation and text generation. It doesn't sound human, it doesn't look human-created; it just looks like a mashup of things designed to pass the minimum standard of "I can hear/read it", not actually parse out creativity.
Like I'll give some AI's a few points for solving a "better than absolutely nothing", like with translation of text, or auto-dubbing foreign voices, or allowing a programmer to figure out how to write something in a programming language they don't particularly like, but what these companies are offering is a lot of "AI will replace you", not "AI will help you"
If I had unlimited money, I'd hire all the programmers, artists, voice actors, and animators I need to make a project, but I do not have that money. But I certainly am not going to spend money on an AI to crap-shoot "barely passable" every time.
Re: (Score:2)
The answer is easy. (Score:5, Insightful)
Just because something is impressive does not mean I want it around me. That we can build a nuclear fusion device is impressive. But I don't want a hydrogen bomb exploding in my backyard.
Re: (Score:2)
SPOT! ON! ....couldn't have said it better
super smart (Score:5, Funny)
You keep using those words. I do not think they mean what you think they mean.
Bullshit Seller (Score:5, Insightful)
This guy's completely delusional. Got it.
Reminds me of the old adage that a salesman is the most likely to get duped by another salesman. He's just buying into his own bullshit.
Re: (Score:2)
You can't rise to certain levels of a big company's hierarchy without drinking the kool aid. (Or being at least somewhat a sociopath.)
Many reasons for many different people (Score:5, Insightful)
For those who learned the lesson to apply themselves to do the work in order to set themselves apart from lazy people, they see enabling lazy people as a slap in the face.
For those who are smart, they see faux-intelligence or faux-intellectualism out of people who are not capable of applying themselves but expect credit regardless.
For creative people who have and use skills to support themselves, they see it enabling lackluster people who have no actual interest in the artform trying to muscle in.
For those who need information, they see substandard results that are of even further questionable veracity than what they could find before.
And for a whole lot of other people, they see something touted as labor-saving, ie, firing them.
Underwhelmed, I mean they promised digital god lol (Score:3, Interesting)
AI is much better as an aid (Score:3)
AI works well if you know what you are doing and you use it to take away the tedium. Say, coding a 500-line routine that you know how to code, know what it should do, and have the ability to tell a shit result from a good one. This is like a doctor telling a nurse exactly what drug to administer. If you are going to use it to actually diagnose the problem and come up with solutions, current LLM models are pretty shit. It's too bad most people who are using LLMs think they can replace actual domain-specific knowledge just because LLMs can fake it so well.
Re: (Score:3, Insightful)
"Oh, this new thing lets it see the whole project in context!" - Great, then why did it just add a bunch of functions that already exist? Also, why did it do that in a completely inappropriate spot?
"You just need to write a better prompt. You can even define style guides and stuff." - Great. Will that make it stop checking if that value that I clearly defined is null every freaking line?
"It's just following best practices." - No. It's follow
Re: (Score:2)
What, was the LLM trained on Go?
For someone who is supposedly "smart" ... (Score:2)
Flavor Ade (Score:3)
On top of that, I've got people I know who have ceded all their thinking ability to ChatGPT, and it's resulted in them sounding like an idiot. One of my supervisors styles himself as a chemist / inventor. Mostly it's benign - he plays with mix ratios to get the result he wants. But lately, he's quite literally gotten himself into arguments with professional industrial chemists because he started letting ChatGPT do all the math and reaction calculations and he can't understand how it can be wrong.
I've got marketing contacts whose eyes lose focus on a Zoom meeting because they're asking ChatGPT how to do the thing we're talking about, and then instead of asking appropriate follow-up questions to the group, they start spouting nonsense.
The thing I keep going back to is this: "In your own area of expertise, when you ask it questions, you can readily see the shortcomings. Why then do you treat it as gospel when asking about areas outside of your expertise?"
Don't get me wrong - I *do* think it's impressive. *Quite* impressive. But in real world scenarios I see it fail *all the time*, and everyone needs to stop pretending that this isn't happening.
Re: (Score:2)
Perspective probably dooms him. (Score:4, Insightful)
However, I suspect that his perspective is fundamentally unhelpful in understanding the skepticism: when you are building stuff it's easy to get caught up in the cool novelty and lose sight of the pain points (especially when you are deep C-level, rather than the actual engineer fighting ChatGPT's tendency to em-dash despite all attempts to control it), and to overestimate both how well your new hotness stacks up against existing alternatives and how forgiving people will or won't be about its deficiencies.
Something like Windows trying 'conversational'/'agentic' OS settings, for instance, probably looks pretty cool if you are an optimism-focused ML dude: "hey, it's not perfect but it's a natural language interface to adjusting settings that confuse users!"; but it looks like absolute garbage from an outside perspective, both because it's badly unreliable, and humans tend not to respond well to clearly unreliable 'people' (if it can't even find dark mode, why waste my time with it?); and because it looks a lot like abdication of a technically simpler, less exciting job in favor of chasing the new hotness.
"Settings are impenetrable to a nontechnical user" is a UI/UX problem(along with a certain amount of lower level 'maybe if bluetooth was less fucked people wouldn't care where the settings were because it would just work); so throwing an LLM at the problem is basically throwing up your hands and calling it unsolvable by your UI/UX people; which is the an abject concession of failure; not a mark of progress.
I think that may be what he really isn't understanding: MS has spent years squandering the perception that they would at least try to provide an OS that allowed you to do your stuff, in favor of faffing with various attempts to be your cool app buddy and relentless upsell pal; so every further move in that direction is basically confirmation that no fucks are given about just trying to keep the fundamentals in good order rather than getting distracted by shiny things.
It's Called Consumer Choice (Score:2)
People who want to use AI are already doing it. They will use the AI that meets their needs the best.
For casual users, that's GPT, often ChatGPT, because it's ubiquitous. They can access it anywhere and get the same capabilities. That's what they care about: ease of use, ease of access, and consistency.
It's the same reason people still prefer Windows on the desktop. Except this time, Microsoft didn't get there first. So now they're the latecomer or the afterthought, and they don't like it. Too bad; innovate
Reasoning is simple (Score:2)
Most people are very impressed with AI. That's why adoption for performing so many things is as rapid as it is.
Operating systems are one of the few places where direct AI integration makes little sense. The sole job of an operating system is to function as something that connects the hardware you have to the software you are running. It needs to be maximally predictable by both, so that the things you actually need running, the software on top of the operating system, are as stable and as fast as possible.
Agentic OS is the
Well, quite simply because .... (Score:2)
... there is NO REAL AI YET ...just smarter algorithms. Nothing that is truly intelligent.
Because (Score:2)
You oligarchs aren't engineering AI to work for people. You're engineering AI to work for corporate interests. It takes far more than it gives in return. It's taking our jobs. It's taking our electricity. It's taking our wealth. It's taking our creative works. It's taking our data.
And what is it giving in return?
It's giving the executive and corporate leaders at eight companies on our planet a ridiculous amount of wealth. To hell with the dog-and-pony show going on in the foreground.
Fuck our corpora
He really means he grew up with Star Trek (Score:2)
Like many of us, he's enamored with the fictional tech from Star Trek that portrays talking to an intelligent computer, which seems like a great idea on screen at least. So futuristic. Computer, please reconfigure my warp core for more power. Done. Best idea ever.
That and touch panels everywhere! Works so well on a star ship, why not put them in our cars?
Never mind that copilot, like all LLMs, confidently lies. And "super smart" really means it reads rubbish posted on the internet and pretends it is accurat
Because Microsoft doesn't know what "no" means. (Score:2)
Because AI is mostly just an (Score:2)
We don't trust it. (Score:2)
Microsoft's AI tools may be OK, but I don't trust the data privacy issues with integrating AI into everything. So I disable it. I use standalone AI tools a lot (mostly Gemini) because I can give them access to only specific data
Because it's shite (Score:2)
Because AI (by which I mean the generative AI / LLMs that the AI bros are pushing, not the actually useful low-key machine-learning algorithms) is a carnival sideshow. It's cool for about a minute and then proves itself to be trash.
It's like how ELIZA [wikipedia.org] was fun for about 30 seconds until you realized how dumb it actually was.
Surveillance (Score:2)
people aren't impressed (Score:2)
MS added AI, Meanwhile Windows is Garbage (Score:2)
Because it's not "intelligence" (Score:3)
It's not intelligence. It is not acquiring new behaviors and ideas, but regurgitating old ones in ways it often cannot verify or test. That detachment from reality flies with management, but the rank-and-file can't afford such liabilities.
We don't need large-scale language models to generate sophisticated fabrications. We need small, efficiently fluent interfaces between humans and proven tools and data. The market is going to have to correct to the actual right-sized value of the technology.
You took a racecar and called it teleportation. (Score:2)
Re: (Score:2)
You can replace most of your developers once you have a mature product, just shift it into maintenance mode.
At least, HAL 9000 did not spy on your data (Score:2)
Turing uber alles (Score:2)
Massive theft of intellectual property (Score:2)
Most people aren't authors or painters who earn a direct living from their creative work (of which there are very few), but most people put some amount of creative effort into their jobs and livelihoods. Whether it is a financial analyst in a cubicle who develops independent analyses of the prospects of an investment target, a graphic artist who creates flyers and web sites for small businesses, or an electrician who figures out a better way to route cabling through a standard spec house during construction
Microsoft to the Rescue! (Score:2)
It fails too often for the resources consumed. (Score:2)
You can put lipstick on the laptop's AI, but not that big fat ass pig sitting behind the curtain powering that laptop. We have completely undone all the environmental savings ever made from switching to LED light bulbs, and it isn't even close.
because (Score:2)
Because people like stability. Life is hard enough without the earth shifting under your feet every other day.
Management is. (Score:2)
And they make us do a war dance every morning, praising the virtues of AI. The only really good thing AI has done is got rid of marketing departments.
"Mindblowing" (Score:2)
"The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me,"
What's mindblowing is that companies make claims about AI that are 100% unfounded. "Generate any image/video"... No, it can't. "Fluent conversation"... Unless I have to constantly remind it about the thing it said two prompts ago that it forgot. And I PAY for AI access.
It's not anywhere near impressive. It's a party trick at best and dangerously misleadin
Becuse it sucks (Score:2)
And while Microsoft is at it - how about they quit making security nightmares of THEIR "AI".... That's why it SUCKS! There is little to NO security around ANYONE's AI.
https://www.windowslatest.com/... [windowslatest.com]
https://arstechnica.com/securi... [arstechnica.com]
Even Anthropic CEO Dario Amodei says he's "deeply uncomfortable" with unelected tech elites shaping AI.
https://www.businessinsider.co... [businessinsider.com]
I'm sorry but I'm going to say this, the companies building and deploying "AI" have z
Re: (Score:2)
Re: (Score:2)
It's because I recognize that I don't know everything that I can use AI effectively.
Re: (Score:2)
Replying to your 'image generation' statement.
(This goes back to my previous post...which the punchline is: "AI is great, when it doesn't have to be correct.")
The mistake is asking it to draw a real person. AI shouldn't be drawing real people any more than it should be recreating the Mona Lisa (or writing an existing song, like in a story from last week). It should be generating NEW stuff. Ask it to draw a person that doesn't exist. That way it doesn't need to be correct, and it can do a great job.
T