Intel's AI PC Chips Aren't Selling Well (tomshardware.com) 56
Intel is grappling with an unexpected market shift as customers eschew its new AI-focused processors for cheaper previous-generation chips. The company revealed during its recent earnings call that demand for older Raptor Lake processors has surged while its newer, more expensive Lunar Lake and Meteor Lake AI PC chips struggle to gain traction.
This surprising trend, first reported by Tom's Hardware, has created a production capacity shortage for Intel's 'Intel 7' process node that will "persist for the foreseeable future," despite the fact that current-generation chips utilize TSMC's newer nodes. "Customers are demanding system price points that consumers really want," explained Intel executive Michelle Johnston Holthaus, noting that economic concerns and tariffs have affected inventory decisions.
Uhm, ChatGPT is a website (Score:5, Insightful)
Why would a normal person need an AI chip on their local computer to talk with an online chatbot?
These AI PCs are destined to flop. It's the hardware maker's marketing team preying on the AI-everything media frenzy.
Re: (Score:2)
No it isn't, but there's an instance of ChatGTP hosted on the company's website.
Re: (Score:2)
He was talking about "normal people", to whom it is definitely a web site.
I'm not saying mass adoption of some locally-computed AI feature can't happen, but it certainly hasn't.
Re: (Score:2)
ChatGPT (not GTP) is a website, which is a frontend to models like GPT-4o.
If you use the model via API you have to do things like keeping your chatlogs yourself. ChatGPT is a web client that adds the interface, the chat archive and other convenience features.
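To make the distinction concrete, here is a minimal sketch of what "keeping your chatlogs yourself" looks like over the raw API. It assumes the official `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name is illustrative.

```python
# Minimal sketch: over the raw API, conversation state is your problem.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "user", "content": "Why do new laptops ship with NPUs?"}]

resp = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": resp.choices[0].message.content})

# ChatGPT the website stores the archive for you; here you do it yourself.
with open("chatlog.json", "w") as f:
    json.dump(history, f, indent=2)
```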
Re: (Score:2)
Fashion.
I always like to pass on v1.0 tech (Score:3)
Why would a normal person need an AI chip on their local computer to talk with an online chatbot? These AI PCs are destined to flop. It's the hardware maker's marketing team preying on the AI-everything media frenzy.
The focus on chatbots and such is marketing BS. In reality, though, having ML support in the CPU is actually useful. For example, I've seen how even the modest ML support on an Apple Watch allows some speech analysis to happen onboard the watch, without having to be sent to a cloud server for processing. So in theory, Ultra chips could lead to greater privacy.
I want to stress: greater privacy, "IN THEORY". The "Recall" spyware nonsense completely undermines such hopes.
So I'm leaning towards passing on Ultra C
Re: (Score:2)
You're deluding yourself if you think that's what you're seeing.
Guess again. I know the developers in one of these cases. I know how pleasantly surprised they were when the ML model fit on the Apple Watch.
Re: (Score:2)
Why would anybody need SSE, AVX, or a GPU? Lots of people don't, but they're very handy for lots of things people end up wanting to do.
"AI chips" are just processors that have auxiliary units that can run multiply-add instructions in parallel. That's useful for neural networks but also lots of other things. Many of the big audio, video and image processing packages have support for "NPUs", for example, including Blender, which uses it for rendering. Most of the rest can use a GPU. There are also actual loca
Picture Editing, Video Editing, Sound Editing (Score:2)
I just bought an Arrow Lake processor for the future.
Sure they can be used for Chatbots, but where they will really start to shine is in productivity software.
We are starting to see the beginnings of this with real-time background removal and better green-screen removal.
How about suggestions on what can be cut from your video and transitions to use? What if it learns how you edit and can start making suggestions on how you already work?
Re: (Score:3)
This is how "it" begins... hundreds of millions of PCs with AI-enabled processors, all interconnected via the internet into a huge cybernetic processing array -- then the code drops and *bingo*... game over, "sentience" and the end of mankind's reign on planet earth.
Okay... it's just a dystopian thought based on "The God Question [imdb.com]"
Re:Uhm, ChatGPT is a website (Score:4, Informative)
There are those among us who want to run a local LLM/AI on their machine. Or more than one, if those are trained for specific purposes (specializations).
A local AI chip would be nice for such persons. I know, as I am one of those. I don't care much for LLM/AI running in the cloud. But local ones? Those are fun to play around with.
Yes, ChatGPT is a website; a correct statement on your end. But remember, it most definitely isn't the 'be all, end all' it claims to be. For the times I do need an online AI, I like Claude 3.7 Sonnet much better than ChatGPT the times I tried it. Yet I found that a locally running 30B model with a proper RAGging solution can give ChatGPT's subscription services a decent run for their money. And all that without being bogged down by (artificial) limitations and/or prohibitive 'pay-as-you-go' fees on your credit card. And then there is the privacy part, which heavily favors the local LLM/AI over the cloud ones.
You can and should consider a cloud-based AI/LLM a 'personal assistant who knows...' and a local LLM/AI a 'personal assistant who knows... to be discreet.'
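For the curious, the local-model-plus-RAG setup described above can be sketched in a few lines. This assumes the `llama-cpp-python` package and a locally downloaded GGUF model; the file name, the toy corpus, and the hash-based "embedding" are stand-ins, not a real retrieval stack.

```python
import numpy as np
from llama_cpp import Llama

docs = ["Meeting notes: ship the Q3 build on Friday.",
        "HR policy: expenses over $25 need receipts."]

def embed(text):  # toy stand-in; real setups use a proper embedding model
    vec = np.zeros(256)
    for tok in text.lower().split():
        vec[hash(tok) % 256] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

def retrieve(query):  # return the most similar document
    q = embed(query)
    return max(docs, key=lambda d: float(embed(d) @ q))

llm = Llama(model_path="local-30b.Q4_K_M.gguf", n_ctx=4096)  # hypothetical file
question = "When does the Q3 build ship?"
prompt = f"Context: {retrieve(question)}\n\nQuestion: {question}\nAnswer:"
print(llm(prompt, max_tokens=64)["choices"][0]["text"])
```

Nothing in that loop ever leaves the machine, which is the whole point of the "discreet assistant" framing.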
Re: (Score:2)
Yes. But this is about market share, and programmers into AI are a sliver of a sliver.
Re: (Score:2)
Yes. But this is about market share, and programmers into AI are a sliver of a sliver.
And those programmers are currently creating the programs that the rest of the market will be using every day in a few years' time. I can't speak for others, but I am very much interested in hardware for running a local LLM. Getting strong market share in that sliver is very much in the interest of hardware manufacturers.
Re: (Score:2)
This surprising trend
ITYM:
This unsurprising trend to anyone but Intel's marketing department
Re: (Score:2)
Because it works offline. Because it's private. Because it's cheaper.
Imagine you want to create text summaries in the background, or mass-tag your image collection. It gets quite expensive if you try that with the OpenAI API. But if your PC has an NPU, you can do it in the background on your own machine without high CPU load.
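As a rough illustration of that workflow: the usual route for this kind of background inference is an exported ONNX model run through `onnxruntime`, with the runtime falling back to CPU when no accelerator is present. The model file, the input name, and the provider list below are assumptions; provider names vary by platform and hardware.

```python
import numpy as np
import onnxruntime as ort

# Try an accelerated provider first, fall back to plain CPU.
session = ort.InferenceSession(
    "image_tagger.onnx",  # hypothetical pre-exported classifier
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in image
(scores,) = session.run(None, {"input": image})  # input name "input" is assumed
print("top tag index:", int(scores.argmax()))
```

Run that over a photo library overnight and the marginal cost is electricity, not per-call API fees.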
Gave All The Profit To The Suits. (Score:1)
Instead of to the engineers, and then they left, and now you're crying about making shitty parts. Classic greedy crybabies.
It's ok. (Score:2)
Intel is requiring employees to be in the office 4 days a week. That will surely fix this problem.
(Note: this is sarcasm)
The main issue is the hybrid architecture (Score:5, Insightful)
The efficiency cores need to be disabled anyway if you do any virtualization, so they are a waste of space on the die that could be used by something doing actual work. At best, they artificially inflate the marketing specs.
Re: (Score:2)
They're quite nice when you're trying to be, you know, energy efficient... I quite like 'em for battery life.
The NPU seems much more useless to me; other than slightly improving greenscreen functionality, I have no idea what to use it for.
Re: (Score:2)
In 2-3 years you will have an idea. All kinds of programs will offload their smaller AI workloads to the NPU, so you won't need a powerful graphics card just to do some text summarization.
Do consumers care about AI? (Score:3)
The elephant in the room is that none of these companies betting their futures on "AI" (whatever they mean by that) has yet proven that consumers are interested. Last I heard, Apple Intelligence wasn't exactly driving up iPhone sales figures, either. They say the new MacBook Pros have it, too ... great?
I was at Best Buy the other day, and I saw an electric toothbrush that claimed to clean your teeth with AI. It cost $360. Does anybody buy this stuff? Even as gifts? I just can't see how slapping some mostly-meaningless tag onto a product that people are already familiar with, then upping the price, is going to be of interest to any average person.
Only for free low-effort crap (Score:2)
Nobody is willing to pay for it.
Businesses, on the other hand, love it because it lets them tell who contributes nothing to the bottom line (uses AI) from the employees who are actually useful (doing real work without AI).
Re: (Score:1)
Wow, that is utterly delusional, I wrote 1.2 million lines of code last month porting from one language to another. AI writes better code than 70% of developers out there.
Re: (Score:2)
When I checked, a month or so ago, Apple "intelligence" was a bad joke.
Because most people couldn't care less? (Score:3)
What would you use one for? (Score:3)
Does Intel really believe end-users will be running or developing AI models on their laptops/desktops? Because while I'd like to have a 5.6 GHz CPU, the likelihood of a non-developer building or running a model on their desktop is between slim and none.
And if you are developing or running an AI model, why wouldn't you buy the higher-performing NVIDIA GPUs?
There really isn't any end-user case for running AI models.
Re: (Score:2)
the likelihood of a non-developer building or running a model on their desktop is between slim and none
Bizarre conclusion. Non-developers will be running local models as soon as these models are incorporated into pre-installed, easy-to-use consumer software. Privacy is a major driver.
Re: What would you use one for? (Score:3)
The use case is privacy. Lots of companies are never going to let their employees paste corporate data into a third party website. Move that execution to the local machine, and a bunch of new use cases open up.
Re: (Score:2)
Well, every Nvidia GPU of the last few generations does it, probably whenever you play a game, unless you specifically turn it off. Macs, and probably Windows machines too, are constantly doing it for things like searching images. Your video conferencing software is probably running one to clean up the audio and another for the video.
Re: (Score:2)
Ah, Sam Altman has you convinced that ChatGPT is AI and nothing else, hey?
It's absolutely AI. Neural networks and deep learning, even. The naive Bayesian spam filter you've had on your e-mail for the last twenty years is also AI, although not a neural network and not particularly deep. And yes, you can run it on your CPU. You can run chatGPT on your CPU too, or on a 286 from the 80s. It runs a lot better on hardware that's at least vaguely optimized for it, though.
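The spam-filter point is easy to demonstrate: a naive Bayes classifier is a trainable model that fits in a few lines and runs instantly on any CPU. A toy version with scikit-learn (the four-message corpus is obviously illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

mails = ["win free money now", "cheap pills discount offer",
         "meeting moved to 3pm", "lunch tomorrow?"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(mails), labels)

print(clf.predict(vec.transform(["free discount offer"])))  # -> [1], spam
```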
Re: (Score:2)
You can run chatGPT on your CPU too
Heh, ya. But at about 1 token per 15 years.
Re: (Score:2)
1,000 CPUs with a fast IPC bus could do the job just fine, as long as they had high local memory bandwidth.
Re: (Score:2)
My, were you around in 1955 at the Dartmouth workshop where the term artificial intelligence was defined? Because your account differs markedly from the written record.
Re: (Score:2)
Wait, so before the 1930s chatGPT was a thing?
I notice you didn't reply to my other reply to you. A Bayesian classifier is a logistic regression model, which is an ANN. They're not even different things. "Pure stat thing" isn't a thing. Not to mention that the "naive" part of naive Bayesian classifier makes pure stat people shiver. It's very much a machine learning "screw the statistical rigor and see what happens" thing.
Re: (Score:2)
Time to stop digging, man.
Re: (Score:2)
People repeating this nonsense annoys me, as it annoys everyone who worked in AI prior to 2015 or so, so let me give you, and anybody else reading, a little math lesson:
The formula for linear regression, a "statistical technique" if ever there was one, is:
Y=X\beta+\epsilon
Which you can plug into a TeX renderer or just look at the picture on Wikipedia:
https://en.wikipedia.org/wiki/... [wikipedia.org]
where Y is the prediction, X is the input, beta is a matrix (or vector) of learned weights and epsilon is the error.
Linear regre
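The comment is cut off, but the claim it is building toward is easy to verify in code: logistic regression is the same X-times-beta machinery with a sigmoid on top, i.e. a one-neuron network trained by gradient descent. A plain-NumPy sketch of that claim:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # labels from a linear rule

w, b = np.zeros(2), 0.0                    # the beta weights, plus a bias
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b))) # sigmoid "activation"
    grad = p - y                           # dLoss/dlogit for log-loss
    w -= 0.1 * (X.T @ grad) / len(y)       # gradient-descent update
    b -= 0.1 * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("accuracy:", ((p > 0.5) == y).mean())  # close to 1.0
```

Call it statistics or call it a neural network; the math is the same either way.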
Somewhat unexpected. (Score:2)
Performance was basically a wash, but that's the generation where Intel significantly improved the efficiency situation.
Re: (Score:2)
Meteor Lake kinda sucked. Lunar Lake is actually pretty good (but expensive).
Curious that the summary doesn't mention Arrow Lake.
But is it powerful enough? (Score:2)
I don't NEED an AI chip ! (Score:2)
What does an "Intel AI" chip do for me? (Score:2)