Intel's AI PC Chips Aren't Selling Well (tomshardware.com) 56

Intel is grappling with an unexpected market shift as customers eschew its new AI-focused processors for cheaper previous-generation chips. The company revealed during its recent earnings call that demand for older Raptor Lake processors has surged while its newer, more expensive Lunar Lake and Meteor Lake AI PC chips struggle to gain traction.

This surprising trend, first reported by Tom's Hardware, has created a production capacity shortage for Intel's 'Intel 7' process node that will "persist for the foreseeable future," despite the fact that current-generation chips utilize TSMC's newer nodes. "Customers are demanding system price points that consumers really want," explained Intel executive Michelle Johnston Holthaus, noting that economic concerns and tariffs have affected inventory decisions.
  • by NaCh0 ( 6124 ) on Friday April 25, 2025 @04:10PM (#65331149) Homepage

    Why would a normal person need an AI chip on their local computer to talk with an online chatbot?

    These AI PCs are destined to flop. It's the hardware maker's marketing team preying on the AI-everything media frenzy.

    • ChatGPT is a website

      No it isn't, but there's an instance of ChatGPT hosted on the company's website.
      • He was talking about "normal people", to whom it is definitely a web site.

        I'm not saying mass adoption of some locally-computed AI feature can't happen, but it certainly hasn't.

      • by allo ( 1728082 )

        ChatGPT (not GTP) is a website, which is a frontend to models like GPT-4o.
        If you use the model via API you have to do things like keeping your chatlogs yourself. ChatGPT is a web client that adds the interface, the chat archive and other convenience features.

    • Equally, why would you need a Copilot key, a shiny MacBook, or flared jeans?

      Fashion.
    • Why would a normal person need an AI chip on their local computer to talk with an online chatbot? These AI PCs are destined to flop. It's the hardware maker's marketing team preying on the AI-everything media frenzy.

      The focus on chatbots and such is marketing BS. In reality, though, having ML support in the CPU is actually useful. For example, I've seen even the modest ML support on an Apple Watch allow some speech analysis to happen onboard the watch, without having to be sent to a cloud server for processing. So in theory Ultra chips could lead to greater privacy.

      I want to stress: greater privacy, "IN THEORY". The "Recall" spyware nonsense completely undermines such hopes.

      So I'm leaning towards passing on Ultra C

    • ChatGPT recommended that I avoid the AI chips and buy the previous generation. The AI NPUs are too weak, or too overpriced, to be economical. You are better off with a GPU with tensor cores and an older, pre-NPU CPU that is faster and cheaper.
    • by ceoyoyo ( 59147 )

      Why would anybody need SSE, AVX, or a GPU? Lots of people don't, but they're very handy for lots of things people end up wanting to do.

      "AI chips" are just processors that have auxiliary units that can run multiply-add instructions in parallel. That's useful for neural networks but also lots of other things. Many of the big audio, video and image processing packages have support for "NPUs", for example, including Blender, which uses it for rendering. Most of the rest can use a GPU. There are also actual loca
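      The "parallel multiply-add" point above can be made concrete. This is a purely illustrative, plain-Python sketch (no real NPU API involved) of the multiply-accumulate (MAC) operation that these auxiliary units run in bulk; a neural-network dense layer is essentially many such chains executed in parallel:

      ```python
      # Illustrative only: the core operation an NPU accelerates is the
      # multiply-accumulate (MAC), applied in bulk. A dense neural-network
      # layer is a matrix-vector product built entirely from MACs.

      def mac(acc: float, a: float, b: float) -> float:
          """One multiply-accumulate step: acc + a * b."""
          return acc + a * b

      def dense_layer(weights: list[list[float]], x: list[float]) -> list[float]:
          """A dense layer as rows of dot products, i.e. chains of MACs.
          An NPU runs many of these chains concurrently in hardware."""
          out = []
          for row in weights:
              acc = 0.0
              for w, xi in zip(row, x):
                  acc = mac(acc, w, xi)
              out.append(acc)
          return out

      # Tiny example: a 2x3 weight matrix times a 3-vector.
      W = [[1.0, 0.0, 2.0],
           [0.5, 1.0, 0.0]]
      x = [3.0, 4.0, 5.0]
      print(dense_layer(W, x))  # [13.0, 5.5]
      ```

      The same MAC pattern underlies audio filtering, image convolution, and video processing, which is why those packages can also benefit.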

    • Right now the NPU portions of these processors are mostly staying dormant.
      I just bought an Arrow Lake processor for the future.
      Sure they can be used for Chatbots, but where they will really start to shine is in productivity software.

      We are starting to see the beginnings of this with real-time background replacement and better green-screen removal.
      How about suggestions on what to cut from your video and which transitions to use? What if it learns how you edit and can start making suggestions based on how you already work?
    • This is how "it" begins... hundreds of millions of PCs with AI-enabled processors, all interconnected via the internet into a huge cybernetic processing array -- then the code drops and *bingo*... game over, "sentience" and the end of mankind's reign on planet earth.

      Okay... it's just a dystopian thought based on "The God Question [imdb.com]"

    • by Shades72 ( 6355170 ) on Friday April 25, 2025 @09:41PM (#65331851)

      There are those among us who want to run a local LLM/AI on their machine. Or more than one, if they are trained for specific purposes (specializations).

      A local AI chip would be nice for such persons. I know as I am one of those. Don't care much for LLM/AI running in the cloud. But local ones? Those are fun to play around with.

      Yes, ChatGPT is a website, a correct statement from your end. But remember, it most definitely isn't the 'be all, end all' it claims to be. For the times I do have a need for an online AI, I like Claude 3.7 Sonnet much better than the times I tried ChatGPT. Yet, I found that a locally running 30B model with a proper RAGging solution can give ChatGPT's subscription services a decent run for their money. And all that without being bogged down by (artificial) limitations and/or prohibitive 'pay-as-you-go' fees on your credit card. And there is the privacy part, which heavily favors the local LLM/AI over the cloud ones.

      You can and should consider a cloud-based AI/LLM as a 'personal assistant who knows...' and a local LLM/AI as a 'personal assistant who knows... to be discreet.'
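      The retrieval step of a local RAG setup like the one described above can be sketched minimally. Real systems score documents with embedding vectors from a local model; this illustration substitutes simple word overlap so it stays stdlib-only, but the shape is the same: score, rank, and hand the top matches to the model as context.

      ```python
      # Minimal sketch of RAG retrieval (illustrative; real setups use
      # embedding similarity, not raw word overlap).

      def score(query: str, doc: str) -> int:
          """Count words shared between the query and a document."""
          return len(set(query.lower().split()) & set(doc.lower().split()))

      def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
          """Return the k documents sharing the most words with the query."""
          return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

      docs = [
          "Quarterly sales figures for the EMEA region",
          "Vacation policy and approval workflow",
          "EMEA sales forecast and pipeline notes",
      ]
      # The top-k results would be prepended to the prompt of the local model.
      print(retrieve("EMEA sales numbers", docs, k=2))
      ```

      Because both retrieval and generation run locally, nothing about the query or the document collection leaves the machine, which is the privacy argument in a nutshell.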

      • by HiThere ( 15173 )

        Yes. But this is about market share, and programmers into AI are a sliver of a sliver.

        • Yes. But this is about market share, and programmers into AI are a sliver of a sliver.

          And those programmers are currently creating programs that the rest of the market will be using every day in a few years' time. I can't speak for others, but I am very much interested in hardware for running a local LLM. Getting strong market share in that sliver is very much in the interest of hardware manufacturers.

    • This surprising trend

      ITYM:

      This unsurprising trend to anyone but Intel's marketing department

    • by allo ( 1728082 )

      Because it works offline. Because it's private. Because it's cheaper.

      Imagine you want to create text summaries in the background, or mass tag your image collection. It gets quite expensive if you try it with the OpenAI API. But if your PC has an NPU you can do it without high CPU load on your own PC in the background.
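      The cost argument above can be put in back-of-the-envelope numbers. The prices and token counts here are hypothetical, purely for illustration; the point is only that metered API costs scale with the size of the job, while a local NPU's marginal cost is roughly zero.

      ```python
      # Hypothetical numbers, not real pricing: estimate the API bill for
      # mass-tagging a photo library through a metered cloud service.

      def cloud_cost(num_images: int, tokens_per_image: int,
                     usd_per_million_tokens: float) -> float:
          """Estimated bill for tagging num_images images via a token-metered API."""
          total_tokens = num_images * tokens_per_image
          return total_tokens / 1_000_000 * usd_per_million_tokens

      # Assumed: 50,000 images, ~1,000 tokens each (image encoding plus
      # response), at an assumed $5 per million tokens.
      bill = cloud_cost(50_000, 1_000, 5.0)
      print(f"${bill:.2f}")  # $250.00
      ```

      Running the same job on a local NPU trades that recurring bill for a one-time hardware cost, and the work happens in the background without pegging the CPU.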

  • Instead of the engineers and then they left and now you are crying about making shitty parts. Classic greedy crybabies.

    • Intel is requiring employees to be in the office 4 days a week. That will surely fix this problem.

      (Note: this is sarcasm)

  • by lusid1 ( 759898 ) on Friday April 25, 2025 @04:13PM (#65331153)

    The efficiency cores need to be disabled anyway if you do any virtualization, so they are a waste of space on the die that could be used by something doing actual work. At best, they artificially inflate the marketing specs.

    • They're quite nice when you're trying to be, you know, energy efficient... I quite like 'em for battery life.

      The NPU seems much more useless to me, other than slightly improving greenscreen functionality I have no idea what to use it for.

      • by allo ( 1728082 )

        In 2-3 years you will have an idea. All kinds of programs will offload smaller AI workloads to the NPU, so you won't need a powerful graphics card just to do some text summarization.

  • by PCM2 ( 4486 ) on Friday April 25, 2025 @04:47PM (#65331235) Homepage

    The elephant in the room is that none of these companies that are betting their futures on "AI"—whatever they mean by that—has yet proven that consumers are interested. Last I heard, Apple Intelligence wasn't exactly driving up iPhone sales figures, either. They say the new MacBook Pros have it, too ... great?

    I was at Best Buy the other day, and I saw an electric toothbrush that claimed to clean your teeth with AI. It cost $360. Does anybody buy this stuff? Even as gifts? I just can't see how slapping some mostly-meaningless tag onto a product that people are already familiar with, then upping the price, is going to be of interest to any average person.

  • by jdawgnoonan ( 718294 ) on Friday April 25, 2025 @04:48PM (#65331241)
    It would really be interesting to know what percentage of users are excited about any of the so-called features that these AI chips will enable. Sure, if your work hangs off the side of big tech you might care about it whether you need to or not (I personally always buy more powerful machines than I actually require), but outside of an extreme minority of users I do not believe that anyone cares about any of this. Personally, for my own work, I do not really see what benefit I will ever gain from any of these tools running on my own local machine. I manage cloud based systems that I work with through a web interface. I code in those platforms over web based interfaces or over terminals. I do not need local AI for my work. For my personal use, cloud based services are also fine.
  • by gillbates ( 106458 ) on Friday April 25, 2025 @04:56PM (#65331261) Homepage Journal

    Does Intel really believe end-users will be running or developing AI models on their laptops/desktops? Because while I'd like to have a 5.6 GHz CPU, the likelihood of a non-developer building or running a model on their desktop is between slim and none.

    And if you are developing or running an AI model, why wouldn't you buy the higher-performing NVIDIA GPUs?

    There really isn't any end-user case for running AI models.

    • by bjoast ( 1310293 )

      the likelihood of a non-developer building or running a model on their desktop is between slim and none

      Bizarre conclusion. Non-developers will be running local models as soon as these models are incorporated into pre-installed, easy-to-use consumer software. Privacy is a major driver.

      • by ffkom ( 3519199 )
        For-profit companies have no incentive to promote privacy by pre-installing models as part of easy-to-use consumer software. Quite on the contrary, companies like Microsoft or Google will do anything to make sure that whatever use is made of "AI" will shovel additional data from the user into their data centers.
        • And if someone makes models that run local and don't phone home, they may be the paradigm shifter that makes surveillance capitalism irrelevant
          • by ffkom ( 3519199 )
            99% of customers do not even understand the implications of running a model locally versus on some remote server. Even if you carefully explain it to them, they will have forgotten about it the next day, or just do not care about privacy until the day it is too late for them. And among the few that somewhat understand the implications, some will still sell out all their privacy for a 5% price discount.
    • The use case is privacy. Lots of companies are never going to let their employees paste corporate data into a third party website. Move that execution to the local machine, and a bunch of new use cases open up.

    • by ceoyoyo ( 59147 )

      the likelihood of a non-developer building or running a model on their desktop is between slim and none.

      Well, every Nvidia GPU of the last few generations does it, probably whenever you play a game, unless you specifically turn it off. Macs, and probably Windows machines too, are constantly doing it for things like searching images. Your video conferencing software is probably running one to clean up the audio and another for the video.

  • The part of this that is genuinely a bit surprising isn't that nobody cares about glorious 'Copilot+' NPU AI PC whatever; it's that Intel is apparently having a hard time selling people on the actual improvement between Raptor Lake and Meteor Lake, which is battery life.

    Performance was basically a wash; but that's the generation where Intel significantly improved the efficiency situation.
    • Meteor Lake kinda sucked. Lunar Lake is actually pretty good (but expensive).

      Curious that the summary doesn't mention Arrow Lake.

      • I'd be interested to know how happy or unhappy Intel is about the situation: on the one hand anything that helps keep fab utilization as close to 100% as possible is presumably a positive; and, in the case of the Lunar Lake parts, the new-hotness may or may not actually be higher margin since there's more TSMC material in there and the co-packaged RAM made dealing with DRAM prices Intel's problem rather than the OEM's problem; but on the other it can't be entirely encouraging that the customer response skew
  • Being able to run LLMs locally would be great for privacy - but are these AI chips powerful enough to do that? And is there enough RAM in those machines to even have the model in memory? I was thinking this the moment it came out, especially because the machines aren't that high-spec'ed to begin with. If these machines can't run ChatGPT-like LLMs, then what can they run? "Filters"? But how often does a normal person do that, and on their laptop of all places? Maybe it can run small LLMs for completing/cont
    • Or to put it shorter: they need to describe an AI use case where their new Copilot+ CPU is powerful enough to run the model locally, and yet it would have been infeasible to run it locally on a normal CPU. I have not seen such a use case. And even if one is found, they then need to sell why it is not more compelling to run it in the cloud as people currently do.
  • I don't NEED an AI chip or ChatGPT!
  • Is there any code out there that even targets these chips? Last I heard, Intel's chips do not support CUDA or Google's TensorFlow. What out there can actually take advantage of these chips? Just putting an AI sticker on a chip and trying to charge more for it is a really dumb marketing approach when Intel can't explain what improvements either an end-user or a researcher will see from having this more expensive chip.
