Intel Says AI is Overwhelming CPUs, GPUs, Even Clouds, So All Meteor Lakes Get a VPU (theregister.com)

Intel will bring the "VPU" tech it acquired along with Movidius in 2016 to all models of its forthcoming Meteor Lake client CPUs. From a report: Chipzilla already offers VPUs in some 13th-gen Core silicon. Ahead of the Computex conference in Taiwan, the company briefed The Register on their inclusion in Meteor Lake. Curiously, Intel didn't elucidate the acronym, but has previously said it stands for Vision Processing Unit. Chipzilla is, however, clear about what it does and why it's needed -- and it's more than vision. Intel Veep and general manager of Client AI John Rayfield said dedicated AI silicon is needed because AI is now present in many PC workloads. Video conferences, he said, feature lots of AI enhancing video and making participants sound great -- and users now just expect that PCs do brilliantly when Zooming or WebExing or Teamising. Games use lots of AI. And GPT-like models, and tools like Stable Diffusion, are already popular on the PC and available as local executables.

CPUs and GPUs do the heavy lifting today, but Rayfield said they'll be overwhelmed by the demands of AI workloads. Shifting that work to the cloud is pricey, and also impractical because buyers want PCs to perform. Meteor Lake therefore gets VPUs and emerges as an SoC that uses Intel's Foveros packaging tech to combine the CPU, GPU, and VPU. The VPU gets to handle "sustained AI and AI offload." CPUs will still be asked to do simple inference jobs with low latency, usually when the cost of doing so is less than the overhead of working with a driver to shunt the workload elsewhere. GPUs will get to do jobs involving performance parallelism and throughput. Other AI-related work will be offloaded to VPUs.
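
Intel hasn't published an API for this CPU/GPU/VPU routing, so purely as an illustration of the split Rayfield describes, here is a hypothetical Python sketch; every name, threshold, and overhead figure below is invented for the example:

    # Illustrative only: mirrors the policy described above -- CPU for small
    # low-latency jobs, GPU for throughput-bound parallel work, VPU for
    # sustained offload. Nothing here is an actual Intel interface.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        flops: float              # estimated compute for one inference
        latency_budget_ms: float  # how quickly the caller needs an answer
        sustained: bool           # runs continuously (e.g. video effects)?

    DRIVER_OVERHEAD_MS = 2.0      # assumed cost of shunting work to a device driver

    def pick_device(w: Workload) -> str:
        if w.latency_budget_ms < DRIVER_OVERHEAD_MS:
            return "cpu"   # driver round-trip costs more than just running it
        if w.sustained:
            return "vpu"   # long-running background inference goes to the efficient VPU
        if w.flops > 1e9:
            return "gpu"   # big parallel one-shot jobs favor raw throughput
        return "cpu"

    print(pick_device(Workload(1e7, 1.0, False)))     # cpu
    print(pick_device(Workload(5e8, 50.0, True)))     # vpu
    print(pick_device(Workload(5e10, 100.0, False)))  # gpu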

    • by byromaniac ( 8103402 ) on Monday May 29, 2023 @08:08PM (#63560089)
      It isn't clear to me exactly what the VPU does, but I imagine that having dedicated 8/16-bit "sigmoid and accumulate" hardware would greatly accelerate neural net evaluation. It could be akin to the "multiply and accumulate" units on dedicated DSPs.
      • Bingo.
        Quick matrix evaluators for running the networks on.

        They're quite helpful in mobile applications (which is why every phone CPU has included them for a long time now), but I wouldn't have thought anyone really cared for PCs. But maybe more NNs are being run on PC hardware these days instead of at the "edge". (The multiply-accumulate loop such units hard-wire is sketched below.)
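
        A minimal sketch of that multiply-accumulate (MAC) inner loop in Python; int8 inputs with a 32-bit accumulator is a common quantized-inference pattern, not Intel's documented design:

        import numpy as np

        def int8_matvec(weights, x):
            """y = W @ x with int8 inputs accumulated in int32, one MAC per step."""
            rows, cols = weights.shape
            y = np.zeros(rows, dtype=np.int32)
            for i in range(rows):
                acc = np.int32(0)
                for j in range(cols):
                    acc += np.int32(weights[i, j]) * np.int32(x[j])  # the MAC op
                y[i] = acc
            return y

        rng = np.random.default_rng(0)
        W = rng.integers(-128, 127, size=(4, 8), dtype=np.int8)
        v = rng.integers(-128, 127, size=8, dtype=np.int8)
        assert np.array_equal(int8_matvec(W, v), W.astype(np.int32) @ v.astype(np.int32))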
      • Sigmoid activations are rarely used today; it's mostly ReLU, which is max(0, x), and softmax, which is exp() followed by normalisation, usually found in attention and the final layers of transformers. (These activations are sketched in code at the end of this thread.)
        • Increasingly GeLU too (which is basically similar to ReLU but doesn't have the differentiation-at-zero problem), and something called Swish that I don't know much about, except that it's sigmoid-ish.

        • Thanks for the update! When I studied NN in grad school they were neat, but it was hard to see practical applications for them. Fast forward a few decades and a million times more compute power and they've gotten really interesting! :)
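
          The activations mentioned in this thread, written out in NumPy; the GeLU here is the common tanh approximation, and Swish is the sigmoid-gated form the posters describe:

          import numpy as np

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          def relu(x):
              return np.maximum(0.0, x)      # max(0, x)

          def softmax(x):
              e = np.exp(x - np.max(x))      # subtract max for numerical stability
              return e / e.sum()             # exp() followed by normalisation

          def gelu(x):
              # smooth near zero, so no kink in the derivative at x = 0
              return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

          def swish(x):
              return x * sigmoid(x)          # "sigmoid-ish": x gated by its own sigmoid

          x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
          for f in (sigmoid, relu, softmax, gelu, swish):
              print(f.__name__, np.round(f(x), 3))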
      • Let's hope Intel's VPU lives up to the hype. I'm typing this on a 3-year-old Intel-based laptop that can't even play back a YouTube video without stuttering uncontrollably a minute or two in.
  • by Tyr07 ( 8900565 ) on Monday May 29, 2023 @06:39PM (#63559963)

    I'm not seeing the superior benefit of this. It sounds to me more like selling a more expensive chip with everything combined into one, so you have to replace the entire component if one part gets outdated, and likely replace the entire motherboard too.

    So I'm guessing this is the netbook/console evolution, where you constantly have to pay high prices to replace the entire device? I always found it really advantageous when the items I'm running on my system have a specific bottleneck.

    In the past, my choice for an upgrade was my GPU, as an example, for some of the graphical improvements I wanted. Since it was a standard port, I could use the latest graphics card, even if my CPU wouldn't let me get the most out of it at the time. Later, when the pricing and product availability were right, I could replace the CPU, RAM, and motherboard, and keep the GPU. It makes it a lot more affordable to upgrade over time, instead of it being all or nothing each time I want an upgrade: drop $3,000 or spend nothing.

    It sounds like this will lead to purchasing mid-grade items and getting slight improvements at an expensive price tag. Maybe you buy the system for $1,200, then two years go by and you can spend $1,300 for the current model, which is 10% faster overall, versus spending $400 on a new CPU and getting a large improvement, depending on the bottleneck for your applications.

    • I actually like specialized additional processors that add more capabilities than the CPU can reasonably handle. Back in the day you'd add a math coprocessor and other items, and I thought that was awesome. I think it'd be great if there were a component slot on motherboards that allowed adding specialized processors, like a VPU, for specific tasks, for those who can make use of them.

      • I bought a math coprocessor back in the 1980s for about $200 (I think) and came to find out it didn't do shit for shit.

        There were like 3 programs that used it, none of which I owned, but the hyperbole about what it "could" do was too alluring not to blow a shitload of money on it.

        It sped up Lotus 1-2-3, which I didn't use, but I got mad bragging rights... for about an hour, until we figured out how utterly useless it was in real life.

        I wouldn't buy something like that today, not because I've gotten smarter, but be

        • by dknj ( 441802 )

          FYI $200 was a ton of money back then. Gas was maybe $0.60/gal. You could feed a family of 6 at Pizza Hut for $16. Gum cost $0.10.

          $200 seems like nothing now, but I remember I ran up a $100 phone bill as a kid when I first found BBSes, and my father was about ready to murder me. Now my cell phone bill is $100 every month...

          • by jbengt ( 874751 )

            FYI $200 was a ton of money back then.

            True

            Gas was maybe $0.60/gal.

            The last time gas was $0.60/gal. was in the mid-to-late '70s. Gas was over a dollar per gallon on average for most of the '80s. [creditdonkey.com]

        • Math coprocessors were unnecessary for most productivity applications. Back when you had to buy them separately, they were only really relevant for graphics, science, and gaming. AutoCAD, FALCON, FORTRAN, MATLAB, MS Flight Sim, you get the idea [ctrl-alt-rees.com]. That's actually still true, in that our processors are so fast that most people wouldn't even notice if they had to emulate floats, but now lots of people do gaming and graphics.

        • by Tyr07 ( 8900565 )

          You hit the nail on the head for me.

          If I had applications that would benefit from the math coprocessor, then it's awesome and I'd buy one. I don't want to pay more money for my CPU overall because it includes a VPU which I do absolutely nothing with and don't need. That is why I support the freedom for people who want one to buy items with specific chipsets or slots on their motherboard. The fact is, they are going, hey, all our CPUs will support it..with the GPU...built in..since..uh..people don't want to pay

    • I'm not seeing the superior benefit of this. It sounds to me more like selling a more expensive chip

      That's Intel's business model, yes.

      Their "Why you need this new CPU!" press releases have always been comedy gold aimed at clueless middle-managers.

      Case in point: "Chipzilla is, however, clear about what it does and why it's needed"

      Fear, Uncertainty and Doubt if ever I heard it.

    • It allows for the creation of a new socket. That way, no one can replace the processor with anything less expensive. Also, anyone who wants the new capabilities has to buy a new motherboard, which means more sales for the support chipset.
    • I'm not seeing the superior benefit of this.

      You're not seeing a benefit to hardware acceleration of common tasks? Everyone else has. For the record, Intel is the last to the party here: AMD's Ryzen AI, Apple's Neural Engine, ARM's AI coprocessor in the Cortex-X4, and that's before we discuss what GPU vendors have been doing, all of whom build AI acceleration into their dedicated GPUs.

      It sounds like this will lead to purchasing mid-grade items and getting slight improvements at an expensive price tag.

      And that is okay. You shouldn't need a high end CPU and dedicated GPU to run a Teams call simply because it uses AI and video transcoding. There's a world of tasks to optimise in the low-mid range of PCs. I don't give a crap if my work PC is 10% faster, but sign me up for 10% better battery life any day.

      • by Tyr07 ( 8900565 )

        And that is okay. You shouldn't need a high end CPU and dedicated GPU to run a Teams call simply because it uses AI and video transcoding. There's a world of tasks to optimise in the low-mid range of PCs. I don't give a crap if my work PC is 10% faster, but sign me up for 10% better battery life any day.

        What is this AI that Teams is using, and what does it do for me? Also, what is it transcoding? Transcoding implies taking video in one format and converting it into another codec. It does nothing of the sort; it does basic video decoding, which all onboard GPUs and processors support with very little effort. What, do I need a VPU for advanced AI so they can get better telemetry from my app usage without straining my CPU with all the bloat?

        I can take a basic, cheap laptop and run teams just fine wit

    • by CAIMLAS ( 41445 )

      No, it's more like having the math coprocessor eliminated on the Pentium and put on-die, as opposed to having it be a separate socket on the 486 (if it was even available).

      I.e., it's basic functionality which everyone will soon use, because everyone wants it but doesn't want to shell out $$$ to get in the door. It raises the bar, as it were, so that software which was not previously accessible is broadly accessible without discrete hardware for the task.

      It will drive hardware sales in what has become a ve

    • by Targon ( 17348 )
    This is more like a solution to a problem that doesn't really exist. It is somewhat similar to the AI that AMD has added to a single Zen4 notebook chip so far, except that AMD knows it adds to the cost, so it doesn't want to add it to ALL chips until there is a clear demand or need for it. Or, think of AVX-512, when Intel wanted to charge a price premium for a feature that very few people actually use. Now, of course, AMD has added AVX-512 to all Zen4-based chips, and Intel removed AVX-512 from the consumer chips.
  • So how soon can I get Bonzi Buddy back? Powered by local AI running on these VPUs. And maybe Microsoft can bring back Clippy.
    • So how soon can I get Bonzi Buddy back? Powered by local AI running on these VPUs. And maybe Microsoft can bring back Clippy.

      Clippy is already here. It's baked into Windows 11 and getting an assist from Microsoft's version of AI.

  • Get on the hype train, y'all. AI is the wave of the future that will be a trillion dollar industry for everyone forever and can't possibly lose or be overhyped. Toot toot!
    • This hype train will quickly grind to a halt, because once AI has put everybody out of a job, nobody will have any money to buy whatever AI will be so much more efficient at producing.

      • I didn't think I'd ever live in a world where people did the hard, repetitive labor and AI wrote the books, music, and poetry.

        But here we are, almost.

        • It's not like that; AI is not capable of autonomous activity in any task. It can't do Level 5 self-driving, can't code without checks, can't write articles without post-editing, can't even translate well enough to be deployed instead of professional human translators. It can do a good-but-imperfect job of all of them, the key word being imperfect. It still needs a human in the loop.
          • It still needs a human in the loop.

            Yes....but for how long?

            I don't think it's unreasonable to project that eventually they'll be very, very capable at all those things.

            And they'll have one AI checking the results of another, so yeah, I think it's not far-fetched to expect them to get to the point of "good enough", which is the point at which most people won't care, won't know, or where it won't make any difference.

            Will it be 100% across every field? No, but again, at some point it won't matter- it'll be good enough.

      • I didn't know capability increases lead to lack of work. I thought when we can do something we couldn't do before we get busy. Suddenly new applications pop up, and with them new jobs.
        • I didn't know capability increases lead to lack of work.

          If you're breaking rocks, and you go from hitting rocks with rocks to hitting rocks with hammers, you're probably not going to put anyone out of work. In this scenario you probably have other real dumb jobs for them to do.

          If you're weaving fabric, and you go from hand looms to machine looms, what work do you have for those weavers to do? Their skills aren't relevant for anything else. Now you just need someone to load bobbins and hey, you can use child labor for that. (Everything old is new again [nytimes.com].)

          The indu [localhistories.org]

  • by Mspangler ( 770054 ) on Monday May 29, 2023 @07:09PM (#63560013)

    Now there is something we've needed since the '80s.

  • Vector Processing Unit. Because that's what it really is. It's what GPUs have been for a long time.
    • I got one for Christmas: Very Pissed Uncle.
      If it's raining you'll want a Vertically Propagated Umbrella.
      Or an SoC with integrated 32 bit floating DAC: Volume Pumping Unit.
      • Minor objection. You really shouldn't be pumping volume from your DAC. I mean, sure, you can lower impedance until the power output of the DAC causes it to melt... but it's really much easier to use an amplifier ;)
    • Re:Vector (Score:5, Informative)

      by DamnOregonian ( 963763 ) on Monday May 29, 2023 @08:49PM (#63560121)
      No.
      VPU means Vision Processing Unit.
      It's an admittedly stupid name... and one I haven't heard used since they were popular on old TI OMAP parts.
      What it really is, is a chunk of dedicated matrix multiply-accumulate (MMA) hardware that makes running NN inference engines really efficient and snappy.

      These days, people like to call them NPUs (Apple) or TPUs (NV, Google).
      They're obscenely faster than a normal GPU shader core at this particular line of work, and much more efficient.
      • I mean, Vector Processing Unit is what these discrete SIMD/MIMD things should be called. I always thought General Purpose computing on Graphics Processing Units (GPGPU) was a silly name.
        • Well, the "VPUs" (vision) are a lot more specific than just SIMD/MIMD. They're basically only MMA. They're don't do generic vector ops.
          For GPUs- ya, VPU (vector) would have been a much better name than GPGPU, which is as dumb a name as VPU (vision)

          Companies always make a mess out of shit like that.
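
          To illustrate the distinction, a small Python sketch: a generic vector (SIMD-style) op works elementwise, while MMA hardware computes a whole matrix-tile product into an accumulator. The 4x4 tile shape is arbitrary, not any vendor's spec:

          import numpy as np

          def vector_op(a, b):
              """What a generic SIMD lane does: one independent op per element."""
              return a * b

          def mma_tile(acc, a, b):
              """What an MMA unit does: acc += A @ B for a fixed-size tile, in one step."""
              return acc + a @ b

          a = np.ones((4, 4), dtype=np.float32)
          b = np.full((4, 4), 2.0, dtype=np.float32)
          acc = np.zeros((4, 4), dtype=np.float32)
          print(vector_op(a[0], b[0]))  # elementwise: [2. 2. 2. 2.]
          print(mma_tile(acc, a, b))    # tile product: every entry is 8.0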
    • A very limited vector processing unit.
  • by gavron ( 1300111 )

    Yes, just include the word AI somewhere and watch your stock value rise.

    AI Crypto Blockchain Quantum LLM.

    Please send your deposits in furtherance of my awesome and detailed 5-word business plan above to
    SWIFT CHASUS33XXX ABA 12210024 ACCT 3733+

    KTB

    E

  • by Kelxin ( 3417093 ) on Tuesday May 30, 2023 @04:00AM (#63560517)
    An AI chipset built into every computer? Skynet is going to love this.
  • What could possibly go wrong?

    I say: "Because lead times for key hardware components were unexpectedly short, we are able to deliver six weeks ahead of schedule." [1]
    They hear: "Because [racial slur] for [biological function], we are [what AI thinks my mother did for a living]."

    [1] That's a true statement for a project I'm currently working on.

    • Nothing can go wrong, because literally no one is talking about AI-generated voice synthesis in this case. You very likely already have AI on your conference calls and don't even realise it: AI noise and reverb cancellation is on by default in Teams. The only thing this will do is offload the workload from the CPU, giving you a bit more battery life on your work laptop.

  • I thought this concept was already worked out as the Tensor Processing Unit (TPU), but perhaps because the TPU has been shipped by Google for a couple of years now, Intel came up with its own naming.
  • The only thing keeping Intel in the game currently is the software legacy of apps compiled for Windows on x86/x64. If Microsoft got cozy with ARM, it could move into laptops and slowly erode x86 as a platform.
