AMD Unveils Ryzen AI and 9000 Series Processors, Plus Radeon PRO W7900 Dual Slot (betanews.com)

The highlight of AMD's presentation Sunday at Computex 2024 was "the introduction of AMD's Ryzen AI 300 Series processors for laptops and the Ryzen 9000 Series for desktops," writes Slashdot reader BrianFagioli (sharing his report at Beta News): AMD's Ryzen AI 300 Series processors, designed for next-generation AI laptops, come with AMD's latest XDNA 2 architecture. This includes a Neural Processing Unit (NPU) that delivers 50 TOPS of AI processing power, significantly enhancing the AI capabilities of laptops. Among the processors announced were the Ryzen AI 9 HX 370, which features 12 cores and 24 threads with a boost frequency of 5.1 GHz, and the Ryzen AI 9 365 with 10 cores and 20 threads, boosting up to 5.0 GHz...

In the desktop segment, the Ryzen 9000 Series processors, based on the "Zen 5" architecture, demonstrated an average 16% improvement in IPC performance over their predecessors built on the "Zen 4" architecture. The Ryzen 9 9950X stands out with 16 cores and 32 threads, reaching up to 5.7 GHz boost frequency and equipped with 80MB of cache... AMD also reaffirmed its commitment to the AM4 platform by introducing the Ryzen 9 5900XT and Ryzen 7 5800XT processors. These models are compatible with existing AM4 motherboards, providing an economical upgrade path for users.

The article adds that AMD also unveiled its Radeon PRO W7900 Dual Slot workstation graphics card — priced at $3,499 — "further broadening its impact on high-performance computing..."

"AMD also emphasized its strategic partnerships with leading OEMs such as Acer, ASUS, HP, Lenovo, and MSI, who are set to launch systems powered by these new AMD processors." And there's also a software collaboration with Microsoft, reportedly "to enhance the capabilities of AI PCs, thus underscoring AMD's holistic approach to integrating AI into everyday computing."
  • AMD AI:

    Accelerating OS level spyware since 2024!

    • Re:New ad (Score:5, Insightful)

      by supremebob ( 574732 ) <themejunky AT geocities DOT com> on Monday June 03, 2024 @07:03AM (#64519433) Journal

      Or AMD: We put AI in our product names because that's what the tech investors wanted to see! Now, buy some of our stock instead of Nvidia's.

      • Re:New ad (Score:5, Insightful)

        by slack_justyb ( 862874 ) on Monday June 03, 2024 @07:34AM (#64519467)

        We put AI in our product names

        Yeah the "AI" part is absolute marketing. What they are promoting is their NPU MACs (multiply-accumulate) within the CPUs. On mobile devices this makes sense to have NPUs as software is starting to use that for all kinds of functions. Zoom's blur background function relies on this heavily as an example. For non-mobile devices, there's less a need for NPU as the GPU has pipes for the same task that MACs handle.

        NPUs are (and yes, I know this is a bit of a gross generalization) pretty much stripped-down GPUs that handle a specific subset of tasks. Because of this, NPUs are less power hungry than their GPU equivalents and work well in devices where battery life is a priority.
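
        To make that MAC point concrete, a toy sketch (plain NumPy, nothing AMD-specific, and the array sizes are arbitrary): the workload an NPU accelerates is basically one big pile of multiply-accumulates, i.e. dot products and small matrix multiplies.

            import numpy as np

            # One "neural" layer is essentially a pile of multiply-accumulates:
            # out[j] = sum_i(x[i] * W[i, j]) + b[j]
            x = np.random.rand(1, 512).astype(np.float32)    # activations
            W = np.random.rand(512, 256).astype(np.float32)  # weights
            b = np.zeros(256, dtype=np.float32)              # bias

            out = x @ W + b  # this dot-product/MAC work is what an NPU offloads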

        Now, is a particular operating system looking to tap into those NPUs to power its "take screenshots of your desktop at regular intervals" feature? Absolutely. But if I've read correctly, MS Recall will work on hardware as bare as SSE2 if need be. That's just something I wanted to add for the person you were replying to, who said:

        Accelerating OS level spyware since 2024!

        AVX512 will handle some basic neural net operations just fine. So them putting "AI" in the title is just marketing. But we've been putting shit in CPUs to accelerate local LLMs (albeit much slower than say what can be done on a GPU) for quite some time now.

        I think it's really important to understand this difference.

        AMD's Ryzen AI 300 Series processors for laptops

        Laptops have NPUs mostly because they are less power hungry than a full-blown GPU. However, it's not unheard of for machines to have NPUs and GPUs together. (Kinda touching on that gross generalization here.) Because NPUs are mostly targeted at MACs, they can fit way more of them into the chiplet than a GPU can, since the GPU also has to carry the hardware for everything else you could potentially throw at it. But THESE NPUs in this CPU for laptops aren't for that purpose. This is an NPU tossed in for power savings: it provides enough throughput to churn through things like Zoom's real-time video features, Excel's hardware acceleration, and yes, Microsoft's screenshot spyware, without draining the battery the way a full-blown GPU would.

        But the "AI" tag is pure marketing bullshit. I highly doubt someone is going to enjoy running something like Stable Diffusion on **THIS** kind of NPU.

        • by Luckyo ( 1726890 )

          That's the point. It's not going to run complex AI models.

          But it's going to be very well optimized for Microsoft's spyware AI built into Windows.

          • For those of us who don't run Windows, do we care?

            Eventually there will be library support / kernel support in Linux and then we can use this hardware for something more interesting.

        • there's less of a need for an NPU, as the GPU has pipes for the same tasks that MACs handle.

          You're making assumptions. Generalising "GPU" and assuming that you can do the same tasks at the same efficiency is wrong. *SOME* GPUs have this covered. The majority of GPUs on the market are not fancy latest-and-greatest RTX-series devices, and there is a significant performance and efficiency difference between doing the work on hardware with the right units and without. I only recently compared an RTX 2060 and a GTX 1080 Ti on some run-of-the-mill consumer tools such as Topaz AI

          • Exactly.
            Person recognition (by image or voice) uses a fraction of a GPU's power, but the GPU still draws power when idle (not much, but at least an order of magnitude more than it should).
            The Google Coral uses 2W; my GPU uses 40W while idle, performing the same tasks.
            Even if someone already owns a potent GPU, the NPU would still be helpful.
            I temporarily used my main PC for Blue Iris + CodeProject.AI as well as gaming and other stuff at the same time. While I was gaming, there sometimes were crashes a

          • The 2000 series onwards also supports half precision really well, so they get the double boost of a load of extra compute units for NN maths plus twice the throughput from using float16.

            Thing is, there are inference frameworks which will use shaders, and they will work on any random iGPU as well, but because AMD and Intel are being a bit useless, they are rather further behind on general compute.

        • If the NPU is able to process images from video streams or audio inputs, that would be awesome.
          It would allow me, for example, to get rid of the GPU in my dedicated NVR machine (Blue Iris) and reduce power usage by 40W (out of the 70W total that the machine is using right now).
          It would be useful for people who run Frigate, or for people who have home automation and want to give voice commands to their devices without the data going all the way to "the cloud" and back.
          I always wanted to have the

      • Well, yes and no ...

        AMD's AI processors seem to be meant for Microsoft's "AI PCs" running Windows Copilot, and include custom GPU-like hardware to run "AI" tasks fast, similar to recent iPhone chips.

        Sure, there's a huge marketing element in including "AI" as part of the branding, but there is custom hardware on the chips to back it up, so it's not just spin.

      • Realistically, they presumably had no real choice for their new laptop parts. Putting 'AI' in the name wasn't strictly mandatory; but when they sell most of their laptop CPUs to PC OEMs who ship Windows, they don't really have the luxury of ignoring the fact that MS has decided 'AI' is the second coming of cyber-jesus and has already cycled through multiple rounds of 'AI'-focused branding with NPU performance requirements (there were 'AI PCs', then 'CoPilot PCs' and 'Copilot+ PCs'), which Qualcomm currently shi
  • Blah de fucking blah.

    I don't give a shit about fancy new ai processors. All of their GPUs for ages have been capable of doing AI. They are just really really shit at doing the software platform compared to nvidia.

    At the moment you have a choice between a third party inference framework using shaders which is not well optimised, or the rocm crapshoot to try and run CUDA code which... is optimized for nVidia cards not AMD ones. And probably won't work on your card anyway.

    Hey assholes how about you make pytotc

    • I don't give a shit about fancy new ai processors. All of their GPUs for ages have been capable of doing AI.

      Yeah, except for laptops... which is precisely what this article is talking about. Maybe if you stopped to read it before rushing to angry post you wouldn't be so off topic.

      • Why could you not actually read my post?

        Their iGPUs are already more than capable of doing ML. LIKE I SAID.

        This is just yet another half arsed initiative

        • Except they aren't. Currently 0% of iGPUs on the market ship with "AI"-specific cores. Look, I get it, you may be happy with underperforming, inefficient hardware. I too shunned the idea of buying a 3DFX Voodoo, thinking this whole video accelerator thing was just a cash grab when the CPU software rasteriser was perfectly fine for all games. That is the mistake you are repeating now. The only "NPU" you currently get in an AMD laptop is one with an RTX graphics card... at least until this CPU comes out.

          And

    • by slack_justyb ( 862874 ) on Monday June 03, 2024 @07:53AM (#64519511)

      or the rocm crapshoot to try and run CUDA code which... is optimized for nVidia cards not AMD ones.

      Not having any issues with ROCm 6.0 for anything thus far. Additionally, PyTorch for ROCm is built on MIOpen [github.com], not CUDA.
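
      If it helps, here's the sanity check I'd run on a ROCm build of PyTorch (standard PyTorch calls; the ROCm wheels route the torch.cuda API through HIP, so the CUDA-named functions are the ones to use):

          import torch

          print(torch.__version__)                  # ROCm wheels report something like "2.x.x+rocm6.0"
          print(torch.version.hip)                  # HIP version string; None on CUDA-only builds
          if torch.cuda.is_available():             # on ROCm this means a supported Radeon GPU was found
              print(torch.cuda.get_device_name(0))  # e.g. an "AMD Radeon RX 7900 XT"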

      ComfyUI works perfectly on my AMD system, and Fedora 40 has ROCm baked in so it just works out of the box. I'm not sure what you've tried it on, but the AMD stuff works quite well. I've even gotten things like ToonCrafter [github.com] to work. Which, I haven't checked, but anyone using that should absolutely convert the checkpoint over to safetensors, because you shouldn't run a model with the executable code that comes along when folks serialize with pickle.
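
      For anyone wanting to do that conversion, a minimal sketch (the file names are placeholders, and it assumes the checkpoint is, or holds under a "state_dict" key, a flat dict of tensors):

          import torch
          from safetensors.torch import save_file

          # Loading the .ckpt still goes through pickle once; after this you only ship the .safetensors file.
          ckpt = torch.load("model.ckpt", map_location="cpu")
          state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

          # safetensors stores tensors only, so drop optimizer state, configs, etc.
          tensors = {k: v.contiguous() for k, v in state_dict.items() if isinstance(v, torch.Tensor)}
          save_file(tensors, "model.safetensors")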

      They are just thrashing around at random trying to get people to buy their cards without doing the basics, namely making their cards fucking work.

      I mean, alright. You've had some bad experience, we get it. But I've got a machine with an RX 6600 XT, one with an RX 6900 XT, and my day-to-day driver with PopOS (which does require some fucking around with ROCm, as the shit isn't packed in for that distro) on an RX 7900 XT. All of them work just fine. X11 or Wayland sessions for the regular graphics. OpenGL and Vulkan for the game stuff. And all the AI ROCm stuff works just fine (but again, you pretty much have to muck around if your distro doesn't have ROCm packed in by default, so there is THAT).

      I'm not saying you're wrong, just saying, I in no way have had that same experience on Linux.

      But I do know that on Windows, things like A1111 and ComfyUI didn't have ROCm + Windows support, though there were a few forks that "sorta had AMD + Windows" support. So yeah, if you're on Windows 10 or 11, AMD is going to look really bad in the AI stuff.

      • That's a pair of pretty big caveats there.

        IF your card supports ROCm, and then you have to do some mucking around. With Nvidia, I can grab about any GPU from the last 7 years (i.e. 10 series and newer), pip install torch, and go to town. I know my code will run on anything from a 1050 to an H100 in the cloud, and anything in between, including the rando new work laptop which has some Nvidia GPU that I didn't even check, because I know it will work.

        AMD are needlessly hobbling their ecosystem. The shaders all wo

  • Or desktop, that is? The models are not going to fit. It is going to eat power like crazy. This is simply irrational.

    • by slack_justyb ( 862874 ) on Monday June 03, 2024 @08:04AM (#64519547)

      I'm doubting that they expect you to run SDXL in a mobile NPU. What this is likely for is things like Zoom's blur background, Google's upscale stuff, Excel HW accel, and so on. Things that just need enough multiply-accumulate (aka dot product stuff) to run decently that's a bit outside of just doing it all in AVX512.

      It is going to eat power like crazy

      Nah, the most recent NPUs are really optimized for that kind of stuff. But it's also why running SDXL on one would suck donkey balls. This doesn't implement anywhere near the level of pipeline and transformations you're thinking it might.

      But yeah, there's a lot of stuff out there that's using AVX512 and builtin NPUs to do a lot of matrix dot-product calculations, so that's what this kind of hardware is optimizing for without having to bring with it the load that a GPU might require.
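
      For a feel of how applications are expected to reach an NPU like this, the usual route is an ONNX model handed to a runtime, with an NPU execution provider listed first and CPU as the fallback. A sketch under my own assumptions (AMD's Ryzen AI stack exposes the NPU to ONNX Runtime as the Vitis AI provider, availability depends on the installed driver/runtime, and "background_blur.onnx" is a made-up model name):

          import onnxruntime as ort

          # Use the NPU's execution provider when the vendor runtime is installed, else plain CPU.
          available = ort.get_available_providers()
          wanted = ["VitisAIExecutionProvider", "CPUExecutionProvider"]
          providers = [p for p in wanted if p in available]

          session = ort.InferenceSession("background_blur.onnx", providers=providers)
          print(session.get_providers())  # shows which provider actually got picked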

      • by sinij ( 911942 )

        What this is likely for is things like Zoom's blur background, Google's upscale stuff, Excel HW accel, and so on. Things that just need enough multiply-accumulate (aka dot product stuff) to run decently that's a bit outside of just doing it all in AVX512.

        Could you please explain to me the benefit of doing Zoom blurring more efficiently at a $3K+ price point, when existing cheap laptops without this hardware already do it well enough? What is the business case for having best-in-class Zoom blurring?

        • I own their stock. That's the business case.

          Please buy more AMD! Thank you!

        • That's what people said about software-based stuff right before hardware-based stuff appeared, or when it was announced.
          Why use a dedicated GPU? We can do fine with software rendering.
          Why use an ASIC? Bitcoin can be mined just fine on a CPU.
          Why do this or that differently when we already do this and that the traditional way?

          History (not only IT history, mind you) is full of examples of the same mentality, as well as manifestations of fear stemming from not understanding what the new produc

          • by Kaenneth ( 82978 )

            This is /., I mostly come here now to laugh at the people afraid of anything invented after Y2K.

        • The price point you're talking about is for their dual-slot GPU. This thread is about an APU (their term for CPU + iGPU) that has NPU instructions baked in. Those are laptop CPUs so they don't have an MSRP of their own per se - they are only sold to OEMs. We'll be able to guess once we see retail prices for laptops made with them.

          • The two top desktop AM5 APUs have an NPU and sell for around 250 to 300 USD. Pro versions are more expensive (and nowhere to be found) and also sport full ECC support, which non-Pro Ryzen 8000 APUs sadly don't have, which is why I just upgraded to AM5 with a Ryzen 5 7600 CPU (just 2 RDNA2 cores, good enough for that system; upgraded from an A10-7800, which didn't play 4K videos well enough)
        • Some people don't like having Zoom eat an entire CPU core (or more) and spin up the laptop fans for a god damn sprint planning meeting.

          If they can make more efficient use of hardware in order to not have that happen, then we're by definition also saving energy, increasing battery life. But I'm sure nobody cares about battery life on a portable device, do they?

        • The answer is that it will do that blurring with less power consumption, which a lot of people do care about. It's not about better blurring. It's about better battery life.

          • by gweihir ( 88907 )

            So essentially specialized hardware for background blurring? That seems like a borderline case, not a major selling point.

            • No, for an assortment of tasks that are becoming more common.

              I for one would rather not pay for it any time soon, but it just keeps getting cheaper and more ubiquitous (and cheaper etc.) and eventually it winds up everywhere. My used budget AMD CPU supports former generations of Intel processors' "multimedia" acceleration functions for example.

              I don't really care about having a mobile device do that stuff, and I have a 4060 16GB for doing compute type stuff, which is not exactly a beast but is nice and quie

              • by gweihir ( 88907 )

                Well, yes. But that is still only a "yep, you get some less relevant acceleration engines for free", not a major selling point. Maybe CPUs have just stopped being exciting.

  • by Ritz_Just_Ritz ( 883997 ) on Monday June 03, 2024 @07:58AM (#64519529)

    The AI stuff is not interesting to me at all, but I'm about to relegate my older AM4-based system to one of my children and am looking to refresh to an AM5-based system. I've gotten a solid 5 years of drama-free usage from my existing Ryzen 3900X system. This looks like the ticket to refresh, unless the pricing is out of whack.

    Best,

  • by sinij ( 911942 ) on Monday June 03, 2024 @08:31AM (#64519601)
    This end-user product seems to be speculating on meeting potential future demand for AI applications. Currently, there is very little in the way of end-point AI products, as most offerings are cloud-based. What's more, past the initial hype, adoption of AI-like features and functionality seems to be marginal to non-existent. That is, people who run AI models locally are an even smaller niche than people who have to do video rendering or CAD processing.

    This leaves me with a question, who are all the potential customers for these laptops and what exactly are they going to be using these for?
  • How fast does it run Quake3?

  • because it will *finally* hang with a 6GB GTX 1060 and maybe even a 1070 (sometimes), and yes, that's an 8-year-old video card, but it's still surprisingly capable.

    The only issue is that so far AMD has been careful to keep those GPUs out of their lower-end CPUs, which is a shame. Good integrated graphics would do a *lot* for the PC gaming market. Not that I can blame them; competing with your own low end helped kill 3DFX.
    • Neither AMD nor Nvidia makes a really low-end discrete GPU any more, because the onboard CPU graphics are good enough. However, the reason AMD isn't making beefier onboard graphics is memory bandwidth limitations more than a desire to avoid competing against themselves. Discrete GPUs have their own VRAM, which has more throughput and is only used by the GPU. The graphics on the CPU die need to share memory bandwidth with the CPU cores, and it's significantly lower than what dedica
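
      Rough back-of-the-envelope numbers for that gap (illustrative parts picked for the example, not figures from the article):

          # Dual-channel DDR5-5600 that an iGPU has to share with the CPU cores:
          ddr5_gbs = 5600e6 * 8 * 2 / 1e9       # MT/s * 8 bytes per transfer * 2 channels
          # GDDR6 at 18 Gbps on a 192-bit bus, exclusive to a midrange discrete GPU:
          gddr6_gbs = 18e9 * (192 / 8) / 1e9

          print(f"DDR5-5600 dual channel: {ddr5_gbs:.0f} GB/s")   # ~90 GB/s
          print(f"GDDR6 192-bit @ 18 Gbps: {gddr6_gbs:.0f} GB/s") # ~432 GB/s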
  • Nothing good has come from "AI" software yet.
  • all they did was reduce it from 3 slots to 2.
