AMD AI

Could AMD's AI Chips Match the Performance of Nvidia's Chips? (reuters.com)

An anonymous reader shared this report from Reuters: Artificial intelligence chips from Advanced Micro Devices are about 80% as fast as those from Nvidia Corp, with a future path to matching their performance, according to a Friday report by an AI software firm.

Nvidia dominates the market for the powerful chips that are used to create ChatGPT and other AI services that have swept through the technology industry in recent months. The popularity of those services has pushed Nvidia's value past $1 trillion and led to a shortage of its chips that Nvidia says it is working to resolve. But in the meantime, tech companies are looking for alternatives, with hopes that AMD will be a strong challenger. That prompted MosaicML, an AI startup acquired for $1.3 billion earlier this week, to conduct a test comparing AI chips from AMD and Nvidia.

MosaicML evaluated the AMD MI250 and the Nvidia A100, both of which are one generation behind each company's flagship chips but still in high demand. MosaicML found AMD's chip could achieve 80% of the performance of Nvidia's chip, thanks largely to a new version of AMD's software released late last year and the March release of a new version of PyTorch, the open-source software backed by Meta Platforms.
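MosaicML's actual benchmark code isn't reproduced in the report, but the kind of device-agnostic PyTorch micro-benchmark the summary implies can be sketched as follows. The helper name, matrix size, and fp16 matmul workload are illustrative assumptions; one relevant fact is that ROCm builds of PyTorch expose AMD GPUs through the same `"cuda"` device string, so code like this runs unmodified on an A100 or an MI250:

```python
import time

try:
    import torch  # PyTorch 2.x; ROCm builds expose AMD GPUs via the "cuda" API
except ImportError:
    torch = None


def matmul_tflops(n=4096, iters=10):
    """Rough matmul throughput in TFLOP/s; sizes and iteration count are guesses."""
    if torch is None or not torch.cuda.is_available():
        return None  # no GPU stack present; nothing to measure
    a = torch.randn(n, n, device="cuda", dtype=torch.float16)
    b = torch.randn(n, n, device="cuda", dtype=torch.float16)
    torch.cuda.synchronize()  # make sure allocation/warm-up work is finished
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()  # wait for queued kernels before stopping the clock
    elapsed = time.perf_counter() - start
    # One n*n*n matmul costs ~2*n^3 floating-point operations
    return 2 * n**3 * iters / elapsed / 1e12


print(matmul_tflops())
```

Comparing the number this prints on each vendor's card is the crude version of the 80% figure; a real evaluation would use full training runs, not a single kernel.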

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Not without CUDA (Score:4, Informative)

    by atomicalgebra ( 4566883 ) on Sunday July 02, 2023 @12:58AM (#63650196)
    Can the AMD chips match Nvidia on a performance basis (i.e. number of computations per second)? Yes. Will they match the ease of development that Nvidia provides for its GPUs? No. Nvidia has invested significant company resources in making their GPUs programmable. CUDA is significantly better than OpenCL or its other alternatives. That's why most AI software runs on it.
    • Yes, and it should really mention PyTorch in the headline
    • Re:Not without CUDA (Score:5, Interesting)

      by illogicalpremise ( 1720634 ) on Sunday July 02, 2023 @02:55AM (#63650328)

      Oh no, it's actually much worse than that. You see, AMD don't really need CUDA; what they need to do is stop getting in their own way.

      AMD already have a platform/API that's largely compatible with CUDA, called HIP. It's even supported by PyTorch. The API is supposed to help you port CUDA-native code to AMD. It's missing some features but would still be relatively useful EXCEPT that AMD deliberately nerf it so it's unusable on cheaper hardware.

      It doesn't matter whether the hardware COULD support HIP / ROCm because AMD hardcode device IDs into the library to make it run only on expensive workstation GPUs. If you want to run it on cheaper and/or older hardware - too bad. I'm convinced this isn't purely a technical limitation, AMD just want people to believe it is. What it's really about is stopping corporate / datacenter users from using cheaper gaming GPUs in place of expensive compute / workstation devices.

      So yeah, instead of competing with Nvidia, AMD are more worried about competing with themselves. The consequence of this is ROCm / HIP is constantly being undermined by AMD themselves.

      The reason I know this? I have a bunch of GPUs using Vega chips with HBM2 memory. These are capable compute devices, and if I hack the device IDs into the library code I can use ROCm flawlessly. However, AMD have made it clear they don't want my business, so I went with Nvidia hardware instead. AMD can go fuck themselves.
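      (For context on the hack described above: in practice the restriction is usually worked around with an environment variable rather than by patching the library. Assuming a ROCm install and a Vega 10 card, the commonly reported workaround looks like the sketch below; it is unsupported by AMD, and whether it works depends on the card and ROCm version.)

```python
import os

# Unsupported workaround: the ROCm runtime reads HSA_OVERRIDE_GFX_VERSION and
# treats the GPU as the given ISA target; "9.0.0" maps to gfx900 (Vega 10).
# It must be set before the GPU runtime (e.g. PyTorch) is imported, which is
# why it is normally set in the shell rather than in Python.
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "9.0.0"
```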

      • AMD already have a platform/API that's largely compatible with CUDA, called HIP. It's even supported by PyTorch. The API is supposed to help you port CUDA-native code to AMD. It's missing some features but would still be relatively useful EXCEPT that AMD deliberately nerf it so it's unusable on cheaper hardware.

        Ugh.

        Just ugh.

        Fucking AMD.

        OK, so it took them fucking forever to support PyTorch. To a large extent no one gives a shit about CUDA, because for a long time now people running GPU code haven't been writing CUDA.

      • by stikves ( 127823 )

        They want to keep artificial market separation.

        But Nvidia does this with actual hardware. They push the market into differentiation up until there is a backlash, then pull back a little from there.

        They nerf the 4060 cards (or rather all x060 ones), but those are mostly for laptops anyway. They tried to put a low-RAM 4080 on the market, received backlash, and it was later rebranded as the 4070 Ti. They keep the entire enterprise market separate, with ECC RAM, vGPU and other features that are usually not expected by consumers.

    • Will they match the ease of development that Nvidia provides for its GPUs? No.

      Not sure about that. Running code with a difficult development tool is significantly easier than running code on non-existent and unobtainable hardware. Try actually going and buying an NVIDIA A100 right now. You'll be waiting months ... which is still better than trying to buy an H100, where no one will give you a delivery date.

      If AMD can actually produce the things and put them to market they will sell right now. No start-up in this space is going to sit around with their thumb up their arse waiting for an "

  • by larryjoe ( 135075 ) on Sunday July 02, 2023 @01:14AM (#63650214)

    AMD GPUs have always been competitive with Nvidia in hardware benchmarks. Yet, at the same time, they have been crushed in market share. That should be a glaring indication that hardware benchmarks aren't the problem for AMD. The biggest problem for AMD is software, including the success of CUDA, the relative stability of Nvidia gaming drivers, and Nvidia's huge lead in AI support software.

    How big is Nvidia's software moat? Look at the MLPerf results to see how far behind not just AMD but most other competitors are. This benchmark was launched by Google and friends (not including Nvidia), and yet even Google fails to submit results in most categories.

    Unfortunately the software story is also AMD's weakness because it doesn't have the headcount to compete with Nvidia. So, AMD tries to enlist third-party help to blunt Nvidia's software advantage. However, that strategy fails because AI is such a fast moving area that most potential buyers buy Nvidia because they need something that works now. Many such buyers would prefer to support AMD and increased competition in the market, but not at the expense of personally missing out on the current advances in AI.

    • by antdude ( 79039 )

      What happened to AMD's open source drivers? Why aren't they better than the closed drivers were back in ATI's days?

    • AMD GPUs have always been competitive with Nvidia in hardware benchmarks.

      Which benchmarks? At which price point? Completely ignoring AI / raytracing stuff (i.e. ignoring features of a card which cost money, thus stacking the argument very favourably for AMD), they have only really been competitive in the low-mid range. In the mid-high range they've largely taken lower-tier cards and run them to the point of borderline breaking, and in the really high end they aren't present at all.

      • The parent said "always," which can derail a conversation. There have certainly been times when it was Nvidia chasing AMD's lead, and putting two GPUs on a single card in order to try and have a competitive flagship. It's been quite a while since I've followed that market, and so I couldn't tell you when was the last time that Nvidia was behind, but that's not really the point here. The point is that Nvidia is ahead now, when it counts.

        AMD will probably take the lead again in the future, but if that's af
  • by Tablizer ( 95088 ) on Sunday July 02, 2023 @01:34AM (#63650244) Journal

    Nvidia's chips draw people with an average of 5.7 fingers while AMD's draw them with 6.2.

    • Nvidia's chips draw people with an average of 5.7 fingers while AMD's draw them with 6.2.

      On each hand, or on several hands, or coming out elsewhere? It matters.

  • Future path (Score:4, Funny)

    by backslashdot ( 95548 ) on Sunday July 02, 2023 @01:56AM (#63650274)

    So AMD's chips in the future will match Nvidia's chips of today?

    • by Luckyo ( 1726890 )

      Current gen Nvidia AI chips are very much unobtainium and extremely expensive right now due to LLM rush. This is why the comparison is to previous gen, which is at least somewhat available.

      Remember, you don't get rich digging gold in a gold rush. You get rich selling shovels to miners.

    • With bonus points for actually having Nix-compatible drivers.
  • Craptastic headlines on par with the craptastic content.

  • by Opportunist ( 166417 ) on Sunday July 02, 2023 @06:12AM (#63650504)

    Can either of them produce an affordable GPU that doesn't melt cables because it draws more power than the amps at a rock concert?

    • by bn-7bc ( 909819 )
      You are overestimating a bit, I think. Those things can be 1500W+ each and you normally have several of them at a concert (say at least 2), so at a minimum we are talking about 3 kW for the amps. Is there any GPU ("AI accelerator") currently on the market that draws anywhere near that?
      • Please look up the meaning of exaggeration. Sometimes people do it to stress a point.

        • by DRJlaw ( 946416 )

          Please look up the meaning of exaggeration. Sometimes people do it to stress a point.

          Then yes, yes they can. And they have, and they do. Just don't buy an NVIDIA 4080 or 4090.

          All "affordable GPUs that don't melt cables because they draw more power than the amps at a rock concert."

        • People who don't understand hyperbole should be publicly eviscerated in the town square.

  • by cjonslashdot ( 904508 ) on Sunday July 02, 2023 @07:16AM (#63650586)
    They are most likely tensor math chips. Real AI chips are neuromorphic, and don't do any math.
  • As a computer gamer, I feel shame that I helped bring this crap into the world. : (
  • Methinks Betteridge's Law of Headlines [wikipedia.org] applies.
