
New Benchmark Tests Speed of Running AI Models (reuters.com)

An artificial intelligence benchmark group called MLCommons unveiled the results on Monday of new tests that determine how quickly top-of-the-line hardware can run AI models. From a report: An Nvidia chip was the top performer in tests on a large language model, with a semiconductor produced by Intel a close second. The new MLPerf benchmark is based on a large language model with 6 billion parameters that summarizes CNN news articles. The benchmark simulates the "inference" portion of AI data crunching, which powers the software behind generative AI tools.

Nvidia's top submission for the inference benchmark was built around eight of its flagship H100 chips. Nvidia has dominated the market for training AI models, but hasn't captured the inference market yet. "What you see is that we're delivering leadership performance across the board, and again, delivering that leadership performance on all workloads," Nvidia's accelerated computing marketing director, Dave Salvator, said. Intel's success is based on its Gaudi2 chips, produced by the Habana unit the company acquired in 2019. The Gaudi2 system was roughly 10% slower than Nvidia's system.
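The article doesn't describe the measurement itself, and the real MLPerf inference harness is considerably more involved, but the headline metric — how fast a system runs the model — reduces to samples processed per second over a fixed workload. A minimal Python sketch of that idea, using a stand-in stub in place of the 6-billion-parameter summarization model:

```python
import time

def summarize(article: str) -> str:
    # Stand-in stub for a real LLM inference call; the MLPerf workload
    # described above runs a ~6B-parameter model summarizing CNN articles.
    return article[:50]

def measure_throughput(model, samples, warmup=2):
    # A few warm-up runs let caches and lazy initialization settle
    # before the timed section starts.
    for s in samples[:warmup]:
        model(s)
    start = time.perf_counter()
    for s in samples:
        model(s)
    elapsed = time.perf_counter() - start
    return len(samples) / elapsed  # samples per second

articles = ["some long article text ... "] * 100
qps = measure_throughput(summarize, articles)
print(f"{qps:.1f} samples/sec")
```

With a real model, the per-sample work would dwarf the timing overhead; the loop structure, not the stub, is the point of the sketch.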

This discussion has been archived. No new comments can be posted.

  • by Ecuador ( 740021 ) on Tuesday September 12, 2023 @05:44AM (#63841250) Homepage

    Nvidia declined to discuss the cost of its chip. On Friday Nvidia said it planned to soon roll out a software upgrade that would double the performance from its showing in the MLPerf benchmark.

    Ugh, how do they think that's a good announcement? If a software release will double the performance of your hardware in a benchmark, you are "optimizing" for just that benchmark. And I quote the term here, because nVidia have been cheating on benchmarks since the 90s (they are obviously not the only ones, Intel has possibly done it more) and we know it...

    • by Dwedit ( 232252 ) on Tuesday September 12, 2023 @02:00PM (#63842186) Homepage

      Way back in 2001, ATI was caught red-handed cheating in benchmarks. [archive.org]

      Someone noticed that when running Quake 3 Arena, the driver would secretly use textures at half the normal width and height. Someone made a tool that renamed the game's executable from Quake 3 to "Quack 3", and the card started using the full-size textures again and performed 13 FPS worse.

      • by Ecuador ( 740021 )

        I did say they are not the only ones. But they were sort of pioneers :) You are talking about 2001, but as I said Nvidia were at it since the 90s, when reviewers had not yet wised up to actually check image quality along with performance (apart from the obvious 16-bit vs 24-bit and shading). Nvidia had either the Riva 128 or TNT smashing the frame rate records and ATI (I think with the Rage Pro at the time) could not match it, according to all reviews I read in magazines... except a review if I remember corre

  • Absurd comparison (Score:3, Interesting)

    by holostagram ( 6735694 ) on Tuesday September 12, 2023 @06:49AM (#63841312)

    Comparing Nvidia to Intel for AI workloads is like comparing a Ferrari to a horse. Where is AMD's MI300 in this test? The MI300 is arguably the best processing unit ever created for these types of workloads, and the architecture will be able to scale for years.

    Nvidia's primary advantage at this point is software: CUDA and its associated libraries are well known and, until recently, had little competition. But that is rapidly changing. OpenAI and others are developing their own libraries, because they understand the danger of vendor lock-in. Once those libraries are as performant as CUDA and friends, Nvidia's monopoly in this space is in serious trouble.

    • Comparing Nvidia to Intel for AI workloads is like comparing a Ferrari to a horse.

      How so? Intel's Gaudi2 is a purpose-built AI processor. If the blurb is correct that it's only a 10% speed difference, that's quite small. Small enough to be potentially offset by other factors such as availability, power consumption, or driver quality (although for all I know Nvidia may be ahead in all these areas as well).

  • by chas.williams ( 6256556 ) on Tuesday September 12, 2023 @07:25AM (#63841366)
    How fast can you get the wrong answer? What a world.
  • This comparison is completely useless. As with most things at large scale, performance per watt or performance per dollar is what actually matters. Maybe these figures are in TFA somewhere? If Nvidia's hardware was slightly faster than Intel's but used twice the watts during the benchmark, then, assuming they are similarly priced, Intel has the better offering.
    • This comparison is completely useless. As with most things at large scale, performance per watt or performance per dollar is what actually matters. Maybe these figures are in TFA somewhere? If Nvidia's hardware was slightly faster than Intel's but used twice the watts during the benchmark, then, assuming they are similarly priced, Intel has the better offering.

      Power efficiency is important, but it's not the most important factor.

      First is getting the answer. It's very telling that many vendors don't submit complete results. The reason Nvidia dominates (for training and vying for inference) is that it works and is closer to a turnkey solution. That's why customers prefer Nvidia and why AMD lags far behind even though its hardware should otherwise be competitive.

      Second is speed (i.e., time to getting a working system as well as latencies for training and inference), especially d
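The perf-per-watt trade-off raised in this thread is simple arithmetic. A sketch with purely hypothetical figures (none of these numbers come from the actual MLPerf submissions):

```python
def perf_per_watt(samples_per_sec: float, watts: float) -> float:
    return samples_per_sec / watts

def perf_per_dollar(samples_per_sec: float, dollars: float) -> float:
    return samples_per_sec / dollars

# Hypothetical illustration only: system A is 10% faster than B,
# but draws twice the power at the same price.
a = {"qps": 110.0, "watts": 1400.0, "price": 200_000.0}
b = {"qps": 100.0, "watts": 700.0, "price": 200_000.0}

print(perf_per_watt(a["qps"], a["watts"]))  # A: ~0.079 samples/sec per watt
print(perf_per_watt(b["qps"], b["watts"]))  # B: ~0.143 samples/sec per watt
```

Under these made-up numbers, the slightly slower system delivers nearly twice the throughput per watt, which is the parent poster's point.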

  • I assume it asks the standard questions and measures the response time, you know, stuff like:

    "You're in a desert walking along in the sand when all of a sudden you look down, and you see a tortoise, it's crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can't, not without your help. But you're not helping. Why is that?"
