Nvidia Calls Out Intel For Cheating In Xeon Phi vs GPU Benchmarks (arstechnica.com) 58
An anonymous reader writes: Nvidia has called out Intel for juicing its chip performance in specific benchmarks -- accusing Intel of publishing some incorrect "facts" about the performance of its long-overdue Knights Landing Xeon Phi cards. Nvidia's primary beef is with an Intel slide presented at a high-performance computing conference (ISC 2016). Nvidia disputes Intel's claims that Xeon Phi provides "2.3x faster training" for neural networks and that it has "38 percent better scaling" across nodes. It looks like Intel opted for the classic using-an-old-version-of-some-benchmarking-software manoeuvre: Intel claimed that a Xeon Phi system is 2.3 times faster at training a neural network than a comparable Maxwell GPU system, while Nvidia says that with an up-to-date version of the benchmark (Caffe AlexNet), the Maxwell system is actually 30 percent faster. And of course, Maxwell is Nvidia's last-gen part; the company says a comparable Pascal-based system would be 90 percent faster. On the 38-percent-better-scaling point, Nvidia says Intel compared 32 of its new Xeon Phi servers against the four-year-old Nvidia Kepler K20 servers used in ORNL's Titan supercomputer. Nvidia states that modern GPUs, paired with a newer interconnect, scale "almost linearly up to 128 GPUs."
Hmmm. (Score:4, Funny)
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Re:Hmmm. (Score:4, Informative)
It's not so much "older" as "different" in the artificial-benchmarking world. Real-world loads don't tend to follow benchmarks religiously, and the newer benchmark might favor a configuration that isn't actually better under real workloads.
The classic marketing maneuver is to select from multiple sets of up-to-date benchmarks and pick the ones that favor your particular product. CPUBoss usually shows one CPU consistently outperforming another (the exception being single-core vs. threaded tests on chips with dissimilar cores or SMT: a fast clock wins single-core, many cores win threaded). It also frequently shows the same benchmark tool rating each CPU faster than the other depending on how the tool was configured, or one benchmark favoring one CPU while another benchmark favors the other.
This goes all the way up to real-world functional tests, where you select games that perform better because of some feature or strength of your GPU or CPU. Better shaders? Pick a shader-heavy game. Heavy parallelism? Pick a game that meshes with that. Fewer parallel units but a higher clock? Avoid games that work best on 387-core GPUs and pick ones that love that 1185MHz clock. Show off six or seven games running at a freakishly high 292fps.
Re: (Score:2)
For a while, it got even worse than that... [hardocp.com]
Re: (Score:2)
Re: (Score:2)
That is kind of ironic. Do you think that Nvidia and AMD are doing more or fewer application-specific tweaks in their drivers today? Because I don't believe the answer is fewer. (After all, Microsoft does it even with regular Windows software.)
Considering Nvidia ships special driver releases right after new games come out that are "optimized" for the new game, plus a special bundled tool to "optimize" games for your graphics card... yeah, it's not even a secret any more.
Re: (Score:2)
Thank you, VW. I'll expect my $10 million check in the mail.
Re:Hmmm. (Score:5, Informative)
Only if you think this is new. Intel has been doing shit like this for years and keeps getting caught. AMD sued them a few years ago over this and other anti-competitive behavior, and Intel ended up paying AMD a $1.25 billion settlement on top of multi-billion-dollar antitrust fines. Then again, Nvidia has been caught doing the same. Probably the most recent example is their "HairWorks" API, which is likely going to land them in hot water again. Nvidia got nailed a few years ago for anti-competitive behavior over shaders.
Re: (Score:2)
Poor poor AMD. At least that's what their marketing and PR departments like to say.
A real AMD employee who -- like most of the actual engineers -- no longer works there has a different story though:
http://vrworld.com/2011/06/24/... [vrworld.com]
Re: (Score:1)
So an engineer who doesn't understand marketing spoke out about a company's change in marketing direction?
Smoking gun right there. So much worse than Intel sabotaging compilers and bribing manufacturers to steer customer selection.
Re: (Score:1, Troll)
You're a retard.
Intel went out of its way to cripple software compiled with ICC if it detected a non-Intel CPU, ignoring the standard flags the CPU exposes for extension and feature support.
ICC is one of the most widely used compilers for performance-critical software. This is textbook anti-competitive behavior; there's a reason they got sued and lost.
Re: (Score:3)
Intel optimized per-architecture, not per feature. This had the end result of AMD chips taking the generic path and being slower, but I wouldn't call this tactic dirty. Why would Intel go out of their way to optimize for a competitor?
CPUs have a wide variety of timing and pipeline limitations, and optimizing purely for feature set will never get you peak performance -- this is why GCC has the exact same per-architecture optimization support.
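For what it's worth, GCC's mechanism for this is function multi-versioning, which dispatches on the CPU's advertised feature bits rather than its vendor string. A minimal sketch (the function name and build line are illustrative, not from the thread):

/* fmv.c -- minimal sketch of GCC function multi-versioning.
 * GCC emits one clone of the function per listed target, plus a
 * resolver that picks a clone at load time based on the CPU's
 * advertised feature bits -- not on whether the vendor string
 * says "GenuineIntel". Build with: gcc -O2 fmv.c
 */
#include <stdio.h>

__attribute__((target_clones("avx2", "sse4.2", "default")))
static double dot(const double *a, const double *b, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];   /* vectorized per-clone by the compiler */
    return sum;
}

int main(void)
{
    double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};
    printf("%f\n", dot(a, b, 4));  /* prints 70.000000 */
    return 0;
}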
Re: (Score:2)
Re: (Score:2)
Re: (Score:1, Troll)
There is a check for an Intel CPU. Since AMD is !Intel, it gets the crappiest possible code path. Override the detection with a preload and performance suddenly improves many-fold.
The Intel code path then follows the relevant standard, choosing a code path based on feature flags.
Re: (Score:1)
Funny you're marked troll. Guess the Intel fanboys don't like the truth. Here's the reality: if you have an AMD chip, there's a program called IPC that removes that flagging from executables built with Intel's compiler. It's fairly well known in gaming circles, and people usually see a 10-40% increase in performance in their games. Sadly it only works with non-encrypted exes and so on, so with some Steam games you're pretty much out of luck.
Re: (Score:2)
It is interesting that I got marked troll over an easily verified statement of fact. I sometimes wonder if it's just extreme fanbois or paid shills.
Re: (Score:2)
No. ICC certainly did look at feature flags and use them to the utmost UNLESS the CPU was AMD, in which case it used the worst-performing code paths available. The telling part is that you could preload a library that replaced the IsThisIntel function (not the actual symbol) with one that always returns true and greatly improve performance on an AMD processor (sometimes beating the performance on an Intel processor). The existence of that function is very much Intel going out of its way to de-optimize AMD performance.
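A rough sketch of the preload trick described above, assuming a hypothetical exported symbol named is_genuine_intel (the comment itself notes the real symbol was different):

/* shim.c -- illustrative LD_PRELOAD shim; "is_genuine_intel" is a
 * made-up symbol name standing in for whatever vendor check the
 * target binary actually exports.
 * Build: gcc -shared -fPIC -o shim.so shim.c
 * Run:   LD_PRELOAD=./shim.so ./some_icc_built_program
 */
int is_genuine_intel(void)
{
    /* Always claim to be an Intel CPU so the dispatcher picks the
     * fast code path regardless of the real vendor string. */
    return 1;
}

Note that LD_PRELOAD interposition only works when the check is resolved through the dynamic linker; if the dispatcher is statically linked into the binary, you have to patch the executable instead, which is what the patcher tools mentioned elsewhere in this thread do.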
Re: (Score:2)
The 'per-architecture' check included a strcmp with "GenuineIntel". There are already processor feature flags to check whether a processor supports an instruction-set extension. I mean, Intel designed the x86 spec; the least they could do is follow it in their own software.
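To make the distinction concrete, here is a minimal sketch of both checks using GCC/Clang's cpuid helpers on x86 (a toy illustration, not ICC's actual dispatch code):

/* cpuid_demo.c -- contrast the two dispatch strategies the parent
 * describes: comparing the vendor string vs. reading feature bits.
 * Build: gcc -O2 cpuid_demo.c
 */
#include <cpuid.h>   /* GCC/Clang wrapper for the CPUID instruction */
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* Leaf 0: the 12-byte vendor string lives in EBX:EDX:ECX. */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* The contested strategy: dispatch on who made the chip. */
    int vendor_says_intel = (strcmp(vendor, "GenuineIntel") == 0);

    /* The architected strategy: dispatch on what the chip can do. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    int has_sse42 = (ecx & bit_SSE4_2) != 0;

    printf("vendor=%s intel=%d sse4.2=%d\n",
           vendor, vendor_says_intel, has_sse42);
    return 0;
}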
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
Re: (Score:2)
They all do it. Nvidia is notorious for it, probably more so than Intel, going so far as to bin chips and create special review boards and firmware that make the review cards 30% faster than the retail versions.
This is nothing more than a pot-meet-kettle moment. Intel must be making waves in HPC with Phi to draw this strong a comment from Nvidia.
Here's the real reason for Nvidia's complaints (Score:5, Interesting)
The real reason that Nvidia is bitching up a storm is that KNL has received a very positive reception in the HPC world.
Oh, and KNL is actually an absolute bargain compared with what it takes to set up a high-end Pascal system, not only because you can buy an entire KNL system (not just a GPU card) starting at only $5000, but because it's self-hosting and doesn't need a high-end Xeon CPU just to feed the GPU. To put it in perspective, you could build a cluster of 26 KNLs (26 x $5000 = $130,000) for the price of one of the 8-way systems Nvidia is selling.
http://www.colfax-intl.com/nd/... [colfax-intl.com]
Re:Here's the real reason for Nvidia's complaints (Score:5, Insightful)
Yeah, but their beef isn't about cost, it's about the speed comparisons. Intel has never tried to compete in the GPU performance space -- they're happy being in the low-cost space. How the two compare on performance per dollar, I have no idea, but I'm guessing that having so many more Intel chips in your cluster will add significant power and space requirements at the very least. You may actually be better off with the Nvidia solution in the long run.
Re: (Score:2)
If you have some actual reason to believe that, please share it so others can make a good decision. If not, why chaff the discussion?
Re: (Score:1)
HPC is not GPU. It's a whole other area of computing that has little if anything to do with graphics. The Knights Landing chips are kinda like a GPU in the sense that there are lots of tiny cores good at one or two operations, but they differ significantly from the standard GPU-type chip Nvidia produces in that they're x86, and their cores are a little more general-purpose than a standard Nvidia CUDA core.
Think of it this way: Knights Landing is marketed as 70 Atom-like cores, rather than 1200 CUDA cores. Becau
Re: (Score:3)
Yes, the HPC world is waiting for KNL because they don't want to port their old codes to CUDA. But that's just the expectation: people are starting to realize that running a Xeon code on KNL is by no means immediate, and you won't get much of a performance boost without a serious application rewrite... just like porting to GPUs, though maybe slightly easier.
But on the performance side, it is very clear that KNL performance is terrible. The fact that Intel only shows scaling figures is quite funny: it is very
Re: (Score:2)
Yes, the HPC world is waiting for KNL because they don't want to port their old codes to CUDA. But that's just the expectation: people are starting to realize that running a Xeon code on KNL is by no means immediate, and you won't get much of a performance boost without a serious application rewrite... just like porting to GPUs, though maybe slightly easier.
Exactly this. AVX-512 is much more GPGPU-like than traditional SIMD, so even transitioning 256-bit AVX (AVX2) code to it isn't going to be trivial. I would not expect random code to perform better on it without serious work.
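For a sense of what "GPGPU-like" means here: AVX-512 adds dedicated mask registers, so divergent lanes are handled by per-lane predication much as GPU threads are, instead of the blend/shuffle idioms typical of AVX2 code. A small sketch, assuming an AVX-512F-capable machine (file name illustrative):

/* avx512_mask.c -- clamp negative elements to zero using AVX-512
 * per-lane predication. Build: gcc -O2 -mavx512f avx512_mask.c
 */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float in[16], out[16];
    for (int i = 0; i < 16; i++)
        in[i] = (float)(i - 8);          /* values -8 .. 7 */

    __m512 v    = _mm512_loadu_ps(in);
    __m512 zero = _mm512_setzero_ps();

    /* Build a 16-bit lane mask: which elements are negative? */
    __mmask16 neg = _mm512_cmp_ps_mask(v, zero, _CMP_LT_OS);

    /* Predicated select: zero where the mask is set, v elsewhere --
     * the SIMD analogue of a per-thread branch on a GPU. */
    __m512 r = _mm512_mask_blend_ps(neg, v, zero);

    _mm512_storeu_ps(out, r);
    for (int i = 0; i < 16; i++)
        printf("%g ", out[i]);           /* zeros, then 0..7 */
    printf("\n");
    return 0;
}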
Re: (Score:2)
According to this, a Phi can be had for $200 at the low end. You can't buy an Nvidia Tesla product for that.
https://www.phoronix.com/scan.... [phoronix.com]
Pot meet kettle (Score:2)
Both parties are quite guilty here.
Layoffs (Score:2)
Re: (Score:2)
They've had multiple rounds of layoffs recently (or one round with the reported number increasing frequently).
Intel will be a husk in less than 10 years if they keep this shit up.
Worlds smallest violin for Nvidia (Score:5, Insightful)
Re: (Score:3)
If by improvements you mean optimisations, and by overpriced cards you mean getting what you pay for, and by better than AMD you mean better than AMD, then yeah, you're 100% right.
Now you can repeat the same statement for AMD.
And for Intel
And for ARM
And for every other chip manufacturer who targets a specific market with specific products.
Manufacturer Exaggerates Product's Virtues (Score:2)
Yawn, wake me when there's some actual news.
Also, anyone who puts much faith in Intel's claims is either naive or a company shill. This is simply business as usual for Intel.
Cheater A calls out Cheater B for cheating! (Score:2)
People in glass houses shouldn't throw stones... and buy some damn shades because seriously, nobody wants to see that!
Re: (Score:2)
People in glass houses shouldn't throw stones... and buy some damn shades because seriously, nobody wants to see that!
Cheaters think everybody cheats. In this case they might be right.
Get used to it, Nvidia (Score:1)