Comment: Re:Better article (Score 1) 75

by gman003 (#48909831) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

I won't make any assumptions about you, but I've *never* looked at the marketing for the product I work on. I don't check to make sure their numbers are accurate, because my job is to build the damn thing, not proofread. If someone from marketing *asks* me to check something, I will, but I don't go around reading reviews to make sure all the numbers are right.

Further, it's a compromise in a part that's already compromised. In any video card, there are several parts that need to be roughly proportionate in power - memory bandwidth, ROP units, shader units, at the most basic level. Adding extras of any one part won't speed things up, it'll just bottleneck on the other parts. The 980 was a balanced design, perhaps a bit shader-heavy. The 970 took the 980 chip, binned out about 20% of the shaders, binned out about 13% of the ROPs and slowed down one of the memory controllers by segmenting it off. The part that you're complaining is "compromised" is still *over*-engineered for the job at hand. They could have completely removed that memory controller and still been bottlenecked on shaders, not bandwidth.
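The balance argument above can be sketched numerically. This is a crude, hypothetical model (all rates and workload numbers are made up for illustration, not real GM204 figures): since the shader, ROP, and memory stages largely overlap, frame time is dominated by the slowest stage, so adding bandwidth to a shader-bound card buys nothing.

```python
# Hypothetical bottleneck model: frame time is set by the slowest stage,
# so over-provisioning one unit type doesn't help. All numbers are made up.

def frame_time_s(shader_ops, shader_rate, pixels, rop_rate, bytes_moved, bandwidth):
    """Each stage runs (mostly) in parallel, so the slowest one dominates."""
    return max(shader_ops / shader_rate,
               pixels / rop_rate,
               bytes_moved / bandwidth)

# A "balanced" card: every stage takes ~10 ms.
balanced = frame_time_s(5e10, 5e12, 2e9, 2e11, 2e9, 2e11)

# Doubling memory bandwidth alone changes nothing -- still shader/ROP-bound.
more_bw = frame_time_s(5e10, 5e12, 2e9, 2e11, 2e9, 4e11)

print(balanced * 1e3, "ms vs", more_bw * 1e3, "ms")
```

Both calls come out at the same frame time, which is the point: the 970's segmented-off bandwidth sits behind bottlenecks that bind first.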

Finally, you missed the most crucial part. You are assuming malice and ignoring any possibility of incompetence, despite this being a very pointless thing to lie about and a very easy thing to get wrong. In fact, you seem to be ignoring all the evidence that it *was* incompetence, blindly asserting malice for no other reason than that you want it to be malice.

Comment: Re:Just bought two of these cards (Score 1) 75

by gman003 (#48909033) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

... except they WEREN'T the "same CPU". They were the same GPU die (GM204), but three of the sixteen cores were disabled. This was perfectly explained at launch. If you bought a 970 thinking you could overclock it to 980 clocks and get the exact same performance, I'm sorry, but you just weren't paying any attention.

Comment: Re:Better article (Score 2) 75

by gman003 (#48908705) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

This wasn't "marketing material", it was "technical marketing material", the stuff given to review sites, not the general public. And it was a relatively obscure portion that was incorrect, not something that most consumers would even understand, let alone care about. The technical marketing staff (a distinct group from the consumer marketing department) made the assumption that every enabled ROP/MC functional unit has two 8px/clock ROPs, two L2 cache units of 256KB, two links into the memory crossbar, and two 32-bit memory controllers.

This assumption was true for previous architectures (Tesla, Fermi, Kepler). It was true for earlier releases in this architecture (the 750 Ti and 980 were full-die releases with no disabled units; the 750 only disabled full units). This is the first architecture where disabling parts of a ROP/MC functional unit, while keeping other parts active, was possible. The marketing department was informed that there were still 8 ROP/MC units, and that there was still a 256-bit memory bus. They were not informed that one ROP/MC unit was partially disabled, with only one ROP and one L2 cache unit, and only one port into the memory crossbar, but still two MCs.
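A back-of-envelope check, using only the numbers from the comment above, shows how the corrected 970 specs fall out of one partially disabled unit (one 8px/clock ROP cluster, one 256KB L2 slice, and one crossbar port gone, both 32-bit MCs kept):

```python
# Derive the corrected GTX 970 specs from the full GM204 die (GTX 980).

full_rops     = 64    # full die
full_l2_kb    = 2048  # 2 MB of L2
full_bus_bits = 256   # 8 x 32-bit memory controllers

rops_disabled  = 8    # one 8px/clock ROP cluster
l2_disabled_kb = 256  # one L2 slice
mcs_disabled   = 0    # both MCs in the partial unit stay active

gtx970_rops  = full_rops - rops_disabled          # 56, not the stated 64
gtx970_l2_kb = full_l2_kb - l2_disabled_kb        # 1792 KB = 1.75 MB, not 2 MB
gtx970_bus   = full_bus_bits - 32 * mcs_disabled  # still a 256-bit bus

print(gtx970_rops, gtx970_l2_kb / 1024, gtx970_bus)  # 56 1.75 256
```

So a spec sheet generated from the "every enabled unit has two of everything" rule overstates exactly the two numbers Nvidia had to correct, while the bus width it reported was still technically true.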

The point AT made is this: this information would have been figured out eventually. If Nvidia had been up-front with it, it would have been a minor footnote on the universally-positive launch reviews, not dedicated articles just for this issue. It only hurts them to have it not be known information from the get-go.

As much as it's hip to hate on big corporations for being evil, they are not evil for no purpose. They do evil only when it is more profitable. In this case, the supposed lie was less profitable than the truth. Therefore it was incompetence, either "they honestly didn't know this was how it worked when they sent the info to reviewers", or "they thought they could get away with something that absolutely would have gotten out, and would not help them sell cards anyway". The former incompetence seems far, far more likely than the latter.

Comment: Re:Option? (Score 1) 75

by gman003 (#48908151) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

There might be cases where an application queries how much memory is available, then allocates all of it to use as caching. If the driver doesn't manage that memory well (putting least-used data in the slower segment), that could cause performance to be lower than if it were forced to 3.5GB only.

That said, nobody seems to have found any applications where the memory management malfunctions like that, so it's more a theoretical quibble than an actual problem at this point. And, knowing Nvidia, they'd just patch the driver to report a lower memory amount to that app only (they unfortunately tend to fill their drivers with per-game exceptions and rewritten shaders to make big-name titles run faster).
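The failure mode described above can be sketched in a few lines. Everything here is hypothetical, not a real driver API: an app sizes a cache from the reported 4GB total, while a well-behaved driver fills the fast 3.5GB segment first and spills only the least-used data into the slow 0.5GB segment.

```python
# Hypothetical sketch: fast-segment-first placement on a GTX 970-like card.
# Segment sizes and allocation names are illustrative only.

FAST_SEGMENT_MB = 3.5 * 1024  # ~196 GB/s segment
SLOW_SEGMENT_MB = 0.5 * 1024  # ~28 GB/s segment

def place_allocations(allocs_mb_by_use):
    """Place allocations (sorted most-used first) into the fast segment
    until it is full; the rest spills into the slow segment."""
    fast, slow, used = [], [], 0.0
    for name, size in allocs_mb_by_use:
        if used + size <= FAST_SEGMENT_MB:
            fast.append(name)
            used += size
        else:
            slow.append(name)  # least-used data lands in the slow segment
    return fast, slow

# An app that queried "4GB free" and cached 4GB of assets:
allocs = [("framebuffer", 512), ("hot_textures", 2048),
          ("meshes", 1024), ("cold_texture_cache", 512)]
fast, slow = place_allocations(allocs)
print(fast, slow)
```

As long as the driver gets the usage ordering right, only the cold cache ends up in the slow segment; the problem case would be a driver that placed hot data there instead.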

Comment: Roswell (Score 1) 366

by dgatwood (#48908127) Attached to: Best 1990s Sci-fi show?

Admittedly, Roswell barely qualifies as 1990s, because it began in 1999, but it was one of the better sci-fi shows I've seen. Among other things, it turned the genre on its head by being told from the perspective of aliens, in the present day, on Earth. It had a lot of things going against it, of course, with network politics being the big one, and season two strayed awfully far into X-Files territory, but it had good writing, good acting, and much like Stargate, it didn't take itself too seriously, somehow managing just the right blend of humor, romance, dramatic tension, etc. And in spite of the main characters being teenagers, it managed to almost entirely avoid the usual teen drama that you'd expect to clog up such a series.

My favorite funny moment had to be when Jonathan Frakes (playing himself) told one of the alien teenagers that he just didn't make a believable alien. And my favorite episode was the Christmas special; it was almost pure character development, did nothing to drive the plot, but it was a breathtaking tear-jerker that gave a lot of insight into the main characters' personalities.

If you haven't seen Roswell, it's worth a look.

Comment: Better article (Score 5, Informative) 75

by gman003 (#48908047) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

As usual, AnandTech's article is the best technical reporting on the matter.

Key takeaways (aka tl;dr version):
* Nvidia's initial announcement of the specs was wrong, but only because the technical marketing team wasn't notified that you could partially disable a ROP unit with the new architecture. They overstated the number of ROPs by 8 (was 64, actually 56) and the amount of L2 cache by 256KB (was 2MB, actually 1.75MB). This was quite unlikely to be a deliberate deception, and was most likely an honest mistake.
* The card effectively has two performance cliffs for exceeding memory usage. Go over 3.5GB, and it drops from 196GB/s to 28GB/s; go over 4GB and it drops from 28GB/s to 16GB/s as it goes out to main memory. This makes it act more like a 3.5GB card in many ways, but the performance penalty isn't quite as steep, and it intelligently prioritizes which data to put in the slower segment.
* The segmented memory is not new; Nvidia previously used it with the 660 and 660 Ti, although for a different reason.
* Because, even with the reduced bandwidth, the card is bottlenecked elsewhere, this is unlikely to cause actual performance issues in real-world cases. The only things that currently show it are artificial benchmarks that specifically test memory bandwidth, and most of those were written specifically to test this card.
* As always, the only numbers that matter for buying a video card are benchmarks and prices. I'm a bigger specs nerd than most, but even I recognize that what matters is application performance, not theoretical numbers. And the application performance is good enough for the price that I'd still buy one, if I were in the market for a high-end but not top-end card.
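For anyone wondering where the 196GB/s and 28GB/s figures come from: the 970 runs 7GHz-effective GDDR5 on a 256-bit bus, and the two segments split the eight 32-bit controllers 7:1.

```python
# Derive the 970's segment bandwidths from its memory clock and bus width.

effective_clock_ghz = 7.0   # GDDR5 effective data rate
bus_width_bits      = 256   # 8 x 32-bit memory controllers

total_gbps = effective_clock_ghz * bus_width_bits / 8  # 224 GB/s aggregate
fast_seg   = total_gbps * 7 / 8                        # 3.5GB segment, 7 MCs
slow_seg   = total_gbps * 1 / 8                        # 0.5GB segment, 1 MC

print(total_gbps, fast_seg, slow_seg)  # 224.0 196.0 28.0
```

The third cliff, ~16GB/s past 4GB, is just PCIe 3.0 x16 bandwidth to main memory, which is why spilling out of VRAM entirely is worse than spilling into the slow segment.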

Not a shill or fanboy for Nvidia - I use and recommend both companies' cards, depending on the situation.

Comment: Re:The solution is obvious (Score 1) 456

But do realize, that was an outlier and is atypical of what Apple does.

No, it isn't atypical, at least for early-generation Apple products. The average support period for Apple is about three years, and there are a fair number of products that got less than that (mostly early models). For example, here's the time between the release date and last supported update of some other first-generation and second-generation Apple iOS devices:

  • Original Apple TV: 3 years, 1 month, and 1 day
  • Original iPhone: 2 years, 7 months, and 4 days
  • iPhone 3G: 2 years, 4 months, and 11 days

The support period tends to vary based in part on how many of the devices are out there in active use, and in part on how badly underpowered the hardware was to begin with. So later products in a given line are likely to have longer support periods than earlier products.
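For what it's worth, the iPhone figures above can be reproduced from well-known release dates with a calendar-style date diff (the Apple TV dates I'd have to look up, so they're omitted here):

```python
from datetime import date, timedelta

def ymd_between(start, end):
    """Calendar-style (years, months, days) between two dates."""
    years  = end.year  - start.year
    months = end.month - start.month
    days   = end.day   - start.day
    if days < 0:
        months -= 1
        # borrow the length of the month preceding `end`
        prev_month_end = date(end.year, end.month, 1) - timedelta(days=1)
        days += prev_month_end.day
    if months < 0:
        years  -= 1
        months += 12
    return years, months, days

# Original iPhone: released 2007-06-29, final update (iOS 3.1.3) 2010-02-02
print(ymd_between(date(2007, 6, 29), date(2010, 2, 2)))   # (2, 7, 4)

# iPhone 3G: released 2008-07-11, final update (iOS 4.2.1) 2010-11-22
print(ymd_between(date(2008, 7, 11), date(2010, 11, 22))) # (2, 4, 11)
```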

Comment: Re:life in the U.S. (Score 1) 230

by dgatwood (#48906531) Attached to: Verizon, Cable Lobby Oppose Spec-Bump For Broadband Definition

Actually, the telcos in Europe are preparing to roll out G.fast, which makes telcos competitive with cable again.

Not really. We hit the bandwidth limits of a single twisted pair a long time ago. For G.fast to be usable, the phone company has to replace your phone line with fiber to within just a few hundred feet of your home. For it to reach maximum speeds, you need fiber within just 230 feet. In effect, this means that if the phone company replaces all of their copper with fiber, G.fast lets them skip the cost of running the fiber from the pole outside your house into your house, for now. That's about it.

If your community has no fiber, G.fast won't even connect unless you're within BB gun range of your central office or DSL-capable remote terminal.

Comment: Re:Modula-3 FTW! (Score 1) 453

by Grishnakh (#48905431) Attached to: Ask Slashdot: Is Pascal Underrated?

Nope, Beta was not far, far superior. You're totally forgetting that Betas could only store 1 hour of video. (They later fixed this, but by then it was far too late.) Who wants to change tapes in the middle of a movie? VHS tapes could store a whole 2-hour movie, so they easily took over. Not having Sony's stupid licensing costs helped too. And by the time Beta was on the way out, VHS had caught up to it video-quality-wise too.
