Comment Re:Better article (Score 1) 113

I won't make any assumptions about you, but I've *never* looked at the marketing for the product I work on. I don't check to make sure their numbers are accurate, because my job is to build the damn thing, not proofread. If someone from marketing *asks* me to check something, I will, but I don't go around reading reviews to make sure all the numbers are right.

Further, it's a compromise in a part that's already compromised. In any video card, several parts need to be roughly proportionate in power - memory bandwidth, ROP units, and shader units, at the most basic level. Adding extra capacity in any one part won't speed things up; it'll just bottleneck on the other parts. The 980 was a balanced design, perhaps a bit shader-heavy. The 970 took the 980 chip, binned out about 20% of the shaders and about 13% of the ROPs, and slowed down one of the memory controllers by segmenting it off. The part you're calling "compromised" is still *over*-engineered for the job at hand. They could have completely removed that memory controller and still been bottlenecked on shaders, not bandwidth.
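
To put rough numbers on that (the unit counts below are the published 980/970 specs; the percentages are just arithmetic, nothing Nvidia stated in this form):

```python
# Rough arithmetic on the 980 -> 970 binning. Unit counts are the
# published specs for each card; the rest is simple percentages.
gtx_980 = {"shaders": 2048, "rops": 64, "l2_kb": 2048}
gtx_970 = {"shaders": 1664, "rops": 56, "l2_kb": 1792}

for part in gtx_980:
    cut = 1 - gtx_970[part] / gtx_980[part]
    print(f"{part}: {cut:.1%} disabled")

# shaders: 18.8% disabled  (the "about 20%" above)
# rops: 12.5% disabled     (the "about 13%" above)
# l2_kb: 12.5% disabled
```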

Finally, you missed the most crucial part. You are assuming malice and ignoring any possibility of incompetence, despite this being a very pointless issue to lie about and a very easy one to mess up. In fact, you seem to be ignoring all evidence that it *was* incompetence, and blindly asserting that it was malice for no other reason than that you want it to be malice.

Comment Re:Just bought two of these cards (Score 1) 113

... except they WEREN'T the "same CPU". They were the same GPU die (GM204), but three of the sixteen shader clusters (SMMs) were disabled. This was clearly explained at launch. If you bought a 970 thinking you could overclock it to 980 clocks and get the exact same performance, I'm sorry, but you just weren't paying attention.

Comment Re:Better article (Score 2) 113

This wasn't "marketing material", it was "technical marketing material", the stuff given to review sites, not the general public. And it was a relatively obscure portion that was incorrect, not something that most consumers would even understand, let alone care about. The technical marketing staff (a distinct group from the consumer marketing department) made the assumption that every enabled ROP/MC functional unit has two 8px/clock ROPs, two L2 cache units of 256KB, two links into the memory crossbar, and two 32-bit memory controllers.

This assumption was true for previous architectures (Tesla, Fermi, Kepler). It was true for earlier releases in this architecture (the 750 Ti and 980 were full-die releases with no disabled units; the 750 only disabled full units). This is the first architecture where disabling parts of a ROP/MC functional unit, while keeping other parts active, was possible. The marketing department was informed that all four ROP/MC units were still present, and that there was still a 256-bit memory bus. They were not informed that one ROP/MC unit was partially disabled, with only one ROP and one L2 cache unit and only one port into the memory crossbar, but still two MCs.
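
A quick sanity check on what that partial disable does to the headline numbers, assuming the layout just described (my paraphrase of AT's diagram, not an official breakdown):

```python
# Sanity-check of the corrected 970 specs, assuming four ROP/MC
# functional units, each with two 8px/clock ROP clusters, two 256KB
# L2 slices, two crossbar links, and two 32-bit memory controllers.
# Exactly one unit is partially disabled, keeping both of its MCs.
FULL_UNITS = 3   # fully enabled units

rop_clusters = FULL_UNITS * 2 + 1     # partial unit keeps one cluster
l2_slices    = FULL_UNITS * 2 + 1     # ...and one L2 slice
mem_ctrls    = (FULL_UNITS + 1) * 2   # all controllers stay enabled

print(f"{rop_clusters * 8} ROPs")     # 56, not the advertised 64
print(f"{l2_slices * 256}KB of L2")   # 1792KB (1.75MB), not 2MB
print(f"{mem_ctrls * 32}-bit bus")    # still 256-bit
```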

The point AT made is this: this information would have been figured out eventually. If Nvidia had been up-front about it, it would have been a minor footnote in the universally positive launch reviews, not the subject of dedicated articles. It only hurts them that it wasn't public knowledge from the get-go.

As much as it's hip to hate on big corporations for being evil, they are not evil for no purpose. They do evil only when it is more profitable. In this case, the supposed lie was less profitable than the truth. Therefore it was incompetence, either "they honestly didn't know this was how it worked when they sent the info to reviewers", or "they thought they could get away with something that absolutely would have gotten out, and would not help them sell cards anyway". The former incompetence seems far, far more likely than the latter.

Comment Re:Option? (Score 1) 113

There might be cases where an application queries how much memory is available and then allocates all of it to use as a cache. If the driver doesn't manage that memory well (putting the least-used data in the slower segment), performance could end up lower than if the card were forced to 3.5GB only.
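
As a sketch of the kind of placement policy being described here (purely illustrative pseudologic, not Nvidia's actual driver):

```python
# Hypothetical segment-aware placement: hottest allocations fill the
# fast 3.5GB segment first; whatever is left spills into the slow
# 0.5GB segment. Illustrative only -- not how the real driver works.
FAST_MB = 3584

def place(allocations):
    """allocations: list of (name, size_mb, access_frequency)."""
    fast, slow, fast_used = [], [], 0
    # Most-used data first, so it lands in the fast segment.
    for name, size, freq in sorted(allocations, key=lambda a: -a[2]):
        if fast_used + size <= FAST_MB:
            fast.append(name)
            fast_used += size
        else:
            slow.append(name)  # least-used data takes the 28GB/s hit
    return fast, slow

fast, slow = place([("framebuffer", 1024, 100),
                    ("hot_textures", 2048, 80),
                    ("cold_textures", 768, 5)])
print(fast, slow)  # only the cold textures land in the slow segment
```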

That said, nobody seems to have found any applications where the memory management malfunctions like that, so it's more a theoretical quibble than an actual one at this point. And, knowing Nvidia, they'd just patch the driver to report a lower memory amount to that app only (they unfortunately tend to fill their drivers with per-application exceptions and rewritten shaders to make big-name games run faster).

Comment Better article (Score 5, Informative) 113

As usual, AnandTech's article is the best technical reporting on the matter.

Key takeaways (aka tl;dr version):
* Nvidia's initial announcement of the specs was wrong, but only because the technical marketing team wasn't notified that the new architecture allows partially disabling a ROP unit. They overstated the number of ROPs by 8 (claimed 64, actual 56) and the amount of L2 cache by 256KB (claimed 2MB, actual 1.75MB). This was quite unlikely to be a deliberate deception, and was most likely an honest mistake.
* The card effectively has two performance cliffs for exceeding memory usage: go over 3.5GB and bandwidth drops from 196GB/s to 28GB/s; go over 4GB and it drops from 28GB/s to 16GB/s as accesses spill out to main memory over PCIe. This makes it act more like a 3.5GB card in many ways, but the performance penalty isn't quite as steep, and the driver intelligently prioritizes which data to put in the slower segment (see the sketch after this list).
* The segmented memory is not new; Nvidia previously used it with the 660 and 660 Ti, although for a different reason.
* Because, even with the reduced bandwidth, the card is bottlenecked elsewhere, this is unlikely to cause actual performance issues in real-world cases. The only things that currently show it are artificial benchmarks that specifically test memory bandwidth, and most of those were written specifically to test this card.
* As always, the only numbers that matter for buying a video card are benchmarks and prices. I'm a bigger specs nerd than most, but even I recognize that the thing that matters is application performance, not theoretical. And the application performance is good enough for the price that I'd still buy one, if I were in the market for a high-end but not top-end card.
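
On the bandwidth-cliff bullet above, here's a toy calculation of what a naive access pattern, spread uniformly across the working set, would average at various sizes (my simplification, not AT's methodology; the real driver's prioritization makes things better than this):

```python
# Toy model of the 970's memory cliffs, assuming accesses are spread
# evenly over the working set. A worst-case-ish simplification; the
# real driver steers hot data into the fast segment.
SEGMENTS = [(3584, 196.0),           # 3.5GB fast segment, GB/s
            (512, 28.0),             # 0.5GB slow segment
            (float("inf"), 16.0)]    # spill to system RAM over PCIe

def avg_bandwidth(working_set_mb):
    remaining, weighted = working_set_mb, 0.0
    for size_mb, gbps in SEGMENTS:
        used = min(remaining, size_mb)
        weighted += used * gbps
        remaining -= used
        if remaining <= 0:
            break
    return weighted / working_set_mb

for ws in (3000, 3584, 4096, 4608):
    print(f"{ws}MB working set -> {avg_bandwidth(ws):.0f}GB/s average")
# 3000MB -> 196GB/s, 3584MB -> 196GB/s,
# 4096MB -> 175GB/s, 4608MB -> 157GB/s
```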

Not a shill or fanboy for Nvidia - I use and recommend both companies' cards, depending on the situation.

Comment Re:the problem with Twitter (Score 2) 114

140 characters ISN'T ENOUGH! That's not enough to say anything of substance.

With you so far.

If there was a service that came out with 300 characters as a limit, it would crush Twitter.

And now you lost me. Twitter isn't for "anything of substance". It's either insubstantial stuff, or links to substantial stuff. People don't use it as, or want it to be, a place for "anything of substance". Leave that to the blogs.

Comment Logitech Anywhere MX (Score 1) 431

The Logitech Anywhere MX has a physical middle-click button underneath the scroll wheel ("clicking" the wheel itself just toggles a friction gear on the scroll wheel). If it weren't for your additional complaint about needing a massive mouse (this thing is tiny), it would be perfect for you.

Interestingly, while it really can run perfectly on surfaces as weird as glass, I have found one surface it does not work on: my old mousepad.

Comment Re:Awesome, I shall buy one in a year (Score 1) 114

I generally prefer ATI hardware because I think nVidia's stock cooling kills graphics cards and I'd rather deal with crappy drivers, but the current ATI hardware is a complete non-starter. There's really no level at all where it can be justified.

Well, yeah, because ATI doesn't exist anymore, even as a brand. AMD bought them in 2006 and retired the brand in 2010. The last ATI card was the 5870 Eyefinity Edition, which packs about as much punch as this 960, but in a card with the size, noise and power draw of a top-end card. Everything since has been AMD.

I know exactly what you meant, and I even agree with you on your points; it's just hard to take those points seriously when you're using a half-decade-old name.

Comment An Offline Mode (Score 3) 324

I've already given more data to Google than I would like. I'm not buying Glass unless I can use it as MY device, not theirs. No uploading shit to the cloud. No monitoring my location or what I look at or what apps I use.

I'm not worried about people recording me with Glass. I actually think that could do more good than harm (mainly by recording police). So I'd be recording anything I think interesting (fortunately for you all, I find humans incredibly dull). But those recordings would have to remain MINE, under MY control.

Comment Re:wtf are you talking about (Score 1) 40

Goldeneye had, primarily, a 2D gamespace. The graphics were 3D, but altitude rarely factored into things (unless you played Oddjob).

Any game with a real 3D gamespace was brutal on the N64. Daikatana, for instance, although that one had more problems than just control issues.

Consider games as part of an information system. Data flows from the game to the user (display, sound, rumble) and from the user to the game (controller). The more data that can flow, the more complex the game can be without overwhelming the user. On the output side, that's why game designers push for higher resolution and framerates. On the input side, that's why we go for the maximum number of analog inputs (two sticks plus analog triggers, plus sometimes a touchpad or touchscreen), and cram as many buttons as possible onto the pad.
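
As a toy illustration of that channel-counting view (the tallies below are mine, just to make the point concrete):

```python
# Crude tally of input channels on two pads. Counts are my own
# estimates, purely to illustrate the information-flow argument.
pads = {
    "N64":        {"analog_axes": 2,   # one stick
                   "buttons": 14},     # A, B, 4x C, 4x D-pad, Start, L, R, Z
    "modern pad": {"analog_axes": 8,   # two sticks, two triggers, touchpad
                   "buttons": 14},
}

for name, pad in pads.items():
    print(f"{name}: {pad['analog_axes']} analog axes, "
          f"{pad['buttons']} buttons")
```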
