
Comment: Lasers (Score 1) 38

by gman003 (#48926223) Attached to: The discovery of intelligent alien life would be met predominantly with...

Think about it. We're not going to discover alien life by having it drop by for a visit. We're going to discover them by long-range communications, and reply the same way.

Lasers might not be a bad way to get a decent amount of bandwidth between stars, and we'd need a big freaking one to be visible at astronomical distances.

Comment: Re:Car Analogy (Score 2) 113

by gman003 (#48911397) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

Both of you suck at car analogies.

Let's say Nissan makes an engine. V6, 3.8L. They advertise it as being 250HP, promote it mainly by putting it in racecars and winning races, and a whole lot of other technical specs get handed out to reviewers to gush over, but nobody really reads them except nerds.

They then make a variant engine. Same V6, but they cut the stroke down so it's only 3.0L. They advertise it as being 200HP, promote it with some more racecars that don't win the overall race but are best in their class, and again they hand out a small book worth of technical specs, this one with a minor error in the airflow rates on page 394. Somebody forgot to edit the numbers from the 3.8L engine, so even though the actual airflow is more than enough for the smaller engine, the numbers originally given look bigger. Nobody from marketing was told about the airflow change, because it was a weird side effect of something they got rid of related to turbocharger compatibility, and nobody thought to ask the engineers to double-check all of their numbers, since only about 200 people worldwide would ever read them anyway.

Once actual customers get their hands on the new engine, most of them are pretty happy. The 3.8L is better, but it costs like twice as much as the 3.0L, so whatever. One customer is driving on this godawful, decrepit highway that hasn't been repaved since the Eisenhower administration built it, and obviously has some issues. Rather than blame the shitty conditions, he takes a look at the engine, and finds that if you take an air compressor and blow air through the intakes, not as much gets to the engine as in the 3.8L. He then bitches about it online, and other people find the same thing. Motorheads being just as susceptible to mob mentality as any other group, they build a standardized test set that completely ignores realistic driving conditions and pretty much only identifies this particular oddity in this particular engine, and take to the streets waving torches and pitchforks when they find the airflow value on page 394 isn't the airflow they're getting.

Someone at Nissan hears the noise outside, checks with their internal books and finds the typo. They start explaining as quickly and loudly as they can, but the mob's angry and nobody's going to stop it with logic at this point.

Meanwhile, the smart motorheads are sitting back, waiting for Nissan to drop the price on the "tainted" engine so they can pick one up for cheap themselves, since it's actually a perfectly fine engine, already a pretty good one for the price, and way more fuel-efficient than Audi's equivalent.

Comment: Re:Better article (Score 1) 113

by gman003 (#48909831) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

I won't make any assumptions about you, but I've *never* looked at the marketing for the product I work on. I don't check to make sure their numbers are accurate, because my job is to build the damn thing, not proofread. If someone from marketing *asks* me to check something, I will, but I don't go around reading reviews to make sure all the numbers are right.

Further, it's a compromise in a part that's already compromised. In any video card, there are several parts that need to be roughly proportionate in power - memory bandwidth, ROP units, shader units, at the most basic level. Adding extras of any one part won't speed things up, it'll just bottleneck on the other parts. The 980 was a balanced design, perhaps a bit shader-heavy. The 970 took the 980 chip, binned out about 20% of the shaders, binned out about 13% of the ROPs and slowed down one of the memory controllers by segmenting it off. The part that you're complaining is "compromised" is still *over*-engineered for the job at hand. They could have completely removed that memory controller and still been bottlenecked on shaders, not bandwidth.
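The binning arithmetic above can be put into a toy sketch. The unit counts and percentages are the public GM204/970 figures; the function and its parameter names are my own illustration, not anything from Nvidia:

```python
# Toy sketch of how the 970's specs fall out of binning the 980 die.
# Cuts: 3 of 16 SMMs (~19% of shaders), 8 of 64 ROPs (12.5%), and
# one of eight memory-controller links segmented off (12.5% of BW).

GTX_980 = {"shaders": 2048, "rops": 64, "bandwidth_gbs": 224}

def bin_down(full, shader_cut=3/16, rop_cut=1/8, bw_cut=1/8):
    """Disable a fraction of each functional unit type."""
    return {
        "shaders": int(full["shaders"] * (1 - shader_cut)),
        "rops": int(full["rops"] * (1 - rop_cut)),
        "bandwidth_gbs": full["bandwidth_gbs"] * (1 - bw_cut),
    }

GTX_970 = bin_down(GTX_980)
# 1664 shaders, 56 ROPs, 196 GB/s to the fast memory segment
```

The point of the sketch is that the cuts are deliberately kept roughly proportionate, so no one unit type becomes the odd bottleneck.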

Finally, you missed the most crucial part. You are assuming malice, and ignoring any possibility of incompetence, despite it being a very pointless issue to lie about, and very easy to mess up on. In fact, you seem to be ignoring all evidence that it *was* incompetence, and blindly assert that it was malice for no other reason than that you want it to be malice.

Comment: Re:Just bought two of these cards (Score 1) 113

by gman003 (#48909033) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

... except they WEREN'T the "same CPU". They were the same GPU die (GM204), but three of the sixteen cores were disabled. This was perfectly explained at launch. If you bought a 970 thinking you could overclock it to 980 clocks and get the exact same performance, I'm sorry, but you just weren't paying any attention.

Comment: Re:Better article (Score 2) 113

by gman003 (#48908705) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

This wasn't "marketing material", it was "technical marketing material", the stuff given to review sites, not the general public. And it was a relatively obscure portion that was incorrect, not something that most consumers would even understand, let alone care about. The technical marketing staff (a distinct group from the consumer marketing department) made the assumption that every enabled ROP/MC functional unit has two 8px/clock ROPs, two L2 cache units of 256KB, two links into the memory crossbar, and two 32-bit memory controllers.

This assumption was true for previous architectures (Tesla, Fermi, Kepler). It was true for earlier releases in this architecture (the 750 Ti and 980 were full-die releases, no disabled units; the 750 only disabled full units). This is the first architecture where disabling parts of the ROP/MC functional unit, while keeping other parts active, was possible. The marketing department was informed that there were still four ROP/MC units, and that there was still a 256-bit memory bus. They were not informed that one ROP/MC unit was partially disabled, with only one ROP and one L2 cache unit, and only one port into the memory crossbar, but still two MCs.
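The mismatch can be tallied directly. This is my own sketch, assuming four ROP/MC partitions on GM204, each nominally holding two 8px/clock ROP clusters, two 256KB L2 slices, two crossbar ports, and two 32-bit MCs (the per-unit figures described above); the dict layout is purely illustrative:

```python
# What technical marketing assumed (all units fully populated)
# versus what the 970 silicon actually had (one unit half-disabled).

FULL_UNIT = {"rops": 16, "l2_kb": 512, "xbar_ports": 2, "mcs": 2}
HALF_UNIT = {"rops": 8,  "l2_kb": 256, "xbar_ports": 1, "mcs": 2}

def tally(units):
    """Sum each spec across all ROP/MC functional units."""
    return {key: sum(u[key] for u in units) for key in FULL_UNIT}

assumed = tally([FULL_UNIT] * 4)               # 64 ROPs, 2 MB L2
actual = tally([FULL_UNIT] * 3 + [HALF_UNIT])  # 56 ROPs, 1.75 MB L2
# Note both configurations still total eight 32-bit MCs: the
# 256-bit bus figure was correct even though the ROP/L2 count wasn't.
```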

The point AT made is this: this information would have been figured out eventually. If Nvidia had been up-front with it, it would have been a minor footnote in the universally positive launch reviews, not the subject of dedicated articles. It only hurt them that this wasn't known information from the get-go.

As much as it's hip to hate on big corporations for being evil, they are not evil for no purpose. They do evil only when it is more profitable. In this case, the supposed lie was less profitable than the truth. Therefore it was incompetence, either "they honestly didn't know this was how it worked when they sent the info to reviewers", or "they thought they could get away with something that absolutely would have gotten out, and would not help them sell cards anyway". The former incompetence seems far, far more likely than the latter.

Comment: Re:Option? (Score 1) 113

by gman003 (#48908151) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

There might be cases where an application queries how much memory is available, then allocates all of it to use as caching. If the driver doesn't manage that memory well (putting least-used data in the slower segment), that could cause performance to be lower than if it were forced to 3.5GB only.

That said, nobody seems to have found an application where the memory management malfunctions like that, so it's more a theoretical quibble than a real-world problem at this point. And, knowing Nvidia, they'd just patch the driver to report a lower memory amount to that app only (they unfortunately tend to fill their drivers with per-application exceptions or rewritten shaders to make big-name games run faster).
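The two ideas above, an app caching into everything the driver reports, and a driver-side policy that keeps hot data in the fast segment, can be sketched roughly like this. This is my own illustration of the placement logic, not Nvidia's actual driver behavior; all names are hypothetical:

```python
# Naive placement policy: most-frequently-accessed buffers go into
# the fast 3.5GB segment first; the least-used spill into the slow
# 28GB/s segment. A well-managed spill is what keeps the segmenting
# from showing up in real workloads.

FAST_MB = 3584  # 3.5 GB segment at full bandwidth

def place(allocations):
    """allocations: list of (name, size_mb, access_frequency)."""
    fast, slow, used = [], [], 0
    for name, size, freq in sorted(allocations, key=lambda a: -a[2]):
        if used + size <= FAST_MB:
            fast.append(name)
            used += size
        else:
            slow.append(name)
    return fast, slow

fast, slow = place([("framebuffer", 1024, 100),
                    ("textures", 2048, 50),
                    ("shadow_cache", 1024, 5)])
# the least-used buffer (shadow_cache) lands in the slow segment
```

If the policy were inverted (or absent), a hot buffer could land in the slow segment, which is exactly the malfunction case where forcing a plain 3.5GB limit would be faster.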

Comment: Better article (Score 5, Informative) 113

by gman003 (#48908047) Attached to: NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

As usual, AnandTech's article is the best technical reporting on the matter.

Key takeaways (aka tl;dr version):
* Nvidia's initial announcement of the specs was wrong, but only because the technical marketing team wasn't notified that you could partially disable a ROP unit with the new architecture. They overstated the number of ROPs by 8 (was 64, actually 56) and the amount of L2 cache by 256KB (was 2MB, actually 1.75MB). This was quite unlikely to be a deliberate deception, and was most likely an honest mistake.
* The card effectively has two performance cliffs for exceeding memory usage. Go over 3.5GB, and it drops from 196GB/s to 28GB/s; go over 4GB and it drops from 28GB/s to 16GB/s as it goes out to main memory. This makes it act more like a 3.5GB card in many ways, but the performance penalty isn't quite as steep, and it intelligently prioritizes which data to put in the slower segment.
* The segmented memory is not new; Nvidia previously used it with the 660 and 660 Ti, although for a different reason.
* Because, even with the reduced bandwidth, the card is bottlenecked elsewhere, this is unlikely to cause actual performance issues in real-world cases. The only things that currently show it are artificial benchmarks that specifically test memory bandwidth, and most of those were written specifically to test this card.
* As always, the only numbers that matter for buying a video card are benchmarks and prices. I'm a bigger specs nerd than most, but even I recognize that the thing that matters is application performance, not theoretical. And the application performance is good enough for the price that I'd still buy one, if I were in the market for a high-end but not top-end card.
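The two performance cliffs in the second bullet can be written as a toy model. The bandwidth figures are the ones from the article; the function itself is just my illustration of the tiering:

```python
# Bandwidth of the segment a marginal allocation lands in, as the
# working set grows past each cliff.

def segment_bandwidth_gbs(working_set_gb):
    if working_set_gb <= 3.5:
        return 196   # fast segment: seven 32-bit controllers
    elif working_set_gb <= 4.0:
        return 28    # slow segment: the single segmented controller
    else:
        return 16    # spills over PCIe to system memory

for gb in (3.0, 3.8, 4.5):
    print(gb, segment_bandwidth_gbs(gb), "GB/s")
```

Note the cliffs only apply to the data that lands past each boundary, which is why the card behaves like a slightly-forgiving 3.5GB card rather than a crippled 4GB one.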

Not a shill or fanboy for Nvidia - I use and recommend both companies' cards, depending on the situation.

Comment: Re:the problem with Twitter (Score 2) 114

by gman003 (#48896919) Attached to: Twitter Moves To Curb Instagram Links

140 characters ISN'T ENOUGH! That's not enough to say anything of substance.

With you so far.

If there was a service that came out with 300 characters as a limit, it would crush Twitter.

And now you lost me. Twitter isn't for "anything of substance". It's either insubstantial stuff, or links to substantial stuff. People don't use it as, or want it to be, a place for "anything of substance". Leave that to the blogs.

Comment: Logitech Anywhere MX (Score 1) 422

by gman003 (#48895519) Attached to: Ask Slashdot: Where Can You Get a Good 3-Button Mouse Today?

The Logitech Anywhere MX has a physical middle-click button underneath the scroll wheel ("clicking" the wheel itself just toggles a friction gear on the scroll wheel). If it weren't for your additional complaint about needing a massive mouse (this thing is tiny), it would be perfect for you.

Interestingly, while it really can run perfectly on surfaces as weird as glass, I have found one surface it does not work on: my old mousepad.

Comment: Re:Awesome, I shall buy one in a year (Score 1) 114

by gman003 (#48881711) Attached to: NVIDIA Launches New Midrange Maxwell-Based GeForce GTX 960 Graphics Card

I generally prefer ATI hardware because I think nVidia's stock cooling kills graphics cards and I'd rather deal with crappy drivers, but the current ATI hardware is a complete non-starter. There's really no level at all where it can be justified.

Well, yeah, because ATI doesn't exist anymore, even as a brand. AMD bought them in 2006 and retired the brand in 2010. The last ATI card was the 5870 Eyefinity Edition, which packs about as much punch as this 960, but in a card with the size, noise and power draw of a top-end card. Everything since has been AMD.

I know exactly what you meant, and I even agree with you on your points, it's just hard to take those points seriously when you're using a half-decade-old name.

Comment: An Offline Mode (Score 3) 324

by gman003 (#48869025) Attached to: What Will Google Glass 2.0 Need To Actually Succeed?

I've already given more data to Google than I would like. I'm not buying Glass unless I can use it as MY device, not theirs. No uploading shit to the cloud. No monitoring my location or what I look at or what apps I use.

I'm not worried about people recording me with Glass. I actually think that could do more good than harm (mainly by recording police). So I'd be recording anything I think interesting (fortunately for you all, I find humans incredibly dull). But those recordings would have to remain MINE, under MY control.
