Comment Re:What's the point? (Score 1) 176

Because (in most product lineups) the one that has 80-90% of the performance, and often fewer warts (since it's a mass-market product), costs half as much.

Obviously, I'm fine with people buying whatever amuses them. If that's your hobby, rock on. It's still the case that bang-for-buck generally goes to hell at the very top end.

Comment Re:What's the point? (Score 1) 176

Since we're good at picking up motion in peripheral vision, that's probably a lot harder to do effectively than full-screen rendering.

I imagine that the real trick would be in keeping the eye from perceiving the quality drop-off as movement. You certainly wouldn't need maximum texture quality or fancy bump-mapping or whatnot to fool peripheral vision; but a sensation that 'everything moves the moment I take my eyes off it' would be damn annoying...
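
If one wanted to sketch the anti-popping part (purely hypothetical, not any shipping engine's approach), it might be plain hysteresis on the detail level, so an object sitting near a quality threshold doesn't flicker between levels, and read as motion, every time your gaze drifts:

```typescript
// Hypothetical foveated-LOD pick: detail falls off with angular distance
// from the gaze point. Hysteresis keeps objects near a cutoff from "popping"
// back and forth, which peripheral vision would read as movement.
function pickLod(eccentricityDeg: number, currentLod: number): number {
  const cutoffs = [5, 15, 30]; // made-up eccentricity bands, in degrees
  const slack = 2;             // degrees inside a band before refining

  let target = cutoffs.findIndex(c => eccentricityDeg < c);
  if (target === -1) target = cutoffs.length; // beyond all bands: coarsest

  // Coarsening happens immediately; refining waits until the gaze is
  // comfortably inside the finer band, so the switch isn't perceptible.
  if (target < currentLod && eccentricityDeg > cutoffs[target] - slack) {
    return currentLod;
  }
  return target;
}
```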

Comment Re:Add-on CPU (Score 2) 176

I wonder what kind of yields Nvidia is getting... three times as many transistors as one of Intel's fancy parts, on a slightly larger process (28 vs. 22 nm); that's a serious slice of die right there.

On the plus side, I imagine that defects in many areas of the chip would only hit one of the identical stream processors, which can then just be lasered out and the part discounted slightly, rather than hitting something critical to the entire chip working. That probably helps.
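
To put rough numbers on it, here's a back-of-the-envelope sketch using the textbook Poisson yield model; the defect density and die area are illustrative guesses, not Nvidia's actual figures:

```typescript
// Poisson yield model: probability that a die of a given area has zero
// defects is exp(-area * defect_density). All numbers below are made up.
function perfectDieYield(areaCm2: number, defectsPerCm2: number): number {
  return Math.exp(-areaCm2 * defectsPerCm2);
}

const d0 = 0.5;      // assumed defects per cm^2 (illustrative)
const gk110 = 5.5;   // roughly 550 mm^2, GK110-class die

console.log(perfectDieYield(gk110, d0)); // ~0.06: few flawless dies...
// ...but a defect that lands in one of the many identical stream processors
// only costs that unit, so the die can still ship as a cut-down part.
```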

Comment Re:GK110 vs. 7970 (Score 4, Funny) 176

Hmm. $999 (2013) for 4.5 TF/s vs. $15 million (1984) for 400 MF/s from a Cray X-MP. Hard to believe.
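
Back-of-the-envelope, using those list prices (ignoring inflation and glossing over precision differences, so strictly joke-grade economics):

```typescript
// FLOPS per dollar, then vs. now, straight from the numbers above.
const gk110FlopsPerDollar = 4.5e12 / 999;  // ~4.5e9 FLOPS/$
const crayFlopsPerDollar = 400e6 / 15e6;   // ~26.7 FLOPS/$

// Roughly a 170-million-fold improvement in 29 years.
console.log(gk110FlopsPerDollar / crayFlopsPerDollar); // ~1.7e8
```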

This is why I've stopped buying hardware altogether and am simply saving up for a time machine... Importing technology from the future is, by far, the most economically sensible decision one can make.

Comment Re:What's the point? (Score 4, Interesting) 176

All games that have the budget for graphics these days are targeted at console limitations. I can't really see any reason to spend that much on a graphics card, unless you're a game developer yourself.

Buying the absolute top-of-range card (or CPU) almost never makes any sense, just because such parts are always 'soak-the-enthusiasts' collectors' items; but GPUs are actually one area where (while optional, because console specs haven't budged in years) you can get better results by throwing more power at the problem on all but the shittiest ports:

First, resolution: 'console' means 1920x1080 maximum, possibly less. If you are in the market for a $250+ graphics card, you may also own a nicer monitor, or two or three running in whatever your vendor calls their 'unified' mode. A 2560x1440 panel is pretty affordable by the standards of enthusiast gear. That is substantially more pixels pushed.
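
The arithmetic is simple enough (resolutions as above):

```typescript
// Pixel counts: enthusiast 1440p vs. console-grade 1080p.
const consolePixels = 1920 * 1080; // 2,073,600
const panelPixels = 2560 * 1440;   // 3,686,400

console.log(panelPixels / consolePixels); // ~1.78x the pixels per frame
```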

Second (again, on all but the shittiest ports), you usually also have the option to monkey with draw distance, anti-aliasing, and sometimes various other detail levels, particle effects, etc. Because consoles provide such a relatively low floor, even cheap PC graphics will meet minimum specs, and possibly even look good doing it; but if the game allows you to tweak things like that (even in an .ini file somewhere, just as long as it doesn't crash), you can throw serious additional power at the task of looking better.

It is undeniable that there are some truly dire console ports out there that seem hellbent on actively failing to make use of even basic things like 'a keyboard with more than a dozen buttons'; but graphics are probably the most flexible variable. It is quite unlikely (and would require considerable developer effort) for a game that can only handle X NPCs in the same cell as the player on the PS3 to be substantially modified for a PC release that has access to four times the RAM, or enough CPU cores to handle the AI scripts, or something. That would require having the gameplay guys essentially design and test parallel versions of substantial portions of the gameplay assets, and potentially even re-balance skill trees and things between platforms.

In the realm of pure graphics, though, only the brittlest 3d engines freak out horribly at changing viewport resolutions or draw distances, so there can be a real reward for considerably greater power. (For some games, there's also the matter of mods: Skyrim, say, throws enough state around that the PS3 teeters on the brink of falling over at any moment. On a sufficiently punchy PC, however, the actual game engine doesn't start running into (more serious than usual) stability problems until you throw a substantially more cluttered gameworld at it.)

Comment Re:Fuck yeah (Score 1) 161

Our government works for us, not the corporations who want to turn our private lives into profit.

HADOPI? It is undeniably true that France has a distinct distaste for data-hoovering American internet companies (how much out of a genuine commitment to privacy law, and how much out of an ongoing jealous spat over the surprising lack of data-hoovering French internet companies, is somewhat unclear); but damn are they ever 'helpful' when it comes to protecting those culturally-vital copyright holders...

Comment Wait a second... (Score 4, Interesting) 49

AIDS (which, while nasty, is pretty stubbornly fluid-borne) shares a containment level with the flu (which, while merely annoying, cuts a broad and temporary swath through the population pretty much every time somebody gets the winter sniffles)? Are 'containment levels' based much less on ease of transmission than the name would suggest?

Comment Re:Because... (Score 1) 320

There is http://maps.google.com/gl, which uses WebGL to add some amount of integrated 3d stuff to Google Maps, wholly in-browser. It's definitely more limited than the plugin-based or freestanding Google Earth 3d tricks.

I don't know whether this is because WebGL is currently too fucked to support it, or whether there just isn't any demand, or whether it's a project in progress, or what.

Comment Re:Underlying structure versus pretty pictures. (Score 4, Interesting) 320

I guess you might be stating my opinion; but my thought is: why? What is the 3d web going to give me that 2d doesn't?

It might be helpful to consider an analogy: "What is the 3d desktop going to give me that 2d doesn't?".

The first stab at '3d web', the ghastly VRML horror, was very similar in spirit to the various abortive attempts at creating '3d desktop' graphical shells. As it turns out, this is an area where you are lucky to break even with what you are trying to replace, and epic failure is the rule. Such attempts have largely died, and deserved it.

'WebGL' (as its name suggests) is much more closely aligned to '3d desktop' in the sense of 'people writing programs for this platform can expect OpenGL and/or Direct3D to be available to their programs if they want it'. That guarantee has proven to be enormously useful: lots of applications are simply impossible in anything approaching real time on affordable hardware with a pure-software render path, and the bad old days of shipping one variant for 3dfx/Glide, one for software, one for OpenGL, and possibly one or two others for oddball losers like 'S3 METAL' are thankfully behind us.

If you fundamentally don't like this 'web-app' stuff, you won't like it any more once OpenGL ES is given JavaScript hooks and set loose upon the world. However, deploying applications that require 3d capabilities as 'web-apps' has the same basic set of use cases as deploying 3d applications as native binaries.
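
For anyone who hasn't poked at it, the hooks in question are about this thin. A minimal sketch; 'glCanvas' is just a hypothetical element id, not anything from a real page:

```typescript
// Smallest useful WebGL "hello": grab a context and clear the canvas.
const canvas = document.getElementById('glCanvas') as HTMLCanvasElement;
const gl = canvas.getContext('webgl');
if (!gl) {
  throw new Error('WebGL not available in this browser');
}
gl.clearColor(0.0, 0.0, 0.0, 1.0); // opaque black
gl.clear(gl.COLOR_BUFFER_BIT);     // same call shape as OpenGL ES 2.0
```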
