Comment: More power, less flexibility. Better off without. (Score 1)

Actually, I have a very different view of where we might be without dedicated graphics hardware.

The introduction of hardware acceleration removed the incentive to develop real-time software rendering techniques.
All 3D hardware has been designed around a single rendering technique: drawing triangles one at a time against a depth buffer.
Couple this with the fact that for a long time none of the major graphics architectures were open (which prevented any APIs apart from OpenGL and DirectX from being developed), and you're left with a very stagnant set of usable rendering techniques.

There hasn't been enough effort in real-time ray tracing, and very little in combined approaches (e.g. using the standard per-triangle method to resolve the primary rays, then ray tracing from those hits onward). It's very rare to hear of new rendering algorithms being developed, and yet there's plenty of scope for more.
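
To make the combined approach concrete, here's a minimal CPU-side sketch, assuming a depth buffer and normal buffer already filled by an ordinary rasterization pass. The camera model, buffer sizes and the stubbed shadow_ray_hits_something() are illustrative stand-ins, not any real renderer's API.

#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

#define W 320
#define H 240

/* Pretend these were produced by an ordinary rasterization pass. */
static float depth[H][W];   /* eye-space depth per pixel */
static Vec3  normal[H][W];  /* surface normal per pixel  */
static float image[H][W];   /* shaded output             */

/* Reconstruct the eye-space position seen through pixel (x, y),
 * assuming a simple pinhole camera with focal length f (illustrative). */
static Vec3 reconstruct(int x, int y, float f) {
    float z = depth[y][x];
    Vec3 p = { (x - W / 2.0f) * z / f, (y - H / 2.0f) * z / f, z };
    return p;
}

/* Stand-in for the secondary-ray test: a real renderer would traverse the
 * scene here; this stub just reports "unoccluded" so the sketch runs. */
static int shadow_ray_hits_something(Vec3 origin, Vec3 dir) {
    (void)origin; (void)dir;
    return 0;
}

int main(void) {
    Vec3 light = { 0.0f, 100.0f, -50.0f };

    /* Fill the fake G-buffer with a flat plane so the loops have data. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            depth[y][x]  = 10.0f;
            normal[y][x] = (Vec3){ 0.0f, 1.0f, 0.0f };
        }

    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            /* The primary hit comes "for free" from the depth buffer... */
            Vec3 p = reconstruct(x, y, 200.0f);
            Vec3 l = { light.x - p.x, light.y - p.y, light.z - p.z };
            float len = sqrtf(l.x * l.x + l.y * l.y + l.z * l.z);
            l.x /= len; l.y /= len; l.z /= len;
            /* ...so only the secondary (shadow) ray needs tracing. */
            int shadowed = shadow_ray_hits_something(p, l);
            Vec3  n = normal[y][x];
            float ndotl = n.x * l.x + n.y * l.y + n.z * l.z;
            image[y][x] = (shadowed || ndotl < 0.0f) ? 0.0f : ndotl;
        }

    printf("centre pixel shade: %f\n", image[H / 2][W / 2]);
    return 0;
}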

We've also been prevented from using procedurally generated models, given only a few arbitrary blending modes (making transparency far uglier than it needs to be), forced to use only a handful of supported scalar types (which vary depending on whether you're working on vertices or pixels), and saddled with a million other issues that programmers would have avoided on a CPU. Your games and applications have surely suffered for it.

At the bottom I describe a couple of techniques I would like to use, which would be possible on a fully programmable CPU but are not (AFAIK, or at least weren't a year ago) possible on current hardware.

Now, compare the reality above with what we would have had if we hadn't used GPUs:
- We'd've focused very early on high degrees of hardware parallelism, meaning overall computing power would be higher and rendering frame rates would probably be comparable to today's.
- We'd've developed many new and superior rendering techniques, like those mentioned above.
- We'd've been able to write our own APIs, instead of having to choose between a Windows-only API (DirectX) and one with an inferior feature set (OpenGL), both of which have less than optimal coding interfaces.
- And lastly, there'd be far more Linux gaming, since developers would have no reason to tie themselves to a proprietary single-OS API (DirectX), which is the primary reason for the lack of Linux uptake amongst gamers.

Now we're heading back in the right direction:
- GPUs are becoming more general purpose chips (CUDA etc).
- We're possibly seeing the first steps in a return to memory sharing between graphics and applications (like with Larrabee). Of course we need to increase memory bandwidth to make this worthwhile.
- Next we'll get access to the raw instruction sets, and the ability to compile and run arbitrary code on these chips.
And then we'll be back to square one: (specialist) CPUs doing our graphics.
Yet we could've gotten here far earlier if we hadn't taken the GPU detour.

In my opinion, what we really did was sell out the long-term future of rendering and gaming for a few years of good times.

ASIDE: Two (of thousands of) techniques that we could be using now, if it weren't for GPUs.

1: Per-pixel curved surfaces (e.g. spheres, cones, spline surfaces).
For these you just need a programmable depth value and a programmable "edge" shader that lets you specify the start and end pixels of each scanline.

Eg:
A sphere 100 pixels across in an orthographic projection can be drawn in one pass, with no unnecessary pixel calculations, by:
Calculating the edge start and end points per scanline as sqrt(50^2 - (Y - CentreY)^2).
Calculating Z per pixel as CentreZ - sqrt(50^2 - (X - CentreX)^2 - (Y - CentreY)^2).
This gives a perfectly smooth sphere, and it renders far faster than multiple triangles would.

Last time I tried this with OpenGL, about a year ago, the above was impossible, although I did get it partially working by ignoring depth and drawing a single triangle that fully enclosed the sphere.
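
Here's a minimal CPU sketch of the idea, following the formulas above; the buffer sizes, flat shading and z-buffer setup are illustrative only.

#include <math.h>
#include <stdio.h>

#define W 256
#define H 256

static float         zbuf[H][W];
static unsigned char image[H][W];

/* Draw one orthographically projected sphere, centre (cx, cy, cz), radius r,
 * visiting only the pixels it actually covers. */
static void draw_sphere(float cx, float cy, float cz, float r) {
    int y0 = (int)ceilf(cy - r), y1 = (int)floorf(cy + r);
    for (int y = y0; y <= y1; y++) {
        if (y < 0 || y >= H) continue;
        float dy = y - cy;
        float s  = r * r - dy * dy;
        if (s < 0.0f) s = 0.0f;                   /* guard float rounding    */
        float half = sqrtf(s);                    /* the "edge shader":      */
        int   x0   = (int)ceilf(cx - half);       /* first and last covered  */
        int   x1   = (int)floorf(cx + half);      /* pixel of this scanline  */
        for (int x = x0; x <= x1; x++) {
            if (x < 0 || x >= W) continue;
            float dx = x - cx;
            float d2 = r * r - dx * dx - dy * dy;
            if (d2 < 0.0f) d2 = 0.0f;             /* guard float rounding    */
            float z = cz - sqrtf(d2);             /* programmable depth      */
            if (z < zbuf[y][x]) {                 /* ordinary z-test         */
                zbuf[y][x]  = z;
                image[y][x] = 255;                /* flat shading, for brevity */
            }
        }
    }
}

int main(void) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            zbuf[y][x] = 1e30f;

    /* A sphere 100 pixels across (radius 50), as in the example above. */
    draw_sphere(128.0f, 128.0f, 100.0f, 50.0f);

    printf("sphere rasterized, touching only the pixels it covers\n");
    return 0;
}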

2: Subtraction from surface. (Not sure if there is an official name).

a: Sort the polygons front to back.
b: Initially treat the drawing surface as a single empty (rectangular) polygon.
c: Step through the list of triangles. For each triangle:
      - Carve the remaining empty space into smaller polygons by vectorially subtracting the triangle from each empty polygon it overlaps.
      - Remember the pieces of empty space this triangle removed; they are the visible parts of that triangle.
   Stop once no empty space is left.
d: You're now left with a list of triangular sections that uniquely cover the drawing rectangle, and you can draw them with zero overdraw.

It seems John Carmack had this idea before me, but he rejected it at the time for its complexity. However, I think the technique could still provide large benefits if used as an early first pass on just the major structural surfaces (walls, floors), significantly reducing overdraw and culling many hidden objects.
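
To make step c concrete, here is a sketch in C of the core subtraction step, assuming 2D screen-space convex polygons with counter-clockwise winding; the names (Poly, clip_halfplane, subtract) are illustrative, not an existing API. It subtracts one triangle from one empty region; a full renderer would repeat this for every triangle against every remaining empty piece, front to back.

#include <stdio.h>

#define MAX_VERTS 16

typedef struct { double x, y; } Vec2;
typedef struct { Vec2 v[MAX_VERTS]; int n; } Poly;   /* convex polygon */

/* > 0 when p lies to the left of the directed edge a -> b. */
static double side(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* Intersection of segment p -> q with the infinite line through a -> b. */
static Vec2 intersect(Vec2 a, Vec2 b, Vec2 p, Vec2 q) {
    double t = side(a, b, p) / (side(a, b, p) - side(a, b, q));
    Vec2 r = { p.x + t * (q.x - p.x), p.y + t * (q.y - p.y) };
    return r;
}

/* Sutherland-Hodgman clip of a convex polygon against one half-plane.
 * keep_left != 0 keeps the left side of a -> b, otherwise the right side. */
static Poly clip_halfplane(const Poly *in, Vec2 a, Vec2 b, int keep_left) {
    Poly out = { .n = 0 };
    for (int i = 0; i < in->n; i++) {
        Vec2 cur = in->v[i], nxt = in->v[(i + 1) % in->n];
        double sc = side(a, b, cur), sn = side(a, b, nxt);
        int cin = keep_left ? sc >= 0 : sc <= 0;
        int nin = keep_left ? sn >= 0 : sn <= 0;
        if (cin) out.v[out.n++] = cur;
        if (cin != nin) out.v[out.n++] = intersect(a, b, cur, nxt);
    }
    return out;
}

/* Subtract triangle t from the convex empty region r: the parts of r that
 * fall outside t (at most three convex pieces) go into rest[], and the part
 * of r that t covers (the visible portion of t) goes into covered. */
static int subtract(Poly r, const Poly *t, Poly rest[3], Poly *covered) {
    int n_rest = 0;
    for (int e = 0; e < 3; e++) {
        Vec2 a = t->v[e], b = t->v[(e + 1) % 3];
        Poly outside = clip_halfplane(&r, a, b, 0);  /* beyond this edge */
        if (outside.n >= 3) rest[n_rest++] = outside;
        r = clip_halfplane(&r, a, b, 1);             /* keep the rest    */
    }
    *covered = r;                                    /* r now lies inside t */
    return n_rest;
}

int main(void) {
    /* Step b: the drawing surface starts as one empty rectangle. */
    Poly screen = { .v = { {0, 0}, {640, 0}, {640, 480}, {0, 480} }, .n = 4 };
    /* One front-most triangle (counter-clockwise winding assumed). */
    Poly tri = { .v = { {100, 100}, {500, 120}, {300, 400} }, .n = 3 };

    Poly rest[3], covered;
    int n = subtract(screen, &tri, rest, &covered);

    printf("visible piece of the triangle has %d vertices\n", covered.n);
    printf("remaining empty space split into %d convex pieces\n", n);
    /* A full renderer would repeat this, front to back, for every triangle
     * against every remaining empty piece until no empty space is left. */
    return 0;
}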
