DirectX 10 & the Future of Gaming 93

Homogeneous Cow writes "Brent Justice at [H] Enthusiast has put together a quick look at what DX10 has to offer gamers and what the main differences are between it and our current DX9. The unified architecture and the small-batch problem are both shown to be addressed, and there are a lot of ATI slides supporting the text as well." From the article: "The obvious question for the gamer that arises is, 'Will this terribly expensive and arduous upgrade path positively impact my gaming experience enough to justify the cost?' That has yet to be seen and can only be answered with the games we have yet to play. We can, however, discuss some of the capabilities of DirectX 10 with a unified architecture and how it can potentially benefit gamers."
  • by Big_Mamma ( 663104 ) on Wednesday May 03, 2006 @12:36PM (#15254800)

    Small batch problem? I think OpenGL solved that one with display lists - basically, you create a list with commands once, then you can execute the whole batch with a single call, instead of calling glVertex3f() for every single vertex.
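
    A minimal display-list sketch of that idea, assuming a current OpenGL 1.x context (makeTriangleList is an illustrative name, not from the comment):

        #include <GL/gl.h>

        /* Record a batch of immediate-mode commands once; afterwards the
           whole batch replays with a single call. */
        GLuint makeTriangleList(void)
        {
            GLuint list = glGenLists(1);   /* reserve one display-list name */
            glNewList(list, GL_COMPILE);   /* start recording, don't draw yet */
            glBegin(GL_TRIANGLES);
            glVertex3f(-1.0f, -1.0f, 0.0f);
            glVertex3f( 1.0f, -1.0f, 0.0f);
            glVertex3f( 0.0f,  1.0f, 0.0f);
            glEnd();
            glEndList();                   /* stop recording */
            return list;
        }

        /* Per frame: glCallList(list) replays every recorded command. */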

    A fixed pipeline isn't really an API problem either; GPUs added a function to let a programmer change a pipeline's type from vertex to pixel and the other way around. It doesn't look hard to expose in OpenGL either; it could be as simple as a setPipelines(int, int, int) call.

    From the article: To put this in real terms, this new shader can take triangles and treat them like objects, controlling them at their vertices. These primitives are then passed on to the pixel shader.

    The source of all the incompatibility: the geometry shader. But it looks like just another pass of vertex manipulation after the vertex shaders...

    Conclusion: DX10 is just another step in the evolution of 3D graphics, nothing spectacular, and definitely not a fix for what's wrong with PC games. It will take a lot more than new APIs to fix the gameplay (shooter #2674) and the QA model of release first, patch afterwards.

  • by MaestroSartori ( 146297 ) on Wednesday May 03, 2006 @12:45PM (#15254897) Homepage
    "It seems to me that "DirectX 10 hardware" may finally be approaching a phase-3 machine."

    No, at least from a coding point of view it passed that somewhere around DX5 to DX7. Back then it was a real chore to write for: the documentation wasn't great, and textbooks got confused and out of date really quickly. Round about DX8 it really started to be OK, though, and that's about when I did a bit of Xbox dev work. Since then I've been on PS2 duties, so I've fallen out of touch.

    The thing is, DX isn't the same as OpenGL. It's pretty much a full game middleware platform, only for Windows and Xbox instead of being really multiplatform. Open Source stuff can approximate the feature set if you combine things (OpenGL + SDL + various things for audio, networking, etc.) but they're all done by different people, with different coding styles and different levels of goodness. DirectX's strength is its coherence, and the big install base of Windows users.

    DX10 is throwing away a big pile of its audience by being Vista-only; I'm not sure that's a good idea...
  • by dpilot ( 134227 ) on Wednesday May 03, 2006 @01:24PM (#15255255) Homepage Journal
    I was thinking more in terms of hardware rearchitecting, not software. From what I've learned, including the DX9->DX10 comparison in TFA, it looks to me as if shaders on DX9-and-earlier graphics cards were implemented like a pipeline, whereas DX10 hardware is organized more like a multi-issue design. Those are CPU terms, but they may not be inappropriate: in graphics the elemental unit is the shader, where a regular CPU has its various execution units. The key is the equality at issue time: any shader unit can be handed any shader work. Assuming the multi-issue logic can be contained, the DX10 architecture looks much more regular than DX9 and earlier.

    Assuming OpenGL can take advantage of this too, this could be a boost for the OpenHardware folks.
  • Re:No thanks. (Score:1, Informative)

    by Anonymous Coward on Wednesday May 03, 2006 @11:18PM (#15259649)
    Actually, you're still wrong.

    Virtually no games used DirectX 1. It was an abject failure. It wasn't until DirectX 3 that games actually began to use it.

    Until that time, most games were DOS only, with a few Windows games using the standard Windows API for graphics display, and standard DOS-style drawing routines for the actual drawing. Games didn't start really using DirectX until two things happened.

    First, a very large number of people had to have Windows 95 instead of Windows 3.1 or DOS, and that didn't happen for a couple of years.

    Second, DirectX had to stop sucking, which didn't happen until DirectX 3. Even then, it still wasn't really any better than DOS in most cases.

    Even then, there were a lot of games that ran on both DOS and Windows 95. More often than not, the DOS version worked just as well unless you had newer hardware that only had Windows drivers (like a non-VESA graphics card, an AC97 sound card, USB mice, and so on). Hell, you could even get 3D accelerated games for DOS (using the DOS version of Glide).
  • by Spy Hunter ( 317220 ) on Thursday May 04, 2006 @04:06AM (#15260578) Journal
    Display lists are an old solution, not used much any more; vertex buffers are what's used nowadays. DirectX never even had a call analogous to glVertex3f; it started straight out with vertex buffers. The small batch problem refers to the fact that DirectX's rendering calls are incredibly CPU intensive: a call to render one triangle takes about the same amount of time as a call to render thousands of them. Making more than about 200 draw calls per frame will cause your application to become CPU-bound, even if you're only rendering 200 triangles! The graphics card can handle the polygons without breaking a sweat, but DirectX burns up your CPU doing God knows what instead of passing them along. This makes it difficult to render more than about 200 objects (depending on CPU speed), which isn't a whole lot when you think about everything that goes into a realistic scene.
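
    A minimal sketch of the batched alternative, assuming a current OpenGL context with the GL 1.5 buffer-object entry points available (on Windows these have to be fetched through the extension mechanism, e.g. via GLEW); drawBatched and its parameters are illustrative:

        #include <GL/gl.h>

        /* Upload all the geometry once, then submit every triangle with a
           single draw call, so the per-call CPU overhead is paid only once. */
        void drawBatched(const float* verts, int vertexCount)
        {
            GLuint vbo;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER,
                         (GLsizeiptr)vertexCount * 3 * sizeof(float),
                         verts, GL_STATIC_DRAW);

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, 0);   /* positions come from the VBO */

            glDrawArrays(GL_TRIANGLES, 0, vertexCount);   /* one call, many triangles */

            glDisableClientState(GL_VERTEX_ARRAY);
            glDeleteBuffers(1, &vbo);
        }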

    I don't know if OpenGL suffers from the same phenomenon. My guess is that it does to some degree, but I can't imagine that it's as bad as DirectX.

    The geometry shader is actually a cool concept. It fits into the pipeline *after* the vertex shader, and it has the ability to create and delete vertices and primitives, which vertex shaders cannot do. This helps free up bus bandwidth and CPU time by generating complex geometry entirely on the graphics card. Applications using stencil shadow volumes and particle systems should benefit immediately, and in the future I expect a move toward much more procedural generation of geometry. Today's graphics cards can render so many triangles that most applications simply can't feed them enough to keep them occupied, so having the card generate its own triangles makes sense. For example, you could send the card a list of points on the ground and it could generate a field of unique leafy plants swaying in the wind, one for each point. If the plants are complex, the bandwidth saved by generating that vertex data on the card instead of transferring it over the bus from main memory could be huge.
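
    A minimal sketch of that amplification, assuming Shader Model 4 HLSL; it expands each incoming point into a small clip-space quad, the way a GPU-side particle system might. The names (kPointToQuadGS, GS, VSOut) are illustrative, and the source string would be handed to the D3D10 shader compiler at runtime:

        /* Each point the vertex shader emits is amplified into four vertices
           (a triangle strip forming a quad) on the GPU, so the CPU never
           touches the expanded geometry. */
        const char* kPointToQuadGS = R"(
            struct VSOut { float4 pos : SV_POSITION; };

            [maxvertexcount(4)]
            void GS(point VSOut pt[1], inout TriangleStream<VSOut> strip)
            {
                const float s = 0.01f;   // half-size of the quad in clip space
                VSOut v;
                v.pos = pt[0].pos + float4(-s, -s, 0, 0); strip.Append(v);
                v.pos = pt[0].pos + float4(-s,  s, 0, 0); strip.Append(v);
                v.pos = pt[0].pos + float4( s, -s, 0, 0); strip.Append(v);
                v.pos = pt[0].pos + float4( s,  s, 0, 0); strip.Append(v);
            }
        )";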
