Beyond DirectX 10 - A glance at DirectX 10.1

Hanners1979 writes "Although we still appear to be some way away from the release of Windows Vista, and with it DirectX 10, specifications for the first point release of the 3D graphics API, DirectX 10.1, have already been finalised and largely made public. Elite Bastards looks at what's new and what will be changing in this release, set to become available not all that long after DirectX 10 — There's more to it than you might imagine."
Comments Filter:
  • by cos(x) ( 677938 ) on Sunday August 13, 2006 @08:21PM (#15900063)
    GPU shader processors certainly are Turing complete, and there are plenty of people (ab)using them for general-purpose calculations. For some types of calculation, GPUs are much faster than CPUs due to their massively parallel processing. In fact, I wrote my thesis on that very topic, comparing CPU- and GPU-based implementations of some algorithms.
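    A minimal sketch of the GPGPU idea the parent describes, in plain Python (the kernel and dispatch names here are invented for illustration; real GPGPU code would be a pixel shader or CUDA kernel): one small function is applied independently to every element of a large array, which is exactly the pattern a GPU executes across thousands of elements in parallel.

```python
# Illustrative sketch: GPGPU programming expresses a computation as a tiny
# "kernel" run once per array element. A GPU launches thousands of these
# invocations in parallel; here we emulate the dispatch serially.

def saxpy_kernel(i, a, x, y):
    """One 'thread': computes a*x[i] + y[i] for a single index."""
    return a * x[i] + y[i]

def gpu_style_dispatch(kernel, n, *args):
    """Stand-in for launching n parallel kernel invocations."""
    return [kernel(i, *args) for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
result = gpu_style_dispatch(saxpy_kernel, len(x), 2.0, x, y)
print(result)  # [12.0, 24.0, 36.0, 48.0]
```

    Because each invocation depends only on its own index, the work parallelises trivially, which is why GPUs beat CPUs on this class of problem.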
  • Article Text (Score:5, Informative)

    by insane_machine ( 952012 ) on Sunday August 13, 2006 @08:22PM (#15900067)
    Just by reading this article title, it may seem rather like we're getting ahead of ourselves here - After all, we still have another handful of DirectX 9 boards to come from ATI, never mind being a fair few months away from the launch of Windows Vista, and with it the latest iteration of the DirectX API, DirectX 10.

    Nonetheless, despite all this, DirectX 10 is likely to see a number of point revisions during its lifespan, the first of which is imaginatively titled DirectX 10.1. It may surprise some of you reading this, but the features which will be added by DirectX 10.1 have already been decided upon and information made available about them, so in this article we'll be taking a look through what we can expect to see in DirectX 10.1 compliant hardware.

    I would imagine this goes without saying, but before tackling this article I'd well and truly recommend beginning by reading our look at what DirectX 10 has to offer in our article entitled "ATI on the possibilities of DirectX 10" to get yourself up to speed on everything that this major inflection point in 3D graphics rendering entails, from geometry shaders through to (more importantly for this article) the WDDM driver model. So, if you feel that you know all you need to know about DirectX 10, let's move onwards to the future world of DirectX 10.1.


    Before we begin outright, we should remind ourselves briefly as to exactly why the API will be seeing point releases as of DirectX 10. The main reason for this move is the removal of cap (or capability) bits in the API. In the past, cap bits allowed graphics vendors to basically pick and choose what features their hardware would support (albeit within some fairly strict guidelines to ensure compliance with particular DirectX and Shader Model revisions). Although this left the likes of NVIDIA and ATI with plenty of room to develop and tout features that the other didn't have, it also had the side effect of creating development hell for any game developers working on titles, leaving them to sort through a myriad of cap bits for different GPUs and configurations to ensure that they were supporting the right features for the right boards - More often than not, this simply meant that advanced features that only one graphics vendor supported were left out of the vast majority of titles altogether (Truform, anyone?). The removal of this labyrinth was one of the main things developers were screaming out for when it came to discussing what was required of DirectX 10, and so it came to pass.

    Of course, this removal of cap bits had to be offset against the ever changing and progressing world of GPU development, so the graphics vendors still needed a way to push the technology forward and allow new technologies to find their way into games. Thus, DirectX 10 will be seeing point releases, one of the main facets of which will be to facilitate the inclusion of new functionality for compliant graphics hardware to make use of. This makes life easier both for developers (who can target DirectX 10, 10.1 etc. rather than individual features) and consumers - How do you explain to the man on the street that yes, a Radeon X800 and a GeForce 6800 are both DirectX 9 parts, yet each supports a different Shader Model in its respective architecture? It isn't much fun, trust me. As DirectX 10 and its point releases will also have very little in the way of features that are only optional in the API, buying a graphics board compliant with a particular DirectX 10 version will ensure that it does everything it needs to do to satisfy game titles that use that level of technology. No more Vertex Texture Fetch-esque confusions this time around then.
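    The cap-bit maze versus the fixed-feature-set approach described above can be sketched in a few lines of Python. All flag names and version tuples here are invented for illustration; they are not real DirectX caps or API calls.

```python
# Hypothetical sketch of the cap-bit problem: under DX9-style cap bits,
# developers probed many independent per-GPU flags; under DirectX 10's
# fixed feature sets, a single version comparison answers the question.

# Two imaginary DX9-era GPUs with different a-la-carte feature support.
caps_gpu_a = {"truform": True, "vertex_texture_fetch": False, "sm3": False}
caps_gpu_b = {"truform": False, "vertex_texture_fetch": True, "sm3": True}

def supports_dx9_path(caps, needed_flags):
    """Cap-bit world: every required flag must be checked individually."""
    return all(caps.get(flag, False) for flag in needed_flags)

def supports_dx10_path(device_level, required_level):
    """Fixed feature sets: one version check covers the whole feature set."""
    return device_level >= required_level

# A rendering path needing two features: only GPU B qualifies.
needed = ["vertex_texture_fetch", "sm3"]
print(supports_dx9_path(caps_gpu_a, needed))  # False
print(supports_dx9_path(caps_gpu_b, needed))  # True

# A DirectX 10.1 title on a DirectX 10.0 part: one comparison, no cap maze.
print(supports_dx10_path((10, 0), (10, 1)))   # False
```

    The point of the design change is visible in the second function: with optional features all but eliminated, a single version number fully determines what a board can do.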

    The other question to answer (or not answer, such is the way these things work) before we start is - When will DirectX 10.1 be released? From what we've heard thus far, it appears that it may well become available not all that long after DirectX 10 itself. What isn't so likely, however, is that we'll be seeing DirectX 10.1 capable hardware before or at the time of the launch of this new iteration of the API. The main reason for this will be the additional requirements necessary to support DirectX 10.1's WDDM 2.1 driver model, but we'll go into that a little more thoroughly in due course. In other words then, although we may see DirectX 10.1 pretty soon in the grand scheme of things, don't expect to be racing out to buy a DirectX 10.1 capable graphics board for the foreseeable future.

    Improvements over DirectX 10

    As I'm sure you've already fathomed by now, DirectX 10.1 will be a superset of DirectX 10 - That is, it will support everything that DirectX 10 does (and thus all DirectX 10 compliant parts), but then add more in the way of features and performance to that offered by the base level of DirectX 10. So, before we start looking at additions to the DirectX 10.1 feature set, let's talk about where we'll be seeing improvements in the API.

    One of the main improvements touted by Microsoft in DirectX 10.1 is improved access to shader resources - In particular, this involves better control when reading back samples from multi-sample anti-aliasing. In conjunction with this, the ability to create customised downsampling filters will be available in DirectX 10.1.

    Floating point blending also gets some new functionality in DirectX 10.1, more specifically when used with render targets - New formats for render targets which support blending will be available in this iteration of the API, and render targets can now be blended independently of one another.

    Shadows never fail to be an important part of any game title's graphics engine, and Direct3D 10.1 will see improvements to the shadow filtering capabilities within the API, which will hopefully lead to improvements in image quality in this regard.

    On the performance side of things, DirectX 10.1 will allow for higher performance in multi-core systems, which is certainly good news for the ever growing numbers of dual-core users out there. The number of calls to the API when drawing and rendering reflections and refractions (two commonly used features in modern game titles) has been reduced in Direct3D 10.1, which should also make for some rather nice performance boosts. Finally, another oft-used feature, cube mapping, gets its own changes which should help with performance, in the form of the ability to use an indexable array for handling cube maps.

    Additions over DirectX 10

    One of the major additions which will impact image quality in DirectX 10.1 regards precision, in a couple of different disciplines. Firstly, this revision of the API will see the introduction of 32-bit floating-point filtering over the 16-bit filtering currently on show in DirectX 9 and 10 - This should see improvements to the quality of High Dynamic Range rendering that uses this functionality over what is currently available. On top of this, overall precision throughout the rendering pipeline will also be increased, although to what level doesn't seem to have been publicly specified at present. These increases in precision could make for an interesting challenge for the graphics IHVs, as it seems likely they'll need to spend a large number of transistors in future parts just to match these new requirements, let alone eke decent performance out of their GPUs when dealing with higher precisions than those we have seen thus far.
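    The gap between 16-bit and 32-bit floating-point precision discussed above is easy to demonstrate with Python's struct module, which can round-trip a value through the IEEE 754 half (FP16) and single (FP32) encodings. This is only an illustration of the numeric formats, not of D3D filtering itself.

```python
import struct

# Quantization error of FP16 vs FP32 for a value typical of HDR lighting
# math. Near 1000, an FP16 step is 0.5, so fractional detail is lost.
def roundtrip(value, fmt):
    """Encode a float in the given IEEE 754 format and decode it again."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

x = 1000.1
half = roundtrip(x, "e")    # 'e' = IEEE 754 half precision (FP16)
single = roundtrip(x, "f")  # 'f' = IEEE 754 single precision (FP32)

print(abs(half - x))    # error on the order of 0.1
print(abs(single - x))  # error on the order of 1e-5
```

    Filtering at FP32 instead of FP16 avoids exactly this kind of rounding, at the hardware cost in transistors the article alludes to.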

    Again looking towards improvements on the image quality front, DirectX 10.1 will also see the introduction of full application control over anti-aliasing. This will allow applications to control the usage of both multi-sample and super-sample anti-aliasing, as well as giving them the ability to choose sample patterns to best suit the rendering scenario in a particular scene or title. Finally, these changes in DirectX 10.1 give the application control over the pixel coverage mask, a mask which is used to help to quickly approximate sampling for an area of pixels. This in particular should prove to be a boon when anti-aliasing particles, vegetation, scenes with motion blur and the like. All of this additional control handed to the application could allow for anti-aliasing to be used much more wisely and effectively, and controlled by game developers themselves, rather than the current 'all or nothing' implementation available, which basically amounts to a simple on-off switch.
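    The pixel coverage mask mentioned above can be sketched in Python - one bit per sample position, set when the primitive covers that sample. The sample offsets and helper names here are invented for illustration; real sample patterns would be chosen by the application through the (not yet public) DirectX 10.1 API.

```python
# Illustrative sketch: application-controlled anti-aliasing lets the
# program pick sample positions within a pixel and work with a coverage
# mask -- one bit per sample that the primitive covers.

def coverage_mask(sample_positions, inside):
    """Build a bitmask: bit i is set if sample i lies inside the primitive."""
    mask = 0
    for i, pos in enumerate(sample_positions):
        if inside(pos):
            mask |= 1 << i
    return mask

# A hypothetical rotated-grid 4x pattern (offsets within the unit pixel).
samples_4x = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

# Example: a primitive edge covering the left half of the pixel (x < 0.5).
mask = coverage_mask(samples_4x, lambda p: p[0] < 0.5)
print(bin(mask))  # 0b101 -> samples 0 and 2 are covered
print(bin(mask).count("1") / len(samples_4x))  # 0.5, i.e. half coverage
```

    Cheap per-pixel coverage fractions like this are what make anti-aliasing of particles, vegetation and motion blur tractable without full supersampling.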

    To add further to the additional focus on anti-aliasing in DirectX 10.1, support for a minimum of four samples per pixel (in other words, 4x anti-aliasing) is now required (although this doesn't necessarily mean that support for 2x anti-aliasing in hardware and drivers is a thing of the past).

    WDDM 2.1

    Lastly, we come to one final major change which will be seen in DirectX 10.1 - Whereas DirectX 10 will see the introduction of support for WDDM (Windows Display Driver Model) 2.0, DirectX 10.1 moves this on a step as the driver model moves up to 2.1. Again, be sure to have read our look at WDDM 2.0 before you proceed to understand what this driver model is all about. Needless to say, WDDM 2.1 does everything WDDM 2.0 does, but with a couple of significant additions, mainly aimed at improving performance on DirectX 10.1 capable GPUs further still.

    First on the list for WDDM 2.1 is a further improvement to context switching abilities - This was improved significantly with the introduction of WDDM 2.0, where a context switch could be performed after processing a command or triangle (compared to what is required prior to WDDM 2.0, where whole buffers needed to be completely worked through before a context switch could be performed). With WDDM 2.1, however, context switching can now be performed instantly. This means that a context switch is guaranteed when requested with WDDM 2.1, which isn't necessarily the case under WDDM 2.0 when long shaders or large triangles are being processed, while average switching times remain similar between 2.0 and 2.1.
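    The three context-switch granularities described above can be sketched as a worst-case latency calculation. The model names match the article; the workload costs and function are invented purely for illustration.

```python
# Rough sketch: worst-case wait before a requested context switch can
# take effect under each driver model (costs in arbitrary time units).

def switch_latency(model, buffer_cmds, current_cmd_cost):
    """Worst-case delay before the GPU can honour a context switch."""
    if model == "pre-WDDM-2.0":
        # the whole command buffer must drain first
        return sum(buffer_cmds)
    if model == "WDDM 2.0":
        # can switch once the current command/triangle finishes
        return current_cmd_cost
    if model == "WDDM 2.1":
        # can switch immediately, even mid-shader
        return 0
    raise ValueError(model)

buffer = [5, 3, 8, 2]  # remaining command costs in the buffer
print(switch_latency("pre-WDDM-2.0", buffer, buffer[0]))  # 18
print(switch_latency("WDDM 2.0", buffer, buffer[0]))      # 5
print(switch_latency("WDDM 2.1", buffer, buffer[0]))      # 0
```

    The guarantee in 2.1 is the key point: the worst case collapses to zero even when an individual command (a long shader, a large triangle) is expensive.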

    Due to the number of threads in use and the amount of work being done in parallel at any one time on a GPU, efficient context switching (which basically involves switching between the various threads the hardware is working on) is a vital part of processing work on a GPU, so the removal of as much overhead as possible when context switching is most welcome. This is all the more important under Windows Vista, as the possibility of the GPU having to work on more than one application that requires graphics rendering in some shape or form becomes greater, and thus the need to shift seamlessly between different rendering workloads, without one application taking up all of the GPU's rendering time, increases further still.

    The other major addition to WDDM 2.1 is a change to the way the GPU and its driver handle page faults - In WDDM 2.0, a page fault (i.e. a request for data that is not currently loaded into memory) triggers a rather long-winded process, which involves the GPU stalling, informing the Operating System of the missing page, and then restarting again once the correct page (piece of data) has been loaded into memory. In WDDM 2.1, things are handled far more gracefully, thanks to the GPU having additional support to handle page faulting and virtual memory allocated on a per-process basis. This means that when a page fault crops up, the GPU doesn't have to stall; instead, the improved context switching capabilities we mentioned earlier are put to good use to switch to the next item on the agenda that needs to be worked on, rather than sitting around idly waiting for the correct data to be made available.
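    The two page-fault strategies just described can be contrasted in a simplified accounting sketch (all function names and time units are invented; this models the scheduling idea, not any real driver interface).

```python
# Sketch: GPU time spent during a page load under each driver model.
# WDDM 2.0 stalls for the whole load; WDDM 2.1 context-switches to
# other ready work and stays busy while the OS fetches the page.

def handle_fault_wddm20(load_time):
    """GPU idles for the entire page load."""
    return {"idle": load_time, "useful_work": 0}

def handle_fault_wddm21(load_time, other_work):
    """GPU switches to other contexts' work while the page loads."""
    busy = min(load_time, sum(other_work))
    return {"idle": load_time - busy, "useful_work": busy}

print(handle_fault_wddm20(100))            # {'idle': 100, 'useful_work': 0}
print(handle_fault_wddm21(100, [40, 70]))  # {'idle': 0, 'useful_work': 100}
print(handle_fault_wddm21(100, [30]))      # {'idle': 70, 'useful_work': 30}
```

    Note the dependency between the two WDDM 2.1 features: the page-fault win only materialises because instant context switching makes the "switch to other work" step essentially free.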

    As we mentioned earlier in this article, the implementation of WDDM is likely to be one of the main reasons why we don't see DirectX 10.1 compliant hardware springing up any time soon - Quite simply, solving these context switching and page faulting issues isn't a trivial task from either a hardware or driver point of view, and thus a massive amount of work and resources will have to go into implementing the full 2.1 Windows Display Driver Model as required to gain compliance. Add to that the necessity to make the other changes required of this point release of the API, and the constant demand of users to see increased performance across the board, in the enthusiast space in particular, and you can see that getting everything together to create a compliant part is going to be a tall order for a little while yet.


    Until we've seen both Windows Vista and DirectX 10 running on compliant hardware, with full WDDM drivers, in the flesh, there certainly isn't much we can conclude about DirectX 10.1 and how it will improve and impact upon the future of the API and the hardware which relies upon it. Certainly, although I'm sure this point release will become another important marketing tick box, it doesn't feature a great deal that will send jaws dropping among graphics enthusiasts around the world.

    Of course, this focus on what the developers want and need over additional eye candy isn't a bad thing - Quite the opposite in fact. It certainly seems that the focus of the DirectX team circa DirectX 10 is to try and solve as many of those age-old graphics rendering issues as they can, allowing developers to wring every last drop of functionality and performance out of their titles which should, in turn, give us some far better gaming experiences on the PC in future.

    That isn't to say that additional image quality-related features are non-existent in DirectX 10.1. In particular, the addition of full application control for anti-aliasing is an intriguing one, and it'll certainly be interesting to see how (and if) this is put to comprehensive use in game titles further down the line. We'll also have to wait and see what the increased float and rendering pipeline precisions hold in store for us from an image quality point of view, and perhaps more importantly how any increases in precision will impact upon performance in future generations of hardware.

    Certainly, when you look at the thoughts behind DirectX 10.1 as a whole, you can safely say that the consumer 3D graphics industry isn't looking like slowing down any time soon, which means plenty more excitement, arguments and competition for quite some time to come - Heaven for 3D graphics geeks like you and me.
  • by MBCook ( 132727 ) on Sunday August 13, 2006 @08:28PM (#15900092) Homepage
    It's just like the rest of Win32. There is nothing magical. But as you implement it new versions will come out and you'll be in constant catch-up. On top of that, DirectX is used for games so you need to have it perform well. This combination makes it hard. CodeWeavers and Cedega are both trying.
  • Re:WHOM (Score:3, Informative)

    by SpottedKuh ( 855161 ) on Sunday August 13, 2006 @08:51PM (#15900161)
    Indeed, you are correct that "whom," as opposed to "who," should have been used. However, I believe the term "accusative" does not apply to the distinction between "who" and "whom" in English. I believe the terms that should be used are "subjective" (who) or "objective" (whom).

    In modern English, the accusative and dative cases that existed in Old English (and are still used in modern languages such as German) collapsed into a single objective usage. That is, "whom" can be used either as a direct object pronoun, corresponding to an accusative usage in other languages ("Whom did you hit?"); or, it can be used as an indirect object pronoun, corresponding to a dative usage in other languages ("To whom did you give the apple?").
  • by Tolleman ( 606762 ) <jens.tollofsen@se> on Sunday August 13, 2006 @09:20PM (#15900257) Homepage
    nVidia has always had excellent support for OpenGL. And considering that a lot of the people at nVidia are former SGI employees (SGI being the ones that made OpenGL), they've always been OpenGL fans. So basically, is anything you wrote correct?
  • by EvilMerlin ( 32498 ) on Sunday August 13, 2006 @09:51PM (#15900354) Homepage
    For the LAST time, DirectX != OpenGL.

    Direct3D is more like OpenGL, DirectX includes a whole boat load of stuff OpenGL can't even think about touching, stuff like DirectPlay and DirectSound for starters.

    People, especially those who love the anti-Microsoft FUD, should educate themselves better before attempting to speak about Microsoft...
  • Insightful? (Score:3, Informative)

    by Sycraft-fu ( 314770 ) on Monday August 14, 2006 @12:33AM (#15900811)
    More like wishful, inaccurate zealot rambling. nVidia isn't betting on OpenGL; nVidia has ALWAYS supported OpenGL to the same level as they have DirectX, which is to say excellently. Ever since their fumbling first attempt with a proprietary API, they declared their cards' native APIs to be DirectX and OpenGL. They supported both as native, and no others. You'll find that with games that support both, their speed is equal. To this day, I've never seen them slack on their GL support.

    And yes, DirectX IS a standard. It's not an open standard, but it's a standard. Look up "standard" in the dictionary. A standard is just something that's regularly and widely used. There doesn't even have to be an official document on it or anything; so long as a bunch of people do it a certain way, it's a standard.

    DirectX is the predominant standard in PC gaming graphics, sound, input, and so on. Look at game titles: better than 90% of them require DirectX. Yes, it's MS exclusive, but it's still the standard for gaming.

    Unless OpenGL really gets its shit together and starts keeping up to date with graphics hardware developments, then no, I don't think there's any chance of DirectX going anywhere. GL support lags behind hardware, which means that to implement a GL game using the latest, greatest features you've got to implement them multiple times to deal with the different extensions from different vendors.
  • by nighthawk127127 ( 848761 ) on Monday August 14, 2006 @02:02AM (#15900970) Journal
    we won't be seeing any games that use DirectX10 for at least 2 years
    Hmm... like, for example, Crysis? Or UT2K7? Or Halo 2 (PC obviously)? Or Flight Simulator X? Come on out from under your rock, buddy... these are all games that use DX10 and they'll be out well within 2 years.
  • by Khyber ( 864651 ) on Monday August 14, 2006 @02:36AM (#15901020) Homepage Journal
    Running games and graphics apps in OpenGL was better and faster than D3D - why? Simple! D3D had to go through the OS first, while OpenGL was direct to hardware. That was one less step to take (from what I understand from reading the OpenGL website), which usually resulted in better performance, and the general reason was that games running D3D needed more CPU/GPU power and RAM to run as smoothly (anyone recall Unreal Tournament 2003's requirements? Remember the hidden OpenGL renderer which gave you an extra 10 or so FPS, just like the OpenGL renderer in the original Unreal Tournament?). Having fewer layers of code to go through will almost always, with the exception of poor programming, outdo going through a separate API. With the lovely novelty of universal drivers, games can easily be written to directly address the hardware. In steps OpenGL, and out steps D3D. Hello Linux, OSX, and Windows gaming, all in wonderful harmony. As long as everyone plays by OpenGL's standard, all should be well, in theory. This is only a thought, and a theory.
  • by ichigo 2.0 ( 900288 ) on Monday August 14, 2006 @06:55AM (#15901570)
    As the other poster already pointed out, DX8 is the first version that supported shaders. Also, using more advanced shaders does not grow the size of the game by "too many gigs of space"; shaders are quite tiny, usually under a few kilobytes in size. In fact, if a game uses some of the more advanced procedural shaders that become a realistic possibility with DX10, the size of the game will decrease, as some of the art is generated at runtime instead of being handcrafted and stored in the game data. Otherwise agreeable.
