Add Another Core for Faster Graphics 237

Dzonatas writes "Need a reason for extra cores inside your box? How about faster graphics. Unlike traditional faster GPUs, raytraced graphics scale with extra cores. Brett Thomas writes in his article Parallel Worlds on Bit-Tech, 'But rather than working on that advancement, most of the commercial graphics industry has been intent on pushing raster-based graphics as far as they could go. Research has been slow in raytracing, whereas raster graphic research has continued to be milked for every approximate drop it closely resembles being worth. Of course, it is to be expected that current technology be pushed, and it was a bit of a pipe dream to think that the whole industry should redesign itself over raytracing.' A report by Intel about ray tracing shows that a single P4 3.2GHz is capable of 100 million raysegs a second, which gives a comfortable 30fps. Intel further states that 450 million raysegs is when it gets 'interesting.' Also, quad cores are slated to be available around the turn of the year. Would octa-cores bring us dual-screen or separate right/left real-time raytraced 3D?"
  • by manjunaths ( 83313 ) on Tuesday August 29, 2006 @05:42AM (#15998643)
    Each core is already capable of doing 100 million raysegs, and you are talking about quad cores. So I think you mean
    450 million raysegs, not 450 raysegs.
  • rabbit rabbit rabbit (Score:4, Informative)

    by RuBLed ( 995686 ) on Tuesday August 29, 2006 @05:53AM (#15998656)
    FTA

    "Oh, blast. Rabbit, I seem to have forgotten my pocketwatch. May I borrow yours?"

    Rabbit: I'm late, I'm late, I'm late...

    ---

    anyway, if this technology becomes a reality in the next 3-5 years and if I read the article right, the whole graphics architecture would change: there would only be a need for one super graphics processor, with less need for lots of memory and those graphics pipeline/shader thingies...

    The reason they might want it in the CPU is simple: why have a separate add-on GPU to handle the job when the CPU could do it alone by that time? You would then only need a "basic" video card that would just drive the display.

    Hmmm... could this be one of the reasons why ATI and AMD merged?
  • by Tim C ( 15259 ) on Tuesday August 29, 2006 @06:45AM (#15998734)
    Unfortunately, the only downloads I see on that site are for videos of the engine in action. I also note that they quote speeds of 20FPS on a virtual CPU running at 36GHz... Add to that the fact that the site hasn't been updated since mid-2005, and I'd say it's dead.
  • Won't happen soon. (Score:5, Informative)

    by midkay ( 984862 ) on Tuesday August 29, 2006 @06:51AM (#15998744) Homepage
    It's extremely unlikely that anything will go anywhere with raytracing in the near future. Raytracing takes a tremendous amount of power - apps that demonstrate it in realtime usually run quite choppily, and they're very minimalistic to boot: ugly textures, very simple geometry, very confined areas...

    The main benefits of raytracing in games would be:
    1) Shadows; they'd be Doom 3-like. Several games have full stencil shadows and that's just how raytraced ones would look: sharp and straight. The difference? Raytraced ones would take a ton more power and time to compute.
    2) True reflection and refraction. We can "fake" this well enough - for example, see the Source engine's water, incorporating realtime fresnel reflections and refractions. Though Source's "fake" refraction and reflection aren't pixel-perfect and are only distorted by a bump map, the water certainly looks great (a sketch of that fresnel blend follows this comment).

    Honestly, considering the small gain in visual quality (although a major gain in accuracy) - it's like going after a fly with a bazooka. Sure, once we get to the point where there's enough processing power to deal with this well enough in realtime, it will happen - but don't expect it soon, and don't expect that huge a difference. Nicer reflections and refractions (which already look good today) and pixel-perfect shadows (looking just the same as stencil shadows in some newer games).
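
    For the curious, the "fake" fresnel blend games use is usually just Schlick's approximation evaluated per pixel. A minimal C++ sketch - my own illustration with made-up names, not actual Source engine code:

        #include <cmath>
        #include <cstdio>

        // Schlick's approximation of the Fresnel reflectance term:
        // cheap to evaluate per pixel, which is why rasterizers use it
        // to blend a reflection map against a refraction map for water.
        double schlickFresnel(double cosTheta, double n1, double n2) {
            double r0 = (n1 - n2) / (n1 + n2);
            r0 *= r0;  // reflectance at normal incidence
            return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
        }

        int main() {
            // Air-to-water (n ~= 1.33): grazing angles reflect far more.
            for (double c : {1.0, 0.5, 0.1})
                std::printf("cos=%.1f  R=%.3f\n", c, schlickFresnel(c, 1.0, 1.33));
        }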
  • Re:Gaming (Score:5, Informative)

    by Vario ( 120611 ) on Tuesday August 29, 2006 @07:09AM (#15998769)
    They managed to get reasonable frame rates with an FPGA board, which is rather slow compared to modern GPUs. A lot of special effects like diffraction are included and don't kill the framerate. This might be a very interesting alternative to more texels/s and shaders.
    It just looks good as well: http://graphics.cs.uni-sb.de/~woop/rpu/rpu.html [uni-sb.de]
  • by greazer ( 165571 ) on Tuesday August 29, 2006 @07:09AM (#15998770) Homepage
    I've seen the topic of realtime ray-tracing and hardware accelerated ray-tracing come up countless times over the past 15 years. In the 80's and 90's, a realtime ray-tracing acceleration chip was always around the corner. Some products did actually emerge, but never quite caught on. The reason for this is not because the "commercial graphics industry has been intent on pushing raster-based graphics as far as they could go". Quite the contrary; it's much more elegant algorithmically (and hence 'easier') to implement a ray-tracer than a scanline based renderer. However, there's a fundamental limitation of ray-tracing that makes it unappealing performance-wise. Cache coherence for ray-tracers sucks.

    All rendering algorithms boil down to a sorting problem, where all the geometry in the scene is sorted in the Z dimension per pixel or sample. Fundamentally, scanline algorithms and ray-tracing algorithms are the same. For primary rays, here's some simplified pseudocode:

          foreach pixel in image
            trace ray through pixel
            shade frontmost geometry

    The trace essentially sorts all the geometry along its path.

    A scanline algorithm looks like this:

          foreach geometry object in the scene
            foreach pixel geometry is in
              if geometry is in front of whatever is in the pixel already
            shade fragment of geometry in pixel
                replace pixel with new shaded fragment

    As you can see, the only distinction is the order of the two loops. For ray-tracing, traversing the pixels is in the outer loop, and the geometry in the inner loop. For scanline rendering, it's the opposite. This has huge consequences in terms of cache coherency. With scanline methods, since neighboring fragments of the same object are shaded consecutively in the inner loop, cache coherency tends to be extremely high. The same shader program is used, and the likelihood of the texture being accessed from cache is very good. The same can't be said for ray-tracing. You can shoot two almost identical rays but touch wildly different parts of the scene. Cache coherency relative to scanline rendering is abysmal.
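
    To make the loop inversion concrete, here is a toy C++ version of both orders over a list of spheres - a sketch only, with a stubbed-out intersection test and hypothetical types:

        #include <algorithm>
        #include <limits>
        #include <vector>

        struct Ray    { float ox, oy, oz, dx, dy, dz; };
        struct Sphere { float cx, cy, cz, r; };

        // Stand-in for a real ray/sphere intersection test.
        float hitDistance(const Ray&, const Sphere&) { return 1.0f; }

        // Ray tracing: pixels in the OUTER loop, geometry in the inner loop.
        // Two adjacent rays may hit unrelated objects, so shader and texture
        // accesses jump around memory -- poor cache coherence.
        void raytrace(const std::vector<Sphere>& scene, int w, int h) {
            for (int p = 0; p < w * h; ++p) {
                Ray ray{};  // stand-in: the ray through pixel p
                float nearest = std::numeric_limits<float>::max();
                for (const Sphere& s : scene)
                    nearest = std::min(nearest, hitDistance(ray, s));
                // shade frontmost hit for pixel p
            }
        }

        // Scanline/raster: geometry in the OUTER loop. Consecutive fragments
        // come from the same object, so the same shader and textures stay
        // hot in cache.
        void rasterize(const std::vector<Sphere>& scene, int w, int h) {
            std::vector<float> zbuf(w * h, std::numeric_limits<float>::max());
            for (const Sphere& s : scene) {
                for (int p = 0; p < w * h; ++p) {   // really: pixels s covers
                    float z = hitDistance(Ray{}, s);
                    if (z < zbuf[p]) zbuf[p] = z;   // depth test, then shade
                }
            }
        }

        int main() { raytrace({{0,0,5,1}}, 4, 4); rasterize({{0,0,5,1}}, 4, 4); }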

    This one performance side-effect of ray-tracing is the only reason we haven't seen any serious ray-tracing for realtime applications. Even in offline rendering, scanline rendering dominates even though software ray-tracing has been available from the beginning of CG. For ray-tracing to become viable, we need more than just more CPU cores. We need buses fast enough to feed all the cores in situations where we have an extremely high ratio of cache misses. Unfortunately, the gap between memory speed and compute power seems to have been widening in recent years.
  • Re:It's been done... (Score:2, Informative)

    by Yetihehe ( 971185 ) on Tuesday August 29, 2006 @07:14AM (#15998782)
    Yup, and Heaven seven [pouet.net] is even good looking :)
  • by adam31 ( 817930 ) <adam31.gmail@com> on Tuesday August 29, 2006 @08:15AM (#15998933)
    If there's one thing the RT raytracing community is good at, it's explaining how well it works in theory. Take some numbers, extrapolate a little in one dimension, then another, and BOOM-- The Future. There are several problems with raytracing in real-time:


    1) Static Objects Only. The huge majority of computation time is spent traversing a spatial subdivision structure. It happens that K-d trees offer the best characteristics (typically, the fewest primitives per leaf for a given memory limit). However, they are really heinous to update dynamically. You can cheaply re-create the tree with median partitioning, but the trees you get are crappy. You can build a much nicer one with the SAH (surface area heuristic), but doing that per frame blows out your CPU budget. (A sketch of the cheap median build follows this list.)

    2) Bandwidth. Even if you could update your subdivision structure very cheaply, that structure still needs to be propagated out to all the CPUs participating in the raytrace. For the 1.87 MTri model they list on page 6, their spatial structure was 127 MB. Say you have 6 GB/s of bandwidth: 127 MB / 6 GB/s means it takes about 21 ms just to transfer the structure (and there are other problems here). So your ceiling is under 50 fps before you trace your first ray.

    3) Slower than a GPU. Even though they give you some little graph showing that raytracing (a static model, with static partitioning) beats a GPU at a MTri in the frame, this is very deceptive. The GPU pipeline works such that zillions of sub-pixel triangles simply can't get into pixel shaders fast enough, forcing the pixel shader to be run many times extra. Double the resolution, however, and the GPU won't take a cycle longer... with raytracing, performance will halve. So they found a bottleneck in the GPU which is totally unrepresentative of a game in every single sense, and said LOOK! BETTER! (in theory).

    4) Hey, Where's my Features? All the cool things about raytracing (nice shadows, refraction, implicit surfaces, reflection, subsurface scattering) get tossed out the window to make it real-time! What's the point, then? Given all the pixel shader hacks invented to make a GPU frame look interesting, the quality that can be achieved in a real-time raytrace is sadly tame. Especially when you consider that quality is the supposed advantage of raytracing.
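
    On point 1, here is a minimal sketch of the cheap median-split build - hypothetical types and my own naming; a real builder would also track bounding boxes and use SAH for tree quality:

        #include <algorithm>
        #include <memory>
        #include <vector>

        struct Prim { float centroid[3]; /* ...geometry... */ };

        struct KdNode {
            int axis = -1;                  // -1 marks a leaf
            float split = 0;
            std::vector<Prim> prims;        // leaf payload
            std::unique_ptr<KdNode> left, right;
        };

        // Median partitioning: cheap enough to redo every frame for dynamic
        // scenes -- but because it ignores actual ray/primitive cost (which
        // is what SAH estimates), the resulting tree is often far from
        // optimal to traverse.
        std::unique_ptr<KdNode> buildMedian(std::vector<Prim> prims, int depth) {
            auto node = std::make_unique<KdNode>();
            if (prims.size() <= 4 || depth > 20) {   // small leaf: stop
                node->prims = std::move(prims);
                return node;
            }
            int axis = depth % 3;                    // round-robin split axis
            auto mid = prims.begin() + prims.size() / 2;
            std::nth_element(prims.begin(), mid, prims.end(),
                [axis](const Prim& a, const Prim& b) {
                    return a.centroid[axis] < b.centroid[axis];
                });
            node->axis  = axis;
            node->split = mid->centroid[axis];
            node->left  = buildMedian({prims.begin(), mid}, depth + 1);
            node->right = buildMedian({mid, prims.end()}, depth + 1);
            return node;
        }

        int main() { buildMedian(std::vector<Prim>(100), 0); }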

    And c'mon. It's Gameplay that counts anyway :P

  • Re:Put it on the GPU (Score:3, Informative)

    by Anonymous Coward on Tuesday August 29, 2006 @08:22AM (#15998960)
    Raytracing does not scale nicely with the amount of geometry - mainly because of the shadow rays that have to be scattered from each intersection.

    Erm, that's just flat wrong. With the correct bounding volume hierarchy, ray tracing scales with geometric scene complexity much better than scanline methods. It is one of the reasons that offline raytracing renderers can handle such huge datasets efficiently. Also, the number of shadow rays used is *completely* independent of the "amount of geometry" in a scene. You are likely to need more shadow rays if you have large area lights and are seeing a lot of noise in the penumbrae of those lights - but this has nothing to do with the amount of scene geometry.
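
    To make that concrete, a minimal sketch of area-light shadow sampling, with a hypothetical occluded() standing in for the real ray cast: the sample count is a quality knob picked for penumbra noise, not for scene size.

        #include <cstdio>
        #include <cstdlib>

        struct Vec3 { float x, y, z; };

        // Stand-in: the real ray cast into the scene's acceleration
        // structure goes here. Its per-call cost depends on scene
        // complexity; the NUMBER of calls does not.
        bool occluded(const Vec3& from, const Vec3& to) { return std::rand() & 1; }

        float rnd() { return std::rand() / (float)RAND_MAX; }

        // Fraction of a rectangular area light visible from point p,
        // estimated with nSamples shadow rays. More samples => smoother
        // penumbrae; the count is independent of geometry in the scene.
        float areaLightVisibility(const Vec3& p, const Vec3& corner,
                                  const Vec3& edgeU, const Vec3& edgeV,
                                  int nSamples) {
            int lit = 0;
            for (int i = 0; i < nSamples; ++i) {
                float u = rnd(), v = rnd();        // jittered in practice
                Vec3 s{corner.x + u*edgeU.x + v*edgeV.x,
                       corner.y + u*edgeU.y + v*edgeV.y,
                       corner.z + u*edgeU.z + v*edgeV.z};
                if (!occluded(p, s)) ++lit;
            }
            return lit / (float)nSamples;
        }

        int main() {
            Vec3 p{0,0,0}, c{-1,5,-1}, eu{2,0,0}, ev{0,0,2};
            std::printf("visibility ~ %.2f\n",
                        areaLightVisibility(p, c, eu, ev, 64));
        }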
  • by Glacial Wanderer ( 962045 ) on Tuesday August 29, 2006 @08:34AM (#15999011) Homepage
    I mostly agree with you; however, your statement that ray tracing results in hard/sharp shadows is wrong. Ray tracing can easily produce realistic soft shadows. As you mentioned, ray tracing costs a ton of extra processing power to produce images approximately equivalent to raster graphics. Ray tracing more or less simulates how light works in the real world, and there is the real problem. Ask anyone in the graphics industry and they'll tell you their job is to fudge things until they look good, because realistically modeling the real world is too expensive.
  • Three Words (Score:3, Informative)

    by GeffDE ( 712146 ) on Tuesday August 29, 2006 @08:34AM (#15999012)
    The Cell Processor

    Three or four people have brought up the idea that this problem would work well on the Cell processor, but I don't think anyone has really seen the (rays of) light on the issue. The Cell is perfect for this. Some facts:
    1) Raytracing is highly vectorized (see the sketch at the end of this comment). The Cell's many processors are optimized for vector calculations. [wikipedia.org]
    2) Raytracing scales linearly with the number of cores. The Cell has 8 (at least in its current manifestation).
    3) The Cell is already available [linuxdevices.com] as a PCI-Express add-in card (that even runs Linux!), which sounds awfully like what a GPU is...
    4) The Cell is a bitch to program. But then, so are GPUs... so maybe it's not that ridiculous to see the future of the GPU... from IBM.

    How ironic it is that Intel is now pushing this technology...
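
    On point 1, a toy sketch of what "highly vectorized" means in practice: packet tracers intersect several rays against one primitive in lock-step, structure-of-arrays form, which is exactly the shape the Cell's SPEs (or SSE) want. This is scalar C++ standing in for real vector intrinsics:

        #include <cstdio>

        // Structure-of-arrays packet of 4 rays: each field is a contiguous
        // lane array, the layout vector ISAs (SSE, Cell SPE, etc.) want.
        struct RayPacket4 {
            float ox[4], oy[4], oz[4];
            float dx[4], dy[4], dz[4];
        };

        // Intersect all 4 rays against one plane (y = height) in lock-step.
        // Each line of the loop body maps 1:1 onto a 4-wide vector
        // instruction; a real packet tracer would use intrinsics.
        void intersectPlane4(const RayPacket4& r, float height, float t[4]) {
            for (int lane = 0; lane < 4; ++lane) {  // one vector op per line
                float denom = r.dy[lane];
                t[lane] = (denom != 0.0f) ? (height - r.oy[lane]) / denom
                                          : -1.0f;
            }
        }

        int main() {
            RayPacket4 p = {};
            for (int i = 0; i < 4; ++i) { p.oy[i] = 5.0f; p.dy[i] = -1.0f; }
            float t[4];
            intersectPlane4(p, 0.0f, t);
            std::printf("t = %.1f %.1f %.1f %.1f\n", t[0], t[1], t[2], t[3]);
        }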
  • by DotDotSlasher ( 675502 ) on Tuesday August 29, 2006 @09:12AM (#15999136)
    SGI had a ray tracing demo at SIGGRAPH 2002. On the show floor, a 128-processor SGI box ran demos at around 20Hz at about 512x512 pixels.
    http://www.sci.utah.edu/stories/2002/sum_star-ray.html [utah.edu]
    They make some good points about geometric complexity increasing much faster than displayed pixels, so there are fewer pixels per graphics primitive, and scan-line-based algorithms will make less sense.
    So in 2002 it took 128 processors to run at 20Hz at 512x512 pixels. And now we think quad-cores will be enough to render today's complex environments? That math doesn't add up to me. I think scan-line algorithms are the mainstream answer for a long time coming...
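
    Rough arithmetic, assuming one ray per pixel: 512x512 at 20 Hz is about 5.2M ray-pixels/s across 128 processors, or roughly 41K per processor in 2002. Even granting each modern core a generous 20x speedup over those, four cores would deliver around 3.3M ray-pixels/s, an order of magnitude short of 1024x768 at 30 Hz (about 24M), before you account for today's far heavier scenes.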
  • by CTho9305 ( 264265 ) on Tuesday August 29, 2006 @10:19AM (#15999506) Homepage
    Raytracing takes a tremendous amount of power - apps that demonstrate it in realtime usually run quite choppy
    If you read the Intel paper that inspired TFA's author to write his ill-informed article, you'll see that raytracing scales better with scene complexity, and Intel did benchmarks to show that after about 1M triangles per scene, software raytracers will outperform hardware GPUs using triangle pipelines (e.g. OpenGL, DirectX, shaders).

    Sure, once we get to the point where there's enough processing power to deal with this well enough in realtime, it will happen
    The benchmarks in the Intel paper show that we are very close to that point right now.
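
    The asymptotic reasoning behind that crossover, for what it's worth: with an acceleration structure, tracing p pixels costs roughly O(p log n) in the triangle count n, while a pure triangle pipeline touches all n triangles every frame, O(n). Past some scene size the logarithm wins; Intel's ~1M-triangle figure is where their benchmarks put that point.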
  • Scientific American (Score:4, Informative)

    by samkass ( 174571 ) on Tuesday August 29, 2006 @10:32AM (#15999603) Homepage Journal
    There is a good article about this in August's Scientific American by W. Wayt Gibbs. It's only a couple of pages but worth picking up a paper issue, or if you have one of their digital subscriptions: http://www.sciam.com/article.cfm?chanID=sa001&articleID=000637F9-3815-14C0-AFE483414B7F4945 [sciam.com]
  • by Creepy ( 93888 ) on Tuesday August 29, 2006 @02:05PM (#16001166) Journal
    You are correct - pixel/fragment shading is not free - both use the same default shading model in OpenGL and probably DirectX.

    The default shader is, I believe, Lambert (a close relative of Phong - if not, it's Phong) for OpenGL and probably DirectX as well. Shader programs can change this to whatever you want it to be (e.g. a cel shader), and you would need to do that in either a ray tracer or a rasterizer.

    There are a lot of things I like about ray tracing, but it's not without flaws - it handles specular highlights fantastically, but it doesn't handle diffuse well at all, so you have to bolt on other techniques. Most people (including Intel) use ambient occlusion, since it's a quick technique (also commonly used in polygon-based graphics), but it tends to make muddy shadows (see the Wikipedia entry [wikipedia.org]). Radiosity [wikipedia.org] is more realistic, but the patch computations are incredibly expensive (though parallelizable). Photon mapping [wikipedia.org] is another method that could be used, but I haven't used it myself. In college I wrote (with a team) a simple ray tracer, and shortly after that class I wrote a radiosity engine, so I'm familiar with both techniques. I never did really understand how to combine them, but I remember seeing POVRAY do it in the mid-90s and really wanted to figure out how they did it (but I graduated and was putting in startup hours, so that never happened).
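
    For concreteness, the ambient occlusion mentioned above boils down to hemisphere sampling at each shading point. A rough sketch with a hypothetical hitAnything() ray cast - the "muddy" look comes from it ignoring where the blocked light would actually have come from:

        #include <cmath>
        #include <cstdio>
        #include <cstdlib>

        struct Vec3 { float x, y, z; };

        // Stand-in for a real ray cast into the scene.
        bool hitAnything(const Vec3& origin, const Vec3& dir, float maxDist) {
            return std::rand() % 4 == 0;  // pretend ~25% occlusion
        }

        float rnd() { return std::rand() / (float)RAND_MAX; }

        // Classic ambient occlusion: fire rays over the hemisphere around
        // the normal and count escapees. Cheap, but directionless -- every
        // blocker darkens equally, unlike radiosity or photon mapping,
        // which track where bounced light actually comes from.
        float ambientOcclusion(const Vec3& p, const Vec3& n, int samples) {
            int open = 0;
            for (int i = 0; i < samples; ++i) {
                // crude hemisphere sample (not perfectly uniform; demo only):
                // random direction, flipped into the normal's hemisphere
                float x = 2*rnd()-1, y = 2*rnd()-1, z = 2*rnd()-1;
                float len = std::sqrt(x*x + y*y + z*z);
                if (len < 1e-6f) { --i; continue; }   // reject degenerate
                x /= len; y /= len; z /= len;
                if (x*n.x + y*n.y + z*n.z < 0) { x = -x; y = -y; z = -z; }
                if (!hitAnything(p, Vec3{x, y, z}, 10.0f)) ++open;
            }
            return open / (float)samples;  // 1 = fully unoccluded
        }

        int main() {
            std::printf("AO ~ %.2f\n", ambientOcclusion({0,0,0}, {0,1,0}, 256));
        }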

    Oh, and waves on a lake are non-trivial - to be completely realistic, you need to deal with subsurface diffusion (or an estimate of it), foam, and caustics (if you can see through the semi-transparent water surface). The specular mirror effect would be nice, but I don't see true caustics coming from either a raytracer or a rasterizer (you'd probably need to use ray beams or cones).
