Add Another Core for Faster Graphics

Dzonatas writes "Need a reason for extra cores inside your box? How about faster graphics. Unlike rasterization on traditional GPUs, raytraced graphics scale with extra cores. Brett Thomas writes in his article Parallel Worlds on Bit-Tech, 'But rather than working on that advancement, most of the commercial graphics industry has been intent on pushing raster-based graphics as far as they could go. Research has been slow in raytracing, whereas raster graphic research has continued to be milked for every approximate drop it closely resembles being worth. Of course, it is to be expected that current technology be pushed, and it was a bit of a pipe dream to think that the whole industry should redesign itself over raytracing.' A report by Intel about Ray Tracing shows that a single P4 3.2GHz is capable of 100 million raysegs per second, which gives a comfortable 30fps. Intel further states 450 million raysegs per second is when it gets 'interesting.' Also, quad cores are slated to be available around the turn of the year. Would octacores bring us dual screen or separate right/left real-time raytraced 3D?"
  • by b1ufox ( 987621 ) on Tuesday August 29, 2006 @05:26AM (#15998615) Homepage Journal
    Need a reason for extra cores inside your box? No :)
  • Gaming (Score:5, Interesting)

    by Anonymous Coward on Tuesday August 29, 2006 @05:39AM (#15998636)
    There are already ray traced games. :O

    http://graphics.cs.uni-sb.de/~morfiel/oasen/ [uni-sb.de]
    • Re:Gaming (Score:5, Informative)

      by Vario ( 120611 ) on Tuesday August 29, 2006 @07:09AM (#15998769)
      They managed to get reasonable frame rates with an FPGA board, which is rather slow compared to modern GPUs. A lot of special effects like diffraction are included and don't kill the framerate. This might be a very interesting alternative to more texels/s and shaders.
      It just looks good as well: http://graphics.cs.uni-sb.de/~woop/rpu/rpu.html [uni-sb.de]
    • by Anonymous Coward
      It's nice that people are working on ray traced games, but please note the following:

      Oasen is based on "OpenRT [openrt.de]" --- which is entirely proprietary, and is NOT open source. Their FAQ explains that clearly.

      I'm sure that I'm not the only person annoyed at their use of "open" to mean "closed".

      Time to look for an open-source raytracing engine designed for interactive use ...
    • Another one (Score:2, Troll)

      by gr8_phk ( 621180 )
      I did RTchess a few years back (a link would kill my friend's server). The core RT code has been pulled into a library and improved significantly since then. I was actually meaning to write an article making the same point as the one in the summary. Multi-core will make realtime ray tracing common in a few years, and then there will be no use for the GPU. Why rasterize when you can ray trace instead? Ray tracing scales exceptionally well with polygon count (log n). Why add a second chip? Not to mention the g
      • Re: (Score:3, Interesting)

        by Nahor ( 41537 )

        I remember a talk from someone (John Carmack, I think) saying something like: raytracing is nice but overkill. Today's hardware may be able to handle realtime raytracers, but nowhere near the quality you can get from current 3D engines.

        Most special effects you see in current engines are approximations/hacks compared to what you can do with a raytracer, but they're also way cheaper to compute.

        It's the same kind of relationship as between texture maps and procedural textures. Procedural textures are better for a

    • Scientific American (Score:4, Informative)

      by samkass ( 174571 ) on Tuesday August 29, 2006 @10:32AM (#15999603) Homepage Journal
      There is a good article about this in August's Scientific American by W. Wayt Gibbs. It's only a couple of pages but worth picking up a paper issue, or if you have one of their digital subscriptions, here: http://www.sciam.com/article.cfm?chanID=sa001&articleID=000637F9-3815-14C0-AFE483414B7F4945 [sciam.com]
  • It's been done... (Score:5, Interesting)

    by SigILL ( 6475 ) on Tuesday August 29, 2006 @05:42AM (#15998641) Homepage
    F.A.N. released a real-time raytraced demo [pouet.net] at Breakpoint back in 2003. It does no more than 10 fps on my lowly 1GHz P3, but I'm sure it runs quite smoothly on a nice modern CPU (though I don't think it's multithreaded).
  • by manjunaths ( 83313 ) on Tuesday August 29, 2006 @05:42AM (#15998643)
    Each core is already capable of doing 100 million raysegs and you talk about quad cores. So I think you mean 450 million raysegs, not 450 raysegs.
    • You're absolutely right. I actually SkimmedTFA, and the figure is 450M raysegs/s. "Interesting" means 30fps at 1 megapixel and 15 raysegs per pixel. Frankly, I don't find 1 megapixel all that interesting. I want graphics at my LCD's native 1600x1200, nearly 2 megapixels (and that desire will change when I'm forced to buy a 16x9 monitor some day). 1 megapixel is a bit below 1280x1024, but better than 1024x768.
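
      The arithmetic on the 450M figure does check out; here's a one-liner sanity check, assuming only the three numbers above (30fps, 1 megapixel, 15 raysegs per pixel):

          constexpr double raysegsPerSec = 30 /*fps*/ * 1.0e6 /*pixels*/ * 15 /*raysegs per pixel*/;
          static_assert(raysegsPerSec == 450.0e6, "matches Intel's 450M raysegs/s");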
    • Re: (Score:3, Funny)

      by MBGMorden ( 803437 )
      That's nothing. As long as you're running an Intel chip with a class-G phase-varying containment field, you should be able to reverse the polarity of the fluxing core to match that of the capaciting core, and then temporally render twice that much. That's assuming that you have a 1.21 Jiggawatt PS (I would personally recommend the 1.8 Jiggawatt unit from PC Power & Cooling, just to give some breathing room).
  • Put it on the GPU (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Tuesday August 29, 2006 @05:42AM (#15998644) Journal
    The thing about ray tracing is that it's the archetypal embarrassingly parallel problem that makes heavy use of floating point arithmetic. The thing about GPUs is that they are incredibly parallel processors optimised for floating point operations.

    Take a look at the proceedings from any graphics conference in the last three or four years, and you will see several papers which involve ray-tracing on a GPU. Actually, not so many recently, because it's been done to death. The most impressive one I saw was at Eurographics in 2004 running non-linear ray tracing. As the rays advanced, their direction was adjusted based on the gravity of objects in the scene. The demo (rendered in realtime) showed a black hole moving in front of a stellar scene and all of the light effects this caused.
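
    To make the "embarrassingly parallel" point concrete, here is a minimal sketch of a CPU tracer scaling across cores by handing each thread an interleaved set of scanlines. Scene, Colour and traceRay are illustrative stand-ins, not any real API:

        #include <thread>
        #include <vector>

        struct Scene;                                       // stand-in for the scene data
        struct Colour { float r, g, b; };
        Colour traceRay(const Scene& scene, int x, int y);  // hypothetical primary-ray trace

        void render(const Scene& scene, std::vector<Colour>& img,
                    int width, int height, int numCores) {
            std::vector<std::thread> workers;
            for (int t = 0; t < numCores; ++t)
                workers.emplace_back([&, t] {
                    // Each core takes every numCores-th scanline; rays never
                    // write shared state, so no locking is needed.
                    for (int y = t; y < height; y += numCores)
                        for (int x = 0; x < width; ++x)
                            img[y * width + x] = traceRay(scene, x, y);
                });
            for (auto& w : workers) w.join();
        }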

    • by N Monkey ( 313423 ) on Tuesday August 29, 2006 @06:45AM (#15998732)
      The thing about ray tracing is that it's the archetypal embarrassingly parallel problem that makes heavy use of floating point arithmetic. The thing about GPUs is that they are incredibly parallel processors optimised for floating point operations.

      It's not just the sheer number of FP calculations that can be the problem. Once you get away from the first (or perhaps even second) level of rays, you end up losing coherence between neighbouring rays which causes memory page/cache thrashing. This is not a nice thing on a GPU.
    • Re:Put it on the GPU (Score:5, Interesting)

      by smallfries ( 601545 ) on Tuesday August 29, 2006 @07:28AM (#15998799) Homepage
      The problem with raytracing researchers is that they are incredibly myopic. *Everybody* should use raytracing for *everything* because it is superior to raster in *every case*. Well, bullshit. Take a look at the raytracing results people have posted links to, and then watch the video of Crysis. The problem is not raytracing, but geometric complexity. Raytracing does not scale nicely with the amount of geometry - mainly because of the shadow rays that have to be scattered from each intersection. The 100mil figure assumes about 100 rays per pixel. Well, you need 64 of them just to get around aliasing, and that doesn't leave many for ambient and shadow bounces.

      But the GPU is interesting for raytracing. As it moves closer towards a giant floating point vector machine, the motivating application will become raytracing. So at the moment a 7800gtx can push 280 GFLOPS. That is 2800 cycles per ray for a single frame. (BTW, Intel's figures in the article are bullshit. 100mil rays at 30fps = 3 billion rays per second. Roughly one ray per cycle on average. They are counting a huge number of rays that have been optimised out of the scene, e.g. shadows, or interpolated from previous frames using a cache.)

      The raw horsepower is getting there on the card but at the moment the communication soaks up all of the time. Raytracing is the poster-child problem for parallelisation - assuming that you have random access (readable) global memory. If you need to partition the memory into the compute nodes it begins to get harder. In a GPU building datastructures to hold the information is the bottleneck, and it drops the speed by factors of 100s or 1000s. Nvidia and ATi have given the general-purpose community hints that they will improve performance in reading data-structures so this particular roadblock may disappear. A real scatter operation in the fragment shader would be nice, but you would have to gut the ROPs in order to do it. This may happen anyway as the local-area operations that the ROPs compute could fold into fragment operations. To increase the write bandwidth in the card the retirement logic needs to start retiring 'pages' of pixels anyway, over a much wider bus. Otherwise the number of feasible passes per pixel will always be capped by the speed that the ROPs can retire the data.

      So given how hard it would be to *efficiently* raytrace on a GPU - why bother when you can throw so much more raw horsepower at faking it with cheap raster techniques?
      • Re: (Score:3, Informative)

        by Anonymous Coward
        Raytracing does not scale nicely with the amount of geometry - mainly because of the shadow rays that have to be scattered from each intersection.

        Erm, that's just flat wrong. With the correct bounding volume hierarchy, ray tracing scales with geometric scene complexity much better than scanline methods. It is one of the reasons that offline raytracing renderers can handle such huge datasets efficiently. Also, the number of shadow rays used is *completely* independent of the "amount of geometry" in a scene.
        • Re: (Score:3, Insightful)

          by smallfries ( 601545 )
          Erm, yes it is actually. You and the other replies that pointed out that it scales better with complexity are correct. Google confirms that my memory was a bit off on this one...
      • by JohnPM ( 163131 ) on Tuesday August 29, 2006 @08:32AM (#15999005) Homepage
        The problem with raytracing researchers is that they are incredibly myopic.
        Yes but myopia would seem to be one of those problems that ray tracing would be much better at solving since it can handle refraction directly.
      • That is 2800 cycles per ray for a single frame. (BTW Intels figures in the article are bullshit. 100mil rays at 30fps = 3 billion rays per second.

        Ahh... this to me says you screwed up the math. 100M rays/s at 30 fps = 3.33 million rays per frame, not 3 billion rays per second. You multiplied when you should have divided. In other words, it looks like Intel got it right and your math is in the south pasture.

        I still agree with the mods...your post is interesting.
        • Re: (Score:3, Insightful)

          by smallfries ( 601545 )
          When I read it the way that you've put it, it does sound plausible. But the Intel quote was a bit ambiguous - you could read it as 100m rays per image, which I still think is a more natural way of describing it. If you read it the other way as 100m rays per second then it would be a division there, making it about 350 cycles per ray. The actual math could be done that quickly, but it would be very dependent on how cache friendly the data is. Using 3m rays per frame is roughly 3 rays per pixel - beneath the
      • This month's edition of Scientific American had a good article on Ray Tracing. Basically, how it can be more feasible with the faster/better hardware we have today. The article is available here [sciamdigital.com], but unfortunately, you have to pay for it. The article focused on new software and hardware techniques for Ray Tracing being developed at Intel. They say that Ray Tracing is "poised to replace raster graphics" because it "scales well with hyper threading and multi-processor configurations.". Also the "cache hierarc
      • -1, Wrong (Score:3, Interesting)

        by spun ( 1352 )
        I'll let the other posters comment on the wrongness of your idea that raytracing doesn't scale with scene complexity. There was a nice SciAm article about it, if you need more convincing. Instead, I'll talk about something in the article that the other posters didn't mention. Raster processing may scale with scene complexity, but creation doesn't. Raster graphics must be tweaked at creation to make an object look realistic while still rendering quickly. With ray tracing, you just create an object and forget
      • Re: (Score:3, Insightful)

        by lenhap ( 717304 )

        The problem is not raytracing, but geometric complexity. Raytracing does not scale nicely with the amount of geometry - mainly because of the shadow rays that have to be scattered from each intersection.

        Did you even read the article? I understand this is slashdot where no one RTFA but come on...

        The whole benefit of raytracing, according to the article, is that it scales logarithmically with complexity (number of triangles) and shadows are free (shadows are just a side effect of raytracing, not somet

      • Raytracing does not scale nicely with the amount of geometry

        It actually scales exceptionally well with the amount of geometry, O(log n), where GPUs suck. Read the linked article. Also, the spatial index used in ray tracing is a non-trivial data structure which is not handled well on a GPU. I've also found that ray tracing works better (fewer/no artifacts) with double precision floating point, which is not available on a GPU. In a few years, the CPU will be quite capable of realtime ray tracing, so at that po

  • Wouldn't it be possible to write a raytracer that used the GPU core(s) instead of the CPU? Raytracing is pretty much entirely vectors isn't it? That's what GPUs do best.

    NB: The only raytracer I've ever written was in PHP and it managed about 0.01 frames per second with very basic geometry and no textures, so I'm probably very, very wrong.
    • "entirely vectors" (Score:5, Insightful)

      by Joce640k ( 829181 ) on Tuesday August 29, 2006 @06:29AM (#15998713) Homepage
      Raytracing is pretty much entirely vectors isn't it?

      No, ray tracing is all about searching databases for ray-object intersections. That's what GPUs can't do at all.
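
      To put "searching databases" in concrete terms, the most naive form of that query looks like the sketch below (Ray and Object are illustrative types; real tracers replace the linear scan with a kd-tree or BVH traversal):

          #include <limits>
          #include <vector>

          struct Ray;                                     // illustrative types only
          struct Object { bool intersect(const Ray&, double& t) const; };
          struct Hit { const Object* obj; double t; };

          // Nearest-hit query: scan every object and keep the closest
          // positive intersection distance along the ray.
          Hit findNearest(const Ray& ray, const std::vector<Object>& scene) {
              Hit nearest{nullptr, std::numeric_limits<double>::infinity()};
              for (const Object& obj : scene) {
                  double t;                               // distance along the ray
                  if (obj.intersect(ray, t) && t > 0 && t < nearest.t)
                      nearest = {&obj, t};
              }
              return nearest;
          }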

      • by S3D ( 745318 ) on Tuesday August 29, 2006 @06:50AM (#15998742)
        No, ray tracing is all about searching databases for ray-object intersections. That's what GPUs can't do at all.
        Serious raytracers are tile-based anyway, that is, using a lot of look-up tables. Processing of a single tile could probably fit into an upcoming GPU with a "unified shader architecture". But it wouldn't be efficient. GPUs aren't designed for a lot of branching.
      • by Nutria ( 679911 )
        ray tracing is all about searching databases for ray-object intersections.

        Is ray-tracing "tied in with" vector graphics?

      • Re: (Score:3, Funny)

        by pimpimpim ( 811140 )
        No, ray tracing is all about searching databases for ray-object intersections.

        So the choice for php+sql might not be such a bad idea after all ;)

  • Not quite (Score:5, Insightful)

    by Aceticon ( 140883 ) on Tuesday August 29, 2006 @05:50AM (#15998651)
    If I remember it correctly from my days of playing with POVRay [povray.org] (a free raytracing app), the time it took to raytrace an image depended on things like the presence (or not) of semi-transparent, semi-reflective surfaces and on the number of light sources.

    If this is still the case, then going from the current rendering techniques in games to raytracing would result in images with more realistic reflections and lighting but, due to performance tradeoffs, few reflective surfaces and light sources.

    Besides, at the moment what games need most is better AI and procedurally generated content, not yet another layer of eyecandy that requires gamers to upgrade their hardware (again).
    • the time it took to raytrace an image depended on things like the presence (or not) of semi-transparent, semi-reflective surfaces and on the number of light sources.

      Yep. Raw "number of rays" means nothing. The number of rays can grow exponentially as soon as you try to make a scene of anything other than plastic spheres.


      The current crop of "raster based" games doesn't look so bad to me. I doubt that ray tracing would add very much to an FPS.

    • Re:Not quite (Score:5, Interesting)

      by tgd ( 2822 ) on Tuesday August 29, 2006 @07:54AM (#15998869)
      That's because a reflection creates another ray segment, and a refraction creates two.

      Considering a non-reflective ray traced world at 800x600 needs 480,000 rays to be cast to calculate an image, so 14,400,000 at 30fps, the claim of 450 million ray segments makes sense... that's 30+ per pixel at 800x600, which is a lot of reflections. Usually you'd limit the number to a fairly low value, because 100-deep reflections don't add noticeable detail, especially in motion. That's a lot of room for both refractive and reflective objects to be in the scenes.
      • Re:Not quite (Score:5, Interesting)

        by TheRaven64 ( 641858 ) on Tuesday August 29, 2006 @08:30AM (#15998997) Journal
        You probably wouldn't just use one ray per pixel. It is typical to fire a number of rays and then average the result. This is because rays diverge quite quickly after passing through the image plane, and so you get quite an uneven image. There is a noticeable difference between 1 and 4 rays per pixel, and between 4 and 9. After 9, you start to get into diminishing returns, and beyond about 25 it becomes harder to spot the difference (note that it is common to use a square number of rays, since that makes it easy to calculate where they should go).
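
        A sketch of that square-grid supersampling, with n x n rays per pixel averaged together (tracePixelRay is a hypothetical helper that fires one ray through the given sub-pixel coordinate):

            struct Scene;
            struct Colour { double r, g, b; };
            Colour tracePixelRay(const Scene& scene, double sx, double sy);  // hypothetical

            // Average an n x n grid of sub-pixel rays; n = 2 or 3 already
            // covers the visible gains described above.
            Colour shadePixel(const Scene& scene, int px, int py, int n) {
                double r = 0, g = 0, b = 0;
                for (int i = 0; i < n; ++i)
                    for (int j = 0; j < n; ++j) {
                        Colour s = tracePixelRay(scene, px + (i + 0.5) / n,
                                                        py + (j + 0.5) / n);
                        r += s.r; g += s.g; b += s.b;
                    }
                int m = n * n;                             // box-filter average
                return {r / m, g / m, b / m};
            }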
        • Someone mod the parent and grandparent up. Both are fairly interesting/insightful.
        • by tgd ( 2822 )
          That's for antialiasing... and yes, it will consume more rays, but there are other ways to handle antialiasing that don't need to cast another entire sequence of rays... especially if what you were looking for in a game was the sort of reflective and refractive effects you can't get with shaders. You don't need true RT accuracy, you just need a fast way of doing things you can't do with existing technology.

          Plus, as I said in my first post on this, you just don't need that level of accuracy or detail if you're
          • Re:Not quite (Score:5, Interesting)

            by TheRaven64 ( 641858 ) on Tuesday August 29, 2006 @10:57AM (#15999787) Journal
            What really needs to be done is to track motion the way you would encoding for mpeg, and focus more ray casts in areas of low motion

            There was a paper published a couple of years ago (at Eurographics?) about this. Each ray was independent, and would return a value at each intersection (i.e. you get the primary ray value quickly, and then refine it further with secondary, tertiary, etc. ray data). When a ray was no longer lined up with a pixel, it was interrupted and terminated. This meant that you got a fairly low quality image while moving quickly, but a much better one when you let the rays run longer. I found it particularly interesting, since it completely removed the concept of a frame; each pixel was updated independently when a better approximation of its correct value was ready, giving much more graceful degradation.

  • rabbit rabbit rabbit (Score:4, Informative)

    by RuBLed ( 995686 ) on Tuesday August 29, 2006 @05:53AM (#15998656)
    FTA

    "Oh, blast. Rabbit, I seem to have forgotten my pocketwatch. May I borrow yours?"

    Rabbit: I'm late, I'm late, I'm late...

    ---

    anyway, if this technology becomes a reality in the next 3-5 years, and if I read the article right, the whole graphics architecture would change: there would only be a need for a super graphics processor and less need for so much memory and those graphics pipeline/shader thingies...

    The reason they might want it in a CPU: why have a separate add-on GPU to handle the job when the CPU could do it alone by that time? You would then only need a "basic" video card that would just do the display.

    Hmmm... could this be one of the reasons why ATI and AMD merged?
  • Quake 3: Raytraced (Score:4, Interesting)

    by Anonymous Coward on Tuesday August 29, 2006 @05:59AM (#15998663)
    Just found that game using raytracing - Quake 3: Raytraced.
    http://graphics.cs.uni-sb.de/~sidapohl/egoshooter/ [uni-sb.de]

    Rumors are there's a q4 version on the way.
    • by Tim C ( 15259 ) on Tuesday August 29, 2006 @06:45AM (#15998734)
      Unfortunately, the only downloads I see on that site are for videos of the engine in action. I also note that they quote speeds of 20FPS on a virtual CPU running at 36GHz... Add to that the fact that the site hasn't been updated since mid-2005, and I'd say it's dead.
      • Add to that the fact that the site hasn't been updated since mid-2005, and I'd say it's dead.

        I'm a doctor, not a programmer...
      • by Peldor ( 639336 )
        Nah, it's not dead. I think they're just waiting for it to finish drawing.
  • by DoofusOfDeath ( 636671 ) on Tuesday August 29, 2006 @06:17AM (#15998696)
    I wonder how much this research relates to Intel's renewed desire to become a graphics player.

    If they're having trouble, for staffing or other reasons, producing good GPU designs, then it would be pretty clever of them to revolutionize the industry AND capitalize on their CPU strengths in a single move. More power to them, I say. (More power = about 120 watts, I'm guessing.)
  • Won't happen soon. (Score:5, Informative)

    by midkay ( 984862 ) on Tuesday August 29, 2006 @06:51AM (#15998744) Homepage
    It's extremely unlikely that anything will go anywhere with raytracing in the near future. Raytracing takes a tremendous amount of power - apps that demonstrate it in realtime usually run quite choppily, and they're very minimalistic to boot: ugly textures, very simple geometry, very confined areas...

    The main benefits of raytracing in games would be:
    1) Shadows; they'd be Doom 3-like. Several games have full stencil shadows and that's just how raytraced ones would look: sharp and straight. The difference? Raytraced ones would take a ton more power and time to compute.
    2) True reflection and refraction. We can "fake" this well enough - for example, see the Source engine's water, incorporating realtime fresnel reflections and refractions. Though the Source engine's "fake" refraction/reflection isn't pixel-perfect, and is only distorted by a bump-map, it certainly looks great.

    Honestly, considering the small gain in visual quality (although a major gain in accuracy) - it's like going after a fly with a bazooka. Sure, once we get to the point where there's enough processing power to deal with this well enough in realtime, it will happen - but don't expect it soon, and don't expect that huge a difference. Nicer reflections and refractions (which already look good today) and pixel-perfect shadows (looking just the same as stencil shadows in some newer games).
    • Re: (Score:3, Funny)

      "going after a fly with a bazooka" + raytracing in the same game? Hell, I'D BUY IT!!! :)
    • Re: (Score:3, Informative)

      I mostly agree with you; however, your statement that ray tracing results in hard/sharp shadows is wrong. Ray tracing can easily make realistic soft shadows. As you mentioned, ray tracing costs a ton of extra processing power to produce approximately equivalent images to raster graphics. Ray tracing more or less simulates how light works in the real world, and there is the real problem. Ask anyone in the graphics industry and they'll tell you their job is to fudge things until they look good, because re
      • by ivan256 ( 17499 )
        approximately equivalent images to raster graphics

        Huh?

        Current tech essentially adds up to the game of the day being a showcase for whatever the latest buzzword technology from the GPU makers is that month. Look at the games that have come out in the last 6 months. We've got "HDR" lighting now, so everything is so damned fake-shiny it makes you want to puke. If you rate glitz as highly as realism, well, I still wouldn't rate them the same. The hacks we pull for high performance 3D graphics today result in plastic
    • 1) You don't know what you're talking about.

      There are multiple techniques to "fix" hard shadows in a raytracer or a rasterizer, although the "correct" way to do them involves pairing a raytracer with a global illumination model (something like photon mapping). They're just slow to compute. In general, you can make the raytraced ones look nicer, but they take longer the nicer you want them to look, of course.

      2) You can fake it well enough for simple cases like water, or a single mirror. Although sometimes the la
    • by cgenman ( 325138 )
      There is no reason to do anything for real if you can fake it in gaming. There is no reason to fully and accurately render a 3D scene if you can just make it a hand-painted image and agree not to move the camera. There is no reason to render the molecular behaviors of individual pieces of glass when a "shattered" texture would suffice. Mario doesn't obey the laws of physics.

      And really, the only reason to raytrace is so that your artists don't need to make and optimize a million reflection and shadow maps
    • by CTho9305 ( 264265 ) on Tuesday August 29, 2006 @10:19AM (#15999506) Homepage
      Raytracing takes a tremendous amount of power - apps that demonstrate it in realtime usually run quite choppy
      If you read the Intel paper that inspired TFA's author to write his ill-informed article, you'll see that raytracing scales better with scene complexity, and Intel did benchmarks to show that beyond about 1M triangles per scene, software raytracers will outperform hardware GPUs using triangle pipelines (e.g. OpenGL, DirectX, shaders).

      Sure, once we get to the point where there's enough processing power to deal with this well enough in realtime, it will happen
      The benchmarks in the Intel paper show that we are very close to that point right now.
    • Re: (Score:3, Interesting)

      by nacturation ( 646836 )
      Shadows; they'd be Doom 3-like. Several games have full stencil shadows and that's just how raytraced ones would look: sharp and straight.

      Sharp and straight shadows? Check out this example [povray.org] or this one [povray.org] or yet another. [povray.org] Granted, these scenes' rendering times are measured in hours, not fractions of a second... but eventually games will be at that level of quality.
  • 30 fps - unlikely (Score:5, Interesting)

    by DrXym ( 126579 ) on Tuesday August 29, 2006 @06:52AM (#15998746)
    Ray tracing works by tracing a hypothetical ray (or rays) of light back from a screen pixel, following it as it bounces off and splits on various objects - which may or may not be opaque, shiny, textured, etc. - until it reaches the light source. So a ray might first hit a sphere, so you calculate the light at that point and recursively trace the light as it bounces off other objects. To get any level of realism you're talking about multiple levels of recursion, which takes an enormous amount of time in any complex scene. Transparency also requires the reflected and refracted rays to be traced, so the number of rays can increase dramatically.

    Ray tracing also suffers terribly from "jaggies". Edges look bad because rays can just miss an object and cause really bad stepping on the edges of objects. To eliminate jaggies and do anti-aliasing, you need to do sub-pixel rendering with jitter (slight randomness) to produce an average value for the pixel. So you might have to trace 4 or more rays in a pixel for acceptable anti-aliasing. Effects like focal length, fog, bump mapping etc. cause things to get even more complex. Most pictures rendered with high quality on Blender, POVRay etc. would take minutes if not hours even on a fast / dual core processor.

    The only way you'd get 30fps is if you cut your ray trace depth to 1 or 2, used a couple of lights, cut the screen res down and forgot about fixing jaggies. It would look terrible. Oh, and you'd still have to find time for all the other things that apps and games must do.
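
    For reference, the recursion described above in skeleton form. Everything named here is a stand-in for the usual raytracer plumbing, and the depth cap is exactly the "1 or 2" cut mentioned:

        struct Scene; struct Ray;
        struct Colour { double r, g, b; };
        struct Hit { bool found; bool reflective, transparent; double kr, kt; };

        // Stand-in declarations for the usual plumbing:
        Hit    findNearest(const Scene&, const Ray&);
        Colour shadeLocal(const Scene&, const Hit&);     // direct lights + shadow rays
        Ray    reflected(const Ray&, const Hit&);
        Ray    refracted(const Ray&, const Hit&);
        Colour background(const Scene&);
        Colour mix(Colour c, double k, Colour extra);    // returns c + k * extra

        // Whitted-style recursion: a reflective hit spawns a reflection ray,
        // a transparent hit a refraction ray (a glassy object spawns both),
        // so work grows roughly exponentially with maxDepth.
        Colour trace(const Scene& scene, const Ray& ray, int depth, int maxDepth) {
            Hit hit = findNearest(scene, ray);
            if (!hit.found) return background(scene);

            Colour c = shadeLocal(scene, hit);
            if (depth >= maxDepth) return c;             // recursion cut-off

            if (hit.reflective)
                c = mix(c, hit.kr, trace(scene, reflected(ray, hit), depth + 1, maxDepth));
            if (hit.transparent)
                c = mix(c, hit.kt, trace(scene, refracted(ray, hit), depth + 1, maxDepth));
            return c;
        }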

    • I use a trace depth of 1 or 2 in offline rendering for paying clients all the time. Modern raytracers are getting scary fast, and a lot of the render time is devoted to building the acceleration tree, which can be preprocessed and pulled from storage. And don't forget that realtime raster/scanline rendering also cuts down scene complexity to maintain performance.
    • The only way you'd get 30fps is if cut your ray trace depth to 1 or 2, used a couple of lights, cut the screen res down and forgot about fixing jaggies. It would look terrible. Oh and find time for all the other things that apps and games must do.

      The Intel research paper that inspired TFA's author actually did benchmarks, and their scenes were pretty complex. Basically, raytracing's complexity scales with the log of the number of triangles in the scene, whereas the techniques currently used in GPUs scale l
    • by sholden ( 12227 )
      Wow, if only you had told the authors of the tech report how ray tracing worked before they spent all that time working on actually doing the math on it.
    • Ray tracing also suffers terribly from "jaggies". Edges look bad because rays can just miss an object and cause really bad stepping on the edges of objects.

      I know traditional ray-tracers solve this with an anti-aliasing pass, but I thought real-time ray tracers solved this by incorporating a radiosity engine instead.

      Effects like focal length, fog, bump mapping etc. cause things to get even more complex. Most pictures rendered with high quality on Blender, POVRay etc. would take minutes if not hours eve

  • A ray-tracing problem can be solved simultaneously using a moment method that incorporates physical optics. I wrote my Master's thesis a long time ago doing precisely that for 2-dimensional situations. Of course, this required solving massive linear systems that, at the time I wrote it, took hours on a 433MHz Alpha for a single frame - and it was written in FORTRAN77, but hey, we've come a long way since then :)
  • by greazer ( 165571 ) on Tuesday August 29, 2006 @07:09AM (#15998770) Homepage
    I've seen the topic of realtime ray-tracing and hardware accelerated ray-tracing come up countless times over the past 15 years. In the 80's and 90's, a realtime ray-tracing acceleration chip was always around the corner. Some products did actually emerge, but never quite caught on. The reason for this is not that the "commercial graphics industry has been intent on pushing raster-based graphics as far as they could go". Quite the contrary; it's much more elegant algorithmically (and hence 'easier') to implement a ray-tracer than a scanline-based renderer. However, there's a fundamental limitation of ray-tracing that makes it unappealing performance-wise: cache coherence for ray-tracers sucks.

    All rendering algorithms boil down to a sorting problem, where all the geometry in the scene is sorted in the Z dimension per pixel or sample. Fundamentally, scanline algorithms and ray-tracing algorithms are the same. For primary rays, here's some simplified pseudocode:

          foreach pixel in image
            trace ray through pixel
            shade frontmost geometry

    The trace essentially sorts all the geometry along its path.

    A scanline algorithm looks like this:

          foreach geometry object in the scene
            foreach pixel geometry is in
              if geometry is in front of whatever is in the pixel already
                shade fragment of geometry in pixel
                replace pixel with new shaded fragment

    As you can see, the only distinction is the order of the two loops. For ray-tracing, traversing the pixels is the outer loop, and the geometry the inner loop. For scanline rendering, it's the opposite. This has huge consequences in terms of cache coherency. With scanline methods, since the same object is being shaded in the inner loop, and neighboring fragments of the same object are being shaded, cache coherency tends to be extremely high. The same shader program is used, and the likelihood of the texture being accessed from cache is very good. The same can't be said for ray-tracing: you can shoot two almost identical rays but touch wildly different parts of the scene. Cache coherency relative to scanline rendering is abysmal.

    This one performance side-effect of ray-tracing is the only reason we haven't seen any serious ray-tracing for realtime applications. Even in offline rendering, scanline rendering dominates, even though software ray-tracing has been available since the beginning of CG. For ray-tracing to become viable, we need more than just more CPU cores. We need buses fast enough to feed all the cores in situations where we have an extremely high ratio of cache misses. Unfortunately, the gap between memory speed and compute power seems to have been widening in recent years.
    • The nice effect of placing the geometry loop on the outside is that clipping becomes a coherent decision for large groups of pixels. Again this has nice effects in both control flow, which can be amortised over many pixels, and the relative depth of pixels. If you ignore intersecting geometry then you can optimise even more pixels out of the calculation.
  • Film at 11 (Score:5, Insightful)

    by jalefkowit ( 101585 ) <jasonNO@SPAMjasonlefkowitz.com> on Tuesday August 29, 2006 @07:13AM (#15998779) Homepage
    A report by Intel about Ray Tracing shows that a single P4 3.2Ghz is capable of 100 million raysegs, which gives a comfortable 30fps.

    Extra, extra! This just in! Report from CPU vendor discovers that you should spend more money on your CPU and less on your graphics card!

    Shocking, I tells ya. Shocking.

    • by GraZZ ( 9716 )
      Especially shocking since that CPU vendor's competitor has recently purchased a company that produces graphics cards...
  • Surely an advantage of raytracing would be that you could have exact shapes. E.g. instead of approximating a cylinder by an octagonal prism, you could just have a real cylinder. What's more, pure shapes like cylinders, spheres etc. would take up less memory than their approximations.
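
    That is the appeal: an analytic primitive is a couple of floats plus an intersection formula. A sketch for the sphere case (Vec3 and dot are minimal stand-ins; a cylinder test is similar, plus a height check for the caps):

        #include <cmath>

        struct Vec3 {
            double x, y, z;
            Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        };
        double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Exact ray/sphere hit: solve |o + t*d - c|^2 = r^2, a quadratic
        // in t. Storage is just a centre and a radius -- no tessellation.
        bool hitSphere(const Vec3& o, const Vec3& d,   // ray origin, unit direction
                       const Vec3& c, double r, double& t) {
            Vec3 oc = o - c;
            double b = dot(oc, d);
            double disc = b * b - (dot(oc, oc) - r * r);
            if (disc < 0) return false;                // ray misses the sphere
            t = -b - std::sqrt(disc);                  // nearer of the two roots
            return t > 0;
        }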
  • by adam31 ( 817930 ) <adam31 AT gmail DOT com> on Tuesday August 29, 2006 @08:15AM (#15998933)
    If there's one thing the RT raytracing community is good at, it's explaining how well it works in theory. Take some numbers, extrapolate a little in one dimension, then another, and BOOM-- The Future. There are several problems with raytracing in real-time:


    1) Static Objects Only. The huge majority of computation time is spent traversing a spatial subdivision structure. It happens that k-d trees offer the best characteristics (typically, fewest primitives per leaf for a given memory limit). However, these are really heinous to update dynamically. You can cheaply re-create one with median partitioning, but your trees are crappy. You can do a much nicer SAH (surface area heuristic) build, but doing this per frame blows out your CPU budget.

    2) Bandwidth. Even if you could update your subdivision structure very cheaply, that structure still needs to be propagated out to all the CPUs participating in the raytrace. For the 1.87 MTri model they list on page 6, their spatial structure was 127 MB. Say you have a bandwidth of 6 GB/s; it takes 20ms just to transfer the structure (and there are other problems here). So your ceiling is 50 fps before you trace your first ray (see the worked check after this list).

    3) Slower than a GPU. Even though they give you some little graph showing that raytracing (a static model, with static partitioning) beats a GPU at a MTri in the frame, this is very deceptive. The GPU pipeline works such that zillions of sub-pixel triangles simply can't get into pixel shaders fast enough, forcing the pixel shader to be run many extra times. Double the resolution, however, and the GPU won't take a cycle longer... with raytracing, performance will halve. So they found a bottleneck in the GPU which is totally unrepresentative of a game in every single sense, and said LOOK! BETTER! (in theory).

    4) Hey, Where's my Features? All the cool things about raytracing (nice shadows, refraction, implicit surfaces, reflection, subsurface scattering) get tossed out the window to make it real-time! What's the point, then? Given all the pixel shader hacks invented to make a GPU frame look interesting, the quality that can be achieved in a real-time raytrace is sadly tame. Especially when you consider that quality is the supposed advantage of raytracing.
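
    Checking the ceiling claimed in point 2, using only the 127 MB and 6 GB/s figures quoted there:

        #include <cstdio>

        int main() {
            const double structureMB = 127;    // spatial structure for the 1.87 MTri model
            const double busGBs = 6;           // assumed bus bandwidth, GB/s
            const double sec = structureMB / (busGBs * 1024);
            // ~20.7 ms per transfer, i.e. a ceiling of roughly 48 fps
            std::printf("%.1f ms -> %.0f fps ceiling\n", sec * 1e3, 1 / sec);
            return 0;
        }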

    And c'mon. It's Gameplay that counts anyway :P

    • Re: (Score:3, Interesting)

      by TheRaven64 ( 641858 )
      I read a paper a couple of years ago about a ray tracer that updated the pixel at each ray iteration. This meant that when you were moving quickly, you got a lower-quality picture, but you didn't notice because you were moving quickly. If you stopped, then the detail appeared very quickly (over 2-5 frames, as I recall; too quick for the user to notice that it was appearing).

      As for dynamic scenes, this is actually easier in many ways with a ray tracer. If you start with a scene graph API, you just need

    • And c'mon. It's Gameplay that counts anyway :P

      But gameplay doesn't sell hardware. You've got a lot to learn about the modern gaming industry. :)

      Although now that I think about it, I'd probably buy an "AI Coprocessor" if it meant better AI in games...
  • Three Words (Score:3, Informative)

    by GeffDE ( 712146 ) on Tuesday August 29, 2006 @08:34AM (#15999012)
    The Cell Processor

    Three or four people have brought up the idea that this problem would work well on the Cell processor. But I don't think anyone has really seen the (rays of) light on the issue. The Cell is perfect for this. Some facts:
    1) Raytracing is highly vectorized. The Cell's many processors are optimized for vector calculations. [wikipedia.org]
    2) Raytracing scales linearly with the number of cores. The Cell has 8 (at least in its current manifestation).
    3) The Cell is already available [linuxdevices.com] as a PCI-Express add-in card (that even runs linux!), which sounds awfully like what a GPU is...
    4) The Cell is a bitch to program. But then, so are GPUs... so maybe it's not that ridiculous to see the future of the GPU... from IBM.

    How ironic it is that Intel is now pushing this technology...
    • 4) The Cell is a bitch to program. But then, so are GPUs...so maybe it's not that ridiculous to see the future of the GPU...from IBM.

      I do not know how difficult Cell is to program, but I can assure you that GPU programming is no harder than CPU programming (as long as you don't code your shaders in assembler, of course).
  • SGI had a ray tracing demo at Siggraph 2002. On the show floor, a 128-processor SGI box ran demos at around 20Hz at about 512x512 pixels.
    http://www.sci.utah.edu/stories/2002/sum_star-ray.html [utah.edu]
    They make some good points about geometric complexity increasing much faster than displayed pixels, so there are more graphics primitives per pixel, and scan-line-based algorithms will make less sense.
    So in 2002 it took 128 processors to run at 20Hz at 512x512 pixels. And now we think quad-cores will be enough?
  • ... would be an FPGA that sits on a core. For those who are not familiar, a Field Programmable Gate Array is essentially a piece of hardware that can be "programmed" to perform specialized tasks, especially sequential ones, at faster speeds than software on a general purpose CPU. Imagine a fully programmable coprocessor with blazing access to RAM and a HyperTransport link to the general purpose cores for more complex functions that are hard to express in hardware.

    I have seen comparatively weak, 1M-gate FPGAs enc
    • by inKubus ( 199753 )
      So you could have one of these as a peripheral, and just load a "program" into it, then run whatever task you want to do on it? How long does it take to load the program, and how much faster will it encode videos, etc?

    • by JustNiz ( 692889 )
      >> a piece of hardware that can be "programmed" to perform specialized tasks, especially sequential ones, at faster speeds than software on a general purpose CPU.

      So just like, say, a 3D graphics card then?
  • Just another step in the well known Wheel of Reincarnation [cap-lore.com]. At least well known to all three of us who don't completely ignore computer history ;-)
  • >>> 'A report by Intel about Ray Tracing shows that a single P4 3.2GHz is capable of 100 million raysegs per second, which gives a comfortable 30fps'

    That's a bogus comment, as raytracing time (and hence framerate) is totally dependent on the complexity of the scene being rendered.

    E.g. A few simple cubes would raytrace MUCH faster than a forest scene with reflective water and multiple trees, leaves, and blades of grass etc.

    Unfortunately, for gaming, the latter scenario is much more likely.

    It still makes sense to o
