Add Another Core for Faster Graphics

Dzonatas writes "Need a reason for extra cores inside your box? How about faster graphics. Unlike traditional faster GPUs, raytraced graphics scale with extra cores. Brett Thomas writes in his article Parallel Worlds on Bit-Tech, 'But rather than working on that advancement, most of the commercial graphics industry has been intent on pushing raster-based graphics as far as they could go. Research has been slow in raytracing, whereas raster graphic research has continued to be milked for every approximate drop it closely resembles being worth. Of course, it is to be expected that current technology be pushed, and it was a bit of a pipe dream to think that the whole industry should redesign itself over raytracing.' A report by Intel about ray tracing shows that a single 3.2GHz P4 is capable of 100 million raysegs per second, which gives a comfortable 30fps. Intel further states that 450 million raysegs is when it gets 'interesting.' Also, quad cores are slated to be available around the turn of the year. Would octo-cores bring us dual-screen or separate left/right real-time raytraced 3D?"
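
    As a back-of-the-envelope check on those figures (a minimal sketch; the 1024x768 resolution is an assumed example, not from Intel's report, and the per-pixel rate scales with whatever resolution you pick):

        #include <cstdio>

        int main() {
            // Figures quoted in the summary; resolution is an assumption.
            const double raysegs_per_sec = 100e6;   // 100 million raysegs/s
            const double fps = 30.0;
            const double pixels = 1024.0 * 768.0;

            const double per_frame = raysegs_per_sec / fps;  // ~3.3 million
            const double per_pixel = per_frame / pixels;     // ~4.2 raysegs/pixel
            std::printf("raysegs per frame: %.3g, per pixel: %.1f\n",
                        per_frame, per_pixel);
            return 0;
        }
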
  • Gaming (Score:5, Interesting)

    by Anonymous Coward on Tuesday August 29, 2006 @05:39AM (#15998636)
    There are already ray traced games. :O

    http://graphics.cs.uni-sb.de/~morfiel/oasen/ [uni-sb.de]
  • It's been done... (Score:5, Interesting)

    by SigILL ( 6475 ) on Tuesday August 29, 2006 @05:42AM (#15998641) Homepage
    F.A.N. released a real-time raytraced demo [pouet.net] at Breakpoint back in 2003. It does no more than 10 fps on my lowly 1GHz P3, but I'm sure it runs quite smoothly on a nice modern CPU (though I don't think it's multithreaded).
  • Put it on the GPU (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Tuesday August 29, 2006 @05:42AM (#15998644) Journal
    The thing about ray tracing is that it's the archetypal embarrassingly parallel problem that makes heavy use of floating point arithmetic. The thing about GPUs is that they are incredibly parallel processors optimised for floating point operations.

    Take a look at the proceedings from any graphics conference in the last three or four years, and you will see several papers which involve ray-tracing on a GPU. Actually, not so many recently, because it's been done to death. The most impressive one I saw was at Eurographics in 2004 running non-linear ray tracing. As the rays advanced, their direction was adjusted based on the gravity of objects in the scene. The demo (rendered in realtime) showed a black hole moving in front of a stellar scene and all of the light effects this caused.
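
    A minimal sketch of why this parallelises so cleanly, using CPU threads for simplicity (trace_pixel and render are illustrative stubs, not code from any of the linked demos; a GPU would instead run one ray per shader invocation):

        #include <algorithm>
        #include <cstddef>
        #include <cstdint>
        #include <thread>
        #include <vector>

        // Stand-in for a real per-pixel tracer, so the example is self-contained.
        static std::uint32_t trace_pixel(int x, int y) {
            return static_cast<std::uint32_t>((x ^ y) & 0xFF);
        }

        // Interleaved scanlines: thread t renders rows t, t+n, t+2n, ...
        // No locks are needed because each pixel is written by exactly one
        // thread -- this independence is what "embarrassingly parallel" means.
        void render(std::vector<std::uint32_t>& fb, int width, int height) {
            const int n = static_cast<int>(
                std::max(1u, std::thread::hardware_concurrency()));
            std::vector<std::thread> workers;
            for (int t = 0; t < n; ++t)
                workers.emplace_back([&fb, width, height, t, n] {
                    for (int y = t; y < height; y += n)
                        for (int x = 0; x < width; ++x)
                            fb[static_cast<std::size_t>(y) * width + x] =
                                trace_pixel(x, y);
                });
            for (auto& w : workers) w.join();
        }

        int main() {
            const int w = 640, h = 480;
            std::vector<std::uint32_t> fb(static_cast<std::size_t>(w) * h);
            render(fb, w, h);
            return 0;
        }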

  • Quake 3: Raytraced (Score:4, Interesting)

    by Anonymous Coward on Tuesday August 29, 2006 @05:59AM (#15998663)
    Just found a game using raytracing - Quake 3: Raytraced.
    http://graphics.cs.uni-sb.de/~sidapohl/egoshooter/ [uni-sb.de]

    Rumors are there's a q4 version on the way.
  • by DoofusOfDeath ( 636671 ) on Tuesday August 29, 2006 @06:17AM (#15998696)
    I wonder how much this research relates to Intel's renewed desire to become a graphics player.

    If they're having trouble, for staffing or other reasons, producing good GPU designs, then it would be pretty clever of them to revolutionize the industry AND capitalize on their CPU strengths in a single move. More power to them, I say. (More power = about 120 watts, I'm guessing.)
  • by Anonymous Coward on Tuesday August 29, 2006 @06:18AM (#15998698)
    I've been thinking about the same thing, but with one core dedicated to the operating system itself, without the drivers for the various peripherals. This should make it a lot easier to build a crash-free OS: the OS core has a higher priority than the other(s), and the only code running on it is some kind of busy loop checking that the other core is still working as planned. Perhaps, of course, it's already been done by Sun/IBM/... and I'm busy reinventing the wheel.
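
    A rough sketch of the watchdog idea described above, assuming a shared heartbeat counter (names and timings are illustrative; a real system would pin threads to specific cores and recover the hung core rather than just print):

        #include <atomic>
        #include <chrono>
        #include <cstdint>
        #include <cstdio>
        #include <thread>

        // Heartbeat counter incremented by the "worker" core.
        std::atomic<std::uint64_t> heartbeat{0};

        // The dedicated supervisor core would run only this loop: if the
        // counter stops advancing between checks, the worker is presumed hung.
        void watchdog(int checks) {
            std::uint64_t last = heartbeat.load();
            for (int i = 0; i < checks; ++i) {
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
                const std::uint64_t now = heartbeat.load();
                if (now == last)
                    std::puts("worker appears hung -- recovery would go here");
                last = now;
            }
        }

        int main() {
            std::thread worker([] {
                for (int i = 0; i < 100; ++i) {
                    heartbeat.fetch_add(1);   // real work would go here
                    std::this_thread::sleep_for(std::chrono::milliseconds(10));
                }
            });
            watchdog(5);
            worker.join();
            return 0;
        }
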
  • by Anonymous Coward on Tuesday August 29, 2006 @06:25AM (#15998705)
    Found this link a while ago: http://graphics.cs.uni-sb.de/SaarCOR/ [uni-sb.de]

    Methinks it would be pretty cool if there were a 'graphics' card that could do standard raster-based graphics, raytracing and physics. Most of the calculations are the same anyway, so a general-purpose processor that is very good at floating-point vector calculations would be needed. The APIs would be mostly implemented in the driver (OpenGL, OpenRT, etc.).
  • by N Monkey ( 313423 ) on Tuesday August 29, 2006 @06:45AM (#15998732)
    The thing about ray tracing is that it's the archetypal embarrassingly parallel problem that makes heavy use of floating point arithmetic. The thing about GPUs is that they are incredibly parallel processors optimised for floating point operations.

    It's not just the sheer number of FP calculations that can be the problem. Once you get away from the first (or perhaps even second) level of rays, you lose coherence between neighbouring rays, which causes memory page/cache thrashing. This is not a nice thing on a GPU.
  • by S3D ( 745318 ) on Tuesday August 29, 2006 @06:50AM (#15998742)
    No, ray tracing is all about searching databases for ray-object intersections. That's something GPUs can't do at all.
    Serious raytracers are tile-based anyway, that is, they use a lot of look-up tables. Processing of a single tile could probably be made to fit into an upcoming GPU with a "unified shader architecture", but it wouldn't be efficient. GPUs aren't designed for a lot of branching.
  • 30 fps - unlikely (Score:5, Interesting)

    by DrXym ( 126579 ) on Tuesday August 29, 2006 @06:52AM (#15998746)
    Ray tracing works by tracing hypothetical rays of light back from each screen pixel, following them as they bounce off and split on various objects (which may or may not be opaque, shiny, textured, etc.) all the way to the light source. So a ray might first hit a sphere: you calculate the light at that point and recursively trace the light as it bounces off other objects. To get any level of realism you're talking about multiple levels of recursion, which takes an enormous amount of time in any complex scene. Transparency also requires the reflected and refracted rays to be traced, so the number of rays can increase dramatically.

    Ray tracing also suffers terribly from "jaggies". Edges look bad because rays can just miss an object and cause really bad stepping on the edges of objects. To eliminate jaggies and do anti-aliasing, you need to do sub-pixel rendering with jitter (slight randomness) to produce an average value for the pixel. So you might have to trace 4 or more rays per pixel for acceptable anti-aliasing. Effects like focal length, fog, bump mapping etc. make things even more complex. Most pictures rendered with high quality in Blender, POV-Ray etc. take minutes if not hours even on a fast dual-core processor.

    The only way you'd get 30fps is if you cut your ray-trace depth to 1 or 2, used a couple of lights, cut the screen resolution down and forgot about fixing jaggies. It would look terrible. Oh, and you'd still have to find time for all the other things that apps and games must do.
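
    A minimal sketch of the recursion described in this comment (all types and helpers here are illustrative stubs for self-containment, not a real renderer):

        struct Vec3 { float x = 0, y = 0, z = 0; };
        struct Ray  { Vec3 origin, dir; };
        struct Hit  { bool found = false; Vec3 point, normal;
                      float reflectivity = 0, transparency = 0; };

        // Illustrative stubs -- a real tracer would intersect actual geometry
        // and evaluate lights (each light needing its own shadow ray).
        Hit  intersect_scene(const Ray&) { return {}; }
        Vec3 shade_local(const Hit&)     { return {0.2f, 0.2f, 0.2f}; }
        Vec3 reflect_dir(const Ray& r, const Hit&) { return r.dir; }
        Vec3 background()                { return {0.5f, 0.6f, 0.9f}; }

        Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

        // The depth cap is the "cut your ray trace depth to 1 or 2" knob:
        // without it, reflective and refractive hits keep spawning new rays.
        Vec3 trace(const Ray& ray, int depth) {
            if (depth == 0) return background();
            const Hit h = intersect_scene(ray);
            if (!h.found) return background();
            Vec3 colour = shade_local(h);             // direct light + shadow rays
            if (h.reflectivity > 0.0f) {              // one bounce -> recurse
                const Ray r{h.point, reflect_dir(ray, h)};
                colour = add(colour, scale(trace(r, depth - 1), h.reflectivity));
            }
            // a transparent hit would also spawn a refracted ray here, which
            // is why ray counts can grow exponentially with depth
            return colour;
        }

        int main() {
            const Ray primary{{0, 0, 0}, {0, 0, 1}};
            const Vec3 c = trace(primary, 2);   // depth cap of 2
            return c.x > 1.0f ? 1 : 0;
        }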

  • by EmagGeek ( 574360 ) on Tuesday August 29, 2006 @07:01AM (#15998759) Journal
    A ray-tracing problem can be solved simultaneously using a moment method that incorporates physical optics. My Master's thesis, written a long time ago, did precisely that for two-dimensional cases. Of course, this required solving massive linear systems that, at the time, took hours on a 433MHz Alpha for a single frame, and it was written in FORTRAN77, but hey, we've come a long way since then :)
  • by Anonymous Coward on Tuesday August 29, 2006 @07:16AM (#15998784)
    It's nice that people are working on ray traced games, but please note the following:

    Oasen is based on "OpenRT [openrt.de]" --- which is entirely proprietary, and is NOT open source. Their FAQ explains that clearly.

    I'm sure that I'm not the only person annoyed at their use of "open" to mean "closed".

    Time to look for an open-source raytracing engine designed for interactive use ...
  • Re:Put it on the GPU (Score:5, Interesting)

    by smallfries ( 601545 ) on Tuesday August 29, 2006 @07:28AM (#15998799) Homepage
    The problem with raytracing researchers is that they are incredibly myopic. *Everybody* should use raytracing for *everything* because it is superior to raster in *every case*. Well, bullshit. Take a look at the raytracing results people have posted links to, and then watch the video of Crysis. The problem is not raytracing, but geometric complexity. Raytracing does not scale nicely with the amount of geometry - mainly because of the shadow rays that have to be scattered from each intersection. The 100mil figure assumes about 100 rays per pixel. Well, you need 64 of them just to get around aliasing, and that doesn't leave many for ambient and shadow bounces.

    But the GPU is interesting for raytracing. As it moves closer towards being a giant floating-point vector machine, the motivating application will become raytracing. At the moment a 7800GTX can push 280Gflops. That is 2800 cycles per ray for a single frame. (BTW, Intel's figures in the article are bullshit. 100mil rays at 30fps = 3 billion rays per second, roughly one ray per cycle on average. They are counting a huge number of rays that have been optimised out of the scene, e.g. shadows, or interpolated from previous frames using a cache.)

    The raw horsepower is getting there on the card but at the moment the communication soaks up all of the time. Raytracing is the poster-child problem for parallelisation - assuming that you have random access (readable) global memory. If you need to partition the memory into the compute nodes it begins to get harder. In a GPU building datastructures to hold the information is the bottleneck, and it drops the speed by factors of 100s or 1000s. Nvidia and ATi have given the general-purpose community hints that they will improve performance in reading data-structures so this particular roadblock may disappear. A real scatter operation in the fragment shader would be nice, but you would have to gut the ROPs in order to do it. This may happen anyway as the local-area operations that the ROPs compute could fold into fragment operations. To increase the write bandwidth in the card the retirement logic needs to start retiring 'pages' of pixels anyway, over a much wider bus. Otherwise the number of feasible passes per pixel will always be capped by the speed that the ROPs can retire the data.

    So given how hard it would be to *efficiently* raytrace on a GPU - why bother when you can throw so much more raw horsepower at faking it with cheap raster techniques?
  • by Don_dumb ( 927108 ) on Tuesday August 29, 2006 @07:44AM (#15998837)
    What about 2 cores for the OS (one for the system idle process and the other for the working processes)?
    A core for an idle process?
    I'm not an expert in OSes, but I thought the idle process just gave the CPU something to do while it waited for a working process (the idle process simply allowed a working one to butt in whenever something came along).
    Wouldn't creating a core just to do nothing be hardware bloat at its most absurd?
    Or am I showing my ignorance just a bit too openly?
  • Re:Not quite (Score:5, Interesting)

    by tgd ( 2822 ) on Tuesday August 29, 2006 @07:54AM (#15998869)
    That's because a reflection creates another ray segment, and a refraction creates two.

    Considering a non-reflective ray-traced world at 800x600 needs 480,000 rays cast to calculate an image, so 14,400,000 per second at 30fps, the claim of 450 million ray segments makes sense... that's about 31 segments per pixel at 800x600, which is a lot of reflections. Usually you'd limit the depth to a fairly low number, because 100-deep reflections don't add noticeable detail, especially in motion. That's still a lot of room for both refractive and reflective objects in the scenes.
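
    Working that arithmetic through explicitly, using the same assumptions as the comment above:

        #include <cstdio>

        int main() {
            const double pixels     = 800.0 * 600.0;  // 480,000 primary rays/frame
            const double fps        = 30.0;
            const double per_second = pixels * fps;   // 14.4 million primary rays/s
            const double budget     = 450e6;          // Intel's "interesting" figure
            std::printf("segments per pixel: %.1f\n",
                        budget / per_second);         // ~31.3
            return 0;
        }
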
  • Re:Not quite (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Tuesday August 29, 2006 @08:30AM (#15998997) Journal
    You probably wouldn't just use one ray per pixel. It is typical to fire a number of rays and then average the result. This is because rays diverge quite quickly after passing through the viewport, and so you get quite an uneven image. There is a noticeable difference between 1 and 4 rays per pixel, and between 4 and 9. After 9 you start to get diminishing returns, and beyond about 25 it becomes hard to spot the difference (note that it is common to use a square number of rays, since that makes it easy to calculate where they should go).
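
    A minimal sketch of that square-grid supersampling (trace_at is an illustrative stub; the jitter mentioned elsewhere in the thread could be added as a random offset inside each grid cell):

        struct Colour { float r = 0, g = 0, b = 0; };

        // Stub: a real renderer would fire a ray through this sub-pixel
        // coordinate and return its shaded colour.
        Colour trace_at(float, float) { return {0.5f, 0.5f, 0.5f}; }

        // Average an n x n grid of sub-pixel samples: n = 2 gives the 4-ray
        // case above, n = 3 the 9-ray case. Each sample is centred in its cell.
        Colour sample_pixel(int x, int y, int n) {
            Colour sum;
            for (int j = 0; j < n; ++j)
                for (int i = 0; i < n; ++i) {
                    const float px = x + (i + 0.5f) / n;
                    const float py = y + (j + 0.5f) / n;
                    const Colour c = trace_at(px, py);
                    sum.r += c.r; sum.g += c.g; sum.b += c.b;
                }
            const float inv = 1.0f / (n * n);
            return {sum.r * inv, sum.g * inv, sum.b * inv};
        }

        int main() {
            const Colour c = sample_pixel(10, 20, 3);  // 9 rays for this pixel
            return c.r < 0 ? 1 : 0;
        }
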
  • by TheRaven64 ( 641858 ) on Tuesday August 29, 2006 @08:44AM (#15999047) Journal
    I read a paper a couple of years ago about a ray tracer that updated the pixel at each ray iteration. This meant that when you were moving quickly, you got a lower-quality picture, but you didn't notice because you were moving quickly. If you stopped, then the detail appeared very quickly (over 2-5 frames, as I recall; too quick for the user to notice that it was appearing).

    As for dynamic scenes, this is actually easier in many ways with a ray tracer. If you start with a scene graph API, you just need to send the changes each frame. How much changes in a typical game? Most of the scenery is fairly static. The characters move and deform slightly (you can often get away with a spatial transfer function, rather than a real change to the geometry, in this case). With a traditional graphics pipeline, you still need to redraw every single polygon every frame. With a ray tracer, you can cache huge amounts of the scene (in terms of secondary rays: simply save their results as a texture, perform a lookup there for the next ray, and invalidate the texture when something moves between it and any of the light sources).

    Ray tracers have been running in real time for a while (take a look at Utah), but not on cheap hardware. The hardware will catch up soon. Take a look at some of the designs from Microsoft Research; they have some very shiny FPGA-based logic which does 100% procedural graphics and is likely to show up in the XBox 3.

  • -1, Wrong (Score:3, Interesting)

    by spun ( 1352 ) <loverevolutionary@@@yahoo...com> on Tuesday August 29, 2006 @10:22AM (#15999524) Journal
    I'll let the other posters comment on the wrongness of your idea that raytracing doesn't scale with scene complexity. There was a nice SciAm article about it, if you need more convincing. Instead, I'll talk about something in the article that the other posters didn't mention. Raster processing may scale with scene complexity, but creation doesn't. Raster graphics must be tweaked at creation to make an object look realistic while still rendering quickly. With ray tracing, you just create an object and forget about it. It just looks right without any tweaking.

    What costs game designers more: hand tweaking every object, or you buying a better computer so you can ray trace their un-tweaked objects? Now guess which way 3D graphics are gonna go...
  • Re:Not quite (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Tuesday August 29, 2006 @10:57AM (#15999787) Journal
    What really needs to be done is to track motion the way you would when encoding MPEG, and focus more ray casts in areas of low motion

    There was a paper published a couple of years ago (at Eurographics?) about this. Each ray was independent, and would return a value at each intersection (i.e. you get the primary ray value quickly, and then refine it further with secondary, tertiary, etc. ray data). When a ray was no longer lined up with a pixel, it was interrupted and terminated. This meant that you got a fairly low-quality image while moving quickly, but a much better one when you let the rays run longer. I found it particularly interesting since it completely removed the concept of a frame; each pixel was updated independently when a better approximation of its correct value was ready, giving much more graceful degradation.
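
    A rough sketch of the per-pixel bookkeeping such a frameless renderer might use (this structure is an assumption for illustration, not taken from the paper):

        // Each pixel keeps a running estimate that improves as more rays for
        // it complete; there is no global frame to wait for. On camera motion
        // the estimate is reset and that pixel degrades gracefully instead of
        // the whole image stalling.
        struct PixelEstimate {
            float r = 0, g = 0, b = 0;
            int samples = 0;

            void add_sample(float nr, float ng, float nb) {
                ++samples;
                const float w = 1.0f / samples;   // incremental running average
                r += (nr - r) * w;
                g += (ng - g) * w;
                b += (nb - b) * w;
            }
            void reset() { r = g = b = 0; samples = 0; }
        };

        int main() {
            PixelEstimate p;
            p.add_sample(1.0f, 0.5f, 0.25f);  // first ray result arrives
            p.add_sample(0.8f, 0.5f, 0.35f);  // a later ray refines the estimate
            p.reset();                        // camera moved: start over
            return 0;
        }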

  • by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Tuesday August 29, 2006 @12:13PM (#16000410) Journal
    Shadows; they'd be Doom 3-like. Several games have full stencil shadows and that's just how raytraced ones would look: sharp and straight.

    Sharp and straight shadows? Check out this example [povray.org] or this one [povray.org] or yet another. [povray.org] Granted, these scenes' rendering times are measured in hours, not fractions of a second... but eventually games will be at that level of quality.
  • Re:Another one (Score:3, Interesting)

    by Nahor ( 41537 ) on Tuesday August 29, 2006 @12:31PM (#16000542)

    I remember a talk from someone (John Carmack, I think) saying something like: raytracing is nice but overkill. Today's hardware may be able to handle realtime raytracers, but nowhere near the quality you can get from current 3D engines.

    Most special effects you see in current engines are approximations/hacks compared to what you can do with a raytracer, but they're also way cheaper to compute.

    It's the same kind of relationship as between texture maps and procedural textures. Procedural textures are better from a rendering point of view; they scale better. But it's also a lot harder to make a good-quality texture, and it requires a lot more power to render.
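
    A minimal illustration of the distinction (a checkerboard, about the simplest possible procedural texture; real marble or wood shaders use noise functions and far more tuning):

        #include <cmath>

        struct Colour { float r, g, b; };

        // A texture map is a lookup into stored pixels; a procedural texture
        // is a function of the surface coordinates, so it never runs out of
        // resolution ("scales better") but costs computation on every sample.
        Colour checker(float u, float v, float scale) {
            const int cell = static_cast<int>(std::floor(u * scale)) +
                             static_cast<int>(std::floor(v * scale));
            return (cell % 2 == 0) ? Colour{0.9f, 0.9f, 0.9f}
                                   : Colour{0.1f, 0.1f, 0.1f};
        }

        int main() {
            const Colour c = checker(0.3f, 0.7f, 8.0f);  // sample at (u, v)
            (void)c;
            return 0;
        }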

  • by jared9900 ( 231352 ) on Tuesday August 29, 2006 @12:59PM (#16000749)
    I just feel it should be mentioned that part of the reason for that naming is that it attempts to stay in line with the OpenGL API conventions, making the OpenRT API seem more familiar. And just like OpenRT, OpenGL is not open source.
