Add Another Core for Faster Graphics 237
Dzonatas writes "Need a reason for extra cores inside your box? How about faster graphics? Unlike traditional faster GPUs, raytraced graphics scale with extra cores. Brett Thomas writes in his article Parallel Worlds on Bit-Tech, 'But rather than working on that advancement, most of the commercial graphics industry has been intent on pushing raster-based graphics as far as they could go. Research has been slow in raytracing, whereas raster graphics research has continued to be milked for every approximate drop it's worth. Of course, it is to be expected that current technology be pushed, and it was a bit of a pipe dream to think that the whole industry should redesign itself around raytracing.' A report by Intel about ray tracing shows that a single P4 3.2GHz is capable of 100 million raysegs per second, which gives a comfortable 30fps. Intel further states that 450 million raysegs is when it gets 'interesting.' Also, quad cores are slated to be available around the turn of the year. Would octo-cores bring us dual-screen or separate right/left real-time raytraced 3D?"
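A quick back-of-the-envelope check of the summary's figures (treating a rayseg as one traced ray segment; the 1024x768 target resolution is an assumption, since the report doesn't tie the number to a screen size):

```python
# Sanity-check Intel's "100 million raysegs = comfortable 30fps" claim.
raysegs_per_sec = 100_000_000   # the P4 3.2GHz figure from the report
fps = 30
width, height = 1024, 768       # assumed resolution (not in the summary)

raysegs_per_frame = raysegs_per_sec // fps
raysegs_per_pixel = raysegs_per_frame / (width * height)

# About 3.3 million raysegs per frame, or roughly 4 per pixel --
# enough for primary rays plus a shallow bounce or two, not much more.
print(raysegs_per_frame, round(raysegs_per_pixel, 1))
```

That per-pixel budget is why the 450 million figure is where things get "interesting": it buys real recursion depth.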
Gaming (Score:5, Interesting)
http://graphics.cs.uni-sb.de/~morfiel/oasen/ [uni-sb.de]
It's been done... (Score:5, Interesting)
Put it on the GPU (Score:5, Interesting)
Take a look at the proceedings from any graphics conference in the last three or four years, and you will see several papers which involve ray-tracing on a GPU. Actually, not so many recently, because it's been done to death. The most impressive one I saw was at Eurographics in 2004 running non-linear ray tracing. As the rays advanced, their direction was adjusted based on the gravity of objects in the scene. The demo (rendered in realtime) showed a black hole moving in front of a stellar scene and all of the light effects this caused.
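The non-linear trick is easy to sketch: instead of intersecting a straight line, you march the ray in small steps and bend its direction toward nearby masses. Everything below (function name, step sizes, the inverse-square "pull") is an illustrative stand-in, not the paper's method, which would integrate proper geodesics:

```python
import math

def bend_ray(origin, direction, masses, step=0.1, steps=100):
    """March a ray in small steps, nudging its direction toward each
    (position, strength) mass -- a crude stand-in for gravitational
    lensing around a black hole. Illustrative only."""
    pos = list(origin)
    d = list(direction)
    for _ in range(steps):
        for (mx, my, mz), strength in masses:
            dx, dy, dz = mx - pos[0], my - pos[1], mz - pos[2]
            r2 = dx * dx + dy * dy + dz * dz
            if r2 > 1e-6:
                # Roughly inverse-square attraction toward the mass.
                f = strength * step / (r2 * math.sqrt(r2))
                d[0] += f * dx; d[1] += f * dy; d[2] += f * dz
        n = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
        d = [c / n for c in d]                  # keep direction unit-length
        pos = [p + step * c for p, c in zip(pos, d)]
    return pos, d
```

With an empty mass list the ray travels dead straight; add a mass near the path and the returned direction curves toward it, which is exactly the effect the demo rendered in realtime.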
Quake 3: Raytraced (Score:4, Interesting)
http://graphics.cs.uni-sb.de/~sidapohl/egoshooter
Rumors are there's a q4 version on the way.
If you can't beat them, obviate them! (Score:4, Interesting)
If they're having trouble, for staffing or other reasons, producing good GPU designs, then it would be pretty clever of them to revolutionize the industry AND capitalize on their CPU strengths in a single move. More power to them, I say. (More power = about 120 watts, I'm guessing.)
Re:I need more cores... (Score:1, Interesting)
Raytracing in hardware (Score:1, Interesting)
Methinks it would be pretty cool if there were a 'graphics' card that could do standard raster-based graphics, raytracing, and physics. Most of the calculations are the same anyway, so a general-purpose processor that is very good at floating-point vector calculations would do the job. The APIs (OpenGL, OpenRT, etc.) would be implemented mostly in the driver.
It's not JUST FP that's the issue (Score:5, Interesting)
It's not just the sheer number of FP calculations that's the problem. Once you get past the first (or perhaps even second) level of rays, you lose coherence between neighbouring rays, which causes memory page/cache thrashing. This is not a nice thing on a GPU.
Re:"entirely vectors" (Score:4, Interesting)
30 fps - unlikely (Score:5, Interesting)
Ray tracing also suffers terribly from "jaggies". Edges look bad because rays can just miss an object and cause really bad stepping on the edges of objects. To eliminate jaggies and do anti-aliasing, you need to do sub-pixel rendering with jitter (slight randomness) to produce an average value for the pixel. So you might have to trace 4 or more rays in a pixel for acceptable anti-aliasing. Effects like focal length, fog, bump mapping etc. cause things to get even more complex. Most pictures rendered with high quality on Blender, POVRay etc. would take minutes if not hours even on a fast / dual core processor.
The only way you'd get 30fps is if you cut your ray trace depth to 1 or 2, used a couple of lights, cut the screen resolution down, and forgot about fixing jaggies. It would look terrible. Oh, and you'd still have to find time for all the other things that apps and games must do.
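The jittered sub-pixel averaging described above is a few lines of code, but note how it multiplies the cost: every extra sample is a full extra ray per pixel. The `trace` function here is a hypothetical stand-in for the renderer's ray cast:

```python
import random

def antialias_pixel(trace, px, py, samples=4, rng=None):
    """Average several jittered sub-pixel rays to soften 'jaggies'.
    `trace(x, y)` is a stand-in for the renderer's ray cast, returning
    an (r, g, b) tuple; cost scales linearly with `samples`."""
    rng = rng or random.Random(0)
    acc = [0.0, 0.0, 0.0]
    for _ in range(samples):
        # Jitter: a random offset inside the pixel's footprint, so the
        # samples don't all line up on a grid and alias the same way.
        colour = trace(px + rng.random(), py + rng.random())
        acc = [a + c for a, c in zip(acc, colour)]
    return tuple(a / samples for a in acc)
```

Four samples per pixel means four times the rays before you've added a single reflection, which is why the 30fps claim looks so optimistic once anti-aliasing is in the picture.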
Use a moment method with physical optics (Score:2, Interesting)
Note that OpenRT is not open source (Score:3, Interesting)
Oasen is based on "OpenRT [openrt.de]" --- which is entirely proprietary, and is NOT open source. Their FAQ explains that clearly.
I'm sure that I'm not the only person annoyed at their use of "open" to mean "closed".
Time to look for an open-source raytracing engine designed for interactive use.
Re:Put it on the GPU (Score:5, Interesting)
But the GPU is interesting for raytracing. As it moves closer to being a giant floating-point vector machine, the motivating application will become raytracing. At the moment a 7800 GTX can push 280 GFLOPS; that is 2800 cycles per ray for a single frame. (BTW, Intel's figures in the article are bullshit: 100 million rays at 30fps = 3 billion rays per second, roughly one ray per cycle on average. They are counting a huge number of rays that have been optimised out of the scene, e.g. shadows, or interpolated from previous frames using a cache.)
The raw horsepower is getting there on the card, but at the moment communication soaks up all of the time. Raytracing is the poster-child problem for parallelisation, assuming you have random-access (readable) global memory; if you need to partition the memory across the compute nodes it gets harder. On a GPU, building the data structures that hold the scene is the bottleneck, and it drops the speed by factors of 100s or 1000s. Nvidia and ATi have hinted to the general-purpose community that they will improve performance when reading data structures, so this particular roadblock may disappear. A real scatter operation in the fragment shader would be nice, but you would have to gut the ROPs to do it. This may happen anyway, as the local-area operations the ROPs compute could fold into fragment operations. To increase the card's write bandwidth, the retirement logic needs to start retiring 'pages' of pixels anyway, over a much wider bus. Otherwise the number of feasible passes per pixel will always be capped by the speed at which the ROPs can retire data.
So given how hard it would be to *efficiently* raytrace on a GPU - why bother when you can throw so much more raw horsepower at faking it with cheap raster techniques?
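For reference, the arithmetic above works out like this (the 280 GFLOPS figure and the per-frame reading of Intel's 100 million rays are the poster's own assumptions):

```python
gpu_flops = 280e9            # claimed 7800 GTX throughput
rays_per_frame = 100e6       # Intel's 100M figure, read as per-frame
fps = 30
cpu_hz = 3.2e9               # the P4 3.2GHz in Intel's test

flops_per_ray = gpu_flops / rays_per_frame   # the "2800 cycles per ray"
rays_per_sec = rays_per_frame * fps          # "3 billion rays per second"
rays_per_cycle = rays_per_sec / cpu_hz       # "roughly one ray per cycle"
```

One ray per CPU cycle is clearly impossible for a full trace, which is the poster's point: Intel's count must include rays that were culled, cached, or interpolated rather than actually traced.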
Re:I need more cores... (Score:3, Interesting)
I am not an expert in OSes; I thought the idle process just gave the CPU something to do while it waited for a working process (the idle process lets real work butt in whenever something comes along).
Wouldn't creating a core just to do nothing be hardware bloat at its most absurd?
Or am I showing my ignorance a bit too openly?
Re:Not quite (Score:5, Interesting)
Considering a non-reflective ray traced world at 800x600 needs 480,000 rays cast to calculate an image, so 14,400,000 per second at 30fps, the claim of 450 million ray segments makes sense... that's about 31 per pixel at 800x600, which is a lot of reflections. Usually you'd limit the depth to something fairly low because 100-deep reflections don't add noticeable detail, especially in motion. That's a lot of room for both refractive and reflective objects to be in the scenes.
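The budget math is quick to check (assuming one primary ray per pixel, no anti-aliasing):

```python
width, height = 800, 600
fps = 30
budget = 450_000_000                  # Intel's "interesting" rayseg count

primary_per_frame = width * height    # one primary ray per pixel
primary_per_sec = primary_per_frame * fps
segs_per_pixel = budget / primary_per_sec   # headroom for secondary rays
```

So after the primary rays are paid for, each pixel can afford roughly 30 more segments for reflections, refractions, and shadow rays.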
Re:Not quite (Score:5, Interesting)
Re:Lies, Damned Lies and RT Raytracing (Score:3, Interesting)
As for dynamic scenes, this is actually easier in many ways with a ray tracer. If you start with a scene graph API, you just need to send the changes each frame. How much changes in a typical game? Most of the scenery is fairly static. The characters move and deform slightly (in this case you can often get away with a spatial transfer function rather than a real change to the geometry). With a traditional graphics pipeline, you still need to redraw every single polygon every frame. With a ray tracer, you can cache huge amounts of the scene in terms of secondary rays: simply save their results as a texture, perform a lookup there for the next ray, and invalidate the texture when something moves between it and any of the light sources.
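The caching-plus-invalidation idea can be sketched like this; the class, keys, and the coarse "region" tracking are all hypothetical simplifications of what a real renderer would do with textures and occlusion tests:

```python
class SecondaryRayCache:
    """Cache secondary-ray results per surface texel, and invalidate
    entries whose light path something has moved through (sketch only;
    a real renderer stores results in textures and tracks occlusion
    far more precisely)."""
    def __init__(self):
        self.cache = {}   # (surface_id, texel) -> cached colour
        self.paths = {}   # (surface_id, texel) -> regions its rays cross

    def lookup(self, surface_id, texel, trace, path_regions):
        key = (surface_id, texel)
        if key not in self.cache:
            self.cache[key] = trace(surface_id, texel)  # expensive cast
            self.paths[key] = set(path_regions)
        return self.cache[key]

    def object_moved(self, region):
        """Drop every cached result whose light path crossed `region`."""
        stale = [k for k, rs in self.paths.items() if region in rs]
        for k in stale:
            del self.cache[k]
            del self.paths[k]
```

As long as nothing moves, repeated lookups never re-trace; moving one object only invalidates the entries whose rays it could have affected, which is why static scenery is nearly free.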
Ray tracers have been running in real time for a while (take a look at Utah), but not on cheap hardware. The hardware will catch up soon. Take a look at some of the designs from Microsoft Research; they have some very shiny FPGA-based logic which does 100% procedural graphics and is likely to show up in the XBox 3.
-1, Wrong (Score:3, Interesting)
What costs game designers more: hand tweaking every object, or you buying a better computer so you can ray trace their un-tweaked objects? Now guess which way 3D graphics are gonna go...
Re:Not quite (Score:5, Interesting)
There was a paper published a couple of years ago (at Eurographics?) about this. Each ray was independent, and would return a value at each intersection (i.e. you get the primary ray value quickly, and then refine it further with secondary, tertiary, etc. ray data). When a ray was no longer lined up with a pixel, it was interrupted and terminated. This meant that you got a fairly low-quality image while moving quickly, but a much better one when you let the rays run longer. I found it particularly interesting, since it completely removed the concept of a frame; each pixel was updated independently when a better approximation of its correct value was ready, giving much more graceful degradation.
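The frameless idea boils down to a priority loop over pixels rather than a sweep over a frame. A minimal sketch, where `refine` is a hypothetical function that improves one pixel's estimate and reports its remaining error:

```python
import heapq

def frameless_refine(pixels, refine, budget):
    """Spend `budget` refinement steps on whichever pixel currently has
    the worst error estimate, with no frame boundary. `pixels` maps
    pixel -> (value, error); `refine(pixel, value)` is a hypothetical
    callback returning (better_value, remaining_error)."""
    # Max-heap on error via negated keys (heapq is a min-heap).
    heap = [(-err, p) for p, (val, err) in pixels.items()]
    heapq.heapify(heap)
    for _ in range(budget):
        if not heap:
            break
        _, p = heapq.heappop(heap)
        val, err = refine(p, pixels[p][0])
        pixels[p] = (val, err)
        if err > 0:
            heapq.heappush(heap, (-err, p))  # still worth revisiting
    return pixels
```

When the camera moves, you just reset the error estimates of the affected pixels and the loop naturally redirects effort to them -- coarse image immediately, refinement whenever the view holds still.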
Re:Won't happen soon. (Score:3, Interesting)
Sharp and straight shadows? Check out this example [povray.org] or this one [povray.org] or yet another. [povray.org] Granted, these scenes' rendering times are measured in hours, not fractions of a second... but eventually games will be at that level of quality.
Re:Another one (Score:3, Interesting)
I remember a talk from someone (John Carmack, I think) saying something like: raytracing is nice, but overkill. Today's hardware may be able to handle realtime raytracers, but nowhere near the quality you can get from current 3D engines.
Most special effects you see in current engines are approximations/hacks compared to what you can do with a raytracer, but they're also way cheaper to compute.
It's the same kind of relationship as between texture maps and procedural textures. Procedural textures are better from a rendering point of view; they scale better. But it's also a lot harder to make a good-quality procedural texture, and it requires a lot more power to render.
Re:Note that OpenRT is not open source (Score:2, Interesting)