Rasterizing triangles and finding the "first intersection" in a ray tracer actually give exactly the same result for a triangle mesh.
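To make that concrete, here's a minimal sketch (helper names like `first_hit` are my own, not from the post) of first-hit ray casting over a triangle list using the standard Möller–Trumbore test. Keeping the nearest hit per pixel ray is exactly the selection a rasterizer's depth buffer performs, which is why both produce the same image for a plain triangle mesh.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: distance t along the ray, or None if no hit."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def first_hit(orig, direction, triangles):
    """Nearest intersection: the same triangle a z-buffer would keep."""
    best = None
    for i, (v0, v1, v2) in enumerate(triangles):
        t = ray_triangle(orig, direction, v0, v1, v2)
        if t is not None and (best is None or t < best[0]):
            best = (t, i)
    return best
```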
Ray tracing maps more obviously onto the rendering equation, but for rendering geometry, or even first-order reflections, it offers very little advantage (and several disadvantages) over rasterization techniques. Shadows fall out of ray tracing almost for free, but they don't look "better" until you have area light sources and start shooting a LOT of rays.
And that's really the problem. Most of the cool things you might want to do with ray tracing (soft shadows, photon mapping, other global illumination) involve shooting orders of magnitude more rays than simply drawing a game level does.
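A quick back-of-envelope illustrates the gap (the sample counts below are illustrative assumptions, not measurements): primary visibility needs one ray per pixel, while soft shadows and a single bounce of indirect light multiply that per light sample and per bounce.

```python
# Illustrative ray budget for a 1920x1080 frame (sample counts assumed).
pixels = 1920 * 1080

primary      = pixels              # 1 visibility ray per pixel
soft_shadows = pixels * 64         # 64 area-light samples per pixel
one_bounce   = pixels * 64 * 16    # 16 indirect samples per shadow sample

print(f"primary rays:      {primary:>13,}")
print(f"soft-shadow rays:  {soft_shadows:>13,}")
print(f"one-bounce rays:   {one_bounce:>13,}")
print(f"factor over primary: {one_bounce // primary}x")
```

At these (modest) sample counts the full frame already costs three orders of magnitude more rays than primary visibility alone.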
If I had a fast hardware ray tracer, I'm sure I could find some very cool stuff to do with it, but wasting a ton of cycles on what rasterization is perfectly adequate for is a bit pointless. It seems like a solution in search of a problem. If we could rasterize a scene normally, but cast multiple rays per pixel in the pixel shader to determine light occlusion (shadows), we might be on the right track.
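That hybrid idea can be sketched as follows. This is a hypothetical outline, not a real shader: rasterization has already filled a G-buffer with world-space positions, and per pixel we only trace shadow rays toward sample points on an area light. The `occluded` callback stands in for a real ray-vs-scene query (BVH traversal, etc.) and is an assumption here.

```python
def soft_shadow_factor(surface_pos, light_samples, occluded):
    """Fraction of an area light visible from surface_pos (0 = fully
    shadowed, 1 = fully lit). One shadow ray per light sample; the
    'occluded' callback is a stand-in for a real scene intersection test."""
    visible = 0
    for light_pos in light_samples:
        if not occluded(surface_pos, light_pos):   # shadow ray unblocked?
            visible += 1
    return visible / len(light_samples)
```

Averaging binary visibility over many light samples is what turns hard shadow edges into soft penumbrae, which is exactly the part rasterization struggles to fake.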