To render in real time for a video game (say, 60 FPS), you would need a processor roughly a million times faster than what we have today.
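For a rough back-of-envelope (the offline figure here is an assumption for illustration, not a measurement): if a film-quality path-traced frame takes on the order of five hours on current hardware, then

    5 h/frame ≈ 18,000 s/frame   (offline render)
    60 FPS → ~0.0167 s/frame     (real-time budget)
    18,000 / 0.0167 ≈ 1.1 × 10^6

which is roughly where the million-fold figure comes from.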
What is needed is an architectural paradigm shift, not necessarily a beefier, faster single-instruction, multiple-thread (SIMT) GPU.
To elaborate: with a naive implementation where independent kernels are run in parallel, one of the major bottlenecks for ray/path tracing via GPGPU processing is that every warp, a set of 32 threads, essentially executes the same instruction, with branching realized by transparently masking out threads. As such, if branching often leads to divergent threads, hardware utilization will be low and performance will degrade. With a more robust implementation you can improve hardware utilization by appropriately partitioning the work into sub-kernels, but you'll still run into issues once you start handling secondary rays.
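To make the masking concrete, here's a toy CUDA sketch (entirely my own illustration; the "intersection" is a dummy stand-in for a real BVH/kd-tree traversal): when the lanes of a warp disagree at the branch, the hardware executes both paths back to back, masking off the inactive lanes each time, so a divergent warp pays for the hit path and the miss path.

    #include <cuda_runtime.h>

    // Toy types; a real tracer would carry full ray/hit records.
    struct Ray { float3 o, d; };
    struct Hit { bool valid; float t; };

    // Dummy stand-in for scene traversal, just to force a branch.
    __device__ Hit intersect_scene(const Ray& r)
    {
        Hit h; h.t = r.d.x; h.valid = (h.t > 0.0f);
        return h;
    }

    __global__ void trace(const Ray* rays, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        Hit h = intersect_scene(rays[i]);
        if (h.valid) {
            out[i] = h.t;     // "hit" lanes execute this path...
        } else {
            out[i] = 0.0f;    // ...then "miss" lanes execute this one,
        }                     // with the other group masked off each time.
    }

With coherent primary rays, whole warps tend to take the same path and the masking costs little; with incoherent secondary (diffuse) rays, the hit/miss and material branches scatter across lanes, which is exactly where the SIMT issue rate collapses.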
For ray/path tracing to be carried out expeditiously, it would be prudent to move to a programmable multiple-instruction, multiple-thread (MIMT) architecture with many small cores, each able to handle many threads. In fact, researchers have been moving in this direction for quite some time now, and the results are rather promising: while an NVIDIA GTX 285, with a die area of around 300 mm^2, can handle around 100M primary rays/sec and 60M diffuse rays/sec, with thread issue rates of ~70% and ~50%, respectively, a custom MIMT ASIC with an area of 200 mm^2, at the same fabrication node, can reach around 400M primary and diffuse rays/sec, with a thread issue rate of ~70-80% for both. (As an aside, I have a paper, currently being submitted to either ACM Trans. Graphics or IEEE Trans. Vis. Comput. Graphics, on an FPGA and theorized ASIC solution that blows these numbers away.)
Umm, if you're so rich, traveling the world on interest payments and patent royalties, why are you wasting time building a hackintosh instead of just buying a Mac Pro? Haha.
On top of everything else, you have reading-comprehension issues. To make things clear for you: I never said that I owned a hackintosh.
However, I will mention that I assembled my workstation, if you can even classify it as such, and it is far more powerful than a Mac Pro: it has eight Xeon E7-8870s, for a total of 80 cores and 160 threads at 2.4/2.8 GHz, a Supermicro X8OBN-F motherboard, 512 GB of DDR3 RAM (16x 32 GB DIMMs), and four NVIDIA Tesla M2070s. (As an aside, a single Xeon E7-8870 costs more than the entry-level, dual-socket Mac Pro, and the entire computer is only a few thousand dollars short of the price of a new Porsche 911 Carrera.) Oh, and yes, I routinely develop software and run simulations that actually require that much computing power, or far more: try running lattice-Boltzmann/finite-element or Stokesian particulate flows, modeling thousands to hundreds of millions of red blood cells, on a Mac Pro.
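To give a sense of scale, here's a stripped-down CUDA sketch of a single D2Q9 lattice-Boltzmann BGK collision step (purely illustrative, toy code; a real red-blood-cell simulation adds streaming, boundary conditions, and fluid-structure coupling for the cell membranes on top of this). Every lattice node relaxes nine distribution values toward equilibrium, every time step, across up to hundreds of millions of nodes and many thousands of steps:

    #define Q 9

    // D2Q9 lattice: weights and discrete velocities (rest, 4 axes, 4 diagonals).
    __constant__ float w[Q]  = { 4.f/9,  1.f/9,  1.f/9,  1.f/9,  1.f/9,
                                 1.f/36, 1.f/36, 1.f/36, 1.f/36 };
    __constant__ int   cx[Q] = { 0, 1, 0,-1, 0, 1,-1,-1, 1 };
    __constant__ int   cy[Q] = { 0, 0, 1, 0,-1, 1, 1,-1,-1 };

    __global__ void collide(float* f, int nnodes, float omega)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nnodes) return;

        // Moments: density and velocity from the nine distributions
        // (structure-of-arrays layout so warp accesses coalesce).
        float rho = 0.f, ux = 0.f, uy = 0.f;
        for (int q = 0; q < Q; ++q) {
            float fq = f[q * nnodes + i];
            rho += fq; ux += cx[q] * fq; uy += cy[q] * fq;
        }
        ux /= rho; uy /= rho;

        // BGK relaxation toward the second-order equilibrium.
        float usq = ux*ux + uy*uy;
        for (int q = 0; q < Q; ++q) {
            float cu  = cx[q]*ux + cy[q]*uy;
            float feq = w[q] * rho * (1.f + 3.f*cu + 4.5f*cu*cu - 1.5f*usq);
            f[q * nnodes + i] += omega * (feq - f[q * nnodes + i]);
        }
    }

Multiply that by a 3D lattice (D3Q19, so 19 distributions per node), coupling for every membrane vertex, and the step counts needed for physically meaningful flow, and a dual-socket Mac Pro simply doesn't have the cores or the memory bandwidth.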
Oh, you meant you're an unemployed old baby boomer who lives off his retirement account from Honeywell and has a very fixed income. Haha.
Continuing the above remark, it's laughable that you take what I've written and automatically default to assuming that I'm "unemployed," let alone a baby boomer. Just so you know, I'm not even 30.
And no, it's not like being a "security researcher." I regularly publish my work in international scientific journals, attend conferences, and brief my peers.

That's like being a "security researcher" here at Slashdot, right?
Damn, you mean it's that apparent I went to Berkeley?

So "medical researcher" really means you're a stoner?
Neutrinos have bad breadth.