However, by outsourcing everything to China and India, you lose that innovative drive, which erodes your longer-term growth.
This is fine in only two cases, I think: A) you don't care about innovating in what you're developing (it's not your core business), or B) the work you're doing is extremely expensive *and* specialized (e.g. chip design and manufacturing), making it hard for an upstart to compete with you even if your work is sloppy.
IBM rarely innovates anymore aside from some of its hardware; I'm not aware of any genuine software innovation from IBM in, say, the last ten years.
The way large companies seem to do it now is by acquiring their way into innovation. I'm happy they do, because it makes startups that much more valuable to build. Perhaps if large companies are changing their game, we engineers should wake up and adapt ours: do more startups?
You are posting in a thread about the fact that Apple made their implementation open source and you are claiming vendor lock-in?
Are you one of those rabid Apple-haters we see so often around here? Or are you just amazingly stupid?
I must be amazingly stupid because I rather like Apple products.
Microsoft adds proprietary extensions for (arguably) the same reason. The goal should instead be to work on better iterations of the language standards (C/C++), not on introducing arbitrary extensions that aren't portable across compilers, and especially not really awkward ones like 'anonymous function pointers.' A similar argument applies to 'encouraging' developers to use C# and Objective-C.
The issue with raytracing is memory access patterns. It's not so much a GPU-vs-CPU issue: both CPUs and GPUs rely on linear prefetch patterns through memory, and raytracing breaks those as you traverse the spatial subdivision structure.
Secondly, ray tracers scale very well with resolution: O(n), where n is the number of pixels. We currently still pay a relatively high constant cost, but assuming Moore's law keeps delivering performance and we find an answer to the memory problem, raytracing comes out ahead.
What makes the move to raytracing somewhat of an inevitability, however, is not raytracing's ability to do more sophisticated lighting very straightforwardly (much of which can be mimicked in awkward ways), but its ability to scale to massive amounts of geometry; e.g. when using octrees, we're looking at O(log n) for n primitives, which beats GPU rasterizing hardware at O(n).
The one critical failure of raytracing, and the reason it is hard to use for games, is that to get O(log n) you (currently) need static geometry. It cannot animate dynamically in realtime, because you have to rebuild the spatial subdivision structure for the animated geometry on a frame-by-frame basis, and that gets expensive.
Perhaps some kind of mix will ultimately be used, but the geometry benefits of raytracing do still make the technology inevitable.
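As a sketch of why spatial subdivision gives sub-linear scaling, here's a toy octree over random point primitives (all names are my own, and points stand in for real triangles): traversal prunes every subtree whose bounding box the ray misses, so the number of nodes tested stays far below the primitive count.

```python
import random

def ray_hits_box(origin, direction, lo, hi, eps=1e-12):
    """Standard slab test: does the ray intersect the AABB [lo, hi]?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, direction, lo, hi):
        if abs(d) < eps:
            if o < l or o > h:  # parallel to the slab and outside it
                return False
        else:
            t1, t2 = (l - o) / d, (h - o) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

class Octree:
    def __init__(self, points, lo, hi, leaf_size=8):
        self.lo, self.hi, self.children, self.points = lo, hi, [], points
        if len(points) > leaf_size:
            mid = [(l + h) / 2 for l, h in zip(lo, hi)]
            buckets = {}
            for p in points:  # assign each point to one of 8 octants
                key = tuple(p[a] >= mid[a] for a in range(3))
                buckets.setdefault(key, []).append(p)
            for key, pts in buckets.items():
                clo = [mid[a] if key[a] else lo[a] for a in range(3)]
                chi = [hi[a] if key[a] else mid[a] for a in range(3)]
                self.children.append(Octree(pts, clo, chi, leaf_size))
            self.points = []

    def count_visited(self, origin, direction):
        """Nodes tested during traversal; a miss prunes the whole subtree."""
        if not ray_hits_box(origin, direction, self.lo, self.hi):
            return 1
        return 1 + sum(c.count_visited(origin, direction)
                       for c in self.children)

random.seed(1)
pts = [(random.random(), random.random(), random.random())
       for _ in range(4096)]
tree = Octree(pts, [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
visited = tree.count_visited((-0.1, 0.37, 0.41), (1.0, 0.01, 0.02))
print(visited, "nodes tested for", len(pts), "primitives")
```

With early exit at the first hit (omitted here for brevity), the visited count drops further still, which is where the roughly logarithmic behaviour comes from.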
Having said that, Larrabee will be stillborn for the first reason outlined.
Things to google for if you want to implement this yourself:
- "Point Matching" - given two frames, which points correspond?
- "Five Point Relative Pose Problem" - recovers the camera motion between frames from five matching 2D point pairs
- RANSAC - filters out the matches that belong to other moving objects; you burn through a lot of five-point combinations until the majority winner emerges.
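RANSAC itself is simple enough to sketch in a few lines. The real pipeline would fit a relative pose from five-point samples; as a stand-in, this toy version (function and variable names are my own) estimates a dominant 2D translation from matches polluted with outliers, which shows the same hypothesize-score-keep loop:

```python
import random

def ransac_translation(matches, iters=200, tol=0.1):
    """matches: list of ((ax, ay), (bx, by)) point pairs between frames.
    Repeatedly hypothesize a translation from one random match, count how
    many other matches agree, and keep the largest consensus set."""
    best = []
    for _ in range(iters):
        (ax, ay), (bx, by) = random.choice(matches)
        dx, dy = bx - ax, by - ay  # model fitted to a minimal sample
        inliers = [((px, py), (qx, qy)) for (px, py), (qx, qy) in matches
                   if abs(qx - px - dx) < tol and abs(qy - py - dy) < tol]
        if len(inliers) > len(best):
            best = inliers
    # refine the model over the winning consensus set
    dx = sum(qx - px for (px, _), (qx, _) in best) / len(best)
    dy = sum(qy - py for (_, py), (_, qy) in best) / len(best)
    return (dx, dy), best

random.seed(0)
true_dx, true_dy = 2.0, 1.0
# 80 genuine matches: same translation plus a little measurement noise
matches = [((x, y), (x + true_dx + random.uniform(-0.03, 0.03),
                     y + true_dy + random.uniform(-0.03, 0.03)))
           for x, y in [(random.random() * 10, random.random() * 10)
                        for _ in range(80)]]
# 20 outlier matches, e.g. points on a different moving object
matches += [((random.random() * 10, random.random() * 10),
             (random.random() * 10, random.random() * 10))
            for _ in range(20)]
(dx, dy), inliers = ransac_translation(matches)
print(round(dx, 2), round(dy, 2), len(inliers))
```

The five-point solver slots in where the one-match translation model sits here: sample five matches, solve for a candidate pose, and score it by how many of the remaining matches it explains.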
Where I hope this area of technology goes is the development of highly efficient HD codecs; I personally can't wait!
It is easier to change the specification to fit the program than vice versa.