
I agree that on the face of it this looks like it won't work, but I can see many mitigating circumstances that mean it just _might_ work.
I think there's a small chance that they might actually be able to pull it off, and if they do it really is a game-changer.
A couple of things that make me hesitant to write people off as idiots just because they don't dismiss this before it has even seen the light of day:
- They are aiming for The Long Tail of gaming, and I think it's easy to underestimate just how gigantic the amount of cash in this tail is
- Not ALL games are hyper timing sensitive
- Multiplexing hardware means the same computer can serve Stan in Portland and Sanjay in New Delhi at different times of day (but admittedly only if there are good pipes or the game is not super lag-sensitive).
- Computing power can be spent or sold in other ways when it's not used for the OnLive gaming system (just look at how Amazon has turned their knowledge of scalability into a nice side business that doesn't involve books)
- For the most timing-sensitive games (first-person shooters), you remove the client-to-client lag, which means the server can run a single cohesive view of the world and pipe that to the players (so you get rid of one type of lag, which might leave enough room in the latency budget for the server-to-client video lag; see the rough sketch after this list)
- If this gets big, or they have good partner deals from the beginning, games might get engineered specifically for this network topology on the game developers' side, with steps taken to minimize lag problems (I can come up with quite a few ideas just off the top of my head)
- If the video algorithm is designed for gaming (as it is), they can degrade quality in the video compression in a smart way to keep the lag to a minimum (who cares if the leaves on the trees in your peripheral vision are a bit blocky when you're in a firefight in Crysis?)
- They have a few pretty strong industry names on their company roster
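To put rough numbers behind the point about removing client-to-client lag, here is a back-of-the-envelope sketch. All figures are illustrative guesses of mine, not anything OnLive has published.

```python
# Back-of-the-envelope latency budgets. All numbers below are illustrative
# guesses, not measured OnLive figures.

# Traditional online shooter: each client renders locally, but what you do is
# only visible to another player after a full round trip through the server
# plus the interpolation buffer most netcode uses to hide jitter.
traditional_cross_player = {
    "input_sample_and_local_render_ms": 50,
    "client_to_server_ms": 40,
    "server_to_other_client_ms": 40,
    "interpolation_buffer_ms": 100,
}

# Cloud-rendered setup: one authoritative simulation on the server, video
# streamed back. There is no client-to-client lag left to hide, but every
# frame pays for encoding, the one-way trip back, and decoding.
cloud_input_to_photon = {
    "input_to_server_ms": 40,
    "simulate_and_render_ms": 30,
    "encode_ms": 10,
    "server_to_client_ms": 40,
    "decode_and_display_ms": 15,
}

print("traditional, as seen by the other player:", sum(traditional_cross_player.values()), "ms")
print("cloud, input-to-photon for every player: ", sum(cloud_input_to_photon.values()), "ms")
```

The exact numbers don't matter (I made them up); the point is that the streaming pipeline replaces the cross-player lag instead of simply stacking on top of it.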
That said, I am of course also highly sceptical, but I see a sliver of a chance that they might pull it off. And if they do, I really think it will be a game changer (pun intended).
Where do you get the 500 ms from?
http://video.google.com/videosearch?hl=en&safe=off&q=microsoft%20e3&um=1&ie=UTF-8&sa=N&tab=wv#
Seriously, has anyone actually paid attention to the stage demo?
Take a look at the first bit with Ridiculous Sunglasses Guy and his avatar - he strikes small, uncomplicated poses and the avatar twists itself into pretzels.
It's _extremely_ glitchy.
Then they cut to the girl playing Ricochet, which looks something like 500+ ms lagged, and it seems clearly impossible to control with any kind of precision. The lag suggests to me that they are using a tremendous amount of smoothing to try to avoid some serious jitter problems.
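To illustrate why heavy smoothing shows up as lag: a plain exponential filter (one common way to damp jitter in noisy skeleton tracking; I obviously have no idea what filter they actually use) trails a sudden movement by more frames the harder you smooth. A minimal sketch:

```python
def smooth(samples, alpha):
    """Exponentially smoothed copy of samples (alpha=1.0 means no smoothing)."""
    out = [samples[0]]
    for s in samples[1:]:
        out.append(alpha * s + (1 - alpha) * out[-1])
    return out

STEP_AT = 5                      # frame at which the "hand" jumps from 0 to 1
step = [0.0] * STEP_AT + [1.0] * 60

for alpha in (0.5, 0.2, 0.1):
    filtered = smooth(step, alpha)
    reach = next(i for i, v in enumerate(filtered) if v >= 0.9)
    frames = reach - STEP_AT + 1  # frames needed for the output to cover 90% of the move
    print(f"alpha={alpha}: {frames} frames ≈ {frames * 1000 // 30} ms at 30 fps")
```

With alpha at 0.1 the filtered hand takes over 20 frames (roughly 700 ms at 30 fps) to catch up with a sudden move, which is exactly the kind of mushy delay visible on stage.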
It looks like it will fall as short of their glitzy marketing-video promise as the Wii controller did.
Any game that is not frustrating to control with this technology will basically be playing itself with small cues from you.
I think it's great that there is work being done in these areas, but I am just astounded that so many people are so readily regurgitating the marketing promises for this technology, when they can't even demonstrate it halfway convincingly under completely controlled conditions.
As a game developer myself, I can tell you one of the reasons why game developers often use finite state machines for AI instead of advanced neural networks with clever machine learning algorithms: it's orders of magnitude easier to analyze and understand (and thus debug and fix) how and why an FSM does what it does than it is for a complicated neural network.
When you're making a game, you want results that are easy to predict and easy to schedule - if you decide to make advanced AI and train the NPC behaviors, it's hard to schedule and very hard to pinpoint and definitively fix a problem where one or more NPCs suddenly start acting extremely strange and un-human. And it's just as hard to fix if they become too clever.
It's one of those cases where simple models get you most of the way, and they're more reliable and much cheaper to develop (in terms of both processing time and implementation time).
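To make that concrete, here is a deliberately tiny sketch of the kind of FSM I mean (a made-up guard NPC, not code from any shipping game):

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

# A hypothetical guard NPC. Every transition is written down explicitly,
# which is exactly why this is easy to debug: if the guard "acts weird",
# you can read off which rule fired.
def next_state(state, dist_to_player, can_see_player):
    if state == State.PATROL:
        return State.CHASE if can_see_player else State.PATROL
    if state == State.CHASE:
        if dist_to_player < 2.0:
            return State.ATTACK
        return State.CHASE if can_see_player else State.PATROL
    if state == State.ATTACK:
        return State.ATTACK if dist_to_player < 2.0 else State.CHASE
    return state

# Trace a few frames of simulated input.
state = State.PATROL
for dist, seen in [(30, False), (12, True), (5, True), (1.5, True), (8, False)]:
    state = next_state(state, dist, seen)
    print(dist, seen, "->", state.name)
```

Every transition is an explicit line you can read, log, and breakpoint. With a trained network the same behavior is smeared across thousands of weights, and "the guard acts weird in level 3" turns from a bug fix into a research project.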
Prediction is very difficult, especially of the future. - Niels Bohr