>How about 100% cheat prevention? When all the computing is done centrally, how could you possibly cheat in the game anymore?
Most games already have all, or nearly all, of the processing happening on the server. Doesn't stop cheaters. Sure, they can't just memory-edit the amount of gold/points/health/whatever they have anymore, but there are countless other ways to cheat. Think about botters; that doesn't rely on client-side processing. Aimbots in FPS games don't need to rely on client-side processing (from the game, anyway) either; they detect enemies on screen and simulate mouse movement to auto-aim.
Think again. When all processing is done on the server and only the screen is sent to the client, wallhacks become impossible. Aimbots? Your aimbot had better be able to identify an opponent's head from the displayed graphics alone. Not to say that's as difficult as doing it on real video footage, but it raises the bar so high that anyone able to pull it off would have to be quite an expert in pattern and facial recognition; it's no longer just a matter of finding the opponent's coordinates in the data stream.
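A toy contrast of the two cheat models described above: with client-side game state, a cheat just reads coordinates out of memory, while with only a rendered frame it has to recognise the enemy in the pixels. The frame, colour values, and coordinates here are made up for illustration, and even this toy pixel scan only works because the target colour is unique and exact, which real graphics never are.

```python
# Cheat model 1: game state lives on the client -- a trivial lookup.
game_state = {"enemy_pos": (412, 97)}
print(game_state["enemy_pos"])          # (412, 97)

# Cheat model 2: only a rendered frame is available -- the bot has to
# scan pixels and pattern-match. (Illustrative 640x360 frame; one pixel
# painted with a made-up "enemy" colour.)
ENEMY_COLOUR = (200, 30, 30)
frame = [[(0, 0, 0)] * 640 for _ in range(360)]
frame[97][412] = ENEMY_COLOUR

found = next(
    (x, y)
    for y, row in enumerate(frame)
    for x, px in enumerate(row)
    if px == ENEMY_COLOUR
)
print(found)                            # (412, 97)
```

Real screen-space detection would face lighting, animation, occlusion, and camera angle, which is exactly why the bar is so much higher.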
>Plus, it totally eliminates the lag factor in FPS, as only the central server does the processing and rendering. Rubberbanding and blinking/shifting enemies will be eliminated.
No it doesn't. In fact, it's going to make it worse. It might *look* different, but the actual effect will be more detrimental to your gameplay. If you have lag issues with a very, very small amount of information being transferred (i.e. position updates), then what the fuck makes you think upping it to uncompressed 1080p video streaming is going to improve things? Instead, it'll be like trying to watch a YouTube video that's constantly buffering. Even if, and I stress "if," it were to stream halfway decently for you, you'd still feel the effects of the lag. Everything will feel sluggish, and the controls will always seem to trigger actions much later than when you pressed the button.
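A back-of-the-envelope check of the bandwidth point above. The resolution and bit depth come from the thread (1920×1080, 24 bits per pixel); the 60 fps frame rate is my own assumption for illustration.

```python
# Uncompressed 1080p streaming bandwidth, assuming 60 fps.
WIDTH, HEIGHT = 1920, 1080
BITS_PER_PIXEL = 24
FPS = 60  # assumed frame rate, not from the original post

bits_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL
mbps = bits_per_frame * FPS / 1_000_000  # megabits per second

print(f"{bits_per_frame / 8 / 1_000_000:.1f} MB per frame")  # 6.2 MB per frame
print(f"{mbps:.0f} Mbit/s uncompressed")                     # 2986 Mbit/s uncompressed
```

Roughly 3 Gbit/s sustained, versus the few kilobits per second that position updates need, which is the entire point.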
You're assuming current video-streaming technology is the only solution. But given that the server has all the original 3D information used to generate the frame in the first place, many other techniques become available. E.g. regardless of resolution, the server could send just the polygons that need to be drawn for a frame, then only the changes for the next frame, and so on, which would cut the data size tremendously. Instead of 1920×1080×24 bits ≈ 6MB per frame, the positions of ten thousand vertices take only ~120KB (three 4-byte coordinates each). With textures stored locally, on the order of 100KB per frame from the server could give the client enough information to render the image.
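The size comparison above can be checked directly. The vertex format is my assumption (10,000 vertices, each with x/y/z as 32-bit floats); the post doesn't specify one.

```python
# Per-frame data sizes: raw pixels vs. geometry.
frame_bytes = 1920 * 1080 * 24 // 8   # uncompressed 1080p frame
vertex_bytes = 10_000 * 3 * 4         # 10k vertices, three 4-byte floats each

print(f"frame:    {frame_bytes / 1e6:.1f} MB")                  # frame:    6.2 MB
print(f"geometry: {vertex_bytes / 1e3:.0f} KB")                 # geometry: 120 KB
print(f"ratio:    {frame_bytes / vertex_bytes:.0f}x smaller")   # ratio:    52x smaller
```

Index buffers, normals, and animation data would add overhead, but the geometry stream stays orders of magnitude below raw pixels.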
As for control latency, it's there in current FPS games anyway; it's just a matter of how well the game masks it. Actually, it's worse in current FPS games, because my movement has to make not only the first hop to the server, but one more hop to my opponent's machine before I stop getting hit from his POV.
If you've played an FPS, you'll know the experience of turning a corner to hide, only for the game to tell you two seconds later that you were actually hit and killed before you made the corner, because your opponent was lagging so badly that he didn't receive your movement data until after he'd already shot you.
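The hop-count argument above can be sketched with some illustrative numbers. The latencies are assumptions (a well-connected player, a badly lagging opponent), not measurements, and real netcode adds interpolation and lag compensation on top.

```python
# One-way latencies, milliseconds (illustrative assumptions).
CLIENT_TO_SERVER_MS = 40     # you <-> server
SERVER_TO_OPPONENT_MS = 200  # server <-> a badly lagging opponent

# Server-rendered streaming: input goes up, the rendered frame comes
# back down; only your own round trip matters.
streamed_feedback = 2 * CLIENT_TO_SERVER_MS

# Conventional FPS: your dodge must also propagate to the opponent's
# machine before his shots stop registering against your old position.
conventional_dodge = CLIENT_TO_SERVER_MS + SERVER_TO_OPPONENT_MS

print(streamed_feedback)     # 80
print(conventional_dodge)    # 240
```

Under these assumptions, the "killed around the corner" window depends on the opponent's connection in the conventional model, but only on your own in the server-rendered one.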
>With only 1 copy of the world, the number of players is limited only by the number of CPUs rendering from each player's POV, and that is probably easier to scale since the rendering process is read-only. So you can have a MASSIVE number of players in the same game: imagine hundreds of players all on the same battlefield, with that limit raised by a server upgrade instead of waiting 5 years for another console generation.
This can already be done, and it doesn't require rendering the video output on the server, which would demand even more massive servers.
This can already be done?? Which console FPS gives you 100+ players in a single game?