Integer and load/store performance could be increased with pipeline and issue/execution modifications, i.e. by adding more functional units. The limit is keeping the out-of-order engine simple enough to avoid wasting transistors on speculatively executing tons of instructions unnecessarily.
So, given that NVIDIA's choice is either to give up a competitive edge or to intentionally implement its feature set in an obstructionist manner, how is the GPL "good" in this case?
Because it is THE LAW. </Dredd>
Chrome OS also has an autoupdate feature, though not as powerful, unified & transparent as simply using git.
"clever" differential updates usually work this way (Chrome browser uses it or used it back in the day):
And Git cannot do that yet, because it uses diff + deflate, which has far less scope than, e.g., LZMA with a 500 MB dictionary (needing 5 GB of memory to compress with it is acceptable if it is done just once per version).
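A minimal sketch of that dictionary idea, using only Python's stdlib zlib (toy data; the function names are mine, not from any real updater). It also shows exactly the limitation being complained about: DEFLATE only looks back 32 KiB, so at most the tail of the old version is usable as a dictionary, while LZMA-class coders can reference hundreds of megabytes:

    import zlib

    WINDOW = 32 * 1024  # DEFLATE can only reference the last 32 KiB

    def delta_compress(old: bytes, new: bytes) -> bytes:
        # Compress the new version against (the tail of) the old one.
        comp = zlib.compressobj(level=9, zdict=old[-WINDOW:])
        return comp.compress(new) + comp.flush()

    def delta_decompress(old: bytes, blob: bytes) -> bytes:
        # The receiver already has `old`, so it can rebuild the same dictionary.
        decomp = zlib.decompressobj(zdict=old[-WINDOW:])
        return decomp.decompress(blob) + decomp.flush()

    old = b"header v1.0\n" + b"shared payload " * 2000
    new = b"header v1.1\n" + b"shared payload " * 2000 + b"one new line\n"
    blob = delta_compress(old, new)
    assert delta_decompress(old, blob) == new
    print(len(new), "->", len(blob), "bytes")

With a 32 KiB window the win here is modest; the point of a 500 MB LZMA dictionary is that the whole previous release becomes referenceable.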
Because you can?
At 1920x1080 with 32-bit color, the framebuffer is close to 64MiB. This will typically be refreshed at 60Hz, requiring 3.7GiB/s of memory bandwidth.
You're wrong. 32-bit is 4 bytes, so 1920*1080*4 is 7.9 MiB/frame, i.e. 474 MiB/s at 60 FPS. With 20 GB/s memory that is not such a big problem. Of course, a dedicated bus and memory is better.
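A quick sanity check of that arithmetic, in plain Python (numbers taken straight from the posts above):

    width, height = 1920, 1080
    bytes_per_pixel = 4               # 32-bit color
    refresh_hz = 60

    frame = width * height * bytes_per_pixel       # 8,294,400 bytes
    scanout = frame * refresh_hz                   # bytes per second

    print(f"{frame / 2**20:.1f} MiB/frame")        # 7.9 MiB/frame
    print(f"{scanout / 2**20:.1f} MiB/s")          # 474.6 MiB/s, well under 20 GB/s

The parent's "close to 64MiB" figure looks like 32 bits mistakenly treated as 32 bytes, an 8x error that propagates straight into the 3.7 GiB/s bandwidth claim.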
"If it ain't broke, don't fix it." - Bert Lantz