Would you prefer that we take, say, the new ATI 7xxx series GPUs, set that as a "this is fast enough and has as many features as we'll ever need" benchmark, and focus all future advancement on making GPUs that are exactly the same, only using less power?
The 'need' for advancement comes not just from making players feel more immersed; it's also about meeting the requirements of HPC, namely realtime data visualisation and rapid number crunching. A lot of drug research is done by simulating protein interactions, which lends itself perfectly to running on GPUs (hence the Folding@Home project).
In 10 years' time, most of us will be enjoying our Super-HD screens. Some of us may be enjoying our Super-HD, QLED, 48-bit colour, 120Hz, 3D, high-dynamic-range screens. Some of us may even have 3 of them in a multi-monitor config. That could mean a GPU having to drive a (virtual) display resolution of 12288x2560 (three 4096x2560 panels side by side). At 6 bytes per pixel, that is a minimum 180MB frame buffer needing to be refreshed 120 times per second.
That works out to roughly 21GB/s of scan-out bandwidth (about 7GB/s per panel), versus today's reasonable maximum of around 0.35GB/s (a 1080p panel at 24-bit colour and 60Hz). That will require advances in (GDDR) memory technology as well as the obvious improvements in GPU performance. If we maintain the current pace of advancement, this may be possible within the next 5 years.
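For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope script. The panel sizes, bit depths and refresh rates are my own assumptions spelled out from the figures above, not anything official:

    # Back-of-the-envelope check (my numbers, derived from the assumptions above)
    MiB, GiB = 2**20, 2**30

    # Hypothetical future setup: three 4096x2560 panels, 48-bit colour
    # (6 bytes per pixel), 120Hz refresh
    future_frame = 12288 * 2560 * 6          # bytes per frame
    future_bw    = future_frame * 120        # bytes scanned out per second

    # Today's "reasonable maximum": one 1920x1080 panel, 24-bit colour, 60Hz
    today_frame = 1920 * 1080 * 3
    today_bw    = today_frame * 60

    print(f"future frame buffer: {future_frame / MiB:.0f} MB")    # 180 MB
    print(f"future scan-out:     {future_bw / GiB:.1f} GB/s")     # ~21 GB/s
    print(f"today's scan-out:    {today_bw / GiB:.2f} GB/s")      # ~0.35 GB/s
    print(f"growth needed:       {future_bw / today_bw:.0f}x")    # ~61x

And note that this only counts the cost of scanning the finished frame out to the displays; actually rendering those frames will chew through far more memory bandwidth than that, which is why it's a minimum.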