I am not even remotely an expert on the matter, but I believe the point is to render at a higher resolution and then scale the image back down, which reduces the need for AA and such.
If you can make the game engine render the content at a high enough resolution, there's less need for post-processing tricks like edge smoothing, as long as you have a way to efficiently scale the image down.
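To make the idea concrete, here is a minimal sketch of the downscaling step: render at 2x resolution, then average each 2x2 block into one output pixel (a simple box filter; actual drivers typically use smarter filters, and the function name and sample values here are just illustrative).

```python
def downsample_2x(image):
    """Average each 2x2 block of a grayscale image (list of rows)
    into a single pixel, halving width and height."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[y][x] + image[y][x + 1] +
             image[y + 1][x] + image[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A hard "jaggy" edge rendered at 4x2...
hi_res = [
    [0, 0, 255, 255],
    [0, 255, 255, 255],
]
# ...becomes a softened 2x1 edge after downsampling:
print(downsample_2x(hi_res))  # [[63.75, 255.0]]
```

The subpixel detail from the oversized render gets blended into the final pixels, which is why edges come out smoothed even with AA disabled.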
For instance, the new Final Fantasy XIII PC port has no graphics options (an update to fix that is supposed to be out tomorrow). You can't even pick a resolution, let alone AA and other settings, so you are stuck at 720p. So the community released a tool (GeDoSaTo) that forces the game to render at 4k and then downsamples it, so that even without AA enabled everything looks as it would if AA were on.
So the question is: is it harder for a GPU to render the scene at 4k with little post-processing, or at 2k with enough post-processing turned on to reach equal image quality? Apparently AMD and NVIDIA both offer driver-level downsampling now (VSR and DSR respectively), so at least some games must either look better or run more efficiently with it.