Virtual memory was 'invented' long before Windows, and the term has a specific meaning. As others have mentioned, it is a method for making programs believe they have unlimited memory space while sharing the actual physical memory among numerous programs. This 'feature' has a cost - every reference to memory must be translated from a virtual address to a physical one by memory management hardware (and sometimes software). The translation hardware itself (the MMU) has lived on the processor for a long time, but until recently Intel processors relied on a separate chip - the northbridge - for the memory controller, while AMD moved their memory controller on-die a few years ago. Intel lagged in memory performance for those years because the off-chip controller added latency to every access. Moving the controller on-die removes an electrical interface or two, which speeds things up and generally improves efficiency.
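To make that translation cost concrete, here is a toy sketch in C of the kind of lookup an MMU performs on every memory reference. Everything in it is invented for illustration (a single-level table, 4 KiB pages, 16 pages); real hardware walks multi-level page tables and caches recent translations in a TLB.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* 4 KiB pages */
    #define NUM_PAGES 16u     /* tiny toy address space */

    /* page_table[virtual page number] = physical frame number, -1 = unmapped */
    static int page_table[NUM_PAGES] = { 3, 7, -1, 0, -1, -1, -1, -1,
                                        -1, -1, -1, -1, -1, -1, -1, -1 };

    /* Translate a virtual address to a physical one; returns 0 on a "page fault". */
    static int translate(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t vpn    = vaddr / PAGE_SIZE;  /* virtual page number    */
        uint32_t offset = vaddr % PAGE_SIZE;  /* offset within the page */

        if (vpn >= NUM_PAGES || page_table[vpn] < 0)
            return 0;  /* unmapped: the OS would have to step in here */

        *paddr = (uint32_t)page_table[vpn] * PAGE_SIZE + offset;
        return 1;
    }

    int main(void)
    {
        uint32_t vaddr = 0x1abc, paddr;  /* page 1, offset 0xabc */
        if (translate(vaddr, &paddr))
            printf("virtual 0x%x -> physical 0x%x\n", vaddr, paddr);
        else
            printf("page fault at 0x%x\n", vaddr);
        return 0;
    }

Every load and store pays for some version of this lookup, which is exactly why the hardware works so hard to cache translations.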
The original post, I thought, was brilliant. Why are we devoting all this chip real estate (or, in the past, whole chips) to sharing a scarce resource (memory) when that resource is no longer scarce? Granted, virtual memory gives us other advantages, such as ensuring one program doesn't write in the memory space of another, but surely there are other ways to do that. If we did away with virtual memory and returned to the old (ack! DOS) days of physical memory references, we could devote that chip real estate, power budget, etc. to other worthwhile pursuits, like making my twitter pages load faster.