I expect it may be mostly due to the fact that most apps made today are still created with 32-bit in mind (for Windows and Linux). When designing software there is a sweet spot: how much RAM to use, vs. how much to read off slower storage such as a hard disk or download from the cloud, vs. how much to calculate in real time. As technology progresses and prices change, this balance fluctuates. MS-DOS and those old DOS apps were designed around having under 640k of RAM and reading data from the disk. That's why many of the games generated their graphics with vectors: the CPU was fast enough to draw them on the fly, versus trying to store bitmaps in RAM or load them from disk. Then once faster hard disk access came around with larger storage, you got more bitmapped images, because you could read more complex images and display them faster than it would take the CPU to draw them at that quality level. And once RAM broke the 640k barrier, we could keep windowing information around, since we now had enough RAM to run the application plus extra to store the data behind an overlapping window...
Design methods change as technology changes, so your code needs to deal with the new balance of resources available in the systems.
Sometimes we call it bloat, but it is really about having your program take optimal advantage of the available resources to match what the system can do.
I have a program I created on the server that takes over a hundred gigs of RAM. It really flies, because I keep a good portion of the data cached in RAM for retrieval much faster than downloading it from the database. The app I would have written a decade ago wouldn't work like this one, because we didn't have the RAM. It would have been designed around direct read tables in the database holding copies of other data elements, probably burning extra disk space, to get things indexed well enough to run in reasonable time. It may also have needed to be split across multiple servers.
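The RAM-caching pattern I'm describing can be sketched in a few lines. This is just an illustrative example, not my actual server code; the names (`fetch_from_db`, `get_record`) are made up, and the stand-in function fakes the slow database round trip:

```python
import functools

def fetch_from_db(key):
    """Stand-in for a slow database/network fetch."""
    # In the real app this would be a query over the wire.
    return {"id": key, "value": key * 2}

@functools.lru_cache(maxsize=1_000_000)  # bounded in-RAM cache
def get_record(key):
    # First call per key pays the fetch cost; repeat calls hit RAM.
    return fetch_from_db(key)

record = get_record(42)
assert get_record(42) is record  # second lookup came straight from the cache
```

A decade ago the `maxsize` would have been tiny and most lookups would have gone to disk-backed indexes instead; with cheap RAM you can just keep the hot set resident.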