With Linux and pretty much every other OS, you back up the home directory and reinstall over the top of the other partitions.
You and I have very different experiences of Linux-based systems, though admittedly I am mostly using Linux on servers rather than workstations, and really the problems are more about the distro/software running on top of Linux than Linux itself.
My experience of trying to back up a real world Linux system is that you start with backing up /home. Then you also figure out what you need to back up from other places, like /root, /etc, /opt and /var. Some of the configuration files in there will be automatically generated from others, but if you overlook any of the underlying ones, you'll be running at 640x480 forever or your RAID won't be as redundant as you thought. Some of the configuration data will be specific to the particular version of something you currently have installed, and the new version will fail to initialise properly after you've upgraded, because it doesn't update the previous configuration completely and correctly without user intervention. Some of the executable code you run will be under those directories too, because web apps and scripting and interpreters.
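The hunt for scattered state usually ends up encoded in a script. A minimal sketch of what that looks like, assuming an illustrative (and inevitably incomplete) list of directories worth saving:

```python
import os
import tarfile

# Directories that commonly hold state worth backing up.
# This list is an assumption for illustration -- every real
# system accumulates its own additions that are easy to miss.
BACKUP_PATHS = ["/home", "/root", "/etc", "/opt", "/var/lib", "/var/www"]

def backup_paths(paths, dest_archive):
    """Archive every path that exists into a gzipped tarball,
    returning the paths that were actually found and saved."""
    saved = []
    with tarfile.open(dest_archive, "w:gz") as tar:
        for path in paths:
            if os.path.exists(path):
                tar.add(path)
                saved.append(path)
    return saved
```

The returned list is the interesting part: anything you expected to see in it but don't is exactly the overlooked directory that bites you after the restore.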
And that's just with standard applications that are provided with your distro. $DEITY help you if you want to install anything else or need to build anything from source, because no-one else is going to. Try not to allow too many breaking conflicts under /etc or /usr/local, where there are essentially no naming conventions and everything just gets a short/abbreviated name and goes into the global namespace. Oh, never mind, we forgot to add the important things under /usr/local/somedirectorymylastdistrodidntevenhave to the back-up scripts anyway.
And then you upgrade your distro to the next major revision because the price of OS stability in the Linux ecosystem is falling behind with all your applications as well, and... Well, in my entire career, across different organisations and with different teams of sysadmins, I can probably count the number of completely smooth major distro upgrades I've seen on no hands.

On the server side, I now see a lot of "one install only" policies: the expectation of success with any in-place update process is so low that the standard MO is to set up a new clean machine with the new software required, figure out how to migrate specific configuration and data for the essential applications from the old system to the new one, and then retire/reformat the old machine. Even then, the actual applications and packages installed are tightly controlled; there is an entire industry these days making tools like Puppet or Chef or Ansible, because trying to manage these things manually on modern Linux systems is crazy, and making any local changes to standard configurations is frowned upon. Personally I prefer to run Windows for my main workstations for various reasons, but I work with several colleagues who prefer to run Linux workstations, and they seem to run into analogous problems with end user/client applications too.
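Tools like Puppet, Chef and Ansible all converge on the same core idea: declare the desired state and apply it idempotently, so running the same play twice changes nothing the second time. A toy illustration of that model (the resource keys here are invented for the example, not any real tool's syntax):

```python
def apply_state(current, desired):
    """Converge `current` (a dict of resource -> value) towards
    `desired`, returning the list of changes actually made.
    A second run makes no changes: the operation is idempotent."""
    changes = []
    for key, value in desired.items():
        if current.get(key) != value:
            current[key] = value
            changes.append((key, value))
    return changes
```

That idempotency is what makes the "one install only" workflow tolerable: rebuilding a machine is just replaying the same desired state against a clean base, instead of reverse-engineering years of manual tweaks.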
Linux is great in many respects, but with most popular Linux distros, a clean filesystem structure and a clean separation of code, configuration and data are not among them. Maintaining most real world Linux-based systems is absurdly complicated as a direct result.