The first is to have your own offsite storage that you back up to, where the backup is (at least) as large as the original. Multiple people have recommended Crashplan, and that's certainly a viable option. There are undoubtedly other options that could do similar things depending on how far down into the weeds you want to get: rsync, the various rsync-based versioning backup solutions, or git-annex as someone else mentioned, though that one's new to me. I'll note from experience with Crashplan's enterprise product on some older 32-bit servers that the client software can chew up a fairly significant amount of memory when you have a lot of files or data.
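For the rsync-based versioning idea, here's a minimal sketch of how those tools typically work under the hood: hard-linked snapshots via `--link-dest`, so every snapshot looks like a full copy but unchanged files cost no extra space. All paths here are hypothetical placeholders for your own setup.

```shell
#!/bin/sh
# Sketch of hard-linked rsync snapshots. Paths in the example usage
# below are hypothetical -- adjust for your own layout.

# snapshot_backup SRC DEST: copy SRC into DEST/<timestamp>/,
# hard-linking unchanged files against DEST/latest so only changed
# files consume new space.
snapshot_backup() {
    src="$1"
    dest="$2"
    stamp=$(date +%Y-%m-%d_%H%M%S)
    mkdir -p "$dest"
    if [ -e "$dest/latest" ]; then
        # Unchanged files become hard links into the previous snapshot.
        rsync -a --delete --link-dest="$dest/latest" "$src" "$dest/$stamp/"
    else
        # First run: plain full copy.
        rsync -a --delete "$src" "$dest/$stamp/"
    fi
    # Repoint "latest" at the snapshot we just made.
    ln -sfn "$dest/$stamp" "$dest/latest"
}

# Example (hypothetical paths) -- run from cron, e.g. nightly:
# snapshot_backup /data/ /mnt/offsite/backups
```

To restore, you just copy out of any snapshot directory; to reclaim space, you delete old snapshot directories and the hard links take care of the rest.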
The other and probably simpler option: when you start to near capacity on the storage system, don't upgrade it. Build the new system, copy the data over to it, then shut the old one down and store it, preferably not in the same (not-yet-burning) building. After you shut the old one down, keep backups of anything that's changed since that "checkpoint" system; hopefully your data isn't changing that rapidly - 20 TB seems to me almost guaranteed to be mostly static.
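The "changes since checkpoint" part can be as simple as a marker file whose mtime records when the old box was frozen, then backing up only files newer than it. A sketch, with hypothetical paths:

```shell
#!/bin/sh
# Sketch of checkpoint-based incremental backup. After archiving the
# old system, create a marker file whose mtime is the cutoff, e.g.:
#   touch /backups/checkpoint
# (hypothetical path). Only files modified after that need ongoing backup.

# changed_since CHECKPOINT SRC: list files modified after the marker.
changed_since() {
    find "$2" -type f -newer "$1"
}

# Example (hypothetical paths) -- feed the list to tar for the delta:
# changed_since /backups/checkpoint /data | tar -czf /backups/delta.tar.gz -T -
```

Re-run that periodically and the deltas should stay small if the bulk of the 20 TB really is static; a full restore is the archived system plus the latest delta.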