And this is because when a workstation (a laptop or desktop) hibernates, it writes all allocated RAM to the swap file
Not really, this policy predates hibernation by about three decades. It's so that swapping never needs to allocate new data structures when the machine is already in a memory-constrained state.
This can be as large as RAM, though for speed, it may be smaller in operating systems that store some of their swap file in a compressed RAM disk (such as RAM Doubler on classic Mac OS or zram on Linux). But an operating system still has to provide for the worst case of memory that can't be compressed.
When Linux is using zram, it doesn't follow this policy (actually, Linux doesn't in general). It's impossible to do so sensibly if you're using compression, because you don't know exactly how much space is going to be needed until you start swapping. RAM compression generally works by the same mechanisms as the swap pager, but puts pages that compress well into wired RAM rather than writing them to disk. You can also often compress swap, but that's an unrelated mechanism.
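That unpredictability is easy to demonstrate: the compressed size of a page depends entirely on its contents, so the space a compressed swap device will need can't be known up front. A small Python sketch, with zlib standing in for whatever compressor the kernel actually uses:

```python
import os
import zlib

PAGE = 4096

# A zero-filled page compresses almost to nothing...
zero_page = b"\x00" * PAGE
small = len(zlib.compress(zero_page))

# ...while a random (incompressible) page typically grows slightly,
# since the compressor can do no better than storing it verbatim
# plus framing overhead.
random_page = os.urandom(PAGE)
large = len(zlib.compress(random_page))

print(small, large)  # small is a few dozen bytes; large is around PAGE
```

Two pages of the same size can thus cost wildly different amounts of backing store, which is why reserving swap at allocation time stops making sense once compression is involved.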
Until you actually use hibernation. How often does that happen on a particular work day?
Generally, never. OS X does 'safe sleep', where it only bothers writing out the contents of RAM to disk when the battery gets low, so my laptop never hibernates unless I leave it unplugged for a long time. My servers don't sleep, because if you've got a server that's so idle it would make sense for it to hibernate, then it's better to just turn it off completely. My workstation doesn't hibernate, because the difference in power consumption between suspend to RAM and suspend to disk is so minimal that it's not worth the extra inconvenience.
Some of RAM is used as a cache for the file system, but operating systems should be smart enough to purge this disk cache when hibernating.
Most post-mid-'90s operating systems use a unified buffer cache, so there's no difference between pages that are backed by swap and pages that are backed by other filesystem objects. Indeed, allocating swap when you allocate a page made this even easier, which is why this policy stayed around for so long: you could get away with just having a single pager that would send things back to disk without ever having to allocate on-disk storage for them or care about whether the underlying disk object was a swap file or a regular file.
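That unification is visible even from userspace: with mmap, a file-backed mapping and an anonymous (swap-backed) mapping are created and used through the same interface, and only the backing object the pager evicts to differs. A minimal Python sketch:

```python
import mmap
import os
import tempfile

PAGE = 4096

# A file-backed mapping: its pages are backed by the file itself,
# so the pager can write dirty pages straight back to that file.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, PAGE)
file_backed = mmap.mmap(fd, PAGE)

# An anonymous mapping: its pages have no file behind them, so under
# memory pressure the pager would send them to swap (or compress them).
anonymous = mmap.mmap(-1, PAGE)

# To the process, the two are indistinguishable.
file_backed[:4] = b"file"
anonymous[:4] = b"anon"
```

With a unified buffer cache, one pager handles both kinds of page identically; whether the on-disk object is a regular file or swap only matters at the point of eviction.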
Applications, on the other hand, might not be so smart. Ideally an operating system could send "memory pressure" events to processes, causing them to purge their own caches and rewrite deallocated memory with zeroes so that it can be compressed. The OS would broadcast such an event before hibernation or any other sort of heavy swapping. Do both POSIX and Windows support this sort of event?
POSIX doesn't. Windows has something like this, as does XNU. Mach had it originally, as it delegated swapping entirely to userspace pagers and allowed applications to control their own swapping policies. It's not really related to hibernation, but to memory pressure in general. It's often cheaper to recalculate data or refetch it from the network than to swap it out and back in again, so it makes sense, for example, to have the web browser purge its caches when you get low on RAM, because it's likely almost as fast to re-fetch things from the network as to get them from disk. On a mobile device, with no swap, it's better to let the applications reduce their RAM footprint than to pick one to kill. This works best with languages that support GC, as they can use this event to trigger collection.
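To make the application side concrete, here is a sketch of the purge-on-pressure pattern in Python. `PurgeableCache` and `on_memory_pressure` are invented names for illustration; since POSIX has no portable pressure event, the hook is invoked directly here, where a real program would register it with a platform-specific notification API (Windows memory resource notifications, XNU's memory-pressure sources):

```python
import gc


class PurgeableCache:
    """A cache that drops its contents when the OS signals memory pressure."""

    def __init__(self):
        self.entries = {}

    def on_memory_pressure(self):
        # Re-fetching these from the network later is likely about as
        # fast as paging them back in from disk, so just drop them.
        self.entries.clear()
        # In a GC'd language, a pressure event is also a natural moment
        # to run a collection and return free memory to the OS.
        gc.collect()


cache = PurgeableCache()
cache.entries["/index.html"] = b"<html>...</html>"

# A real program would register on_memory_pressure with the platform's
# notification mechanism; we call it directly for the sketch.
cache.on_memory_pressure()
print(len(cache.entries))  # 0
```

The point is that the application, not the kernel, knows which of its pages are cheap to reconstruct, which is exactly what a pressure event lets it act on.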