I think you're a bit confused over how normal filesystem operations are cached on a modern OS (e.g. OS X, Linux, BSDs, Solaris, etc). Even normal log writing (whether you run it through a compression program or not) on a normal non-SSD-aware filesystem is not going to result in tiny little writes to the SSD. The writes will simply wind up in the buffer cache (or equivalent) and get flushed to media when the filesystem syncer comes around every 30-60 seconds or so. Even the most highly fragmented case might require the SSD to flush 128KB a dozen times every 60 seconds, which doesn't even remotely wear it out. Only a complete idiot tries to fsync() a log file on each line, so barring that... it isn't an issue.
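If you want to see the difference for yourself, here's a quick sketch (Python, line count and paths are arbitrary) contrasting normal buffered logging with the idiotic fsync-per-line case:

```python
import os
import tempfile
import time

LINES = [f"log entry {i}\n" for i in range(1000)]

def write_buffered(path):
    # Normal logging: writes land in the buffer cache and the filesystem
    # syncer flushes them to media in large batches every 30-60 seconds.
    with open(path, "w") as f:
        for line in LINES:
            f.write(line)

def write_fsync_per_line(path):
    # The pathological case: forcing a media write on every single line.
    with open(path, "w") as f:
        for line in LINES:
            f.write(line)
            f.flush()
            os.fsync(f.fileno())

fd, buf_path = tempfile.mkstemp()
os.close(fd)
fd, sync_path = tempfile.mkstemp()
os.close(fd)

t0 = time.perf_counter()
write_buffered(buf_path)
t1 = time.perf_counter()
write_fsync_per_line(sync_path)
t2 = time.perf_counter()
print(f"buffered: {t1 - t0:.3f}s, fsync per line: {t2 - t1:.3f}s")
```

Same bytes end up in both files; the only difference is how many times the drive is forced to commit them.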
I've heard this complaint many times over the years and not one person has EVER provided any factual information as to what, how much, and how often they are actually writing to the SSD. Not once.
The amount of data being written is always a consideration with a SSD, but if the data is being permanently stored it isn't the issue you think it is: the equivalent cost of storage for archival data is actually better with a SSD, because a SSD in a write-once situation lasts essentially forever. The cells tolerate roughly 1000-3000 rewrites, and you only need to rewrite the full drive once every 5 years or so to refresh them. The SSD will easily last 25 years or longer (probably until the firmware itself degrades), whereas a HDD has to be replaced every 3-5 years whether it's off, idle, or doing work. SSDs are great for archiving stuff. They take virtually no energy when idle, can simply be left attached and powered, and the only real wear occurs when you write.
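The back-of-envelope math here is trivial; using assumed round numbers (not specs for any particular drive):

```python
# Assumed figures: cell endurance in the ~1000-3000 rewrite range cited
# above, and one full-drive rewrite every 5 years to refresh the cells.
pe_cycles = 2000
refresh_every_years = 5

# If refreshes are the only writes, the endurance budget lasts:
years_of_refreshes = pe_cycles * refresh_every_years
print(f"refresh rewrites alone would last ~{years_of_refreshes} years")

# Contrast: a HDD gets replaced every 3-5 years no matter what it does.
hdd_replacement_years = 4
print(f"vs. a HDD swap roughly every {hdd_replacement_years} years")
```

The wear limit is nowhere near the constraint for archival use; something else (firmware, electronics, interfaces going obsolete) dies first.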
For temporarily staged data... this is probably a SSD's one real weakness. There is a wear limit after all, so constantly rewriting the drive at a high rate will wear it out. But this is also a problem with an easy solution: since such data is usually laid down linearly and processed linearly, HDDs are still useful as a storage medium. And simple staging of temporary data doesn't have to touch ANY media if the data trivially fits in ram... you just use a tmpfs mount and schedule a job to process the data at reasonable intervals.
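A sketch of that staging pattern (Python; /dev/shm is a tmpfs mount on most Linux systems, and the filename is made up -- a dedicated mount like `mount -t tmpfs -o size=512m tmpfs /mnt/stage` works the same way):

```python
import os
import tempfile

# Prefer a RAM-backed tmpfs directory; fall back to the regular temp
# dir so the sketch stays portable to non-Linux systems.
stage_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

stage_path = os.path.join(stage_dir, "staged-records.tmp")
with open(stage_path, "w") as f:
    f.write("transient records awaiting the scheduled processing pass\n")

# A cron/timer job would run this part at reasonable intervals:
with open(stage_path) as f:
    processed = f.read().upper()
os.unlink(stage_path)  # the staged data never had to touch the SSD
print(f"processed {len(processed)} bytes staged in {stage_dir}")
```

The staged data lives and dies in ram; the SSD sees zero writes for the whole cycle.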
In terms of swap, again you appear to be confused. Simply placing swap on a SSD is not going to wear it out; what matters is how much the OS actually pages data in and out. In most consumer/home-system situations the answer will be 'not often' (relative to the SSD's wear limit). In a server situation swap is not written to at all under normal operating conditions unless someone made a major mistake. It's just there to handle DoS attacks and burst situations, so the system can be tuned to utilize all of its resources as fully as possible.
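Again, this is measurable rather than a matter of opinion. On Linux, /proc/vmstat counts pages actually swapped in and out since boot; a sketch, assuming that Linux-specific file exists:

```python
import os

def pages_swapped():
    # pswpin/pswpout in /proc/vmstat (Linux) count pages swapped in/out
    # since boot -- the number that actually matters for swap-on-SSD wear.
    counts = {"pswpin": 0, "pswpout": 0}
    with open("/proc/vmstat") as f:
        for line in f:
            key, _, val = line.partition(" ")
            if key in counts:
                counts[key] = int(val)
    return counts

if os.path.exists("/proc/vmstat"):
    c = pages_swapped()
    print(f"pages swapped out since boot: {c['pswpout']}")
```

On a sanely provisioned server that swap-out counter sits at or near zero for months, which is exactly the point.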
For example, a 'tmpfs' (memory filesystem) backed by swap can actually be VERY write-efficient, since the OS isn't going to flush it to its backing media unless the system is actually under memory pressure. If one schedules things such that the system is not normally under memory pressure (the typical case for a server installation), the SSD won't be worn out... but it will be available for those occasional situations that really need it.
Oh well, I don't expect much from Slashdot posters anyway. But, honestly, these things should be obvious to people by now.