When you "calibrate" swap for specific uses, it becomes non-general. In that situation it is far better to let the application use on-disk storage directly, because _it_ knows the data profile. Sorry, but you fail to understand swap.
Another Slashdot poster adds meaningless posturing, as that is the limit of what he can do.
Whether measuring speed is a meaningful benchmark depends on what you measure the speed of, relative to what, and under what circumstances. There are many situations where "speed" is not meaningful, and others that are limited enough that it is.
However, the metric under discussion will not be meaningful in any but the most bizarre and specific circumstances, hence it is generally useless. For the special situations where it could be useful, it is much saner to adapt an existing metric than to define a specific new one, as that pollutes the terminology.
The uses for that single number are as follows:
a) Some class of people like to claim "mine is bigger", which requires a single number. While that is stupid, most people "understand" this type of reasoning.
b) Anything beyond a single number is far too complicated for the average person watching TV.
In reality, things are even more complicated, as speed and compression ratio both depend on the data being compressed, and do so somewhat independently. Some data may compress really well and fast, other data may compress exceedingly badly but also fast, a third data set may compress well but slowly, and a fourth may compress badly and slowly. So in reality, you need to state several numbers (speed, ratio, memory consumption) for benchmark data, and in addition describe the benchmark data itself, to get an idea of an algorithm's performance. If it is a lossy algorithm, it gets even murkier, as you then typically need several quality measures. For video, you may get things like color accuracy, sharpness of lines, accuracy of contrast, behavior for fast-moving parts, etc.
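A minimal sketch of the data-dependence point: running the same compressor (here Python's `zlib`, chosen just for illustration) over two different inputs produces wildly different speed and ratio numbers, which is why a benchmark that does not describe its data says almost nothing.

```python
# Illustration: the same compressor gives very different (speed, ratio)
# results depending on the input data. zlib is used only as an example.
import os
import time
import zlib

datasets = {
    "repetitive text": b"the quick brown fox " * 50_000,  # compresses well
    "random bytes":    os.urandom(1_000_000),             # barely compresses
}

for name, data in datasets.items():
    start = time.perf_counter()
    compressed = zlib.compress(data, 6)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(data)
    print(f"{name:16s} ratio={ratio:6.1%} time={elapsed * 1000:.1f} ms")
```

The repetitive input shrinks to a few percent of its size; the random input stays essentially uncompressed. One "score" cannot capture both cases.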
It depends far too much on your border conditions. For example, LZO does not compress very well, but it is fast and has only a 64kB footprint. Hence it gets used in space probes, where the choice is to compress with this or throw the data away. On the other hand, if you distribute pre-compressed software or data to multiple targets, even the difference between 15.0% and 15.1% can matter, if it is, say, 15.0% in 20 seconds and 15.1% in 10 minutes.
Hence a single score is completely unsuitable to address the "quality" of the algorithm, because there is no single benchmark scenario.
And a solvent! Probably causes cancer or something...
There is no possibility of a useful single metric. The question obviously does not apply to the problem. Unfortunately, most journals do not accept negative results, which is one of the reasons for the sad state of affairs in CS. For those that do, the reviewers would very likely call this one "trivially obvious", which it is.
A "combined score" for speed and ratio is useless, as that relation is not linear.
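To make the combined-score objection concrete, here is a sketch with invented numbers and an invented scoring formula: any fixed blend of speed and ratio bakes in arbitrary weights, so the "winner" it picks has nothing to do with which compressor is actually better for a given use case.

```python
# Two hypothetical compressors (numbers invented for illustration).
compressors = {
    "fast-but-weak":   {"ratio": 0.50, "mb_per_s": 400},  # LZO-like
    "slow-but-strong": {"ratio": 0.15, "mb_per_s": 2},    # LZMA-like
}

def combined_score(c):
    # A naive linear blend -- it looks objective, but the 50/50 weights
    # and the 400 MB/s speed cap are completely arbitrary choices.
    return (1 - c["ratio"]) * 0.5 + min(c["mb_per_s"] / 400, 1.0) * 0.5

for name, c in compressors.items():
    print(f"{name}: score = {combined_score(c):.3f}")

# This formula crowns "fast-but-weak" -- yet a space probe with a radio
# bottleneck cares only about ratio, and would pick the other one.
```

Change the weights and the ranking flips; no single formula captures both the space probe and the real-time logger.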
Of course it is a gateway drug! And before that, obviously sugar. Which is why sugar should urgently be outlawed!
Incidentally, sugar and fat kill a lot more people than all illegal drugs combined. And seriously, the whole concept of a "gateway drug" has been discredited quite some time ago. People will escalate to a certain level, regardless of the steps before that. But the authoritarian scum that just have to force their views on people can of course not admit anything like that.
i.e., it was just a quick and dirty survey to show that the proportion of people who want slideout keyboard phones is not zero, like the stores are pretending that it is.
Whether Symbian is a good platform or not involves more than just whether the code is functional. Sometimes a lack of applications is driven by a more fundamental weakness in a platform. One of the reasons the iPhone and iPad have done so well courting application developers is that Apple tries to keep everyone marching in formation, moving the platform forward without leaving current customers too far behind. (Their formation, of course, but they are Apple.)
A good example is the "pixel doubling" that went into the early iPad design. That intentionally structured the design of the platform so that applications written for lower resolutions would continue working against the higher pixel counts. That's the sort of subtle thing you do to keep developers happy and application development flourishing.
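The pixel-doubling idea can be sketched in a few lines: a frame rendered at the old, lower resolution is shown on a display with four times the pixels by replicating each source pixel into a 2x2 block. This is only an illustration of the concept, not Apple's actual implementation.

```python
# Conceptual sketch of pixel doubling: each source pixel becomes a 2x2
# block, so an app rendering at the old resolution fills the new display
# without any changes to the app itself.
def pixel_double(frame):
    """frame: list of rows, each row a list of pixel values."""
    doubled = []
    for row in frame:
        wide = [p for p in row for _ in (0, 1)]  # duplicate each column
        doubled.append(wide)
        doubled.append(list(wide))               # duplicate each row
    return doubled

old = [[1, 2],
       [3, 4]]
print(pixel_double(old))  # a 2x2 frame becomes a 4x4 frame
```

The point for developers: existing applications keep working on day one, while updated ones can target the full resolution at their own pace.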
Faced with the same multiple-resolutions problem across its devices, Android leaves the whole mess in the lap of application developers. And Nokia has just abandoned the old stuff. If you're a phone developer, how would you feel about that? A lot of things like that influence whether applications are built for a platform or not.
And, yes, Microsoft has bullied their way into a winning position using their operating system monopoly for a long time, with IE being a good example of that. I don't think it's safe to assume that tactic will keep working anymore, though. I don't know anyone who feels Windows compatibility is an important thing on their phone or tablet today. At best, I might want something that opens Word or PowerPoint documents someone sends me in an e-mail. You don't need Microsoft for that on your phone, though. Their software is only needed if you expect to edit the documents with low risk of corruption, and even that still happens on desktops.