Can you explain in more detail?
I'm not an expert here, but I think the idea is to come up with a single quantifying number that represents the idea that very fast compression has limited utility if it doesn't save much space, and very high compression has limited utility if it takes an extremely long time.
Like, if you're trying to compress a given file, and one algorithm compressed the file by 0.00001% in 14 seconds, another compressed the file 15% in 20 seconds, and the third compressed it 15.1% in 29 hours, then the middle algorithm is probably going to be the most useful one. So why can't you create some kind of rating system to give you at least a vague quantifiable score of that concept? I understand that it might not be perfect-- different algorithms might score differently on different sized files, different types of files, etc. But then again, computer benchmarks generally don't give you a perfect assessment of performance. It just provides a method for estimating performance.
But maybe you have something in mind that I'm not seeing.
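The tradeoff described above can be sketched with a toy score. This is a naive illustration of my own (not a standard metric): reward the space saved, penalize the time spent logarithmically, so that an extra day of runtime can't be justified by a 0.1% gain.

```python
import math

# Naive toy score (an assumption for illustration, not a standard metric):
# space saved divided by the log of the time taken.
def naive_score(percent_saved, seconds):
    return percent_saved / math.log(seconds)

# The three hypothetical algorithms from the comment above.
algos = {
    "A (0.00001% in 14 s)": naive_score(0.00001, 14),
    "B (15% in 20 s)":      naive_score(15.0, 20),
    "C (15.1% in 29 h)":    naive_score(15.1, 29 * 3600),
}

best = max(algos, key=algos.get)  # the middle algorithm wins
```

Under this scoring, algorithm B comes out on top, matching the intuition that a tiny extra gain isn't worth a thousandfold increase in runtime.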
A CompSci sacred cow being slaughtered. See, there is nothing wrong with what you suggested. That's the reason the idea was inserted into Silicon Valley to begin with. So why the bitching about its usefulness? People who spend time in computing as a whole are a fairly rigid lot. A lot of them have Asperger's syndrome, which gives them a leg up on coding while taking away their socialization skills. Others think it's useless because they would prefer terms that dig deeper into the compression and its velocity; they don't believe any single number could do what they want. Finally, the rest just want to stay where they are in terms of terminology, and anything new muddies the water for them. However, the article itself gives the best explanation of why others hate it.
It’s hard to convey to a lay audience that one compression algorithm is better than another—you could compress and decompress images, say, with some loss and look for glitches in the resulting image, but they are hard to spot. And metrics for compression algorithms that rate not only the amount of compression but also the processing speed are hard to find. So the show asked the consultants it brought in to help develop the original algorithm—Stanford Professor Tsachy Weissman and then-PhD student Vinith Misra—to come up with a metric that could be used to score multiple algorithms and find a winner. (Misra recently graduated and will soon be working on IBM’s Watson project.) It seems that someone would have come up with such a metric by now. But, says Weissman, “there are two communities: the practitioners, who care about running time, and the theoreticians, who care about how succinctly you can represent the data and don’t worry about the complexity of the implementation.” As a result of this split, he says, no one had yet combined, in a single number, a means of rating both how fast and how tightly an algorithm compresses. Misra came up with a formula (photo above) incorporating both. Along with existing benchmarks, the formula creates a metric that the show writers tagged the “Weissman Score.” It's not a fictional metric: although it didn’t exist before Misra created it for the show, it works and may soon find use in the real world. “It is essentially the compression ratio and the ratio of the log of the compression time,” Misra explains, “but it then normalizes that number against an industry standard compressor used for the same data. For music, say, we might use FLAC.” (FLAC, or Free Lossless Audio Codec, is an open format from the Xiph.org foundation.)
The saddest thing is that it took the media to force it out there. People here might say it's meaningless, but if Stanford teaches it, it will be, and when these people's friends have grandchildren (because the ratio of posting on Slashdot divided by the propensity of being a neck-beard also yields a number used to determine your reproductive likelihood) who graduate CompSci, it will be as common as SLOC, Halstead complexity measures, and cyclomatic complexity.
This bill was introduced on May 12, 2004, in a previous session of Congress, but was not enacted. The text of the bill below is as of Jun 09, 2004 (Reported by House Committee).
It was never signed into law, as it never made it out of committee. That link you so nicely offer up also offers this:
H.R. 3754 (108th): Fraudulent Online Identity Sanctions Act
Introduced: Feb 03, 2004 (108th Congress, 2003–2004)
Status: Died (Reported by Committee) in a previous session of Congress
How about this one? It's currently alive: S. 2588: Cybersecurity Information Sharing Act of 2014
This bill was introduced on May 12, 2004, in a previous session of Congress, but was not enacted. The text of the bill below is as of Jun 09, 2004 (Reported by House Committee).
To be clear, this means President Bush never signed it into law. It also means that, as it isn't a law, the other person is right unless you can find one that was. As for SonyOnline.net, the best thing would be a redirect to Piratebay.se.
Wrong. The issue is that publishing is considered sufficient.
It should be publish or die. How do you know they're doing anything if they don't publish? They could be watching TV all day for all you know otherwise.
But as is made clear here, simply publishing and getting through peer review is not good enough. We need to increase what they have to do to avoid this situation.
For example... maybe one scientist pays another scientist to reproduce his work.
Maybe you have big collections of graduate students who, as part of the process of getting a degree, get assigned some random papers submitted by scientists in their field and have to reproduce the work.
Obviously this isn't always possible... but whenever it isn't possible, that needs to be put as a giant red asterisk on the paper saying "this work has not been reproduced"...
Do that and you're not going to get as much fraud or laziness.
I agree with this for one reason: it would finally dispense with the climate change insanity. Since it's actually near impossible to get verifiable data about climate older than 50 years that isn't tainted, limited, and subject to interpretation (like core samples or tree rings), we could finally put Al Gore where he belongs: out of sight and out of mind. Each day that passes I am more thankful that the Supreme Court gave the victory to Bush. I only wish it could have happened in 2012 for Romney.
Peter holds a very high standard for himself, I'm sure.
The standard practice is to form an unspoken agreement between several reviewers that they will all favorably review each other's papers.
Peter couldn't find his circle and created a self-circle.
Otherwise known as a circle jerk.