
## Comment Re:Last prize really Ig Nobel? (Score 1) 111

The only way this "scientific" paper could have been given a prize is that it rubs people's preconceived notions the right way: the grandparent post is living proof of this. But scientifically, it's absolutely worthless.

Here is the paper in a nutshell: if you operate under the crazy assumption that the competence of someone has absolutely NO IMPACT WHATSOEVER on how well they will do their job when they get promoted to a higher level, then it makes no sense to promote skilled people since they won't do any better than any bottom-of-the-ladder grunt. And it's actually counter-productive: because those skilled people you promoted will get assigned a new (unrelated) random skill level, the average skill level will drop more if you promote skilled people than if you promote unskilled people...

Now these are the WTFs that come to mind:
- why on Earth did they need a crazy numerical simulation to figure that out?
- why on Earth did they not put a sensible explanation like this one anywhere? Before diving into the paper, the abstract and conclusions were so devoid of any insight that I was expecting something much more subtle and harder to explain than the trivial reason I outlined above.

The really disappointing part is not that they have a completely unrealistic model, it's that they hide it behind fancy-looking graphs so that the science appears superficially sound. But before you call me a naysayer, I'll throw in some constructive criticism. Here is a simple way to analyze the problem that could have saved some computer-cluster energy: let Xi be a random variable describing the current value of employee number i, and let Yi be his value at his new job if he were to be promoted. When a new higher-level position needs to be filled, we seek the i maximizing E(Yi | Xi=xi) + Sum(j!=i) xj, which is equivalent to maximizing E(Yi | Xi=xi) - xi. Hence, if the E(...) term depends neither on i nor on xi (as in their hypothesis), the best move is to minimize xi, i.e. promote the lowest-skilled person (which in the real world makes no sense at all, as such people are likely to be complete newbies, difficult to work with, not giving a shit, or otherwise moronic).
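If you don't trust the formula, a few lines of Monte Carlo reproduce the conclusion under their hypothesis; note that the uniform-[0,1] skill model and the three example skill values below are my illustrative choices, not numbers from the paper:

```python
import random

def expected_total_after_promotion(skills, promote_idx, n_trials=10_000):
    """Expected total skill after promoting employee promote_idx, under the
    paper's hypothesis: the promoted employee's skill at the new job is an
    independent draw (uniform on [0,1] here); everyone else keeps theirs."""
    rng = random.Random(0)  # fixed seed: same E[Y] estimate for every candidate
    total_rest = sum(s for i, s in enumerate(skills) if i != promote_idx)
    e_new = sum(rng.random() for _ in range(n_trials)) / n_trials
    return total_rest + e_new

skills = [0.9, 0.5, 0.1]  # illustrative skill levels
totals = [expected_total_after_promotion(skills, i) for i in range(len(skills))]
best = max(range(len(skills)), key=lambda i: totals[i])
# best == 2: promoting the least-skilled employee maximizes expected total skill
```

Since E(Y) is the same for every candidate, the ranking is decided entirely by the skill left behind, which is exactly the E(Yi | Xi=xi) - xi argument above.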

So that's how you can prove their simulation result. However, you can go further: if you look at the last formula, you see that there are two terms: a term E(Yi | Xi=xi) that increases as xi increases (skilled people are good at their current job, so they are more likely to be good at their next job), and a term -xi that decreases as xi increases (it's better not to risk losing a valuable person+job combination: that's why it's well known that being indispensable in your specific position is bad for your career, as management will be reluctant to promote you). So the real job of management is to understand those two contrary goals and balance the forces due to "skill" and to "inertia". You shouldn't underestimate the first term and promote random or unskilled people, just as you shouldn't underestimate the second term and blindly promote the single most valuable employee.
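To make the balance concrete, here is a toy sketch with an assumed linear model E(Y | X=x) = a + b*x; the linear form and all the numbers are mine, not the paper's. The optimal promotee flips depending on whether skill transfers weakly or strongly to the new job:

```python
def best_promotee(skills, a, b):
    """Pick i maximizing E(Yi | Xi=xi) - xi under a toy linear model
    E(Y | X=x) = a + b*x.  The constant a drops out of the comparison,
    so only the sign of (b - 1) matters."""
    return max(range(len(skills)), key=lambda i: (a + b * skills[i]) - skills[i])

skills = [0.9, 0.5, 0.1]
low  = best_promotee(skills, a=0.2, b=0.5)  # skill transfers weakly: "inertia" wins
high = best_promotee(skills, a=0.2, b=1.5)  # skill transfers strongly: "skill" wins
# low == 2 (least skilled), high == 0 (most skilled)
```

Neither extreme is realistic on its own; the point is only that the answer depends on how strongly current skill predicts future skill, which the paper assumes away entirely.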

And that's where you see why using a numerical model while wearing a blindfold is a bad idea: not only is it overkill for simple phenomena like this one, but it also deprives you of a deeper understanding of the subject. Don't get me wrong, what I said in the previous paragraph wasn't all that deep: I'm pretty sure most competent managers have internalized this equilibrium without the fancy statistical notation; but at least it goes way deeper than the paper's computer simulation. At the end of the day, a manager reading the Ig Nobel paper is going to be misled into thinking there is proof that he should disregard skill (or, on the contrary, that he should disregard scientific papers), while being offered no reason apart from scary computer models. True science is about enlightening people by giving them tools to understand reality; this article is about getting mainstream media coverage by giving pseudo-proof of a popular theory, with no concern for scientific honesty or a wider search for truth. If you think I'm being unfairly harsh: the authors have a webpage dedicated to media coverage of their paper, so they are clearly comfortable with it reaching a wide audience, yet didn't feel it necessary to add any clarification of its scope.

## Comment Re:What the? (Score 1) 487

This may be true in the US, but apparently not in the Netherlands: the patent lawyer he contacted told him Shazam would have a case if he published the code.

## Comment Re:What the? (Score 1) 487

> Even then, code is speech until you run it. Are we now to limit free speech by government order to protect their patents?

By that logic you could freely distribute an infringing program as long as you don't run it. So yes, free speech is limited in some way.

> If the hardware store sells me a CNC mill and I make patented widgets with it will they sue the hardware store?

No, but if they also gave you pre-milled parts of a patented widget and instructions to assemble them together they would sure as hell be liable.

## Comment Re:Patent and disclosure... (Score 2, Interesting) 487

> Remember in a software patent all you need to say is "a method for identifying music playing by listening to a small sample and comparing to a list of sonic fingerprints" and you are pretty much all set.

You're referring to the description, which has little legal effect. What they can really take to court is any of the claims they have listed. Their main claim is:

> A method of characterizing a relationship between a first and a second audio sample, the method comprising:
> generating a first set of fingerprint objects for the first audio sample, each fingerprint object occurring at a respective location within the first audio sample, the respective location being determined in dependence upon the content of the first audio sample, and each fingerprint object characterising one or more features of the first audio sample at or near each respective location;
> generating a second set of fingerprint objects for the second audio sample, each fingerprint object occurring at a respective location within the second audio sample, the respective location being determined in dependence upon the content of the second audio sample, and each fingerprint object characterising one or more features of the second audio sample at or near each respective location;
> pairing fingerprint objects by matching a first fingerprint object from the first audio sample with a second fingerprint object from the second audio sample that is substantially similar to the first fingerprint object;
> generating, based on the pairing, a list of pairs of matched fingerprint objects;
> determining a relative value for each pair of matched fingerprint objects;
> generating a histogram of the relative values;
> and searching for a statistically significant peak in the histogram, the peak characterizing the relationship between the first and second audio samples.

which is not nearly as vague. But it's still very basic and obvious stuff. It doesn't seem easy to implement an efficient fingerprinter that avoids this patent, since you would basically have to throw away all the inter-feature timing information to avoid doing something equivalent to their peak-histogram step.
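To show just how basic the claimed steps are, here is a toy sketch of my reading of the claim; the (time, hash) fingerprint representation and the crude majority-vote significance test are placeholder assumptions of mine, not Shazam's actual method:

```python
from collections import Counter

def match_offset(fp_db, fp_query):
    """Pair fingerprint objects with equal hashes, histogram the time
    offsets between paired objects, and report the offset only if its
    peak is "statistically significant" (here: a crude majority vote)."""
    by_hash = {}
    for t, h in fp_db:
        by_hash.setdefault(h, []).append(t)
    offsets = Counter()
    for t_q, h in fp_query:
        for t_db in by_hash.get(h, []):
            offsets[t_db - t_q] += 1  # "relative value" of the matched pair
    if not offsets:
        return None
    offset, votes = offsets.most_common(1)[0]
    return offset if votes > len(fp_query) // 2 else None

db    = [(0, 'a'), (3, 'b'), (7, 'c'), (9, 'a')]
query = [(0, 'b'), (4, 'c'), (6, 'a')]  # the db fingerprints shifted by 3
match_offset(db, query)  # -> 3
```

A dozen lines cover pairing, relative values, the histogram, and the peak search, which is roughly the whole main claim.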

I'm shocked at how such broad claims can be accepted by patent offices...

## Comment Re:What the? (Score 2, Informative) 487

His blog post contains a lot of code, making it dangerously close to a full implementation, although even their lawyers don't seem entirely confident in this interpretation, since they only mentioned the blog post in their last e-mail.

## Comment Re:Exceedingly silly (Score 1) 304

Erratum: when I said variance (second central moment) I didn't mean variance but the expected value of X^2 (second moment about 0).

Also, someone should mod up the post below about wedgelets. The useful properties I was mentioning in the parent are common to all forms of wedgelet approximation.

## Submission + - Laptop specs for AVCHD editing?

An anonymous reader writes: I'm intent on buying a laptop to do some on-location video editing while filming, but am unsure about the specs. The convenience of just plugging in an AVCHD camera and being able to edit the clips immediately (without converting) is very appealing, provided the editing timeline can be previewed in real time (i.e. rendered sufficiently well on the fly). Budget is obviously an issue, but a laptop is harder to expand and I'm only counting on being able to add external USB 3 hard drives to my setup, so I'd have to get the rest right immediately by ordering a customized machine. I've already browsed plenty of forums but have not yet found any reliable answers or consensus, which is why I turn to my fellow Slashdotters.

## Comment Re:Exceedingly silly (Score 4, Insightful) 304

I completely agree with you that everything in the paper seems to be pulled out of thin air... But I do see two reasons why his compression algorithm might be better than JPEG or other lossy codecs in some situations:
1) the decompression performs no arithmetic on the pixels, so you can perform gamma correction or colour changes losslessly (just as in a square-pixel image);
2) aside from the choice of mask, the compression is entirely deterministic, which is a plus in scientific imaging: when you have a "triangular pixel" with value 200, you know that the average of that zone was exactly 200 (with JPEG you can't know anything for sure, as the compressor may add artefacts or remove detail as it sees fit).

> Why are you maximizing contrast instead of minimizing error like any sane person would do, WHY?

In fact they are equivalent, assuming that the masks are equal-area:

square of RMS error
= Variance(residual)
= 1/2 * ( Variance(maskedimage1) + Variance(maskedimage2) )

(the last step uses the equal areas). Since E(maskedimage1^2) + E(maskedimage2^2) remains constant (we just shuffle pixels between both masked images when we change the mask), and E(X^2) = Variance(X) + average^2 for each masked image, minimising the error is equivalent to maximising

average1^2 + average2^2
= 1/2 * ( (average1-average2)^2 + (average1+average2)^2 )

Since average1+average2 also remains constant (again because the masks are equal-area), this is equivalent to maximising

(average1-average2)^2

which gives us the maximum-contrast method.
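You can sanity-check this equivalence numerically; here is a sketch on a toy 4-pixel "image" (the pixel values are illustrative), enumerating every equal-area two-region mask:

```python
from itertools import combinations

pixels = [3.0, 7.0, 2.0, 8.0]  # a tiny 4-pixel "image"

def split(mask):
    inside  = [p for i, p in enumerate(pixels) if i in mask]
    outside = [p for i, p in enumerate(pixels) if i not in mask]
    return inside, outside

def mse(mask):
    """Mean squared error when each region is replaced by its average."""
    a, b = split(mask)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return (sum((p - ma) ** 2 for p in a)
            + sum((p - mb) ** 2 for p in b)) / len(pixels)

def contrast(mask):
    """Squared difference of the two region averages."""
    a, b = split(mask)
    return (sum(a) / len(a) - sum(b) / len(b)) ** 2

# every equal-area partition (pixel 0 fixed on one side to skip complements)
masks = [{0} | set(c) for c in combinations(range(1, 4), 1)]
best_mse      = min(masks, key=mse)
best_contrast = max(masks, key=contrast)
# both criteria select the same partition
```

The minimum-error mask and the maximum-contrast mask coincide, as the derivation predicts; with unequal areas the constant-sum arguments above no longer hold and the two criteria can diverge.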

## Submission + - Amateur programmer meets software patents (google.com)

Roy van Rijn writes: A couple of weeks ago, in a spare weekend, I wrote software that could recognise music by listening to the microphone, much like SoundHound and Shazam. After popular demand, I was just about to release the code to the open-source community when I got an email from Landmark Digital Services LLC. They claim my hobby project infringes their patents. This took me on a journey to find out more about software patents and the validity of the demands I got from the company.

I have a theory that it's impossible to prove anything, but I can't prove it.
