Comment Re:Surprising in its unsurprisingness (Score 1) 833

They've been posting things that embarrass the government and affect its public image.

Specifically, I think you mean the US government. One thing (not the only thing though) that bothers me about Wikileaks is that it seems to be exclusively, or at least principally, dedicated to embarrassing the US government.

Here's one that I'm particularly OK with. If I recall correctly, this was the first time that I had heard about ACTA.

Comment Re:Order of Magnitude (Score 1) 360

Interesting. After installing the platform preview and confirming the results from the article (1.0 ms, +/- 0%, exactly the same as Sayre and the author, even though the other timings were different), I get the same results when running your test as well.

Yeah, bit twiddling makes things faster, but I can think of a logical explanation for that one.
After you run "n += i" 92682 times, n no longer fits in 32 bits, whereas all of the bitwise operations stay comfortably within that range for every value of i, so the engine can stick with fast 32-bit integers instead of falling back to a "much slower" number representation.
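A quick way to check that cutoff (my own throwaway snippet, assuming the loop counter starts at 1; it is not part of SunSpider):

// Find the first iteration at which the running sum of "n += i" no longer
// fits in an unsigned 32-bit integer.
var n = 0;
var i;
for (i = 1; n <= 0xFFFFFFFF; i++) {
  n += i;
}
console.log(i - 1, n); // 92682 4295022903 -- just past 2^32 - 1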

A more far-fetched but powerful explanation is that it could be optimizing certain kinds of iterated bitwise operations over the whole range of loop values. Iteratively applying |=, ^=, or &= can each be reduced to O(1) work. Bitwise AND: everything collapses to zero within the first couple of iterations. Bitwise OR: the low ceil(log-base-2 of 50000000) == 26 bits of the result all end up set. Bitwise XOR is more complicated, but doable: for even i, XORing the running value n with i and then with i+1 just flips the lowest bit (n becomes n-1 if n is odd, or n+1 if n is even), and the next even/odd pair flips it back to n. So the running value cycles between n and n +/- 1 on every odd value of i, which means a simple modulo check gives you the value at any odd i, and one extra XOR gives it to you at an even i.
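To make the XOR case concrete, here is the closed form I have in mind (my own sketch; I am not claiming any engine actually does this):

// XOR of all integers 0..m follows a period-4 pattern.
function xorUpTo(m) {
  switch (m % 4) {
    case 0: return m;
    case 1: return 1;
    case 2: return m + 1;
    default: return 0; // m % 4 === 3
  }
}

// O(1) equivalent of: for (var i = 0; i <= last; i++) n ^= i;
function iteratedXor(n, last) {
  return n ^ xorUpTo(last);
}

// Sanity check against the naive loop over a small range:
var naive = 12345;
for (var i = 0; i <= 49; i++) naive ^= i;
console.log(naive === iteratedXor(12345, 49)); // true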

I say this is "far-fetched", because it seems like this kind of optimization would rarely be worth the cost.

Next test (it also helps to explain what changes when "g = n" is commented out):
"n += 1; n -= 1;" This adds 1 to and then subtracts 1 from n on each iteration, which could hypothetically be optimized away entirely. Observe what it does with "g = n" commented out and uncommented. I'd recommend increasing the max value of i by a factor of 100 to try to reduce the noise; each run should take about a minute and a half on the machine where you got ~1600 ms for your code.

Hope this helps.

Comment Re:Order of Magnitude (Score 1) 360

That's why I'm suspicious about this: dead code probably should not cause an order of magnitude increase in running time.

Actually, that's precisely what a good dead code elimination will cause. Consider this loop in C++:

You're right. I should have narrowed the scope of my claim. My apologies.

That's what I was trying to say: it's more likely a symptom of cheating (at least some level of catering to the benchmark rather than to the JS code) than of a botched optimizer that can consistently (95% confidence interval: +/- 0.0%) optimize some code down to exactly 1.0 ms, yet "somehow" performs far, far worse than other existing browsers when the source is changed in a trivial way, and "all of a sudden" becomes way more inconsistent (95% confidence interval: +/- 1.9%).

Two things of note here.

First, this behavior only shows up on one particular function in one specific test of the entire suite. In fact, this is precisely how it was found: the result is so ridiculous (IE 10x faster than all other browsers!) on that particular test, and not on any other, that it immediately stands out, and that is what prompted an investigation. (If you RTFA, it's actually old news; it just took a while to gather all the evidence and officially submit it to MS as a bug report.)

That's not how any sane person would cheat - you'd nibble a few milliseconds off here and there, showing advantageous but realistic figures across all tests. Especially with closed source, that is nigh impossible to catch.

That's how I would do it, too. Microsoft has a lot of smart people -- I'm sure many of them would have thought of that too.
But that didn't happen here. For some reason, it runs too quickly, quickly enough to warrant further scrutiny.

Second, it's not "far worse than other existing browsers" when changing the source. It actually still beats FF4 in this test even with the change! Chrome and Opera are faster still, but it's not like the difference there is 2x or even 1.5x.

Actually, when you ignore the anomalous 1.0 ms, it takes around 2x as long as Chrome and 2x to 3x as long as Opera. I won't install FF4 to try it out, so I'll have to give you the benefit of the doubt on that one.

Ultimately this needs more testing. Best of all would be to try to find some other pattern of dead code that is clearly unrelated to this test (so it couldn't be "wrongly detected" if this is a cheat), but which the optimizer handles in the same way. Finding a few such patterns would definitely prove that this is just the optimizer at work, and that the weird results are likely due to bugs in it (like incorrectly handling "return" as a side effect where it is not). But if no other patterns are found that exhibit this behavior, then this is strong evidence of hardcoding for the test.

Of course. This is circumstantial, so it can't, by itself, prove that they cheated. I also haven't installed the preview to run it myself yet. But, assuming that the benchmark times are accurate, it'll be hard to argue that a benign explanation is more likely than cheating. That's all I'm saying.

In particular, the 95% confidence interval of +/- 0.0% strikes me as the most suspicious part. It's way too convenient that the test ends up taking the same amount of time, down to the timer's resolution, at least 95% of the time it is run, whereas the alternate spellings of it are much less consistent. Also... the same 1.0 ms on different hardware (Sayre's testing machine as well as the author's laptop), all while keeping a 95% confidence interval of +/- 0.0%? That's... dare I say it... inconceivable!
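For reference, this is roughly how a 95% confidence interval on repeated benchmark runs is computed (my own sketch of the usual t * s / sqrt(n) formula; I have not checked SunSpider's actual harness code): if every run comes back identical at the timer's resolution, the sample standard deviation is zero and the interval collapses to exactly +/- 0.0%.

// Mean and 95% confidence interval of repeated timings (t ~= 2.262 for 10 samples).
function confidenceInterval95(samples, t) {
  var n = samples.length;
  var mean = samples.reduce(function (a, b) { return a + b; }, 0) / n;
  var variance = samples.reduce(function (a, x) { return a + (x - mean) * (x - mean); }, 0) / (n - 1);
  var halfWidth = (t || 2.262) * Math.sqrt(variance / n);
  return { mean: mean, plusMinusPercent: 100 * halfWidth / mean };
}

console.log(confidenceInterval95([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]));
// { mean: 1, plusMinusPercent: 0 } -- ten identical 1 ms runs
console.log(confidenceInterval95([19, 20, 19, 20, 21, 19, 20, 20, 19, 20]));
// a small but nonzero interval, like the +/- 1.9% runs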

The "roundness" of 1.0ms is convenient, too, but I'm not going to count that as a strike against them. Even though this whole thing is already circumstantial... sometimes, numbers are round.

Comment Re:Order of Magnitude (Score 1) 360

To me, an order-of-magnitude difference in an interpreted language

There's no such thing as an "interpreted language". A particular implementation of the language may be an interpreter. All modern browser JS implementations, including the one in IE9, are JIT-compiling to native code.

When writing my post, I went by what Wikipedia said about that term, which conveniently accounts for JIT compilation in its description. Turns out there is such a thing!
I don't suggest that there are languages that must be interpreted, only languages that are interpreted (in this case, "interpreted" meaning "not directly executed", which includes JIT compilation).

by adding non-functional statements

That sort of thing is actually precisely what a good optimizer should be able to catch.

That's why I'm suspicious about this: dead code probably should not cause an order of magnitude increase in running time.

compiled-in functional equivalent of that particular JavaScript function.

The equivalent of that particular JS function is "void foo() {}". It does a computation, but does not use the result of said computation in any way (doesn't return it as a value, and doesn't update any global state).

Wow, you're absolutely correct. I just assumed that SunSpider actually checked that the engine came up with the correct values, to catch this kind of thing. So it now seems more likely to me that the engine just says "if the code matches the cordic function, then sleep for 1 ms and return." That actually makes more sense with the numbers we see.
So in this case, that code is probably just stubbed out. Why have a stub for a function used in a benchmark test if you're not treating the benchmark specially (which, I believe, is an example of cheating)?

Now, why the optimizer is so fragile in IE9 is a good question. But the order-of-magnitude difference itself is not suspicious; rather, the fact that making trivial changes to the source trips up the optimizer is.

That's what I was trying to say: it's more likely a symptom of cheating (at least some level of catering to the benchmark rather than to the JS code) than of a botched optimizer that can consistently (95% confidence interval: +/- 0.0%) optimize some code down to exactly 1.0 ms, yet "somehow" performs far, far worse than other existing browsers when the source is changed in a trivial way, and "all of a sudden" becomes way more inconsistent (95% confidence interval: +/- 1.9%).

Comment Order of Magnitude (Score 1) 360

I don't know about those of you suggesting that this isn't cheating. I'm going to go ahead and agree with the author of the article on this one: it's most likely cheating, even following Hanlon's Razor.

To me, an order-of-magnitude difference in an interpreted language by adding non-functional statements is most likely due to using a compiled-in functional equivalent of that particular JavaScript function. The engine probably matches the parse tree to determine whether or not to run the (much faster) machine-code version, and the non-functional statements make the parse tree not match, so it falls back to ordinary just-in-time execution.
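To be clear about the kind of special-casing I'm describing, here is a toy model (entirely made up by me; these names and this logic are obviously not Microsoft's code). The point is that an exact-match lookup is blazingly fast and perfectly consistent when it hits, and any trivial edit to the source falls through to the normal path:

// Toy model of hardcoding a benchmark. All names are invented for illustration.
var cannedResults = {};

function fingerprint(fn) {
  // Stand-in for "match the parse tree": here, just normalize the source text.
  // A real parse-tree match would be equally brittle against edits that change
  // the tree, such as inserting a harmless extra statement.
  return fn.toString().replace(/\s+/g, " ");
}

function registerCanned(fn, result) {
  cannedResults[fingerprint(fn)] = result;
}

function run(fn) {
  var key = fingerprint(fn);
  if (key in cannedResults) {
    return cannedResults[key]; // skip the real work entirely
  }
  return fn(); // any modified version takes the slow, honest path
}

With something like that in place, you would see exactly this signature: an impossibly fast, impossibly consistent number for the exact benchmark source, and ordinary numbers for everything else.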

Sure, it's possible that the reasoning behind this difference isn't to get ahead on this benchmark. But I'm going to use Occam's Razor to suggest that the order-of-magnitude difference is a result of using pre-compiled versions of specific JavaScript functions, rather than assuming that Microsoft engineers who can optimize that JavaScript function down to 1 ms (with, by the way, a 95% confidence interval of +/- 0.0%!!!!!!!) are incapable of optimizing a functionally equivalent but trivially different version of it below 19.5 ms (with a 95% confidence interval of +/- 1.9%).

Comment Re:It almost makes me sad to have sold my ps3. (Score 1) 167

You spot a PS3 off the side of the road. Cars are whooshing by.
What would you do?
> get ps3

You bend down to pick it up, ignoring the cars whooshing by. One whooshes too close, knocking the PS3 out of your hands and into the middle of the road.
What do you do?
> get ps3

Still ignoring the whooshing cars, you move to the PS3. A car whooshes into you.
You are dead.

Comment Re:Pirate Party (Score 1) 376

"Let me give you a few examples that illustrate why this mindset is wrong (thinking that piracy should be illegal because it 'hurts sales')."

I like your examples. They illustrate normal activities that probably cause a much greater impact to a particular company's gross revenues than piracy does, and it helps to put things into perspective. Having said that, it does not exactly prove that piracy does "no harm," and I would argue that the producers are indeed made worse off as a result of piracy, which I would define as a sort of "harm" caused to them :-)

In your first example, a person buys (!) media, then convinces N of their friends not to do the same, resulting in the company losing N potential customers. If this person had pirated that media instead (even though we know that they would otherwise have paid for it) and then convinced N of their friends not to buy it, then the company just lost N + 1 potential customers. The difference may be insignificant, but it is certainly non-zero.
The flip side of this example is that the person pirates that media, likes it, then buys it and convinces N of their friends to buy it too. That probably happens less often, though.

In your second example, a person makes the decision to buy their media from XYZ instead of ABC, and ABC loses 1 potential customer. In markets where the product is identical among all sellers, this is a little more relevant; in this instance, not so much. I can't (legally) get, e.g., Microsoft Office from anybody without Microsoft getting a share, so I really only have one "choice": pay Microsoft, or don't use Microsoft Office. If Microsoft charges too much for Microsoft Office, I might go to a competitor that offers a similar product, like OpenOffice.org, but that's not the same thing.

Let me give my own example to illustrate when piracy might cause harm to one party, even when there are competitors offering a similar product. For this example, X is a rational consumer who needs office productivity software to do independent contract work, and OpenOffice.org (pretty good, and it's free) and Microsoft Office (better, but it costs $50) are the only two choices; for simplicity, assume that the cost for Microsoft to manufacture another copy of Microsoft Office is negligible (this doesn't change the general outcome of the situation), and Microsoft will not charge any more or any less than $50.
X would be a potential buyer of Microsoft Office if the value that it adds on top of OpenOffice.org is greater than the $50 difference in their licensing fees; let's say that in X's specific case, X would be able to finish a $1200 contract using Microsoft Office in the same time, and with the same effort, that it would take to finish a $1000 contract using OpenOffice.org.
X, a rational consumer, receives a marginal benefit of $150 ($1200 - $1000 - $50) from buying Microsoft Office, and Microsoft, a rational producer, receives a marginal benefit of $50 ($50 - $negligible, $0 for simplicity) from selling it. This transaction does not take place, however, when X pirates Microsoft Office: X receives a marginal benefit of $200 ($1200 - $1000 - cost of pirating, $0 for simplicity), and Microsoft gets a marginal benefit of $0.
The only difference between the two scenarios is that X pirates Microsoft Office in one case, and purchases it in the other. Therefore, all of the differences in the outcomes of the two scenarios must be the result of the piracy. Value-added by piracy:

X: $200 - $150 = $50
Microsoft: $0 - $50 = -$50

Because of piracy, Microsoft is harmed by $50, and X is benefited by $50. I will count this as a loss for Microsoft, because X is a rational consumer and would pay Microsoft $50 to gain $200 if piracy were not an option.

Now, to illustrate when piracy might not cause any harm (which may or may not be what you're getting at), let's look at the same scenario with the only difference being that the Microsoft Office contract is worth $1020 instead of $1200 (and we'll change X's name to Y).
Y would not buy Microsoft Office for $50, because the benefit of doing so is not more than $50; in fact, that decision would result in a marginal loss of $30 ($1020 - $1000 - $50), so Y just uses OpenOffice.org for the $1000 contract. Microsoft therefore does not sell a copy of Microsoft Office to Y, so they are not affected by Y's decision. If Y pirates Microsoft Office, however, Y receives a marginal benefit of $20 ($1020 - $1000 - cost of pirating, $0 for simplicity). Microsoft doesn't lose a customer, because Y would not have bought it anyway. Now, the value piracy adds to each individual's situation:
Y: $20 - $0 = $20
Microsoft: $0 - $0 = $0

Piracy didn't harm Microsoft at all here (because Y never had a rational reason to buy Microsoft Office in the first place), but Y is $20 ahead for it! It's not a huge stretch to also imagine Z, who doesn't know anything about piracy and so takes the $1000 contract and uses OpenOffice.org to complete it.

It is incorrect to count either of these cases as a loss for Microsoft (since Z used OpenOffice.org instead of Microsoft Office, and Y would not pay $50 to gain $20 in the first place, even though Y used Microsoft Office in the end).
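If it helps, the arithmetic above boils down to this (my own sketch, reusing the made-up dollar figures from the example):

// Office costs $50, marginal manufacturing cost ~$0, and the OpenOffice.org
// route earns a $1000 contract either way.
var PRICE = 50, BASELINE = 1000;

function surpluses(officeContract, pirated) {
  return {
    consumer: pirated ? officeContract - BASELINE           // keeps the whole productivity gain
                      : officeContract - BASELINE - PRICE,  // gain minus the license fee
    microsoft: pirated ? 0 : PRICE                          // margin on the sale
  };
}

console.log(surpluses(1200, false)); // X buys:    { consumer: 150, microsoft: 50 }
console.log(surpluses(1200, true));  // X pirates: { consumer: 200, microsoft: 0 } -- $50 shifts from Microsoft to X
console.log(surpluses(1020, false)); // Y buys:    { consumer: -30, microsoft: 50 } -- a rational Y never does this
console.log(surpluses(1020, true));  // Y pirates: { consumer: 20, microsoft: 0 } -- Microsoft loses nothing it would have had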

Please understand that I am not completely against piracy. In fact, given a little more in-depth economic analysis, piracy adds an extra $20 to the society described above* (X gains $50, but that is offset by Microsoft losing $50; Y's $20 comes at no loss to anybody, so it's pure gravy**). I think that the enforcement of laws like Hadopi only makes situations worse for individuals and ISPs, and the nature of the Internet makes the enforcement itself unreliable. I only take issue with the argument that piracy does no harm [to anyone], because it's really hard to imagine a world where that can possibly be true.

*The "extra" $20 here comes from the assumption that Microsoft will not sell Office for less than $50. I realize that Y would have paid anything in [$0, $20) for Microsoft Office, and if we relaxed the assumption that Microsoft fixes the price at $50, then we can support that Microsoft would sell it for $20, and the extra $20 would go away. I settled on the assumption of a fixed $50 price for Office because it approximates the effect of manufacturing costs, that I hand-waved as "negligible". If manufacturing costs are added in (e.g., $20, where the shelf price of Office is still $50) then what you end up with is that society is actually ahead by between $20 and $40 instead of $20: X's piracy harms Microsoft by $30 and it harms Microsoft's suppliers by whatever their margins are. Y's decision to pirate Office still doesn't cost Microsoft a customer, but now it's because Y will only buy below $20, and Microsoft will only sell above $20, so no sale would be made, assuming Microsoft will not sell at a loss. We assume that Microsoft's suppliers do not operate at a loss or break-even, but their margins must be less than 100%, because they need to purchase raw materials; therefore, their margins are between $0 and $20. X is still up by $50 as a result of piracy, but now Microsoft is only down by $30, and the suppliers are down by $0 - $20, so from piracy, this society now gains: $50 (X's relative gain) - $30 (Microsoft's lost margins) - $(0-20) (Microsoft's suppliers' lost margins) + $20 (Y's relative gain) = "between $20 and $40". In both cases, society's net gain** demonstrates an increase in productivity resulting from piracy, and this gain is higher when margins are lower. This makes intuitive sense: if you pirate something, then the maximum economical benefit you can receive from pirating (rather than buying it) is the unit price. The maximum amount that the company misses out on is their margins, which are always less than the unit price, and usually much less.

**Though piracy on an individual level can provide a net benefit to society as a whole, note that producers are harmed, and if piracy is prevalent enough, it can indeed create a situation where a producer no longer has enough incentive to create new, innovative products, resulting in a net loss to society far greater than the net gains from small-scale piracy.

Comment Re:Pirate Party (Score 1) 376

"I believe P2P is only hurting sales a few percent at most"

More like not at all. That's like saying consumer choice is hurting sales because people can choose whether or not they will buy something.

I hypothesize that there are some potential consumers who do not end up buying a digital media product because they know how to get it for free via P2P, provided they can accept the extra risk involved in doing so.

If this hypothesis is correct, then there is some amount of harm done to the gross revenues of copyright holders: more than "not at all". However, I do not believe that this group is as large as the group of people who would not otherwise buy the product (maybe because they've never heard of it, are poor, have no credit card because they are young, etc.), and to whom P2P offers a way to experience that product.

I further believe that both of these groups combined are much smaller than the group of people who are not directly involved with P2P but who will nevertheless be inadvertently affected by Hadopi.
