Comment Re:Sigh (Score 3, Interesting) 534

George Hotz ("geohot") tried his hand at it, given that he had been rather successful at cracking Apple's iStuff. He found an exploit that gave hypervisor access, and in response, Sony removed OtherOS in a firmware update, as geohot's hack required use of OtherOS.

So this can all be traced back to geohot getting involved... though in my opinion, Sony shouldn't have responded by removing OtherOS and causing all that collateral damage. That was inevitably going to result in a lot of really serious people getting involved and, by extension, more stories like this.

Comment Re:Amazon Response (Score 4, Informative) 204

> A bald-faced lie? They said Wikileaks was violating several of the terms of service. One of the terms of service is "don't use our service to break US law". It's pretty clear that Wikileaks was violating US law. Ergo, not a lie.

Nearly every legal expert who has spoken on this topic has argued that Wikileaks has not violated US law.

> At any rate, you're nitpicking over the wording used by the Amazon representative. Perhaps "doesn't own or otherwise control the rights to the classified content" was not the clearest way to put it, but unless you're deliberately being dense, the meaning is clear: Wikileaks is not permitted by US law to distribute these documents. Clearly, distributing documents in violation of US law qualifies under "don't use our service to break US law".

Publishing classified documents is not illegal, unless the documents fit certain criteria that (so far) these leaks do not. The person or organization who leaks the documents does have some liability, but not Wikileaks. As has been said many times before, Wikileaks is analogous to the New York Times in the Pentagon Papers incident.

Comment Re:Surprising in its unsurprisingness (Score 1) 833

> > They've been posting things that embarrass the government and affect its public image.

> Specifically, I think you mean the US government. One thing (not the only thing though) that bothers me about Wikileaks is that it seems to be exclusively, or at least principally, dedicated to embarrassing the US government.

Here's one that I'm particularly OK with. If I recall correctly, this was the first time that I had heard about ACTA.

Comment Re:Order of Magnitude (Score 1) 360

Interesting. After installing the platform preview and confirming the results of the article (1.0 ms, +/- 0%, exactly the same as Sayre and the author, even though the other times were different), I also get the same results with your test.

Yeah, bit twiddling makes things faster, but I can think of a logical explanation for that one.
After you run "n += i" 92682 times, n can no longer fit in 32 bits, whereas all the bitwise operations fit comfortably within that range for all values of i, so you can avoid using the "much slower" bigints.
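
To check that arithmetic (taking "32 bits" as unsigned, and with loop bounds of my own choosing), the running sum of "n += i" does indeed pass 2^32 - 1 at exactly i = 92682, while the bitwise accumulators can never leave 32-bit range:

```javascript
// Find the first i at which the running sum n += i no longer fits in an
// unsigned 32-bit integer (i.e. exceeds 2^32 - 1).
let n = 0;
let firstOverflow = -1;
for (let i = 1; i <= 100000; i++) {
  n += i;
  if (n > 0xFFFFFFFF) { firstOverflow = i; break; }
}
console.log(firstOverflow); // 92682: from here on, n needs more than 32 bits

// By contrast, the bitwise accumulators stay within 32 bits for every i.
let or = 0, xor = 0, and = 0;
for (let i = 1; i <= 100000; i++) { or |= i; xor ^= i; and &= i; }
console.log(or <= 0xFFFFFFFF && xor <= 0xFFFFFFFF && and === 0); // true
```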

A more far-fetched but powerful explanation is that it could be optimizing certain kinds of iterated bitwise operations on vectors. Iteratively applying |=, ^=, and &= can all be done in O(1) time:

- Bitwise AND: all zeroes.
- Bitwise OR: ceil(log2(50000000)) == 26 binary ones at the end.
- Bitwise XOR: complicated, but doable. (i XOR n) is the current value. If i is even, then (i XOR n) XOR (i+1) == (n-1 if n is odd, or n+1 if n is even), and (i XOR (n-1 if n is odd, or n+1 if n is even)) XOR (i+1) == n. So you can get the value of n for any odd value of i using modulo arithmetic, since it cycles between n and n +/- 1 on every odd value of i; do one more XOR if you need the value for an even i.
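
The O(1) claim for iterated XOR is easy to sanity-check: starting from n = 0, the cumulative XOR of 0..m follows a well-known period-4 pattern, so the whole loop collapses to a closed form (this snippet is my illustration, not anything taken from an engine):

```javascript
// Closed form for 0 ^ 1 ^ 2 ^ ... ^ m. It cycles with period 4 because
// each (even, even+1) pair XORs to 1, exactly the n / n +/- 1 cycling
// described above.
function xorUpTo(m) {
  switch (m % 4) {
    case 0: return m;
    case 1: return 1;
    case 2: return m + 1;
    case 3: return 0;
  }
}

// Verify the closed form against the naive loop.
let acc = 0;
for (let m = 0; m <= 1000; m++) {
  acc ^= m;
  if (acc !== xorUpTo(m)) throw new Error(`mismatch at m=${m}`);
}
console.log("closed form matches the loop up to m=1000");
```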

I say this is "far-fetched", because it seems like this kind of optimization would rarely be worth the cost.

Next test (also helps to explain what changes when "g = n" is commented out):
"n += 1; n -= 1;" This adds 1 to and subtracts 1 from n each loop, which could hypothetically be optimized away. Observe what it does for commented and uncommented "g = n". I'd recommend increasing the max value of i by a factor of 100 to try and reduce the noise. It should take about a minute and a half or so for each on the machine that you got ~1600 ms for your code.

Hope this helps.

Comment Re:Order of Magnitude (Score 1) 360

> > That's why I'm suspicious about this: dead code probably should not cause an order of magnitude increase in running time.

> Actually, that's precisely what a good dead code elimination will cause. Consider this loop in C++:

You're right. I should have narrowed the scope of my claim. My apologies.

> > That's what I was trying to say... it is more likely a symptom of cheating (at least some level of catering to the benchmark rather than to the JS code) than it is a symptom of a botched optimizer that can consistently (95% confidence interval: +/- 0.0%) optimize some code down to exactly 1.0 ms, but "somehow" manage to perform far, far worse than other existing browsers when changing the source in a trivial way, and "all of a sudden" become way more inconsistent (95% confidence interval: +/- 1.9%).

> Two things of note here.

> First, this behavior only shows on one particular function in one specific test of the entire test suite. In fact, this is precisely how it was found - the result is so ridiculous (IE 10x faster than all other browsers!) on that particular test and not on any other that it immediately stands out, and that prompted an investigation. (If you RTFA, it's actually old news; it just took a while to gather all the evidence and officially submit it to MS as a bug report.)

> That's not how any sane person would cheat - you'd shave a few milliseconds off here and there, showing advantageous but realistic figures across all tests. Especially with closed source, that is nigh impossible to catch.

That's how I would do it, too. Microsoft has a lot of smart people -- I'm sure many of them would have thought of that too.
But that didn't happen here. For some reason, it runs too quickly, quickly enough to warrant further scrutiny.

> Second, it's not "far worse than other existing browsers" when changing the source. It actually still beats FF4 in this test even with the change! Chrome and Opera are faster still, but it's not like the difference there is 2x or even 1.5x.

Actually, when you ignore the anomalous 1.0 ms, it runs around 2x longer than Chrome and between 2x-3x longer than Opera. I won't install FF4 to try it out, so I'll have to give you the benefit of the doubt on that one.

> Ultimately this needs more testing. Best of all would be to try to find some other pattern of dead code that is clearly unrelated to this test (so it couldn't be "wrongly detected" if this is a cheat), but which the optimizer handles in the same way. Finding a few such would definitely prove that this is just optimizer at work, and weird results are likely due to bugs in it (like incorrectly handling "return" as a side effect where it is not). But if no other patterns are found that exhibit this behavior, then this is strong evidence for hardcoding for the test.

Of course. This is circumstantial, so it can't, by itself, prove that they cheated. I also haven't installed the preview to run it myself, yet. But, assuming that the benchmark times are accurate, it'll be hard to argue that a benign explanation is more likely than cheating. That's all I'm saying.

Particularly, the 95% confidence interval of +/- 0.0% strikes me as the most suspicious. It's way too convenient that the test ends up taking the same amount of time, to a certain resolution, at least 95% of the time it is run, whereas the alternate spellings of it are much less consistent. Also... the same 1.0ms on different hardware (Sayre's testing machine as well as the author's laptop), all while keeping a 95% confidence interval of +/- 0.0%? That's... dare I say it... inconceivable!

The "roundness" of 1.0ms is convenient, too, but I'm not going to count that as a strike against them. Even though this whole thing is already circumstantial... sometimes, numbers are round.

Comment Re:Order of Magnitude (Score 1) 360

> > To me, an order-of-magnitude difference in an interpreted language

> There's no such thing as an "interpreted language". A particular implementation of the language may be an interpreter. All modern browser JS implementations, including the one in IE9, are JIT-compiling to native code.

When writing my post, I went by what Wikipedia said about that term, which conveniently accounts for JIT compilation in its description. Turns out there is such a thing!
I don't suggest that there are languages that must be interpreted, only languages that are interpreted (in this case, "interpreted" meaning "not directly executed", which includes JIT compilation).

> > by adding non-functional statements

> That sort of thing is actually precisely what a good optimizer should be able to catch.

That's why I'm suspicious about this: dead code probably should not cause an order of magnitude increase in running time.

> > compiled-in functional equivalent of that particular JavaScript function.

> The equivalent of that particular JS function is "void foo() {}". It does a computation, but does not use the result of said computation in any way (doesn't return it as a value, and doesn't update any global state).

Wow, you're absolutely correct. I just assumed that SunSpider actually checked that the engine came up with the correct values, to catch this kind of thing. So it now seems more likely to me that the engine just says "if the code matches the cordic function, then sleep for 1 ms and return." That actually makes more sense with the numbers we see.
So in this case, that code is probably just stubbed out. Why have a stub for a function used in a benchmark if you're not treating the benchmark specially (which, I believe, is an example of cheating)?

> Now, why the optimizer is so fragile in IE9, is a good question. But the order-of-magnitude difference itself is not suspicious; rather the fact that making trivial changes to the source trips the optimizer is.

That's what I was trying to say... it is more likely a symptom of cheating (at least some level of catering to the benchmark rather than to the JS code) than it is a symptom of a botched optimizer that can consistently (95% confidence interval: +/- 0.0%) optimize some code down to exactly 1.0 ms, but "somehow" manage to perform far, far worse than other existing browsers when changing the source in a trivial way, and "all of a sudden" become way more inconsistent (95% confidence interval: +/- 1.9%).

Comment Order of Magnitude (Score 1) 360

I don't know about those of you suggesting that this isn't cheating. I'm going to go ahead and agree with the author of the article on this one: it's most likely cheating, even following Hanlon's Razor.

To me, an order-of-magnitude difference in an interpreted language by adding non-functional statements is most likely due to using a compiled-in functional equivalent of that particular JavaScript function. The engine probably matches the parse tree to decide whether or not to run the (much faster) machine-code version, and the non-functional statements make the parse tree not match, thus falling back to just-in-time execution.
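
Purely as a hypothetical sketch of the mechanism I'm alleging (none of these names or details come from IE9, and a real engine would match a canonicalized parse tree rather than the source text):

```javascript
// Hypothetical benchmark-specific dispatch: key each function by its
// normalized source text and, on a match, run a precompiled fast path.
// Any textually visible change (a dead statement, a trailing "return;")
// changes the key and silently falls back to the normal path.
const precompiled = new Map();

function sourceKey(fn) {
  // Crude stand-in for parse-tree matching: collapse whitespace.
  return fn.toString().replace(/\s+/g, " ");
}

function register(fn, fastPath) {
  precompiled.set(sourceKey(fn), fastPath);
}

function execute(fn, ...args) {
  const fast = precompiled.get(sourceKey(fn));
  return fast ? fast(...args) : fn(...args); // fall back to the "JIT"
}

// A slow loop with a baked-in closed-form equivalent: sum of i^2
// for i = 0..k-1 is (k-1)k(2k-1)/6.
function sumSquares(k) { let s = 0; for (let i = 0; i < k; i++) s += i * i; return s; }
register(sumSquares, (k) => (k - 1) * k * (2 * k - 1) / 6);

console.log(execute(sumSquares, 1000) === sumSquares(1000)); // true

// A trivially different spelling (one dead statement added) no longer
// matches, so it takes the slow path.
function sumSquares2(k) { let s = 0; for (let i = 0; i < k; i++) s += i * i; true; return s; }
console.log(precompiled.has(sourceKey(sumSquares2))); // false
```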

Sure, it's possible that the reasoning behind this difference isn't to get ahead on this benchmark. But I'm going to use Occam's Razor to suggest that the order-of-magnitude difference is a result of using pre-compiled versions of specific JavaScript functions, rather than assuming that Microsoft engineers who can optimize that JavaScript function down to 1 ms (with, by the way, a 95% confidence interval of +/- 0.0%!!!!!!!) are incapable of optimizing a functionally equivalent but trivially different version of it below 19.5 ms (with a 95% confidence interval of +/- 1.9%).
