
Comment: Re:NIH (Score 1) 249

by n0-0p (#38106346) Attached to: Google Upgrades WebP To Challenge PNG Image Format
IE currently holds around 40% of the market--a far cry from the 90%+ share it had when it stalled adoption of PNG. And while I agree that JPEG XR is a good format, MS chose to release the code under a GPL-incompatible license. So a clean-room re-implementation would be necessary before most open source projects could touch it.

Comment: Re:Simple (Score 1) 492

by n0-0p (#35443764) Attached to: Safari/MacBook First To Fall At Pwn2Own 2011

Then the Chrome / Windows machine, which no one tried to attack (one person found an exploitable hole, but sold it to Google for $1,337 instead of entering it into the contest).

You're confusing Chrome and Android: http://jon.oberheide.org/blog/2011/03/07/how-i-almost-won-pwn2own-via-xss/

I talked to the guys who won yesterday, and one of the Team Anon guys who was originally signed up for Chrome. Some of them said their WebKit bugs affected Chrome, but no one had figured out how to break the Chrome sandbox. So, they just withdrew their names rather than waste everyone's time with an exploit they knew wouldn't work.

Comment: Re:Am I reading this correctly? (Score 5, Informative) 417

by n0-0p (#35332936) Attached to: Apple Asks Security Experts To Examine OS X Lion

You're joking, right? Apple is historically months behind in patching publicly disclosed vulnerabilities in core libraries they share with other Unix-like systems (Samba and Java are two key examples). Overall code robustness is abysmal in any Apple product I've assessed--they fall over with trivial fuzzing or a few hours of analysis. They're an absolute pain in the ass to deal with when trying to resolve a responsibly reported vulnerability: they often don't seem to have qualified people triaging inbound reports, and when they do finally acknowledge the correct severity of a reported issue it can take years before they finally push out a fix. And to top it all off, their core security counter-measures (e.g. ASLR and NX) are useless as anything more than marketing fluff because they're not implemented consistently.

Seriously, I've been in the security field for almost 15 years and dealt with reporting vulnerabilities to dozens of companies. Microsoft is a pain to deal with because of their compatibility matrices and long release cycles, but they're generally competent. Whereas Apple is just an absolute train-wreck. The only reason every Mac isn't infested with malware is that they're not a big enough chunk of the market for it to be worth the effort. If they ever cross the magic 15% threshold they're in for a very rude awakening.

Comment: Re:Partial Optimization? (Score 1) 169

by n0-0p (#35254158) Attached to: Chrome 10 Beta Boosts JavaScript Speed By 64%

Your guess is correct; for rarely followed code paths it does take significantly longer to (aggressively) optimize the code than it does to run it. Also, premature optimization can generate pathologically suboptimal code, meaning the performance can be much worse than the unoptimized case.

My understanding of how Crankshaft works is that the emitted code keeps some basic information about the data and frequency for any given code path (it's probably function level, but I don't know the code so I can't say for sure). Once the data and frequency of travel crosses a threshold the code path gets flagged for aggressive optimization. This kind of housekeeping adds very little overhead, so the decision cost overall should be very low. And the useful thing about spot optimizations like this is that their relative infrequency means that you can afford to do really aggressive optimizations that would be far too expensive to run over all of the code at load time.
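The counter-based hot-path detection described above can be sketched in a few lines. This is a toy illustration of the general technique, not Crankshaft's actual code; the threshold value and class names are assumptions.

```python
HOT_THRESHOLD = 1000  # assumed cutoff; real engines tune this empirically

class FunctionProfile:
    """Per-function bookkeeping that cheaply emitted baseline code might keep."""
    def __init__(self, name):
        self.name = name
        self.call_count = 0
        self.observed_types = set()
        self.optimized = False

    def record_call(self, arg_types):
        # Cheap housekeeping on every call: bump a counter, note types seen.
        self.call_count += 1
        self.observed_types.add(arg_types)
        if not self.optimized and self.call_count >= HOT_THRESHOLD:
            self.flag_for_optimization()

    def flag_for_optimization(self):
        # Once hot, hand the function to the expensive optimizer, which can
        # specialize on the types actually observed instead of the fully
        # generic case.
        self.optimized = True
        print(f"optimizing {self.name} for types {self.observed_types}")

profile = FunctionProfile("add")
for _ in range(HOT_THRESHOLD):
    profile.record_call(("int", "int"))
```

The point is that the per-call cost is a counter increment and a set insert, so the expensive optimization pass only ever runs on paths that have proven themselves hot.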

The funny thing is that none of this is new. It's all decades-old compiler research stuff that mostly evolved out of the Self language. And Mozilla's tracing engine attempts similar optimizations, although it uses a different technology with different strengths and weaknesses.

Comment: Re:wow (Score 1) 169

by n0-0p (#35254052) Attached to: Chrome 10 Beta Boosts JavaScript Speed By 64%

If you're using SunSpider as your sole benchmark then you're already behind. SunSpider has outlived its usefulness (which the article touches on). In order to get a win of a few hundredths of a percent on SunSpider you have to add in premature optimizations that hurt page-load times and the performance of long running JavaScript applications. (Or you could add some dubious optimizations that are targeted specifically to the test, but that sounds a bit like cheating on a benchmark to me.)

SunSpider was good for its time because it set a minimum bar for all browsers. However, the beta versions of all the new browsers are now within a hair's breadth of each other's performance on SunSpider. Rather than split those hairs, we need a new generation of tests that more accurately models real-world usage and JavaScript in the large. Mozilla and Google are both moving in that direction with Kraken and the V8 benchmark suites (respectively), but it's just a start. I'd like to see comparable benchmarks from every JS engine maker, or maybe a broadly scoped, independent benchmark.

Comment: Re:Waste of time and money (Score 1) 79

by n0-0p (#35111926) Attached to: Hack Chrome, Win $20,000

This is pure marketing. If they want to prove to me it's secure, ask for a public code review and reward those who find clear problems, and compile from that reworked code.

The codebase (minus PDF, Flash, and branding) is open source. Google pays out anywhere from $500 to $3,133.70 to anyone who reports Chrome/Chromium security vulnerabilities to them. And if you look at the release notes for Chrome and Safari, it's obvious that Google has a full-time team searching for and fixing security issues in both Chrome and WebKit. I'm not sure what else you want them to do, because they're already going well beyond anything you suggested.

Comment: Re:Pot Calling the Kettle Black (Score 1) 380

by n0-0p (#35111822) Attached to: Google's Search Copying Accusation Called 'Silly'

You "do a lot of internet marketing" and you don't understand the difference between returning a competitor's site as a search result (what you identified) and stealing a competitor's search results and presenting them as your own (what is being accused)? Might I suggest a different profession, one for which you're better qualified?

Comment: Re:One of the best things about Chrome ... (Score 2) 182

by n0-0p (#34878850) Attached to: Google Pushes New Chrome Release, Pays $14k Bounty

If you don't like the single-user version, install the system-wide version from Google Pack. And it doesn't leave past versions around; it keeps exactly one previous version while updating, because it uses differential compression against the old version and falls back to it if the update fails.
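The retain-one-previous-version logic described above can be sketched with a toy in-memory model. This is a hypothetical illustration of the rollback-and-diff scheme, not Chrome's actual updater; all class and function names here are made up.

```python
class PatchError(Exception):
    pass

def apply_patch(old_files, patch):
    """Rebuild the new version's files from the old version plus a small diff."""
    new_files = dict(old_files)
    for path, content in patch.items():
        if content is None:
            raise PatchError(f"bad patch entry for {path}")
        new_files[path] = content
    return new_files

class Installer:
    def __init__(self, version, files):
        self.active = version
        self.versions = {version: files}

    def update(self, new_version, patch):
        try:
            # Reconstruct the new version by patching the active one.
            new_files = apply_patch(self.versions[self.active], patch)
        except PatchError:
            # Update failed: keep running the previous version untouched.
            return self.active
        # Success: keep only the new version and the one we diffed against,
        # so at most one previous version ever sits on disk.
        self.versions = {self.active: self.versions[self.active],
                         new_version: new_files}
        self.active = new_version
        return new_version
```

The key property is that the diff is applied against a version known to be intact, and a failed patch leaves that version as the active install.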

Submission: Google to Pay For Web Security Bugs

Submitted by n0-0p
n0-0p (325773) writes "Google just announced they will pay between $500 and $3133.70 for security bugs found in any of their web services, such as Search, YouTube, and Gmail. This appears to be an expansion of the program they already had in place for Chrome security bugs. The rules and qualification details were posted today at the Google Online Security Blog."

Comment: Re:Thanks for the hard work (Score 1) 352

by n0-0p (#34005596) Attached to: Firefox 4's JavaScript Now Faster Than Chrome's

Kraken seems biased heavily toward things like looped and nested calculations, which is where tracing should be a big win. However, it avoids property access, dynamic allocations, and other areas where JaegerMonkey doesn't shine, but are an essential part of web applications.

This is not to say that every JavaScript team doesn't take a similar approach to benchmarks, but it's really hard to assess any of them accurately when everyone is playing the benchmark game.

Comment: Re:Thanks for the hard work (Score 2, Interesting) 352

by n0-0p (#34005038) Attached to: Firefox 4's JavaScript Now Faster Than Chrome's

I'll try explaining this again. SunSpider doesn't perform enough runs to take advantage of the tracing logic. And given the way the test is designed, you'll actually take a performance hit if you burn many cycles on front-end analysis. So you consistently hit the unoptimized path, where a good implementation uses simple translation logic for emitted instructions along with a fast, lightweight assembler. (Comparing to YASM is silly, btw, because the needs of real-time JIT are very different from compiling in advance.)

Since Mozilla already had most of the instruction generation logic from their old bytecode implementation, and the test isn't hitting their trace optimizer, the biggest improvement here is coming from the introduction of the Nitro assembler. That's not true for most other JS benchmarks, but it is true for SunSpider. This is why I said I want to see their performance on benchmarks that would take advantage of their tracing optimizations in real-world scenarios--not a test like SunSpider which is heavily weighted towards the compilation speed and baseline (unoptimized) execution speed.
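The tradeoff being argued here reduces to simple arithmetic: total time is compile cost plus per-iteration cost times iteration count, so a short benchmark rewards cheap compilation while a long-running app rewards expensive optimization. The numbers below are purely illustrative, not measurements of any real engine.

```python
def total_time(compile_cost, per_iter, iterations):
    # Total wall time for a JIT tier: one-time compile cost plus the
    # cost of executing the emitted code for each iteration.
    return compile_cost + per_iter * iterations

baseline = dict(compile_cost=1.0, per_iter=1.0)     # cheap, simple codegen
optimizing = dict(compile_cost=50.0, per_iter=0.2)  # slow front-end, fast code

short_run = 10       # SunSpider-style: very few iterations
long_run = 10_000    # real application hot loop

# A short benchmark rewards the cheap baseline tier...
assert total_time(**baseline, iterations=short_run) < total_time(**optimizing, iterations=short_run)
# ...while a long-running workload rewards the expensive optimizer.
assert total_time(**baseline, iterations=long_run) > total_time(**optimizing, iterations=long_run)
```

This is why a benchmark that never runs long enough to amortize the optimizer's front-end cost ends up measuring only parsing and baseline code generation.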

Comment: Re:Thanks for the hard work (Score 1) 352

by n0-0p (#34003688) Attached to: Firefox 4's JavaScript Now Faster Than Chrome's

Seeing that Firefox a few weeks ago was starting to lag pretty severely behind Chrome, I applaud and thank the Firefox team for their hard work. This is also a boon for their technique, the so-called "shotgunning" method of pushing through compilation the old way if it will complete faster than the optimizations. I had become afraid I might have to move to Chrome; looks like that won't be necessary.

You don't seem to understand how JaegerMonkey works, or what the SunSpider benchmark actually tests. The entire speedup here can be attributed to Firefox not compiling JS "the old way." Instead of defaulting to bytecode like they were previously, they always emit compiled instructions via Nitro's assembler. And given how the SunSpider benchmark works, all that is being tested is their parsing plus Nitro's assembly. The SunSpider benchmark doesn't even run long enough for Mozilla's tracing engine to be a significant factor (because the benchmark was created by Apple to showcase the performance of Nitro). So, not to be dismissive, but it seems like Apple (as the creator of Nitro) is more responsible for the speed increase.

Kudos to Mozilla for the overall improvement, but I'd really like to see results on a benchmark not so heavily biased to such uncommon use cases (compilation speed without hot path optimizations). In particular, I'd like to see benchmarks of common use cases that factor in the performance of their tracing engine, which is the piece of their stack that Mozilla has invested so heavily in. The Kraken benchmark provides some interesting stress tests along those lines, but it's still very narrowly targeted and not representative of current or anticipated use cases.

Comment: Re:Conclusion: Firefox 3.6 scales best across core (Score 1) 141

by n0-0p (#33920808) Attached to: How Do Browsers Scale?

Except their data doesn't actually show that, and Firefox 3.6 has far worse absolute performance than the other browsers. So, the effect they're seeing is probably just the other browsers (including Firefox 4 beta) performing much better, but hitting a wall due to cache pressure and/or IO bottlenecks. Whereas Firefox 3.6 is slow enough that it appears to be scaling well, but really just runs slower than the system can perform.
