Then there's the Chrome / Windows machine, which no one even tried to attack (one person found an exploitable hole, but sold it to Google for $1,337 instead of entering it in the contest).
You're confusing Chrome and Android: http://jon.oberheide.org/blog/2011/03/07/how-i-almost-won-pwn2own-via-xss/
I talked to the guys who won yesterday, and one of the Team Anon guys who was originally signed up for Chrome. Some of them said their WebKit bugs affected Chrome, but no one had figured out how to break the Chrome sandbox. So, they just withdrew their names rather than waste everyone's time with an exploit they knew wouldn't work.
You're joking, right? Apple is historically months behind in patching publicly disclosed vulnerabilities in core libraries they share with other Unix-like systems (Samba and Java are two key examples). Overall code robustness is abysmal in any Apple product I've assessed--they fall over with trivial fuzzing or a few hours of analysis. They're an absolute pain in the ass to deal with when trying to resolve a responsibly reported vulnerability: they often don't seem to have qualified people triaging inbound reports, and when they do finally acknowledge the correct severity of a reported issue it can take years before they finally push out a fix. And to top it all off, their core security counter-measures (e.g. ASLR and NX) are useless as anything more than marketing fluff because they're not implemented consistently.
Seriously, I've been in the security field for almost 15 years and have dealt with reporting vulnerabilities to dozens of companies. Microsoft is a pain to deal with because of their compatibility matrices and long release cycles, but they're generally competent. Apple, by contrast, is an absolute train wreck. The only reason every Mac isn't infested with malware is that they're not a big enough chunk of the market for it to be worth the effort. If they ever cross the magic 15% threshold, they're in for a very rude awakening.
Your guess is correct; for rarely followed code paths it does take significantly longer to (aggressively) optimize the code than it does to run it. Also, premature optimization can generate pathologically suboptimal code, meaning the performance can be much worse than the unoptimized case.
My understanding of how Crankshaft works is that the emitted code keeps some basic information about the data and the frequency of execution for any given code path (probably at function level, but I don't know the code well enough to say for sure). Once the data and frequency cross a threshold, the code path gets flagged for aggressive optimization. This kind of housekeeping adds very little overhead, so the overall decision cost should be very low. And the useful thing about spot optimizations like this is that their relative infrequency means you can afford to do really aggressive optimizations that would be far too expensive to run over all of the code at load time.
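A toy sketch of that kind of lightweight bookkeeping: a per-function call counter that triggers a one-time tier-up once it crosses a threshold. (The class, the threshold, and the "optimization" step are all invented for illustration; a real engine like Crankshaft emits specialized machine code and uses much richer profiling data.)

```python
HOT_THRESHOLD = 1000  # hypothetical call-count threshold for tier-up

class ProfiledFunction:
    """Toy model of profile-guided optimization: count calls cheaply,
    pay the expensive recompilation cost only for hot code paths."""

    def __init__(self, name, baseline_impl):
        self.name = name
        self.baseline = baseline_impl   # cheap, unoptimized code
        self.optimized = None           # filled in after tier-up
        self.calls = 0                  # the lightweight bookkeeping

    def __call__(self, *args):
        self.calls += 1
        if self.optimized is None and self.calls >= HOT_THRESHOLD:
            # Only hot paths ever pay the optimization cost.
            self.optimized = self.aggressively_optimize()
        impl = self.optimized or self.baseline
        return impl(*args)

    def aggressively_optimize(self):
        # Stand-in for the expensive recompilation step; here we just
        # wrap the baseline, but a real engine emits specialized code.
        base = self.baseline
        return lambda *args: base(*args)

square = ProfiledFunction("square", lambda x: x * x)
for i in range(1500):
    square(i)
# After crossing the threshold, the hot path runs the optimized tier.
```

Cold functions never cross the threshold, so they never pay the optimization cost — which is exactly why the per-call overhead has to stay this small.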
The funny thing is that none of this is new. It's all decades-old compiler research stuff that mostly evolved out of the Self language. And Mozilla's tracing engine attempts similar optimizations, although it uses a different technology with different strengths and weaknesses.
This is pure marketing. If they want to prove to me it's secure, they should open the code to public review, reward those who find real problems, and ship builds compiled from that reworked code.
The codebase (minus PDF, Flash, and branding) is open source. Google pays out anywhere from $500 to $3,133.70 to anyone who reports Chrome/Chromium security vulnerabilities to them. And if you look at the release notes for Chrome and Safari, it's obvious that Google has a full-time team searching for and fixing security issues in both Chrome and WebKit. I'm not sure what else you want them to do, because they're already going well beyond anything you suggested.
You "do a lot of internet marketing" and you don't understand the difference between returning a competitor's site as a search result (what you identified) and stealing a competitor's search results then presenting them as your own (what is being accused)? Might I suggest that you consider a different profession in which you might be more qualified?
If you don't like the single-user version, then install the system-wide version from Google Pack. And it doesn't leave past versions lying around; it keeps exactly one previous version while updating, because it applies differential compression against the old version and falls back to that version if the update fails.
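The keep-one-previous-version scheme can be sketched like this: apply a delta against the old build, verify the result, and fall back to the old build on failure. (The delta format, file contents, and function names here are invented for illustration; Chrome's actual updater uses a far more sophisticated binary-diff format.)

```python
import hashlib

def apply_delta(old, delta):
    """Apply a list of (offset, replacement_bytes) patches to the old build."""
    new = bytearray(old)
    for offset, patch in delta:
        new[offset:offset + len(patch)] = patch
    return bytes(new)

def update(previous, delta, expected_sha256):
    """Return the new build, or fall back to 'previous' on a bad patch."""
    candidate = apply_delta(previous, delta)
    if hashlib.sha256(candidate).hexdigest() != expected_sha256:
        return previous          # update failed: keep the old version
    return candidate             # success: the old copy can now be discarded

old_build = b"chrome-9.0"
delta = [(7, b"10.0")]           # rewrite just the version suffix
expected = hashlib.sha256(b"chrome-10.0").hexdigest()

new_build = update(old_build, delta, expected)
```

The previous version only needs to stick around between the delta being applied and the checksum passing, which is why you see at most one old copy on disk.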
Kraken seems heavily biased toward things like looped and nested calculations, which is where tracing should be a big win. However, it avoids property access, dynamic allocation, and other areas where JaegerMonkey doesn't shine but which are an essential part of web applications.
I'll try explaining this again. SunSpider doesn't perform enough runs to take advantage of the tracing logic. And given the way the test is designed, you will actually take a performance hit if you burn many cycles on front-end analysis. So you consistently hit the unoptimized path, where a good implementation uses simple translation logic for the emitted instructions along with a fast, lightweight assembler. (Comparing to YASM is silly, btw, because the needs of real-time JIT are very different from those of ahead-of-time compilation.)
Since Mozilla already had most of the instruction generation logic from their old bytecode implementation, and the test isn't hitting their trace optimizer, the biggest improvement here is coming from the introduction of the Nitro assembler. That's not true for most other JS benchmarks, but it is true for SunSpider. This is why I said I want to see their performance on benchmarks that would take advantage of their tracing optimizations in real-world scenarios--not a test like SunSpider which is heavily weighted towards the compilation speed and baseline (unoptimized) execution speed.
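The point about short benchmarks can be made concrete with a toy counter model: if the workload finishes before the hot-path counter reaches the tier-up threshold, the expensive optimizer never runs, and all the benchmark can measure is compile time plus baseline execution speed. (Both numbers below are invented; real engines use different heuristics.)

```python
HOT_THRESHOLD = 1000          # hypothetical trace/JIT tier-up threshold
SHORT_BENCHMARK_ITERS = 50    # a SunSpider-style quick kernel

calls = 0
optimizer_ran = False
for _ in range(SHORT_BENCHMARK_ITERS):
    calls += 1
    if calls >= HOT_THRESHOLD:
        optimizer_ran = True  # would hand the loop to the tracing optimizer

# The loop ends long before the threshold, so the tracing tier never
# fires: the benchmark only ever exercises the baseline path.
```

Under this model, swapping in a faster baseline assembler (like Nitro's) moves the score, while any amount of tracing-optimizer work is invisible.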
Seeing that Firefox, as of a few weeks ago, was starting to lag pretty severely behind Chrome, I applaud and thank the Firefox team for their hard work. This is also a boon for their technique, the so-called "shotgunning" method of pushing through compilation the old way when it will complete faster than the optimizations. I had become afraid I might have to move to Chrome; it looks like that won't be necessary.
You don't seem to understand how JaegerMonkey works, or what the SunSpider benchmark actually tests. The entire speedup here can be attributed to Firefox not compiling JS "the old way." Instead of defaulting to bytecode like they were previously, they always emit compiled instructions via Nitro's assembler. And given how the SunSpider benchmark works, all that is being tested is their parsing plus Nitro's assembly. The SunSpider benchmark doesn't even run long enough for Mozilla's tracing engine to be a significant factor (because the benchmark was created by Apple to showcase the performance of Nitro). So, not to be dismissive, but it seems like Apple (as the creator of Nitro) is more responsible for the speed increase.
Kudos to Mozilla for the overall improvement, but I'd really like to see results on a benchmark not so heavily biased to such uncommon use cases (compilation speed without hot path optimizations). In particular, I'd like to see benchmarks of common use cases that factor in the performance of their tracing engine, which is the piece of their stack that Mozilla has invested so heavily in. The Kraken benchmark provides some interesting stress tests along those lines, but it's still very narrowly targeted and not representative of current or anticipated use cases.
Except their data doesn't actually show that, and Firefox 3.6 has far worse absolute performance than the other browsers. So the effect they're seeing is probably just the other browsers (including the Firefox 4 beta) performing much better but hitting a wall due to cache pressure and/or IO bottlenecks, whereas Firefox 3.6 is slow enough that it appears to scale well when it's really just running slower than the system can handle.