
Comment Re:IonMonkey, JagerMonkey, TraceMonkey, SpiderMonk (Score 2) 182

The problem is that as runtimes evolve, the compiled format changes. Furthermore, the end result of the compilation depends on the exact processor the user has and, at least in SpiderMonkey, on things like the location of the Window object in memory.

Not only that, but the final compiled version is unsafe machine code, so a browser couldn't trust a web page to provide it anyway.

So pages wouldn't be able to provide a final compiled version no matter what. They might be able to provide bytecode of some sort, but the bytecode format browsers use isn't fixed either (assuming one exists at all; V8 doesn't have a bytecode). And for security reasons, compiling JS to bytecode would just be replaced by some sort of bytecode verification pass, so there may not even be much of a performance win from the switch.
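To make the "depends on where the Window object lives in memory" point concrete, here's a toy sketch (plain C++, entirely made up, not actual SpiderMonkey code; emitLoadGlobal and WindowObject are invented names) of a JIT baking a heap address into the machine code it emits:

#include <cstdint>
#include <cstdio>
#include <vector>

// Stand-in for the per-page global object whose address a JIT might
// bake directly into generated code.
struct WindowObject {};

// Emit x86-64 machine code for "mov rax, <address of window>".  The
// immediate operand is the object's address in *this* process, so the
// bytes come out different on every run (ASLR, heap layout, etc.).
std::vector<uint8_t> emitLoadGlobal(const WindowObject* window) {
    std::vector<uint8_t> code = {0x48, 0xB8};  // REX.W + mov rax, imm64
    uint64_t addr = reinterpret_cast<uint64_t>(window);
    for (int i = 0; i < 8; ++i)
        code.push_back(static_cast<uint8_t>(addr >> (i * 8)));
    return code;
}

int main() {
    WindowObject window;
    for (uint8_t byte : emitLoadGlobal(&window))
        std::printf("%02x ", byte);  // different bytes on every run
    std::printf("\n");
    return 0;
}

Run it twice and the bytes differ, which is exactly why a page can't ship a "final compiled version" ahead of time.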

Comment Re:IonMonkey, JagerMonkey, TraceMonkey, SpiderMonk (Score 5, Informative) 182

A short summary:

1) TraceMonkey turned out to have very uneven performance. This was partly because it type-specialized very aggressively, and partly because it didn't deal well with very branchy code due to trace-tree explosion. As a result, when it was good it was really good (for back then), but when it hit a case it didn't handle well it was awful. JaegerMonkey was added as a way to address these shortcomings by having a baseline compiler that handled most cases, reserving tracing for very hot type-specialized codepaths.

2) As work on JaegerMonkey progressed and as Brian Hackett's type inference system was being put in place, it turned out that JaegerMonkey + type inference could give performance similar to TraceMonkey, with somewhat less complexity than supporting both compilers on top of type inference. So when TI was enabled, TraceMonkey was switched off, and later removed from the tree. But keep in mind that JaegerMonkey was designed to be a baseline JIT: run fast, compile everything, no fancy optimizations.

3) IonMonkey exists to handle the cases TraceMonkey used to do well. It has a much slower compilation pass than JaegerMonkey, because it does more involved optimizations. So most code gets compiled with JaegerMonkey, and then particularly hot code is compiled with IonMonkey.

This is a common design for JIT systems, actually: a faster JIT that produces slower code and a slower JIT that produces faster code for the cases where it matters.
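If it helps, the tier-up logic is conceptually something like the following. This is a from-scratch sketch, not code from any actual engine; the names and the hotness threshold are invented.

#include <cstdint>
#include <cstdio>
#include <functional>
#include <unordered_map>

// Toy model of a two-tier JIT: every script gets the cheap baseline
// compile, and only scripts that prove themselves hot pay for the
// slow optimizing compile.
using CompiledCode = std::function<void()>;

CompiledCode baselineCompile(uint32_t scriptId) {
    std::printf("baseline-compiling script %u (fast compile, slower code)\n", scriptId);
    return [] { /* unoptimized generated code would run here */ };
}

CompiledCode optimizingCompile(uint32_t scriptId) {
    std::printf("optimizing script %u (slow compile, faster code)\n", scriptId);
    return [] { /* optimized generated code would run here */ };
}

constexpr uint32_t kHotThreshold = 1000;  // made-up number

struct TieredJit {
    std::unordered_map<uint32_t, uint32_t> hits;
    std::unordered_map<uint32_t, CompiledCode> code;
    std::unordered_map<uint32_t, bool> optimized;

    void run(uint32_t scriptId) {
        if (!code.count(scriptId))
            code[scriptId] = baselineCompile(scriptId);    // everything gets this
        if (!optimized[scriptId] && ++hits[scriptId] > kHotThreshold) {
            code[scriptId] = optimizingCompile(scriptId);  // only hot code gets this
            optimized[scriptId] = true;
        }
        code[scriptId]();
    }
};

int main() {
    TieredJit jit;
    for (int i = 0; i < 1500; ++i)
        jit.run(42);  // crosses the threshold and tiers up partway through
    jit.run(7);       // only ever sees the baseline compiler
    return 0;
}

Real engines also bail back down out of optimized code when its type assumptions stop holding, but that's a longer story.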

https://blog.mozilla.org/dmandelin/2011/04/22/mozilla-javascript-2011/ has a bit of discussion about some of this.

Comment Re:64bit (Score 1) 224

The fact that there is no 64-bit MSVC compiler that can produce 32-bit binaries has certainly been a problem for a number of people. It means that trying to do PGO on a large codebase compiled into a 32-bit binary runs out of address space. Both Mozilla and Google have run into this, for example; in Google's case the result was that they don't use PGO at all.

Comment Re:Windows being the laughing stock of the OS worl (Score 1) 224

Compiling is easy in a vacuum.

Fixing all the bugs introduced by the different compiler that you haven't worked around yet, then fixing all the issues due to the 64-bit plug-ins (especially Flash) having a different set of problems than the 32-bit ones, then fixing any remaining issues due to Windows-specific code possibly making dumb assumptions about the sizes of things is a different matter altogether.
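The "dumb assumptions about the sizes of things" category, for anyone who hasn't had the pleasure: the classic is stuffing a pointer into a long or a DWORD, which works on 32-bit Windows and silently truncates on 64-bit Windows (where long and DWORD stay 32 bits but pointers are 64). Generic sketch, not a specific bug from the Mozilla tree:

#include <cstdint>
#include <cstdio>

// Works by accident in a 32-bit build; on Win64 the cast throws away
// the top 32 bits of the pointer.
void stashPointerBadly(void* p) {
    unsigned long stored = (unsigned long)(uintptr_t)p;  // truncates on Win64
    void* recovered = (void*)(uintptr_t)stored;          // may no longer equal p
    std::printf("bad:  original %p, round-tripped %p\n", p, recovered);
}

// The fix: use a pointer-sized integer type.
void stashPointerCorrectly(void* p) {
    uintptr_t stored = (uintptr_t)p;  // always pointer-sized
    void* recovered = (void*)stored;
    std::printf("good: original %p, round-tripped %p\n", p, recovered);
}

int main() {
    int x = 0;
    stashPointerBadly(&x);
    stashPointerCorrectly(&x);
    return 0;
}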

Which is why 64-bit nightlies _existed_. They just don't work that well, on average.

Then the question becomes whether it's worth making (and testing, which puts even more load on the test infrastructure) builds that no one plans to ship to actual end users anytime in the next 6+ months. That's what the discussion was really about: does Mozilla keep spending time keeping these builds limping along even though there's no time to make them actually tier-1, or do they just stop doing them for now and start again when they have the resources to actually do it right?

Comment Re:Mac OS X 10.5 (Leopard) (Score 1) 137

The amount of effort needed to support multiple versions of OSX at the same time is much larger than the amount needed on Windows, because Microsoft usually bends over backwards to avoid breaking compatibility, while Apple will go out of its way to break it.

Combined with the smaller user base on Mac and the faster OS update cycle of Mac users, this means that dropping support for old MacOS versions is a much simpler call than dropping support for old Windows versions: they're more work to support, and the number of users on them is much smaller.

For perspective, about half of Mozilla's Windows users are still on WinXP (which approximately matches the overall fraction of Windows users on WinXP), while the fraction of Mac users on 10.5 was 10% and falling rapidly when support was dropped.

Comment Re:Memory hog? (Score 2) 302

Ah, 350MB in Task Manager would match the ~400MB resident measurement from about:memory.

And again, about 100MB of that is not even Firefox itself...

For the rest, the basic problem is that web sites are doing a _lot_ of JS, as are the extensions you have installed. So they're using a lot of memory for all those JS objects. :(

It would be interesting to see how much memory other browsers use on that set of sites, for what it's worth.

Comment Re:Memory hog? (Score 3, Interesting) 302

That's showing about 400MB RAM usage, and about 800MB address space. But address space includes mmapped files and reserved address space that is not actually backed by memory; it only matters for purposes of running out of a 32-bit process's 4GB address space.
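If the reserved-but-not-backed part sounds abstract, here's the distinction in a few lines of Windows API (a hedged sketch, assuming the numbers above came from a Windows machine; error handling mostly omitted):

#include <windows.h>
#include <cstdio>
#include <cstring>

int main() {
    const SIZE_T size = 100 * 1024 * 1024;  // 100MB

    // Reserving counts against the process's address space but uses
    // essentially no RAM: nothing is backed by memory yet.
    void* reserved = VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_NOACCESS);
    if (!reserved) return 1;

    // Committing (and then touching the pages) is what actually makes
    // the memory show up as resident.
    void* committed = VirtualAlloc(reserved, size, MEM_COMMIT, PAGE_READWRITE);
    if (committed)
        std::memset(committed, 0, size);  // now it really occupies ~100MB of RAM

    std::printf("reserved at %p, committed at %p\n", reserved, committed);
    VirtualFree(reserved, 0, MEM_RELEASE);
    return 0;
}

Between the two VirtualAlloc calls the process's virtual size is 100MB bigger while its working set has barely moved, which is why the 800MB address-space number overstates actual memory use.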

So OK, 400MB memory usage. Of this, about 260MB was actually allocated by the browser (see "explicit"); the rest seems to be things the OS is putting into the process's memory space (e.g. the code of the browser, the code of the libraries the browser links to, and so on).

Of this 260MB, it looks like about 70MB is RAM used by your extensions (17MB for adblock plus, 6MB for https-everywhere, etc). Another 30MB looks like it might be JS GC heap fragmentation from those extensions.

Another 40MB is the yahoo mail tab; almost all of this is the various JS gunk it's doing.

7MB is Wired.

About 6MB for Slashdot.

Another 5MB for about:addons, and about 15MB for the browser UI.

30MB unknown to about:memory.

16MB in-memory cache for the bookmarks and history databases.

10MB images.

7MB web workers used by ghostery.

That accounts for most of the memory listed as far as I can tell.

Comment Re:Memory hog? (Score 1) 302

Which tabs? Note that some web apps keep allocating more and more memory until you reload the page (e.g. Google Reader will do this) because they "cache" all sorts of stuff in global variables and whatnot.

So it's pretty easy to hit 800MB in all browsers with 5-6 tabs, especially if you leave them open for a while. :(
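For what it's worth, the pattern those apps follow looks roughly like this (sketched in C++ for concreteness; the real offenders are JS apps doing the same thing with plain objects and arrays):

#include <map>
#include <string>
#include <vector>

// "Cache it in a global forever": nothing is ever evicted, so memory
// grows with every new item touched, until the whole thing is torn
// down (for a web app, that means reloading the page).
static std::map<std::string, std::vector<char>> g_cache;

const std::vector<char>& fetchItem(const std::string& id) {
    auto it = g_cache.find(id);
    if (it == g_cache.end()) {
        // Pretend this is ~1MB of feed data we just downloaded and
        // decided to keep around "in case we need it again".
        it = g_cache.emplace(id, std::vector<char>(1 << 20)).first;
    }
    return it->second;
}

int main() {
    // Every distinct item parks another megabyte in the global cache.
    for (int i = 0; i < 500; ++i)
        fetchItem("article-" + std::to_string(i));
    return 0;  // ~500MB held until the process (or the page) goes away
}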

That said, I'd be interested in how the output of about:compartments for you compares to the list of 5-6 tabs you have open. What does about:memory say about where the memory is being used?
