You can download every version of Firefox we've ever released here: ftp://ftp.mozilla.org/pub/firefox/releases/ . We have no interest in forcing users to run the latest version.
Memory use climbing like that is almost certainly a problematic (leaky) extension. Check out the tail end of Nicholas's slides for the list of the top leaky extensions, which includes/included AdBlock+, LastPass, etc. Once you get near 2.2 GB or so on WinXP, the OS starts freezing you for longer and longer as it twiddles page tables and the like, eventually freezing you for more than 30 seconds. (Watch user vs. system CPU when it freezes.)
One other surprising source of a slow browser: Huffington Post. Their pages have a Twitter feed display which progressively slows down your browser, since it keeps adding more data to the scrolling list of tweets. Never leave a HuffPost page up (with a common hashtag for the search, at least).
I can (now) run for weeks with a browser with 850 tabs (though only 50-150 of them loaded, normally).
340 tabs... Piker.
Ironically the biggest memory user in about:memory is facebook (>100MB). Ironic, because I've never opened a tab to facebook, and I don't have a facebook account. This is the result of all those "Like" buttons, each of which takes an insane amount of memory. 100MB for facebook Like buttons. Really.
a) that was a long time ago
b) you didn't say 'why' - the quality/performance/etc bar for patches to pass for a browser (now) used by 400m people is pretty high - but that bar is very similar for internal developers (employees or not). Back in the Netscape days, new Netscape employees would frequently get annoyed that they weren't just able to check their code in; they had to get reviews and then have someone with checkin privs do it, just like any other contributor.
c) some patches make user or web-visible changes that aren't agreed to
d) some bugs (when they get a serious look in patch review) aren't ones we'd agree are bugs, or fixing them would cause other problems.
n.b. I work for Mozilla now, and in the pre-FF days I was a 3rd-party mozilla contributor and release 'driver'
Mozilla/Firefox is experimenting with SPDY, as you can see if you read Patrick McManus' blog (http://bitsup.blogspot.com/2011/09/spdy-what-i-like-about-you.html)
SCTP can be run over UDP, but in any case it needs the server you're talking to to support it (like SPDY). There are some parallels between them, though SCTP has the advantage of not blocking when a single packet is lost, the way TCP (which SPDY runs over) does. A researcher at UDel has some nice examples of HTTP-over-SCTP (and SPDY as well).
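For the curious, SCTP's multi-streaming is what buys you the head-of-line-blocking win. Roughly, with the Linux SCTP API it looks like the sketch below - purely illustrative, error handling omitted, the address/port/stream numbers are made up, and it's not anything from the browser:

    /* Sketch: one SCTP association carrying several independent streams.
     * Compile with -lsctp on Linux; error handling omitted for brevity. */
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <netinet/sctp.h>

    int main(void)
    {
        int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);

        /* Ask for several outbound/inbound streams when the association is set up. */
        struct sctp_initmsg init = { .sinit_num_ostreams = 8,
                                     .sinit_max_instreams = 8 };
        setsockopt(sd, IPPROTO_SCTP, SCTP_INITMSG, &init, sizeof(init));

        struct sockaddr_in srv = { .sin_family = AF_INET,
                                   .sin_port = htons(8080) };
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
        connect(sd, (struct sockaddr *)&srv, sizeof(srv));

        /* Send a request on stream 3; a lost packet only delays this stream,
         * not the others (unlike a stall on a TCP connection carrying SPDY). */
        const char *req = "GET / HTTP/1.1\r\nHost: example\r\n\r\n";
        sctp_sendmsg(sd, req, strlen(req), NULL, 0,
                     0 /* ppid */, 0 /* flags */, 3 /* stream */,
                     0 /* ttl */, 0 /* context */);
        return 0;
    }

Each request can go out on its own stream, so a retransmit on one stream doesn't stall delivery on the others.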
Running Firefox on Fedora 11 - I typically have 10-15 windows open with *450* tabs - and it runs for months like that without loading the system. NOTE: Flashblock is a must, *especially* under Linux.
One thing that really does help is BarTab (under 3.x; under 4.x, until it's available, you can set a simple config var to get a similar result - see below). This keeps it from actually loading tabs when restoring a session until you switch to that tab. It's the only way you can get away with so many tabs in any browser...
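For the 4.x route, the pref (from memory, so the exact name may differ by version) is in about:config:

    browser.sessionstore.max_concurrent_tabs = 0

Setting it to 0 makes session restore leave tabs unloaded until you actually click on them, which is roughly what BarTab does.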
Or look at a couple of Ojo 900s (also SIP) - not large screens, but rock-solid 30fps even at low bandwidth; they'll run 24/7 and the fee is low (and they're cheap). You can dial the bandwidth down if you want; they'll give a nice experience down to 120Kbps, and can go down to 80K@30fps. At 250K they're great. I know someone who had dinner with his fiancée every night, 1000 miles apart, using Ojos. They'd leave them on as soon as both of them were home, in the kitchen while they both cooked.
512-byte sectors were known to be a performance bottleneck LONG before the late '90s. At Commodore, I put in full support for any power-of-two HD sector size back around '92-ish (plus the FS supported allocation blocks of power-of-two multiples of sectors, which most filesystems have had for quite a while). Larger sector sizes were a win for performance, much the way larger allocation sizes in the FS were a big win, though more related to low-level overhead.

Certainly on the platters larger sector sizes are a win - we experimented with that for floppies, and used a floppy format that got much of the benefit (removal of all inter-sector gaps, replaced with a single gap per track - you lost the ability to write single sectors, but got 10% more storage and faster transfer times - effectively one big sector, but with additional sync/etc headers that let you read data starting at any logical sector, to reduce rotational latency). Similar logic applies to HD platters - larger sectors let you get more user bits on a track and higher transfer rates, but decrease the granularity for writing data.
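To make the power-of-two point concrete: if both the sector size and the sectors-per-allocation-block count are powers of two, mapping a byte offset to a block is just shifts and masks, no divides. A toy sketch (hypothetical names and sizes, nothing like the actual FFS source):

    /* Toy illustration: byte offset -> allocation block with shifts/masks. */
    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SHIFT            9   /* 512-byte sectors; 12 would be 4K */
    #define SECTORS_PER_BLOCK_SHIFT 2   /* 4 sectors per allocation block   */
    #define BLOCK_SHIFT  (SECTOR_SHIFT + SECTORS_PER_BLOCK_SHIFT)

    int main(void)
    {
        uint64_t offset = 1234567;                    /* byte offset on the partition */
        uint64_t block  = offset >> BLOCK_SHIFT;      /* which allocation block       */
        uint32_t within = (uint32_t)(offset & ((1u << BLOCK_SHIFT) - 1));

        printf("block %llu, offset %u within the block\n",
               (unsigned long long)block, (unsigned)within);
        return 0;
    }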
(From the fingers of a multi-decade Emacs user, who misses the Amiga (and Sun) keyboard layouts).
That's fine, if you can read the papers, and read the papers confirming (or not) the observations, etc. For example, with the whole autism/vaccine kerfuffle, the original paper by a British doctor has been debunked, and apparently he made up and/or misrepresented his data. Plus various doctors (whom the public conflates with scientists, which is sometimes fair and sometimes not) make all sorts of claims, often based not on scientific methods or verifiable proof, but instead on personal opinion/experience and a few particular cases they've seen. The problem is that it's way too easy to jump to an unwarranted conclusion, or to do what humans are all too good at - picking facts that support what we already believe or want to believe.
The public has little or no understanding of how science works (even many non-scientist academics don't). Combine that with the modern media's preference not to interpret, but instead to present all points of view as equivalent (or to prefer certain points of view based on politics), and it's easy to see how the public can reach the belief that science is just opinion too - that you can pick who to agree with, based on what you want to be true.
Not all that much of the OS was assembler - the biggest piece was FFS (which subsumed OFS), and honestly probably shouldn't have been in ASM - but space was *tight*. Sure, quite a few of the drivers were in assembler, and performance-critical parts of Exec were in ASM, but that was almost required at the time for low-level HW interfacing. Much of the OS was in C. (I was responsible for removing the majority of the BCPL code (look it up on Wikipedia) used in AmigaDOS for OS 2.0.)
It was all fairly carefully designed, and a lot of work went into making it bulletproof and snappy. While there are huge benefits to memory protection nowadays, most Amiga programs and certainly the OS were quite resilient to pressures, such as allocation failures, which would crush almost all apps today. Error paths were much more likely to get tested, and the path wasn't the library calling exit(1) for you when an allocation failed.
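To illustrate what I mean by the error path being a real, tested path - a plain-C sketch with hypothetical names, not actual Amiga OS code - the failure is propagated and handled instead of the library exiting for you:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *duplicate_name(const char *name)
    {
        char *copy = malloc(strlen(name) + 1);
        if (copy == NULL)
            return NULL;          /* propagate the failure; don't abort */
        strcpy(copy, name);
        return copy;
    }

    int main(void)
    {
        char *n = duplicate_name("workbench");
        if (n == NULL) {
            fprintf(stderr, "out of memory; cleaning up and carrying on\n");
            return 1;             /* graceful exit path, not a crash */
        }
        printf("%s\n", n);
        free(n);
        return 0;
    }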
That said: it's 15 years behind the times now. No major improvements have been made (some, yes, but nothing major). Dave is basically right - and in the last year we were trying to break with the old hardware design, though there was one last big step left in it that actually got to the early prototype stage (AAA). We hadn't planned out where software would go, but if you look at what Apple did you probably get a hint of what we might have done. It would have been tough, though, since we didn't have the resources to throw at emulation at the time that Apple did. In the last year, the SW group (which I ended up running a good part of) was down to a handful of people (10, I think); the "OS" group was down to maybe 3 or so. The writing was mostly on the wall around a year before *poof*, and much of the team left in '92-93 for places like Scala (where many still are, and where I went after the bankruptcy), 3DO (which had a strong ex-Amiga and ex-Commodore influence from the start), etc.
I wish it had been open-sourced back in '95 or so. It may not have survived intact, but it might have formed the core for a strong competitor to Linux/etc and at least pushed them to improve their responsiveness much earlier on.
Cornell researchers have built a robot that works out its own model of itself and can revise the model to adapt to injury. First, it teaches itself to walk. Then, when damaged, it teaches itself to limp.
Although the test robot is a simple four-legged device, the researchers say the underlying algorithm could be used to build more complex robots that can deal with uncertain situations, like space exploration, and may help in understanding human and animal behavior.
This is where it gets interesting. Antigua has complained to the WTO about this, and the US doesn't have much of a case. The WTO has already ruled in favor of Antigua, and that was before the legislation even passed; Antigua's case is even stronger now. At this point you may be saying: so what, Antigua can't really hurt the US with trade sanctions. But the WTO can do a lot more than just authorize trade sanctions. It can exempt Antigua from its WTO obligations, specifically the obligation to support US intellectual property laws. I wonder what the RIAA would think of cheap-mp3s.ag, 100% legal according to international law? Maybe the corporate lobbyists can get the US to actually respect things like its treaty obligations and international law.