timothy should get fired
You can't fire him. He's a 5-line perl script. All you can do is file bug reports.
LGPLv2 upgrades to GPLv2, which contains the "or any later version" clause
False. LGPLv2 does not contain an 'or any later version' clause. The rest of this paragraph is not quoted because it didn't make any sense.
(It may not be compatible with GPLv2 minus the "any later version" clause, but that's an obvious result of having one thing saying version 3 or newer, and the other thing saying version 2 only).
The 'or later' clause is not part of any version of the GPL or LGPL. It is simply a convention that the FSF encourages users of their licenses to adopt. You don't have to modify the license to remove it (which, by the way, you can't legally do because the FSF asserts copyright on their licenses and does not permit derived works).
It's perfectly normal for an embedded system. It is not normal, or sensible, for a general-purpose computing device. It is certainly not sensible for a thing that needs to receive regular security updates to have most of the (vulnerable) code in read-only storage.
And complaining that a 1GHz phone with 512MB of RAM is underpowered is ridiculous. It has far more horsepower than you need to run 4.1; it's only some of the newer apps that will struggle. I had a laptop with worse specs that ran far more demanding applications than anything I'd run on a mobile phone.
Bullshit. The problem is Android's notion of a system application. These are things that can't be uninstalled and must be on the internal storage. Some of these really are system services, but others are just shovelware. The 512MB on the Nexus One is more than adequate for a more recent Android, if you move some of the non-essential crap onto the SD card. The Nexus One came with a 4GB SD card and supports up to 32GB, so there's no reason not to do this, except that then you'd be able to uninstall some of the Google stuff.
This model, by the way, is especially wasteful because often these system components need updating, and due to the design of the Android filesystem layout they can't overwrite the old components, so you end up having to have two copies of a load of stuff installed, and you can't delete the unused one even though that's the one on the smaller storage device...
Star Division was bought by Sun and the bits they owned were open sourced as OpenOffice. It was then renamed OpenOffice.org once they noticed someone else owned the OpenOffice trademark.
For years, Sun contributed 80% of the new code. Novell contributed about 10% and sulked that they weren't recognised as much as they felt they should be.
Novell started go-oo.org, containing their own patches to OpenOffice.org, including several things that were of dubious legality (e.g. implementing Microsoft patents that Microsoft guaranteed that they would not sue Novell for, but didn't extend this guarantee to anyone else).
Oracle bought Sun and most of the OpenOffice developers left (some voluntarily, others not) and found new employment.
Novell saw this as an opportunity to become the dominant players and pushed the LibreOffice brand for OO.o plus their patches. Lots of people fell for this and LibreOffice started to gain a lot more traction.
Most of the work in both forks is now by ex-Sun people. The code is horrible in both, although both teams are slowly trying to fix it.
Back in the mid-'90s a friend of mine argued that the Mac OS 9 kernel was superior to the NeXTSTEP kernel (Mach) because OS 9 used cooperative (a.k.a. non-preemptive) multitasking and Mach was pre-emptive.
It wasn't a totally stupid argument. Cooperative multitasking can achieve higher throughput (on single-processor machines, at least), because you have less cache churn and you can schedule exactly when you want to yield. The downside is that one misbehaving thread can make the entire system unresponsive. Most supercomputing workloads use a cooperative model for this reason: throughput is the most important consideration and all of the code on a given node is trusted.
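The trade-off is easy to see in a toy round-robin cooperative scheduler. Here's a minimal sketch in Python using generators as tasks (all names are illustrative, not from any real scheduler): each task decides exactly where it yields, and a task that never yields starves everyone else.

```python
from collections import deque

def task(name, steps):
    """A cooperative task: do one unit of work, then yield control."""
    for i in range(steps):
        # ... one unit of work would go here ...
        yield f"{name} step {i}"  # explicit yield point, chosen by the task

def run(tasks):
    """Round-robin scheduler: no preemption, a task runs until it yields."""
    ready = deque(tasks)
    log = []
    while ready:
        t = ready.popleft()
        try:
            log.append(next(t))  # resume the task until its next yield
            ready.append(t)      # it yielded cooperatively, so requeue it
        except StopIteration:
            pass                 # task finished, drop it
    return log

print(run([task("A", 2), task("B", 2)]))
```

Because yields happen only where the tasks put them, there are no surprise context switches mid-computation (good for cache behaviour), but replace one of those tasks with an infinite loop that never yields and the whole "system" hangs, which is exactly the OS 9 failure mode described above.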
When OS X was introduced, it ran on 266MHz PowerPC machines. Efficient CPU usage was a lot more important than it is now. I'm typing this on a Mac that only sees the CPU usage go over 20% when I'm doing a big compile job. A little bit of deviation from maximum theoretical throughput is lost in the noise.
Writing complex applications in JavaScript is possible, but so is writing complex applications in assembly. That doesn't make it a good idea.
The moon is made of green cheese. -- John Heywood