Don't see any Linux vendors bragging about what a big extra "feature" GRUB is, and it does the same thing. Often more transparently.
Really? I admit I haven't used GRUB for a couple of years and it may have improved since I last did, but I don't remember it letting me pop in a Windows CD, helping me resize my existing partitions, then installing Windows and setting up the correct third-party drivers for my hardware. Does it really do all of that now?
You're right, I've not heard of the Mali, but I see your point. It was only announced a couple of months ago, so it's not shipping yet and I've not seen anyone license it.
You're missing the point about the Atom and Nano though. Your current computer has more than 4GB of RAM, but your current handheld (or netbook) doesn't. You need the 64-bit ISA on a recent x86 system because of the performance that you get from a better-designed instruction set. On ARM, it's irrelevant. Only the extra address space is important.
None of the current low-power GPUs has anything like 512MB of RAM - that's twice the total amount of system memory. It will be at least two generations before you come close to having 4GB of address space available in a handheld, and more likely four. I'm not sure what your GPU is if it had 512MB of RAM three years ago. My laptop is a similar age and its GPU has only 128MB of RAM; most current ones ship with 128 or 256MB and only the top of the line comes with a 512MB GPU. Unless, of course, you're talking about desktop GPUs, in which case I can only assume that you are an idiot - you may as well compare the performance of the Cortex A8 and the POWER6.
I note that you didn't give any examples of processes that you might run on a handheld or ultraportable that need 64 bits of address space. I don't have any on the (64-bit) machine that I use for work (and only one that needs more than a 30-bit address space), but maybe you can think of some.
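For concreteness, "n-bit address space" maps to byte counts in a simple way: 30 bits span 1GiB, and 32 bits span 4GiB. A quick sketch (the `address_bits` helper is hypothetical, just for illustration):

```python
import math

def address_bits(nbytes):
    """Smallest number of address bits that can span nbytes of memory."""
    return max(1, math.ceil(math.log2(nbytes)))

GiB = 2 ** 30
print(address_bits(GiB))      # 30 bits cover exactly 1GiB
print(address_bits(4 * GiB))  # 32 bits cover 4GiB; anything larger needs more
```

So a process using just over 1GB of virtual address space still only needs 31 bits, comfortably inside what a 32-bit CPU provides.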
Your code is not as good as you think it is
Even if it is, the person reading it might not be as good a developer as you, or may be as good (or better) but with different experiences. In both cases, they may not be able to read and understand your good code without comments. When they change it and it breaks as a result, then it's your fault.
It's possible to write code that doesn't need any comments. Code where the next person to read it will understand exactly what it does and why, just from the code. The difficult thing is knowing when you've done this and when you've written something confusing. If you can tell the two apart with 100% accuracy, then you can skip writing comments.
Unfortunately, I've never met a developer that could. I've met quite a few that thought they could, however...
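A minimal sketch of the point: even perfectly readable code can only say *what* it does; it takes a comment to record *why*. The function and the constant here are hypothetical, purely for illustration:

```python
def scaled_capacity(requested):
    """Return the usable capacity for a requested allocation size."""
    # Reserve 1/8 headroom for allocator bookkeeping - a figure that came
    # out of profiling. The code alone can't tell the next reader that;
    # without this comment, 0.875 is just a magic number.
    return int(requested * 0.875)

print(scaled_capacity(1024))  # 896
```

The body is as self-documenting as it can get, and the next developer still can't safely change 0.875 without the comment.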
Um, what? ARM isn't trying to compete with GPU manufacturers. Most ARM SoCs come with a GPU from someone like PowerVR; ARM doesn't design GPU cores. As for 64-bit processors, there's not yet any reason to build them. Everyone wants to move to 64-bit on x86 not for the larger word size, but because the ISA is a bit more sane (more GPRs, fewer restrictions on source and destination registers, a simpler memory model), giving an overall speed benefit.
On other architectures, this is irrelevant. The only time you need a 64-bit CPU, if you've got a sane architecture, is when you want more than 4GB of virtual address space. Given that current handhelds come with at most 256MB of RAM, and most don't enable swap (or, if they do, only about 64MB of it), this isn't likely to be an issue for a few years.
Adding addressing extensions to the ARM ISA to allow more than 4GB of physical memory might be useful then, but even now very few processes use more than 4GB of address space. On my current (64-bit) system, the largest of the 128 processes that I have running is using 1.17GB of virtual address space, the next largest is 564MB. None of the processes benefit from being 64-bit, they just benefit from the other changes to the ISA. They would actually be faster if pointers were still 32 bits wide and they managed to keep the other advantages of the architecture.
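The "faster with 32-bit pointers" claim is mostly about cache density. A back-of-the-envelope sketch (the node layout and 64-byte cache line are assumptions, not measurements):

```python
CACHE_LINE = 64  # bytes; typical for current x86 and ARM parts

def nodes_per_line(pointer_bytes, payload_bytes=4):
    """List nodes (one 4-byte payload + one next pointer) per cache line."""
    return CACHE_LINE // (pointer_bytes + payload_bytes)

print(nodes_per_line(4))  # 8 nodes per line with 32-bit pointers
print(nodes_per_line(8))  # 5 nodes per line with 64-bit pointers
```

Doubling the pointer width costs this hypothetical structure roughly a third of its cache density, which is the overhead schemes like 32-bit pointers on a 64-bit ISA aim to avoid.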
The TouchBook is also quite bulky; it's bigger than the Newton in all dimensions, and the Newton was a bit too bulky to fit in a pocket. I've basically abandoned desktops now and use laptops for everything, and for me the next logical step is a computer that fits in my pocket. Netbooks are a niche I just can't get excited about. They're too big to fit in my pocket, so I can only take them to places where I'm going to carry a bag. That makes them no more portable than a laptop.
Currently, I have a Nokia 770, which is more portable than a laptop because it fits in a coat pocket. I can even fit a folding bluetooth keyboard, which means I can take it to places where I won't want to carry something extra. The 770 has a relatively slow CPU and no GPU or DSP drivers. It also runs a pretty crappy Linux (Maemo, which is a bitch to develop for), but it's okay for web browsing and running vim. I'd like to replace it with something in a similar form factor (although a smaller screen surround would be good) and a Cortex A8 or (ideally) A9 and decent GPU drivers. Ideally running a BSD of some kind, but if not at least something like Debian.
The scary thing about spam is that gmail actually manages to filter it with very few (any?) false positives.
The scary thing about spam is that gmail insists on bouncing it to whoever is in the From: field, ignoring SPF, and resulting in you getting a few hundred spam emails courtesy of Google whenever a spammer spoofs your address.
'Derivative work' is a term that applies entirely to copyright law. The closest analogue in trademark law is 'passing off', which is very different. Distributing a derivative work of a copyrighted work without permission from the copyright owner is copyright infringement.
You can argue that the protections of derivative works are too strong (and I would agree with you) but arguing that they don't exist is entirely wrong.
They aren't used often because they are very expensive. You don't know how many digits are in a variable length integer until you run the operation, so you need to perform operations that don't combine well with pipelining (and you can't get fixed alignment, so you don't get good cache usage).
99% of the time, they are not even useful. A lot of programmers go through their entire careers without encountering integer overflow problems. Most high-level languages provide support for arbitrary-precision integers and programmers can use them when required, but few need to.
Storing time in them is entirely pointless, because a fixed width 64-bit integer is more than enough for vastly longer than any modern hardware or software is expected to last. This is true for a lot of things. The overhead of a variable width integer doesn't give you any benefit most of the time.
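Both halves of that argument are easy to check in a language with built-in arbitrary-precision integers (Python here): plain ints never overflow, and a fixed-width signed 64-bit count of seconds already spans geological time. The 365.25-day year is an approximation:

```python
# Arbitrary precision comes for free in Python: no overflow at 2**64.
big = 2 ** 64
assert big + 1 > big

# Range of a signed 64-bit integer counting seconds since an epoch:
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31.6 million, approximate
years = (2 ** 63 - 1) / SECONDS_PER_YEAR
print(f"{years:.3g} years")             # roughly 2.92e+11 years
```

Nearly 300 billion years of range, so a fixed 64-bit timestamp outlasts any plausible hardware or software by many orders of magnitude.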
Nobody's gonna believe that computers are intelligent until they start coming in late and lying about it.