HTML5 is not a political action committee
No, it's a (draft) standard. It is being standardised by the W3C, which is a technical standards committee with a mission statement. The goals of the W3C are inherently incompatible with any standard that defines portions that cannot be implemented in a compatible way by any vendor that wishes to participate in the market, and that includes DRM.
And that means that companies insisting on DRM will always be unable to reach all of their potential market. And that means that there is a market opportunity for companies that are willing to release without DRM.
As TFA says, the lack of DRM on CDs made it possible to sell portable music players like the iPod along with software to rip all of your CDs. And that increased the demand for recorded music. Now, a significant fraction of the big record labels' income derives from selling DRM-free music downloads. The weak DRM on DVDs was thoroughly circumvented, but it was still strong enough legally to block the sale of set-top boxes that ripped your entire DVD collection and exposed it via a nice menu. It was still strong enough that applications like iTunes and Windows Media Player never got the ability to rip DVDs to play back on your computer. It strangled technological innovation in an entire sector.
Most depressingly, it actually acted against the interests of the people pushing it. I'd love to subscribe to a service like Netflix that let me download films in a DRM-free format (it could even limit it to, say, 30 hours a month). I'd love to have an easy source of films to stick on my laptop or my tablet to watch while travelling. I'd even have liked to be able to buy a DVD at the airport to watch on the plane, but half the time the region locking on my laptop's DVD drive would prevent me from being able to play it back, so I don't even bother trying.
Heck, Google is trying to add native code support to HTML5 which is actually PROCESSOR specific.
Actually, the PNaCl stuff that Google is pushing defines an ABI for LLVM IR, which is platform independent. It defines a 32-bit address space and a set of calling conventions that don't match any host platform, so you need some adaptors for calling into native libraries, but that's fine because you should only be able to call into native libraries via well-defined interfaces anyway for security.
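If you want to see what platform independence means in practice, here's a minimal sketch in C (the pnacl-clang driver name is from memory of the old NaCl SDK, so treat the build step as an assumption): the same source builds to a single portable pexe, and under the PNaCl ABI pointers and longs are 32 bits regardless of the host CPU.

    #include <stdio.h>

    int main(void)
    {
        /* Under the PNaCl ABI the address space is 32-bit (ILP32), so both
         * of these print 4 on every host once the portable bitcode has been
         * translated, unlike a native build where they follow the host. */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        return 0;
    }

Built natively on a typical x86-64 machine it would print 8 for both; the fixed ILP32 model is part of what makes one pexe runnable everywhere.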
And, as the other poster pointed out, there is a difference between a public hotspot and being in a public place.
The whole "being inhabitable for hundreds of thousands of years" means that you have extremely large amounts of something that is stable enough to be safe to handle.
It depends a lot on the contaminant. Something like tritium is a problem, for example, because it has a relatively short half life, but it will bond to oxygen and form water and if you drink it then it can cause serious problems. Radon gas is also a problem (present in a lot of places with granite) because it is heavier than air and so accumulates in any enclosed space: if you breathe it in then it is quite dangerous.
There's also the problem that a lot of the byproducts of a nuclear reactor are only mildly radioactive, but highly toxic for other reasons. The low decay rate means that they remain toxic chemicals for a long time. On the other hand, this isn't too different from any other chemical plant if there's an accident.
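To put rough numbers on the trade-off between half-life and activity, here's a back-of-the-envelope calculation in C (the 200,000-year figure is just a stand-in for "hundreds of thousands of years"; compile with -lm):

    #include <math.h>
    #include <stdio.h>

    /* Activity is A = lambda * N with lambda = ln(2) / half-life, so for the
     * same number of atoms a long-lived isotope decays far more slowly: it is
     * much less radioactive per gram, it just stays mildly radioactive for a
     * very long time. */
    static double activity_per_mole(double half_life_years)
    {
        const double avogadro = 6.022e23;
        const double seconds_per_year = 3.156e7;
        double lambda = log(2.0) / (half_life_years * seconds_per_year);
        return lambda * avogadro; /* decays per second (Bq) per mole */
    }

    int main(void)
    {
        double tritium = activity_per_mole(12.3);        /* tritium half-life */
        double long_lived = activity_per_mole(200000.0); /* illustrative value */
        printf("tritium:    %.2e Bq/mol\n", tritium);
        printf("long-lived: %.2e Bq/mol\n", long_lived);
        printf("ratio:      %.0fx\n", tritium / long_lived);
        return 0;
    }

The ratio is just the ratio of the half-lives (about 16,000x here), which is why the really long-lived waste tends to be more of a chemical toxicity problem than a radiological one.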
One of the things I've learned from open source development is that it's really helpful to have some deadlines and milestones. If your group can set these itself, it works well. If not, then you end up with the Valve Time phenomenon, where things keep slipping because you want to get one more feature in, or one more bug fixed.
In a commercial setting, it's also often not enough to just get the work done. You have to also get it to your customers. This means that you need sales people to be aware of when you can deliver it and what it will do. You need to pay developers, which means that you need to ensure that the money going out is not more than the money coming in, except over short periods when you can cover it from reserves. This also requires that you be aware of when things will be finished and how much you are charging.
I did a lot of freelance work where the customer set a problem, I gave them a time estimate, and by the end I presented them with a solution. That works well, but it's hard to scale.
So anybody could implement any of these CPUs independently of the companies that started them, and not worry about any patent violations?
Well, not quite. It's definitely possible to implement a CPU that is compatible with a 20-year-old CPU without infringing any patents, because any patents that were used in the original have now expired. That doesn't mean that the implementation will be patent free, however. For example, our branch predictor is cleverer than the one in any CPU shipping 20 years ago, but I've not done a patent search so I don't know if it's covered by more recent patents. I intentionally avoided some techniques I know to be patented, but some of the older ones might have been patented around 1996-2000ish. That said, if we have to change the branch predictor to avoid patents then it's not a big deal - it doesn't change the ISA (and our CPU runs in an FPGA, so deploying a new version takes a few hours).
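For anyone wondering why the predictor is so easy to swap, here's a minimal sketch of the textbook two-bit saturating counter scheme (decades old, and not a claim about what our CPU actually uses). It lives entirely behind the ISA, so replacing it changes how often the pipeline flushes, not what programs compute.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bimodal predictor: a table of 2-bit saturating counters indexed by low
     * bits of the branch PC.  Values 0-1 predict not-taken, 2-3 predict taken. */
    #define TABLE_BITS 10
    static uint8_t counters[1 << TABLE_BITS]; /* start at 0: strongly not-taken */

    static bool predict(uint32_t pc)
    {
        return counters[(pc >> 2) & ((1 << TABLE_BITS) - 1)] >= 2;
    }

    static void update(uint32_t pc, bool taken)
    {
        uint8_t *c = &counters[(pc >> 2) & ((1 << TABLE_BITS) - 1)];
        if (taken && *c < 3)
            (*c)++;
        else if (!taken && *c > 0)
            (*c)--;
    }

    int main(void)
    {
        /* A loop branch taken 9 times then falling through: after warm-up the
         * predictor only mispredicts the final not-taken iteration. */
        uint32_t pc = 0x1000;
        int mispredicts = 0;
        for (int trip = 0; trip < 100; trip++)
            for (int i = 0; i < 10; i++) {
                bool taken = (i != 9);
                if (predict(pc) != taken)
                    mispredicts++;
                update(pc, taken);
            }
        printf("mispredicts: %d / 1000\n", mispredicts);
        return 0;
    }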
By MIPS IV, you mean the R8000, right, or is it R5000?
R8000 was the first MIPS IV chip, yes.
I'm curious about whether Oracle would then come out w/ a SPARC v10 in that case, although it's not as if there are others aside from them who would be interested in such a CPU.
Well, Sun did something similar with the UltraSPARC spec. SPARCv9 doesn't specify the privileged mode instruction set, and this used to vary. The UltraSPARC spec that they released along with the T1 specified this and was intended to provide a stable interface for operating systems. Some parts of that may have been difficult to implement without trampling on patents. There are also likely to be issues with things like SIMD extensions.
I'd love to see some of these CPUs, such as the MIPS, get resurrected and used in newer IPv6 routers and other networking gear.
All Juniper routers run a tweaked FreeBSD on 64-bit MIPS and a lot of low-end routers use 32-bit MIPS chips.
With more open hardware, it would be easier to manage
The problem is always competing with companies that have large volumes. Intel is basically a process generation ahead of everyone else, and they have economies of scale that only the big ARM SoC vendors can match. There isn't really a business case for using a reimplementation of an old CPU architecture when you can get a Cortex A15 that will outperform it for less money. The only reason you'd want to is so you can add some custom instruction set extensions that massively speed up your workload, and even then they'd need to give an order of magnitude speedup for it to be a net win. In Juniper's case, they go with in-order MIPS cores with large SMP and SMT support, so that they can do a lot of relatively simple packet processing tasks in parallel. There isn't much branching or floating point in packet filtering (it's mostly integer arithmetic, a large amount of data shuffling, and a lot of data dependencies that completely kill performance on superscalar or out-of-order chips), so general-purpose CPUs (and GPUs) are optimised in all of the wrong places. For them, two simple in-order cores can be faster than one complex out-of-order superscalar core.
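To make the data-dependency point concrete, here's a minimal sketch of the kind of rule-chain walk a packet filter does (made-up structures, nothing to do with Juniper's actual code). Every iteration needs the pointer loaded by the previous one, so a wide out-of-order core finds almost nothing to execute in parallel within a single packet; the easy parallelism is across packets, which is exactly what lots of cheap in-order cores and threads give you.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical 5-tuple filter: a linked chain of rules walked per packet. */
    struct rule {
        uint32_t src, dst;
        uint16_t sport, dport;
        uint8_t  proto;
        bool     accept;
        struct rule *next;
    };

    struct pkt {
        uint32_t src, dst;
        uint16_t sport, dport;
        uint8_t  proto;
    };

    bool filter(const struct rule *chain, const struct pkt *p)
    {
        for (const struct rule *r = chain; r != NULL; r = r->next) {
            /* Integer compares plus a dependent load of r->next: no floating
             * point, and the next iteration can't start until this load lands. */
            if (r->src == p->src && r->dst == p->dst &&
                r->sport == p->sport && r->dport == p->dport &&
                r->proto == p->proto)
                return r->accept;
        }
        return false; /* default deny */
    }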
"It is better for civilization to be going down the drain than to be coming up it." -- Henry Allen