Any pointers describing why they failed? L4 can run Linux as a guest with comparable performance to Xen's hypervisor, so it's not like it isn't powerful enough as a substrate.
If you're interested in understanding microkernel OS architectures, then Hurd might be useful to experiment with.
Not even. Mach is a horrible microkernel. I have no idea why GNU/Hurd hasn't switched to something more serious, like an L4 variant.
The performance hit comes from the hardware isolated process model used by modern microprocessors.
Let's be clear: the performance hit comes from the expensive x86/x64 trap handling. Trap handling on RISC processors is on the order of 30 cycles; on x86/x64 it's on the order of 2,000-3,000 cycles. The braindead x86 architecture is the only reason microkernels haven't already "taken over the world".
The L4 and EROS/CapROS microkernels did a lot of small hacks to reduce the above overhead, and they got some pretty decent performance even out of x86. But contrary to your previous claim, x86 makes good microkernels very difficult to construct.
While ideal, proper education takes decades to have effects. All the other changes are short-term with immediate benefits.
We choose between the party that taxes us to subsidize farmers and Hollywood, and the party that taxes us to subsidize banks and oil companies. You may claim there is a difference, but I don't see enough of one for it to matter.
Even assuming I grant you the above, the difference is clearly between making food cheap and life entertaining vs. making your air unbreathable and gambling with your money to the point where you may not be able to afford to eat tomorrow. No difference you say? Yeesh.
As many others have stated, use a tool that computes a hash of file contents. Coincidentally, I wrote one last week to do exactly this when I was organizing my music folder. It'll interactively prompt you for which file to keep among the duplicates once it's finished scanning. It churns through about 30 GB of data in roughly 5 minutes. Not sure if it will scale to 4.2 million files, but it's worth a try!
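For the curious, the core of such a tool is tiny. Here's a minimal sketch (not the poster's actual script, which wasn't shared): hash each file's contents and group paths by digest, leaving the interactive "which one to keep" prompt out.

```python
# Sketch of a content-hash duplicate finder. Files are read in chunks so
# memory use stays bounded even for large media files.
import hashlib
import os

def file_digest(path, chunk_size=1 << 20):
    """SHA-256 of a file's contents, read 1 MB at a time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root):
    """Map digest -> paths; any entry with more than one path is a dup set."""
    by_hash = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            by_hash.setdefault(file_digest(path), []).append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

For millions of files you'd want to pre-filter by file size first, so you only hash files that share a size with at least one other file.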
I don't know much about oauth, but this sounds like a stupid move.
No, it's how it should have been to begin with. Bearer tokens are now pure capabilities supporting arbitrary delegation patterns. This is exactly what you want for a standard authorization protocol.
Tying crypto to the authorization protocol is entirely redundant. For one thing, it immediately eliminates web browsers from being first-class participants in OAuth transactions. Bearer tokens + TLS make browsers first-class, and that's a pattern already used on the web quite a bit, albeit not as granularly as it should be.
His criticisms of bearer tokens are based on the ideals of authenticating identity, but bearer tokens in OAuth are about authorization. These are very different problems, and authentication actually impedes the delegation patterns that people want to use OAuth for.
Giving someone a bearer token authorizes them to use a resource on your behalf. That third-party shouldn't also have to authenticate with the resource as well. It could be a person or service that's entirely unknown, so authentication requirements actually prevent work from getting done. This just leads to awkward workarounds, which OAuth was supposed to prevent!
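That's the whole point of the bearer scheme in RFC 6750: presenting the token string *is* the authorization step. A minimal sketch (the token value below is the example from the RFC, not a real credential):

```python
# Bearer-token delegation in one line: whoever holds the token string can
# exercise the authorization. The resource server checks the token itself,
# not the identity of whoever presents it.

def authorized_request_headers(token):
    """Build the Authorization header for a delegated call over TLS."""
    return {"Authorization": "Bearer " + token}

# The resource owner obtains a token and can hand it to any third party,
# known or unknown; that party just attaches it to its requests.
headers = authorized_request_headers("2YotnFZFEjr1zCsicMWpAA")
```

Of course this only makes sense over TLS, since anyone who intercepts the token gets the same capability.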
Give this man some points! He has it exactly right.
Right, because real estate is at such a premium that we can barely manage to fit four cores and 8 MB of cache on a single die, so we couldn't possibly afford a few hundred transistors to decode the arcane instruction set.
Cores can be shut down to conserve power, as can caches in some cases, but instruction decoders cannot. I think you underestimate how power usage scales with the number of transistors. Since this whole article is heavily biased towards low-power and mobile computing, that's a very relevant factor.
Except that any such taps are instantly detectable, at which point communication stops. Thus, at most 1 bit of information leaks out to an eavesdropper.
This paper is a follow-up to the previous work you cited.
Of course, you're conveniently ignoring the microcode translator itself and the memory to store the microcode, which are significantly larger than merely thousands of transistors.
There's nothing inherently "superior" about ARM or PPC instruction sets.
Superior to x86? Sure there is. x86 is a mishmash of instructions, many of which hardly anyone uses except for backwards compatibility, but which still cost real estate on the CPU die. That's real estate that could be spent on a bigger cache or more registers. ARM is a much better instruction set by comparison.
Secretly filming your roommate having gay sex is a little worse than just saying something random and mean on slashdot.
Agreed, but you wouldn't have 15 charges levelled against you for a vicious comment. The "little worse" got him his jail sentence.
There is an open question of probable cause for search and seizure in such cases, because like it or not, citizens need to be protected from their governments just as much as they need protection from terrorists. Even if a terrorist could set off a nuke in the middle of New York City, governments would still have caused more death and misery than all terrorist attacks combined. Which is truly the greater threat?
But if you exclude the volume of material and energy, then evolution is more "complicated" than just coincidentally popping into being.
No it's not. Natural selection is axiomatically very simple, requiring maybe 3 or 4 axioms (see cellular automata). Spontaneous creation of anything requires far more than that. The difference is quite clearly demonstrated by comparing a system designed by a genetic algorithm with one designed by a programmer. The former has maybe 10 simple rules, and the solution eventually emerges. The latter has literally tens of thousands of rules.