The same is true of Xen, for example; it boots as pure Linux and then Xen takes over Ring-0
No it doesn't. Xen boots and then launches a PV guest as dom 0. The PV guest starts in ring 1 on x86 or ring 3 on x86-64, with the CPU already in protected mode. The kernel entry point is also different for PV guests, so that you can have a single kernel binary that boots as a PV guest or as a bare-metal OS. With newer Xen, the dom 0 guest can be PVH, so it runs in ring 0, with Xen in the hypervisor's 'ring -1' mode. It still starts via the Xen entry point, however, not the normal boot process.
Wow, that's the most oversimplified attempt at discussing economics I've seen for a long time. Hint: economics is complicated - if it were easy we'd have a simple working system that everyone could agree on (or, at least, the people who worked it out would get very rich by betting on the economy 100% correctly and other people would notice) - and any explanation that simple is likely to be wrong.
Consider the following counter example:
The government builds and maintains a road between two places. This employs people, taking them out of the labour pool where they could be doing other things, so it's a cost. But it also allows the two places to trade more cheaply, which increases wealth production. Similarly, employers at either end of the road would have access to a wider pool of employees, and potential employees to a wider pool of employers, and so people would end up in more productive employment.
Now, would the same apply if private industry built the road? This is where it starts to get more complicated. First, who would build the road? It might be some consortium of businesses at both ends who wanted to use it. If so, then they might charge money to anyone not part of the consortium to use it, which would give them a competitive advantage, but be less healthy for the economy as a whole by producing a barrier to competition (and, most specifically, a barrier to entry for new companies).
It might be a third party that thought that the road would be profitable, who would run it as a 'common carrier' toll road. This, however, would provide a disincentive for people to use it. If they priced it too low, then they'd go out of business (which would discourage future road-building companies). If they priced it too high, then they'd make it unprofitable for some users to use it. However, given that the cost of the road is now a sunk cost, the economy as a whole benefits if everyone who would gain any benefit at all from it uses it.
In some cases, the benefit to the economy may be significantly lower than the cost of the road, so it would not make sense for the government to make the investment. It's often difficult to make that call, however. In the UK, the Beeching Report identified a large number of unprofitable railway lines and, to save taxpayer money, the nationalised railway service closed them. Unfortunately, it turned out that a lot of the unprofitable lines were ones that got people from near where they lived to a more profitable line. When they were closed, people at the edges ended up having to buy cars, which meant that they no longer used the larger lines either, and so pushed those into unprofitability (and so there was a second Beeching Report some years later which repeated the entire mistake). The cost to the economy of no longer having a widespread, cheap railway network is widely agreed by economists to be significantly greater than the savings from closing the lines.
A single, privately owned national rail operator may have seen further ahead, but most likely they'd have had shareholders making the same demands: sell off the unprofitable lines and concentrate on the profitable ones. A larger number of smaller railway operators would have had similar problems, with the ones operating the unprofitable lines going out of business and reducing demand on the profitable ones.
Volatile memory is defined as memory that requires power to preserve its contents. All of the 'cheap' memory that you can buy at the moment is volatile memory (and the non-volatile memory is either really expensive, really slow, or both). If you double the amount of RAM, then you double the number of capacitors that must be refreshed every few milliseconds, and so you double the power consumption. You also double the amount of heat that's generated and must be dissipated.
Now, what's the number one complaint that people have about most smartphones? Is it performance, or is it battery life? If it's performance, then doubling the RAM makes sense. If it's battery life, then doubling the RAM will make people complain more.
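The scaling argument above can be sketched with a back-of-the-envelope calculation. The per-gigabyte figure here is a made-up round number for illustration, not a datasheet value; the point is only the linear relationship:

```python
# Illustrative model: DRAM refresh power scales linearly with capacity,
# because every cell's capacitor must be periodically recharged.
REFRESH_MW_PER_GB = 20  # hypothetical milliwatts of refresh power per GB


def refresh_power_mw(gigabytes):
    """Refresh power for a given DRAM capacity, under the linear model."""
    return gigabytes * REFRESH_MW_PER_GB


print(refresh_power_mw(1))  # 20
print(refresh_power_mw(2))  # 40: doubling the RAM doubles the refresh power
```

Whatever the real per-gigabyte cost is for a given DRAM generation, the doubling relationship holds, which is why extra RAM is a direct hit on battery life even when it is idle.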
I implemented GC for the GNUstep Objective-C runtime using the Boehm collector (which, amusingly, performed better than Apple's 'highly optimised' collector in my tests, but then there's a reason that the AutoZone team doesn't work at Apple anymore...) and didn't notice any problems with pausing in typical applications, and did see a noticeable decrease in total memory usage, largely due to programs not leaving objects in autorelease pools for as long.
GC support was deprecated for two reasons. The first is that lots of Objective-C idioms rely on deterministic destruction. Although people are discouraged from using objects to manage scarce resources, you still have classes like NSFileHandle that own a file descriptor and close it when the object is destroyed. Nondeterminism of destruction made these problems really hard to debug.
The second problem was that the memory model for ObjC in GC mode sucked beyond belief. Or, rather, the lack of a memory model. It allowed mixing GC'd and non-GC'd buffers at will, but didn't expose this information in the type system. Consider this toy example:
void makeSomeObjects(id *buffer, int count)
{
	for (int i=0 ; i<count ; i++)
	{
		buffer[i] = [SomeClass new];
	}
}
Will this work? Who knows. If the caller passes a buffer created by NSAllocateCollectable() with NSScannedOption, or passes a buffer on the stack, then yes. If they pass the address of a global that's declared in an Objective-C source file, then yes. If they pass the address of a global that's declared in a C/C++/whatever source file, then no. If they pass a buffer created with malloc(), then no. If they pass a buffer created with NSAllocateCollectable() but without NSScannedOption, then no.
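To make the caller-side variance concrete, here's a sketch of those cases (the GC API shown is Apple's long-removed collected-memory interface, so treat this as historical illustration rather than buildable code):

```objc
id stackBuffer[4];                 // Stack memory: scanned, so this is safe.
makeSomeObjects(stackBuffer, 4);

id *scanned = NSAllocateCollectable(4 * sizeof(id), NSScannedOption);
makeSomeObjects(scanned, 4);       // Safe: the collector scans this buffer.

id *unscanned = NSAllocateCollectable(4 * sizeof(id), 0);
makeSomeObjects(unscanned, 4);     // Bug: stored objects are invisible to the
                                   // collector and may be freed underneath you.

id *heap = malloc(4 * sizeof(id));
makeSomeObjects(heap, 4);          // Bug: malloc'd memory is never scanned.
```

Note that all four calls type-check identically: nothing in the signature of makeSomeObjects() tells you, or the compiler, which of them is correct.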
In short, the whole idea was a clusterfuck from the start. On the other hand, it may be one that's worth revisiting now that Objective-C (with ARC) has a notion of ownership for memory. Source code that uses ARC could be compiled to use GC without a huge amount of pain (although the ARC memory barrier functions seem to have been created with the express intention of not allowing GC to be added later).
UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things. -- Doug Gwyn