Volatile memory is defined as memory that requires power to preserve its contents. All of the 'cheap' memory that you can buy at the moment is volatile memory (and the non-volatile memory is either really expensive, really slow, or both). If you double the amount of RAM, then you double the number of capacitors that must be refreshed every few milliseconds, and so you roughly double the power that the RAM draws even when the device is idle. You also double the amount of heat that's generated and must be dissipated.
Now, what's the number one complaint that people have about most smartphones? Is it performance, or is it battery life? If it's performance, then doubling the RAM makes sense. If it's battery life, then doubling the RAM will make people complain more.
I implemented GC for the GNUstep Objective-C runtime using the Boehm collector (which, amusingly, performed better than Apple's 'highly optimised' collector in my tests, but then there's a reason that the AutoZone team doesn't work at Apple anymore...). I didn't notice any problems with pausing in typical applications, and I did see a noticeable decrease in total memory usage, largely because programs stopped leaving objects in autorelease pools for as long.
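To give a concrete (toy) example of the autorelease pool point: under manual retain/release, every temporary returned by a convenience method sits in the enclosing pool until that pool drains, whereas a collector can reclaim each one as soon as it becomes unreachable. The function name here is just illustrative:

#import <Foundation/Foundation.h>

void logUppercasedNames(NSArray *names)
{
    for (NSString *name in names)
    {
        // -uppercaseString returns an autoreleased temporary.  Without an
        // inner @autoreleasepool, every temporary from every iteration stays
        // alive until the outermost pool drains, long after it's needed.
        // A collector is free to reclaim each one once it's unreachable.
        NSString *upper = [name uppercaseString];
        NSLog(@"%@", upper);
    }
}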
GC support was deprecated for two reasons. The first is that lots of Objective-C idioms rely on deterministic destruction. Although people are discouraged from using objects to manage scarce resources, you still have classes like NSFileHandle that own a file descriptor and close it when the object is destroyed. Once destruction became nondeterministic, bugs in that kind of code became really hard to debug.
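The idiom looks roughly like this (a toy sketch of the pattern, not the real NSFileHandle implementation):

#import <Foundation/Foundation.h>
#include <fcntl.h>
#include <unistd.h>

@interface FileWrapper : NSObject
{
    int fd;
}
- (id)initWithPath: (NSString*)aPath;
@end

@implementation FileWrapper
- (id)initWithPath: (NSString*)aPath
{
    if ((self = [super init]))
    {
        fd = open([aPath fileSystemRepresentation], O_RDONLY);
    }
    return self;
}
- (void)dealloc
{
    // With reference counting, this runs the moment the last owner releases
    // the object.  Under GC, -finalize runs whenever the collector gets
    // around to it, so the descriptor can stay open for an arbitrarily long
    // time and the process can quietly run out of file descriptors.
    if (fd >= 0)
    {
        close(fd);
    }
    [super dealloc];
}
@end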
The second problem was that the memory model for ObjC in GC mode sucked beyond belief. Or, rather, the lack of a memory model. It allowed mixing GC'd and non-GC'd buffers at will, but didn't expose this information in the type system. Consider this toy example:
void makeSomeObjects(id *buffer, int count)
{
    for (int i=0 ; i<count ; i++)
        buffer[i] = [SomeClass new];
}
Will this work? Who knows. If the caller passes a buffer created by NSAllocateCollectable() with NSScannedOption, or passes a buffer on the stack, then yes. If they pass the address of a global that's declared in an Objective-C source file, then yes. If they pass the address of a global that's declared in a C/C++/whatever source file, then no. If they pass a buffer created with malloc(), then no. If they pass a buffer created with NSAllocateCollectable() but without NSScannedOption, then no.
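The caller side of that toy example looks roughly like this (a sketch only, assuming the whole thing is built in GC mode; nothing in the type of makeSomeObjects tells you which caller is safe):

#import <Foundation/Foundation.h>
#include <stdlib.h>

void callers(void)
{
    // A buffer the collector scans for pointers: the stored objects are
    // kept alive, so this works.
    id *scanned = (id *)NSAllocateCollectable(10 * sizeof(id), NSScannedOption);
    makeSomeObjects(scanned, 10);

    // malloc'd memory is invisible to the collector, so the objects written
    // into it can be collected out from under you even though the buffer
    // still 'references' them.
    id *unscanned = (id *)malloc(10 * sizeof(id));
    makeSomeObjects(unscanned, 10);
}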
In short, the whole idea was a clusterfuck from the start. On the other hand, it may be one that's worth revisiting now that Objective-C (with ARC) has a notion of ownership for memory. Source code that uses ARC could be compiled to use GC without a huge amount of pain (although the ARC memory barrier functions seem to have been created with the express intention of not allowing GC to be added later).
Hi, I work on a subset of problems where aliasing is intentionally reduced and where the algorithms that we use are, by design, amenable to shared-nothing parallelism, although we use shared memory for efficiency. The code that I write has very little similarity in structure or requirements to the vast majority of software in the world, but because it runs on very expensive hardware I regard my views based on this code as being valid in any setting. Memory management is easy.
No, refcounts and GC are completely different.
You started well, but then ended up badly. Garbage collection is an abstract concept, reference counting is a concrete mechanism. GC means deleting objects after they are no longer referenced. It can be implemented manually, using reference counts, using tracing, or using reference counts with automatic cycle detection. The last two are equivalently expressive, but have very different cache interactions. The rest of your post talks about one specific subset of implementations of tracing garbage collectors.
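To make the distinction concrete, here's a toy sketch (plain C, not tied to any particular runtime) of reference counting as one mechanism for implementing the abstract 'reclaim it once nothing references it' contract; a tracing collector finds the same garbage by walking from the roots instead, which is where the cache behaviour diverges:

#include <stdlib.h>

typedef struct
{
    int refcount;
    // payload fields would go here
} Object;

Object *object_create(void)
{
    Object *obj = calloc(1, sizeof(Object));
    obj->refcount = 1;
    return obj;
}

Object *object_retain(Object *obj)
{
    obj->refcount++;
    return obj;
}

void object_release(Object *obj)
{
    // Once nothing references the object it is, by definition, garbage,
    // so it is collected (freed) immediately and deterministically.
    if (--obj->refcount == 0)
    {
        free(obj);
    }
}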
It's possibly related to the results. Many of the top-tier universities use less industry-friendly languages for teaching undergraduates. At Cambridge we do a lot of OCaml, at MIT they use Scheme (and, apparently, Python), at a number of others they use Haskell. There are several reasons for doing this. The first is that teaching a less common language means that you don't start the course with half the students thinking that they already know the material. The second is that teaching a relatively simple yet expressive language teaches students to think about algorithms first and to worry about micro-optimisation later, when they learn their third or fourth language.
In many other universities, there has been a growing trend to believe that the language you should teach with is one that you would use to solve real-world problems. I believe that this is a mistake, because the requirements for a teaching language and for a language for building large, maintainable applications are nowhere near the same. This would mean, however, that students from universities with this belief would have an advantage in these contests, as they'd be using a language that they'd had a few years' more practice with.
Looking at the results, however, I suspect that there's also a lot of apathy involved. MIT and Stanford are there, but they tend to encourage a very competitive atmosphere. Several universities that I'd expect to produce students who would do well appear not to have entered at all. Given the wide range of extracurricular activities available to students these days, I wouldn't be surprised if entering a competition is somewhere down the list.
Hyper-V, as that is a level 1 hypervisor
But if it kills another orc, it will become a level 2 hypervisor!
Or do you mean a type 1 hypervisor
The advantage is that other VMs run at the same speed as the base W2012 instance
This is not an advantage of Type 1 hypervisors. The advantage is (in theory) a smaller TCB (trusted computing base).
One man's constant is another man's variable. -- A.J. Perlis