And those versions are few and far between - the exception, not the rule.
Not really. It depends on the environment and things like expected application running time. Runtimes like Java, for example, use this kind of collector. They are used in production, so they shouldn't be excluded from the discussion - which means my statement still stands.
Define a steady state. Not every application has one. This is why real-time systems don't do that - they allocate memory/blocks on the stack or at the application (global) level. If you can load the application, then everything the application will ever need is already allocated. If you cannot load the application, then that's it.
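Something like this is what I mean - a minimal sketch with made-up names, where the pool is sized at design time and reserved when the application loads, so nothing is allocated dynamically later:

    // Hypothetical example: everything the application will ever need is
    // reserved up front, at global scope, so a successful load guarantees
    // the memory exists. No malloc/new at runtime.
    #include <cstddef>

    struct Message { char payload[256]; };

    constexpr std::size_t kMaxMessages = 1024;     // sized at design time
    static Message g_messagePool[kMaxMessages];    // reserved at load time
    static std::size_t g_nextFree = 0;

    // Hand out a slot from the preallocated pool; never touches the heap.
    Message* acquire_message() {
        if (g_nextFree >= kMaxMessages) return nullptr;  // design limit reached
        return &g_messagePool[g_nextFree++];
    }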
I think we are actually in agreement. If something other than dynamic allocation can be used (the size is known at compile time, for example), then it should be allocated using a different mechanism.
From a security point of view, you need to be able to validate that a pointer is valid beyond whether or not it is NULL. You need to know that your application issued the pointer and that the data it points to is valid and within your application's address space. And this needs to work in real applications, not just debug mode.
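Roughly this kind of thing - a sketch only, with invented names, and a real version would also have to handle frees, alignment and threads - where the allocator remembers what it issued so a pointer can be checked against "did my application hand this out, and is it still inside that block?":

    #include <cstddef>
    #include <map>

    class TrackedHeap {
        std::map<const char*, std::size_t> issued_;  // block start -> size
    public:
        void* allocate(std::size_t n) {
            char* p = new char[n];
            issued_[p] = n;
            return p;
        }
        // Validation that goes beyond a NULL check: the pointer must lie
        // inside a block this heap issued and is still tracking.
        bool is_valid(const void* p) const {
            if (p == nullptr || issued_.empty()) return false;
            auto it = issued_.upper_bound(static_cast<const char*>(p));
            if (it == issued_.begin()) return false;   // nothing starts at or before p
            --it;                                      // candidate block containing p
            return static_cast<const char*>(p) < it->first + it->second;
        }
    };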
What do you mean? Under what circumstances is this kind of pointer validation required? It sounds like an attempt to detect other bugs after the fact (reading uninitialized or overwritten memory, for example).
Which is a major drawback of using a GC, as it now has to crawl everything periodically.
Whether this is a problem is really the core of this conversation. The cost is the pause time, but the question is whether that is a real problem in practice, and whether other benefits exist to offset it in the general case of your application.
So now you're adding indirect pointers for normal pointer usage... which means two dereferences for every pointer access, and now you've slowed down the whole application. Smart pointers do the same thing in a sense, as does the PIMPL design pattern - it can still be quite fast, but it is still (provably) slower than using the pointers directly to start with.
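To show the extra hop I mean (a sketch, names invented for illustration): with PIMPL every access has to load the impl pointer before it can reach the data, where a plain member would be reached with a single load:

    #include <memory>

    class Widget {
        struct Impl;                   // definition hidden in the .cpp
        std::unique_ptr<Impl> impl_;   // the extra level of indirection
    public:
        Widget();
        ~Widget();
        int value() const;             // every call chases impl_ first
    };

    // ----- widget.cpp -----
    struct Widget::Impl { int value = 42; };

    Widget::Widget() : impl_(new Impl) {}
    Widget::~Widget() = default;

    int Widget::value() const {
        return impl_->value;           // load impl_, then load value through it
    }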
I said nothing about indirect pointers at any point. The pointers are still direct pointers to the memory being used. Managing the underlying memory slabs directly in no way invalidates that.
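A rough sketch of what I do mean (hypothetical names, alignment handling omitted): the slab itself is managed by hand, but callers still receive ordinary pointers straight into it - there is no handle table and no extra hop on access.

    #include <cstddef>
    #include <new>

    class Slab {
        char* base_;
        std::size_t size_;
        std::size_t used_ = 0;
    public:
        explicit Slab(std::size_t size) : base_(new char[size]), size_(size) {}
        ~Slab() { delete[] base_; }

        // Bump-allocate raw storage and return a direct pointer into the slab.
        void* take(std::size_t n) {
            if (used_ + n > size_) return nullptr;
            void* p = base_ + used_;
            used_ += n;
            return p;
        }
    };

    // Usage: the object lives inside the slab, and 'obj' points at it directly.
    // Slab slab(4096);
    // int* obj = new (slab.take(sizeof(int))) int(7);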
Except now you are again penalizing performance by randomly moving memory around at application run-time. So you are not just paying to reclaim unused memory, you are also paying to "optimize" it. And in doing so you remove the application developer's ability to optimize memory usage at run-time when necessary.
The application developer in managed runtimes has effectively no control over heap geometry. Technically, they aren't even allowed to think of object references as numbers, since they can only compare them for direct equality/inequality.
Also, I am still not sure what you mean by "remove unused memory". Remember that the unit of work, in a managed heap, is either the number of live objects or the number of live bytes. "Unused" (or dead or fragmented) memory is not a cost factor.
These optimization opportunities actually do a great job of improving application performance (check the benchmarks - there is a surprising win in both throughput and horizontal scalability).
Seriously, GCs are probably one of the biggest performance hits for applications on Android. It's one of the many reasons that Java as a language is SLOW.
Can you substantiate that claim? It sounds surprising. Their heaps aren't big enough to be seriously hurt by GC (unless they keep the heap right on the edge of full). Overall, Java is actually very FAST. The slowest part is generally VM bootstrap (just because it has a long path length and much of it can't be parallelized), followed by application bootstrap (which is not a Java problem as such, but many Java applications tend to be massive - application servers, Eclipse, etc.). The speed comes from some combination of GC memory optimizations, but more so from the "promise of the JIT", which gives them a pretty serious win.