The problem with GC is that it's inherently lazy: memory gets reclaimed whenever the collector decides to run, not when you're actually done with it.
Take straight C, for example. You define a variable, initialize it, do whatever you need with it, and release it, all within the scope of a function, and that's what makes the memory use efficient. Or you use malloc and take on the job of calling free yourself.
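To make that concrete, here's a minimal C sketch of both styles (the function names are made up for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    void scoped_example(void) {
        char buf[64];                     /* automatic storage: lives in this scope */
        snprintf(buf, sizeof buf, "hello");
        puts(buf);
    }                                     /* buf is released here, at no cost */

    void heap_example(size_t n) {
        char *p = malloc(n);              /* heap storage: you own it now */
        if (p == NULL)
            return;
        memset(p, 0, n);
        /* ... use p ... */
        free(p);                          /* and you must hand it back yourself */
    }

Either way, the lifetime of the memory is explicit in the code; nothing is deferred to a collector.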
In C++ you can explicitly tell the runtime to delete objects when you're done with them, use C-style automatic variables if you want the tighter control, or stick entirely with malloc if you want to keep the memory overhead to an absolute minimum.
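Here's a quick sketch of those three options in one place; Widget and the surrounding names are illustrative, not any particular API:

    #include <cstdlib>
    #include <new>

    struct Widget {
        int state = 0;
    };

    void three_styles() {
        // 1. Automatic (C-style) storage: destroyed at end of scope.
        Widget stack_w;
        stack_w.state = 1;

        // 2. Explicit new/delete: the lifetime is exactly what you say it is.
        Widget *heap_w = new Widget;
        heap_w->state = 2;
        delete heap_w;

        // 3. Raw malloc/free: no constructors or destructors, just bytes.
        Widget *raw = static_cast<Widget *>(std::malloc(sizeof(Widget)));
        if (raw != nullptr) {
            new (raw) Widget;             // placement-new constructs in place
            raw->state = 3;
            raw->~Widget();               // explicit destructor call
            std::free(raw);
        }
    }

The third form is what you reach for when you want to control exactly when and where every byte is allocated.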
How aggressively a GC runs should be determined by the nature of the device. A desktop system with plenty of memory can afford to defer garbage collection, but then you get sites like Twitter, which endlessly "grow" the DOM and never actually GC anything until the tab is refreshed. Before Chrome finally released a 64-bit version, you'd get about two days out of a Twitter tab before it crashed. Do the same on mobile and it will crash hourly, because even though the mobile device may have 1 GB of memory and run in 64-bit mode, it never actually "stops" things running in the background; they're just paused, and only unloaded when memory is needed.

A headless device that needs to sit in a wiring closet without being reset for months or years needs to be able to detect when memory is failing to be freed, otherwise the device will eventually stop working.
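Here is one hedged sketch of what that self-check could look like on Linux, using sysinfo(2); the 10%-free threshold and the exit-so-a-supervisor-restarts-us recovery policy are arbitrary placeholders, not anything a real appliance is known to do:

    #include <sys/sysinfo.h>
    #include <cstdio>
    #include <cstdlib>

    // Returns true if less than ~10% of RAM is free (threshold is illustrative).
    // Note: freeram ignores reclaimable page cache; reading MemAvailable from
    // /proc/meminfo would give a more accurate picture on modern kernels.
    bool memory_critically_low(void) {
        struct sysinfo info;
        if (sysinfo(&info) != 0)
            return false;                 // can't tell; assume OK rather than flap
        unsigned long long total = (unsigned long long)info.totalram * info.mem_unit;
        unsigned long long avail = (unsigned long long)info.freeram * info.mem_unit;
        return avail < total / 10;
    }

    int main(void) {
        if (memory_critically_low()) {
            fprintf(stderr, "memory low, exiting so the supervisor can restart us\n");
            return EXIT_FAILURE;          // e.g. systemd's Restart=on-failure
        }
        /* ... normal service loop would go here ... */
        return EXIT_SUCCESS;
    }

A check like that is crude, but it beats waiting for someone to drive out and pull the plug.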
I have an example of this: a Startech IP-KVM that runs Linux. Because Startech doesn't release updates for the things they put their brand on after the warranty expires, this IP-KVM is stuck in a near-useless state (it runs a version of VNC whose SSL support only works through Java) and has to be power-cycled by the remote PDU before it can be used again. The device simply runs out of memory: DoS-like background traffic overwhelms the logging processes, and the memory is never freed.
And that's sloppy programming. There is such a thing as GC-friendly coding conventions. A GC is not supposed to exist so programmers can go, willy-nilly, "someone is going to clean my butt for me."