A destructor is useless with a GC.
The finalize() method was only introduced for those rare situations where you need to make resource cleanup easier and faster.
If you are developing a regular program, you will never need to think about finalize. Using weak references and handles (look at NIO, for instance), you don't even need to know what finalize() is or what subtleties it has.
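For instance, here is a minimal sketch of how a WeakReference plus a ReferenceQueue can replace finalize() for cleanup notification (the payload size and timeout are arbitrary choices for illustration):

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class FinalizeFree {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<byte[]> queue = new ReferenceQueue<>();
        // Hold the payload only weakly; the GC may reclaim it at any time.
        WeakReference<byte[]> ref = new WeakReference<>(new byte[1024 * 1024], queue);

        System.gc(); // only a hint: collection is never guaranteed

        // Presence test: get() returns null once the referent is gone.
        if (ref.get() == null) {
            System.out.println("payload collected");
        }

        // Cleanup hook without finalize(): the queue delivers the
        // reference after its referent has been collected.
        Reference<? extends byte[]> collected = queue.remove(1000);
        if (collected != null) {
            System.out.println("notified via ReferenceQueue");
        }
    }
}
```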
This is the problem with developers coming from C/C++: they have been so focused on memory management problems that they cannot accept that the problem has been solved (at a CPU and memory cost) through GC usage.
Having automatic memory handling does not mean you are free from errors. One famous problem is the loitering reference: a reference you did not notice is kept a bit longer than required in your code, but unfortunately that reference points to lots of objects that themselves hold references, etc. The problem gets worse if the place where you keep this reference is a static field or a daemon thread.
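A minimal illustration of the loitering pattern, with hypothetical names: a static collection silently pins everything added to it, so the GC can never reclaim it.

```java
import java.util.ArrayList;
import java.util.List;

public class LoiteringExample {
    // A static list lives as long as the class is loaded: anything added
    // here (and everything reachable from it) can never be collected.
    private static final List<Object> LISTENERS = new ArrayList<>();

    static void register(Object listener) {
        LISTENERS.add(listener);
    }

    public static void main(String[] args) {
        // This large buffer stays reachable through the static list
        // even after the caller has forgotten about it.
        register(new byte[16 * 1024 * 1024]);
        // Without a matching unregister(), the 16 MB loiters forever.
    }
}
```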
The known solutions to this are:
1- never keep a cache of objects that are external to your domain, unless you know what you are doing or unless you are using a weak reference to point to each object and performing the appropriate presence test on them (see the sketch after this list).
2- never store a reference to your domain in a foreign domain (the worst example of this is Swing client properties), unless you know what you are doing or unless you are using a weak reference to point to each object and performing the appropriate presence test on them.
3- never use string interning (cf.
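For points 1 and 2 above, a sketch of what such a weak-reference cache with its presence test could look like (the WeakCache class is hypothetical, built on java.lang.ref.WeakReference):

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

public class WeakCache<K, V> {
    // Values are held weakly, so caching never keeps foreign objects alive.
    private final Map<K, WeakReference<V>> cache = new HashMap<>();

    public void put(K key, V value) {
        cache.put(key, new WeakReference<>(value));
    }

    // Presence test: the referent may have been collected at any time.
    public V get(K key) {
        WeakReference<V> ref = cache.get(key);
        if (ref == null) {
            return null;           // never cached
        }
        V value = ref.get();
        if (value == null) {
            cache.remove(key);     // collected: drop the stale entry
            return null;
        }
        return value;
    }
}
```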
If you follow this simple advice, you will never have to think about memory leaks.
Alright, here is my question: we know that GCs are now a well-known technique and have reached a suitable level of maturity. Why is there no GC implemented directly in the microprocessor? I mean, this is a basic feature that a processor could control and perform better than anybody else. All languages could benefit from this by replacing their memory allocation techniques!
This would be a huge step forward IMHO.