I'm with you on value semantics, unless memory is tight, or we're dealing with a UI application (as I noted above) where it makes no sense for multiple objects to refer to the same underlying UI resource. (For example, it makes little conceptual sense to have multiple C++ objects representing the same logical X window handle.)
Footnote: the last code I wrote was loading the entire planet file from OpenStreetMap for in-memory manipulation and generation of custom vector map tiles. In that case, "tight memory" can mean 64GB of RAM. :-P
And it's not hard to build your own reference counting scheme (or use a library such as Boost).
Personally, by the way, I believe developers who have never had to deal with a language such as C, C++, or Pascal, where you need to think about memory management (even when it's handled for you, as with Boost), can run into serious conceptual problems when writing applications with tight memory requirements. I've lost count of the number of times I've had to explain to an Android developer that I set a field to null because the object itself may outlive the UI element that uses it, and we want the GC to be able to reclaim the rest of the memory the object references. (This comes up when fixing bugs in an Android activity that is being kept alive by a network callback. Ideally this would be handled with a broadcast/receive design pattern, but at the very least, when your activity goes out of scope, you can release all the crap it holds strong references to that is no longer needed.)
A number of Java programmers I've personally met somehow think garbage collection is a magic thing that does what the programmer intends, rather than what is actually written in the code. They're generally the ones writing Android apps that crash, with no idea why.