By this I mean that you should instrument the code with real, meaningful activity logging, not just an afterthought that grabs a stack trace and some state variables (although you'll want to have that data, too). If you instrument your code with logging that produces readily human-interpretable information about what's going on, the payback is huge: it makes internal developers' lives easier, and it lets even first-level support folks do a better job of triage and analysis. It's really important to make the output meaningful to the human reader, not just "readable"--an XML representation full of hexadecimal doesn't cut it; it needs to include symbolic names.
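To make the point concrete, here's a minimal sketch in Python. The status codes, their symbolic names, and the `log_transfer` helper are all hypothetical; the idea is simply that the log line carries meaning ("TIMEOUT") rather than raw hex:

```python
import logging

# Hypothetical status codes mapped to symbolic names, so a log line
# reads as meaning rather than as a magic number.
STATUS_NAMES = {0x00: "OK", 0x1F: "TIMEOUT", 0x2A: "CHECKSUM_MISMATCH"}

logging.basicConfig(format="%(levelname)s %(message)s")
log = logging.getLogger("transfer")

def log_transfer(file_name: str, status_code: int) -> str:
    """Emit a human-readable activity message; return it for inspection."""
    # Fall back to showing the raw value only when no name is known.
    status = STATUS_NAMES.get(status_code, f"UNKNOWN(0x{status_code:02X})")
    msg = f"transfer of {file_name} finished: {status}"
    log.info(msg)
    return msg
```

A support person reading "transfer of a.dat finished: TIMEOUT" can triage immediately; "status=0x1F" tells them nothing.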
Let the users see the logged data easily, if they ask for it, and maybe give them a single knob to turn that controls the level of logging. This will help technically sophisticated users give more useful reports, and it's really helpful in any sort of interactive problem resolution (OK, do X. Now read the last few log messages. Do any of them say BONK?).
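One possible shape for that single knob, again a hypothetical sketch using Python's standard `logging` module: a user-facing verbosity setting of 0, 1, or 2 mapped onto the library's levels, so support can just say "turn the knob to 2 and try again":

```python
import logging

logging.basicConfig()
log = logging.getLogger("app")

def set_verbosity(knob: int) -> None:
    """One user-facing knob: 0 = quiet, 1 = normal, 2 = chatty."""
    levels = {0: logging.WARNING, 1: logging.INFO, 2: logging.DEBUG}
    # Unrecognized values fall back to normal rather than erroring out.
    log.setLevel(levels.get(knob, logging.INFO))

set_verbosity(2)  # a technically sophisticated user turns it all the way up
```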
It's really useful to include high-resolution time--both clock time and accumulated CPU time--in log messages. This is great for picking up weird performance problems, or tracking down timeouts that cause mysterious hangs. Depending on your architecture and implementation technology, other sorts of "ambient" data (memory usage, network statistics) can be useful here, too.
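In Python, for instance, the standard library already exposes both clocks; a sketch of a helper (the `timestamped` name and format are my invention) might look like:

```python
import logging
import time

logging.basicConfig(format="%(message)s")
log = logging.getLogger("timing")

def timestamped(msg: str) -> str:
    """Prefix a message with high-resolution wall-clock and CPU time."""
    wall = time.perf_counter()   # high-resolution wall clock
    cpu = time.process_time()    # CPU time accumulated by this process
    line = f"[wall={wall:.6f}s cpu={cpu:.6f}s] {msg}"
    log.info(line)
    return line
```

A hang shows up as wall time advancing while CPU time stands still; a spin shows both racing ahead together.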
There's a trade-off between logging by frameworks, mixins, macros, etc., and logging of specific events. The former approach gets comprehensive data, but it often can't provide enough contextual semantic information to be meaningful. The latter approach scatters logging ad hoc throughout the code, so it's very hard to make any argument for comprehensiveness, but done properly it's spot-on for meaningful messages. It's usually best to do some of each, with good control knobs to select between them.
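The two approaches can be illustrated side by side. In this hypothetical Python sketch, a `traced` decorator plays the role of the framework (comprehensive but generic), while an explicit call inside `reserve_seat` supplies the semantic context only that code knows (the `calls` list stands in for a real log sink so the output is easy to inspect):

```python
import functools

calls = []  # stand-in for a real log sink

def traced(fn):
    """Framework-style logging: covers every call, but only generic context."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        calls.append(f"enter {fn.__name__} args={args}")
        result = fn(*args, **kwargs)
        calls.append(f"exit {fn.__name__} -> {result!r}")
        return result
    return wrapper

@traced
def reserve_seat(flight, seat):
    # Event-specific logging: semantic information the framework can't know.
    calls.append(f"seat {seat} reserved on flight {flight}")
    return True
```

The decorator guarantees every entry and exit is recorded; the explicit line is the one a human actually wants to read.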
Logging can generate a lot of data, so it's important to be able to minimize that burden during routine operation (especially in deployed applications, where there should be a strict limit on the amount of space/time it takes up). But it's also useful (especially when it's configured to generate a lot of data) to have tools that allow efficient ad-hoc review and analysis--an XML tree view, maybe filtered with XSLT, can be easier than a giant text file.
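The space-limiting half of this is easy to get off the shelf; for example, Python's standard `RotatingFileHandler` enforces a hard cap on disk usage (the file name, size cap, and backup count below are arbitrary choices for the sketch):

```python
import logging
import logging.handlers
import os
import tempfile

# Strict limit on disk usage: the current file plus at most two backups,
# each capped at roughly 1 KB (sizes here are deliberately tiny for demo).
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "app.log")
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=1024, backupCount=2)
log = logging.getLogger("bounded")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Simulate routine operation: old data is rotated away, not accumulated.
for i in range(500):
    log.info("event %d with some payload", i)
```

The review-tooling half is more situational, but the same principle applies: structure the output (XML, JSON, whatever) so that filtering is a query, not a grep through a giant text file.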
In any complex system, logging is one of the very first things I recommend implementing. Once the architecture is settled enough to know some of the meaningful activities and objects to record, bolting in a high-efficiency, non-intrusive logging infrastructure is the very next step. Then come business logic, user interface, and all the other stuff. It pays for itself many times over.