For coding, Claude is still king, but some local LLMs like qwen-code are perfectly serviceable. For more general tasks they are getting pretty good. Not as good as the commercial models, but you gain portability, offline use, privacy, and more flexibility (fewer guardrails to run into, for example). As for real-world use, one example I have is a documentation script for cloud environments. I've added code that calls out to a local LLM API endpoint to generate text blocks based on configuration sections. The config data can contain sensitive information about the environments, so processing it with a local LLM was deemed the safest option.
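The core of it is just posting each config section to the local endpoint with a prompt asking for a documentation paragraph. Here's a minimal sketch of that idea, assuming an OpenAI-compatible local server (Ollama, LM Studio, llama.cpp server, etc.); the endpoint URL, model name, and prompt are placeholders, not the actual script:

```python
import requests

# Hypothetical local endpoint and model tag; point these at whatever
# OpenAI-compatible server/model you actually run.
API_URL = "http://localhost:11434/v1/chat/completions"
MODEL = "qwen2.5-coder:14b"

def describe_section(section_name: str, section_text: str) -> str:
    """Ask the local model for a short documentation paragraph for one config section."""
    prompt = (
        f"Write a concise documentation paragraph for the '{section_name}' "
        f"section of this cloud environment configuration:\n\n{section_text}"
    )
    resp = requests.post(
        API_URL,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,  # keep output factual rather than creative
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

# Feed sections one at a time (small chunks) so errors stay localized
# and each output is short enough to actually proofread.
```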
Saves a ton of time even with the error checking/corrections. Doing it in small chunks reduces errors, but they still creep in, so you do need to actually read the output.