Not a "senior coder", I use it "sometimes."
The big thing for me is that AI doesn't "write the code I put in production" - it provides guidance on techniques to use, or fixes bugs I've written.
It's the same as Stack Overflow for me, just more personalized to my exact situation.
"I'm writing a shell script to ssh into a remote system and run some commands. Some environment variables are defined locally on the system I'm executing the script from, others are defined on the remote system I'm connecting to, and I can't remember how to escape things properly so they pass through correctly." I can just feed an LLM my exact command that isn't working and ask it to rewrite it. It takes 2-3 further prompts ("That produced this error message, please try again"), but it generally gets the bug fixed.
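The escaping rule at the heart of that question can be sketched in a few lines. This is a minimal example with made-up variable names, and with `sh -c` standing in for the real `ssh user@host` call (both hand the quoted string to a second shell, which is where the escaping matters):

```shell
#!/bin/sh
# Inside double quotes, $VAR expands on the local side before the command
# is sent; \$VAR survives as a literal $VAR and expands on the remote side.
LOCAL_VAR="from-local"          # defined only on the machine running the script
export REMOTE_VAR="from-remote" # stand-in for a variable set in the remote environment

# $LOCAL_VAR is substituted here; \$REMOTE_VAR is substituted by the inner shell.
sh -c "echo local=$LOCAL_VAR remote=\$REMOTE_VAR"
# prints: local=from-local remote=from-remote
```

With real ssh the same quoting applies; the usual extra wrinkle is that the remote shell does a second round of word splitting, which is exactly the kind of detail I let the LLM iterate on.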
Or: "I need a Python script to integrate this company's API, as documented at this URL, with this other thing, and do this task; what would be a good sample?" I don't take the output exactly as it spits it out, but use it as a basis for my own code.
I would say that in the last four years of using LLMs to assist, maybe 10% of my actual deployed code came directly from an LLM because it produced clearly functional code - usually only short snippets, like one short function in a Python script. Maybe another 20% started from an LLM prompt and was then heavily rewritten, because I didn't want to feed potentially proprietary data into the LLM.