This is exactly what I was saying... up until I invested more time using the agent-based ones.
 
  So I've dealt with badly written code for most of my professional SW career (15 years), and these days I work at a company that makes test instruments... we have many legacy embedded devices, and LLMs have transformed how I work. It's like having a new bit driver that augments your old screwdriver set.
 
  LLMs are chatbots. They get wound up by whatever you juice through them.  If you're trying to figure something out, you juice it up on that problem: describe it concisely, point it in the right direction as well as you can, and then let it unwind.  It sometimes takes some iteration, but it can definitely help you understand things. You have to learn to manage its context, or the output may be garbage.  You also have to know how far to trust it; it's not always obvious when it's off track, though often it's quite obvious.  Learning to use it to produce code takes practice too, so if you aren't getting good results, check out how other people do it and see if that works for you.  Definitely experiment.  For stuff that matters, I end up personally modifying most of the code before I deliver it.