Test it occasionally (Score: 1)
I regularly test a variety of chatbots (ChatGPT, Copilot, and Llama) to see how they perform at coding or suggesting logic solutions. Most of the time they're pretty bad, particularly at anything above entry-level.
So I could see them being useful for a complete beginner (as long as the beginner checks the results) or as a way to fill in common boilerplate, but they're not really useful for anything beyond that.
Most of the code LLMs have given me (in Bash, C, Java, and Python) doesn't compile, doesn't run properly, or is _close_ but has the logic backwards. It looks okay, but it's not actually functional for the task at hand.
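To give a flavor of the "logic backwards" failure mode, here's a made-up Python example of the kind of thing I mean (the function and scenario are hypothetical, not copied from any particular chatbot session):

    from datetime import datetime, timedelta

    def recent_files(files, max_age_days=7):
        """Return files modified within the last max_age_days.

        `files` is a list of (name, modified_datetime) tuples.
        """
        cutoff = datetime.now() - timedelta(days=max_age_days)
        # Looks plausible, but the comparison is inverted: this keeps
        # files OLDER than the cutoff instead of newer ones.
        return [name for name, mtime in files if mtime < cutoff]
        # Correct: return [name for name, mtime in files if mtime >= cutoff]

The point is that this runs without errors and reads naturally; you only catch the flipped comparison by actually testing it against data where you know the right answer.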
Where I have found LLMs semi-useful is brainstorming. If I give one a problem I'm working on and ask for a couple of approaches, it'll usually suggest at least one way to solve it that makes sense. It usually can't code that solution properly, but it gives me ideas, and then I can write the code myself.