An earlier post suggested that current AI is just pattern recognition over the searchable data, and I tend to agree. I've been trying to pair program with GitHub Copilot for the last few months; I can get code snippets that are 80% complete at best, and I've never been able to write a prompt that puts it across the finish line.
Some observations:
As I request changes to a code snippet, I see changes to variable names and other program logic unrelated to my last request. This suggests it isn't actually retaining the context of prior requests.
If I observe Bug A in the generated code and request a fix, it introduces Bug B. If I then ask it to fix Bug B, Bug A comes back. It loops forever in this situation; it doesn't seem capable of fixing both bugs at once.
If I ask it to write a shell script, it's unclear whether it's assuming bash, busybox sh, or something else, and the resulting script (bash-only constructs like arrays in a script that has to run under busybox sh, for example) ends up incompatible with every shell I actually need it to run in.
This one bothers me the most. If I request a code snippet for a specific framework, iOS Core Bluetooth for example, it sometimes writes a piece of beautiful-looking code. But when I go to compile it, I discover that it's calling functions that don't exist and never existed in the framework! This has happened with every framework I've tried: .NET, glib2, etc. I can't find those functions in a normal search, so it seems to be inventing them on the fly. And if I then ask it for an implementation of the nonexistent framework function, it can't produce one.
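For concreteness, here is a minimal sketch (from memory, so treat it as approximate) of what real Core Bluetooth scanning code looks like in Swift. The class name is mine, and the commented-out one-liner at the end is a made-up stand-in for the kind of nonexistent call I keep getting back:

    import CoreBluetooth

    // Minimal sketch of a real Core Bluetooth scan: create a CBCentralManager,
    // wait for .poweredOn in the delegate callback, then start scanning.
    final class Scanner: NSObject, CBCentralManagerDelegate {
        private var central: CBCentralManager!

        override init() {
            super.init()
            central = CBCentralManager(delegate: self, queue: nil)
        }

        func centralManagerDidUpdateState(_ central: CBCentralManager) {
            if central.state == .poweredOn {
                central.scanForPeripherals(withServices: nil, options: nil)
            }
        }

        func centralManager(_ central: CBCentralManager,
                            didDiscover peripheral: CBPeripheral,
                            advertisementData: [String: Any],
                            rssi RSSI: NSNumber) {
            print("Found \(peripheral.name ?? "unknown") at RSSI \(RSSI)")
        }

        // What Copilot tends to hand back instead is a tidy one-liner like:
        //     let device = central.connectToNearestHeartRateMonitor()  // no such method exists
        // It looks plausible but won't compile.
    }

The real thing requires the delegate dance and state handling above; the invented one-liner skips all of it, which is exactly why it looks so clean and why it falls apart at compile time.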
At its current level, the AI is not actually intelligent, it's not doing real design, and it won't be taking my job anytime soon. If I ask it to solve my problem and the generated code is result = funcThatSolvedYourProblem(), that's not helpful at all.