Comment Re:AI is just limited. (Score 2) 102
I find the various LLMs helpful as a form of search engine, letting me drill down to potentially useful information more quickly. At the same time, though, they are far worse than a search engine because they can't actually give you the sources to check. When ChatGPT generates a chunk of code, if you ask it where it came from, it will say it didn't get it from any specific site, it just knows this stuff. Which of course ends up wrong half the time. So you end up with wrong stuff confidently passed off as accurate, ultimately stolen from real human sources. When I was in uni it was drilled into me to cite my sources. Why should LLMs be held to a different standard? Google's AI summary does show sources, or at least a few, which is good. I always check them.
Even Claude AI, which is supposedly geared towards coding, suffers from these same problems. I'm trying to do some esoteric Qt 6 programming involving OpenGL, and all the AIs really struggle here because there's a limited amount of source material to steal from. They're certainly not capable of digesting the API documentation and synthesizing code to do something without first seeing someone else's code. Claude seems to work best when you use a popular library or framework with lots of online discussion and GitHub code behind it: the popular languages and frameworks of the day.