> and because they don't want to reveal just how
> hot-garbage their underlying code is
Code quality isn't the issue. I have no idea what their code quality is; it might be fine, it might be terrible. The reason I can't tell is that it has nothing at all to do with the real problem with their results.
The fundamental problem is that they've been actively trying to convince a lot of people, up to and including their shareholders, that the product is a fundamentally different thing than what it actually is. They use fancy terminology that most people don't really understand, like "neural network", to disguise the fact that the product is, at its core, basically just running statistics and spitting out statistically-likely combinations of tokens. It's _basically_ a really heavily souped-up Markov chain generator, on really powerful steroids.

The most important steroid in question is an absolutely stupendous quantity of training data. But there are also some more clever things going on, e.g. in the details of how the data are tokenized, and I think they're cleverer about how combinations of tokens work than the flat one-dimensional sequence of a traditional Markov chain. All of these enhancements make the output feel much more similar to real human speech or writing than was possible even ten years ago. But fundamentally, that's all the thing is doing: generating output that's statistically similar to the training data.
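To make the comparison concrete, here's a toy bigram Markov chain in Python. This is obviously a cartoon, not how an LLM is actually implemented (LLMs condition on long contexts via learned neural weights, not a lookup table), but it shows the core move the comment is describing: count which tokens follow which, then sample the statistically-likely next token. The corpus and function names here are made up for illustration.

```python
import random
from collections import defaultdict

def train_bigram_model(tokens):
    """Count how often each token follows each other token."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(model, start, length, rng=None):
    """Sample each next token in proportion to how often it
    followed the previous token in the training data."""
    rng = rng or random.Random(0)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last token never had a successor
        choices, weights = zip(*followers.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return out

corpus = "the cat sat on the mat and the cat ran off the mat".split()
model = train_bigram_model(corpus)
print(" ".join(generate(model, "the", 8)))
```

Every pair in the output appeared somewhere in the training data, so the result looks locally plausible while meaning nothing. An LLM's "steroids" (vastly more data, subword tokenization, attention over long contexts) make the locally-plausible window enormously wider, but the generation principle is the same.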
There are definitely some actual uses for this technology, but they're not even vaguely commensurate with some of the things the companies involved want you to *imagine* the technology can do, and they never will be, no matter how much the technology matures. LLMs are not a path to general-purpose AI, no matter how much people want them to be. We're not materially closer to knowing how to create general-purpose AI than we were in the seventies. For that, we're still waiting on some fundamental and completely unpredictable breakthrough. This doesn't mean the technology is useless; it's not, and as it continues to mature, it'll be even more useful for the sorts of things it's useful for. But it's not going to make humans obsolete, or do a lot of the other preposterous things the hype machine predicts.