Comment Re:Really? (Score 1) 289
We *do* understand LLMs well enough to build all kinds of things from that understanding.
I guess it hinges on what one means by "understanding." Your use is acceptable given the qualifiers in your statement above, but it's not acceptable to use it to attack people who claim we don't understand LLMs. You should generally read a statement in the context that gives it a reasonable meaning; that's just how context works.
When people say we don't understand LLMs (including the people who invented them), they're talking about a scientific understanding and quantification of why they work. Human history is full of examples where people engineered useful products from things they didn't understand at a scientific level; we brewed beer for millennia before anyone knew fermentation was driven by yeast. Science is a relatively recent invention, but humans have been "engineering" things for presumably 300,000 years.
It's not just AI chatbots. There are models tailored to coding, generating images, generating videos, finding security vulnerabilities, screening resumes, taking notes in meetings; the list is endless. It's not necessary to understand every nuance at the deepest level; it's only necessary to understand enough to do useful things and to mitigate the risks. That principle holds true for *all* of engineering.
The textbook engineering process is to take scientific understanding and apply it to create useful products. But that doesn't imply the converse: being able to engineer something does not mean you have a scientific understanding of it. My interpretation is that we got pretty lucky with this stuff. We built a simple model of the brain's visual system, eventually trained it on a large dataset, and it worked surprisingly well. We've been tweaking that approach for the last 10+ years, and it eventually produced foundation language models with embeddings we can use to build interesting, if somewhat unreliable, products. We don't really understand why these models work so well, but we keep tweaking and combining them to create new products.
Now, a possible angle to attack what I'm saying is to focus on scientific understanding itself. What is scientific understanding? Can we ever achieve it for LLMs? If not, is it really fair or interesting to talk about it in this context?