Comment Re:What I find amusing is... (Score 2) 34
It's not out of date; it's a simplification.
They don't innately understand their capabilities, but information about a model's own capabilities can be fed into it explicitly by other means, just like any other data you want to get into the context.
Asking whether it implements a certain behavior, and concluding that either it's deliberately lying or the behavior isn't actually there, rests on the false assumption that the model of course has innate knowledge of its own implementation without any "help".
The core issue is that LLMs will generate an answer even when there is no data to base it on. Instead of "information on that one way or the other is not available to the model", the answer most consistent with the narrative becomes "those behaviors do not exist". LLMs tend to generate output that implies confidence regardless of whether confidence is warranted. The workaround has been to do everything possible to get actual data into the context window and hope the gaps don't come up too often, but that only goes so far. Some coding workflows have the opportunity to use test cases to automatically add "the output given failed to work" to the narrative, driving iteration that may get further.