Comment Re:And more AI nonsense gets exposed (Score 1)
Three examples given, all about doing something that could have been done with a traditional search, but faster. That sounds a lot like "somewhat better than search" to me. Got an example where it's something more than that?
The problem is the tendency to hallucinate. You call yourself TheStatsMan; you should know that LLMs are just statistical engines that string together words that are statistically likely to follow from the prompt, given the body of text the LLM was trained on. Garbage in, garbage out, as they used to say. If you're doing something fairly common for which there are plenty of good examples in the training set, the LLM can come up with a reasonable procedure or explanation for you. If you're doing something relatively novel, or something for which the training set only has bad examples, you get crap. But it's really convincingly written crap that sure sounds like the real deal. The question is, can you rely on the results?
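To make the "statistical engine" point concrete, here's a toy sketch (my own illustration, not anything from the parent post): a bigram model that learns word-following frequencies from whatever text you give it and then samples the next word from those frequencies. Real LLMs are enormous transformer networks, not bigram tables, but the core behavior is the same: the output is only as good as the statistics in the training text.

    import random
    from collections import defaultdict, Counter

    def train_bigrams(corpus: str) -> dict:
        """Count which word follows which in the training text."""
        counts = defaultdict(Counter)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def generate(counts: dict, prompt: str, length: int = 10) -> str:
        """Extend the prompt by repeatedly sampling a statistically likely next word."""
        out = prompt.split()
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break  # nothing in the "training set" ever followed this word
            choices, freqs = zip(*followers.items())
            out.append(random.choices(choices, weights=freqs)[0])
        return " ".join(out)

    # Garbage in, garbage out: the output can only echo the training text.
    model = train_bigrams("the cat sat on the mat the cat ate the fish")
    print(generate(model, "the cat"))

Feed it good examples and the continuations look sensible; feed it junk, or ask about something it never saw, and it will still confidently produce words, just not reliable ones.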