Re:Funny but serious (Score 2)
an amusing example of how training can go wrong
My understanding is that this isn't a consequence of a flawed training algorithm or process; it's a limitation of LLMs that emerges from their training material. It closely parallels another example I've seen around the net: ask an LLM about getting a car to the mechanic, mention that it's a sunny day and the mechanic is just a block away, and it suggests walking. That comes from a bias in the training material. Lots of people post visibly about walking short distances, because walking is looked on favorably, while people who drive short distances (of whom there are many, probably outnumbering the walkers) don't trumpet it online. So LLMs emit advice about walking whenever possible, and in the mechanic example they miss the pivotal detail that the car has to make it to the shop along with you.
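To make that selection-bias point concrete, here's a toy sketch in Python. All the numbers are hypothetical, and this isn't any real training pipeline; it just shows how a corpus built from what people *post about* (rather than what they actually *do*) can make a frequency-driven model prefer "walk" even when driving dominates in the real world.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical real-world behavior for short trips: drivers outnumber walkers.
real_world_trips = ["drive"] * 700 + ["walk"] * 300

# Hypothetical posting rates: walkers brag, drivers stay quiet.
post_rate = {"walk": 0.50, "drive": 0.02}

# The "training corpus" only contains the trips people wrote about.
corpus = [trip for trip in real_world_trips
          if random.random() < post_rate[trip]]

counts = Counter(corpus)
total = sum(counts.values())
for action, n in counts.most_common():
    print(f"{action}: {n / total:.0%} of corpus")

# A model that just echoes corpus frequencies "recommends" walking,
# even though most real-world short trips were driven.
print("model suggests:", counts.most_common(1)[0][0])
```

Run it and the corpus comes out overwhelmingly "walk" despite 70% of the underlying trips being drives. An actual LLM is vastly more complicated, but the same skew in what gets written down is what the mechanic example is exposing.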