Comment This is like LLM electronic warfare (Score 1) 103

This is like a broadband (white-noise) EW jammer: flood the frequency range (here, the token space) with white noise (a broad spread of random tokens) in order to degrade the receiver's ability to recover a signal (i.e. information). Cool, but also worrying that such a small sample in the corpus can "kill" the model. Maybe ingestion tools need either a) a noise-reduction filter, or b) a way to filter out sources with abnormally high entropy.
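For illustration, here's a minimal sketch of what option (b) might look like, in Python. Everything in it (the function names, the whitespace tokenizer, the threshold) is made up for the example, not taken from any real ingestion pipeline:

# Sketch of idea (b): flag corpus documents whose token distribution
# looks like broadband noise. Threshold and tokenizer are illustrative.
import math
from collections import Counter

def shannon_entropy(tokens: list[str]) -> float:
    """Shannon entropy (bits/token) of the empirical token distribution."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_noise(text: str, threshold: float = 9.0) -> bool:
    """Crude filter: a flood of random tokens has a near-uniform
    distribution, so its entropy approaches log2(vocabulary size),
    far above typical natural-language text."""
    tokens = text.split()  # stand-in for a real tokenizer
    if not tokens:
        return False
    return shannon_entropy(tokens) > threshold

# Example: repetitive natural prose vs. a flood of unique tokens.
prose = "the cat sat on the mat and the dog sat on the rug " * 20
noise = " ".join(f"tok{i}" for i in range(1000))
print(looks_like_noise(prose))  # False: skewed distribution, ~2.8 bits/token
print(looks_like_noise(noise))  # True: uniform, log2(1000) ~= 9.97 bits/token

A real pipeline would tokenize properly and calibrate the threshold against known-clean text; the point is just that a random-token flood has a near-uniform distribution, which is cheap to detect.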

Comment Re:So close ... (Score 1) 76

This! I suspect this is indeed the cause! The LLM tries to give you output that is a "similar" (i.e. predicted) continuation of the input. So if the prompt is unthinking, afactual, and generally untethered from reality, it's no surprise that the response is also unthinking, afactual, and untethered from reality. Garbage in, garbage out.

Comment Re:Absolutely not (Score 2) 248

So in that case you shouldn't get advice from humans either (or hire humans to write code, books, summaries, etc.): they a) learn from other people's work, b) are fallible, and c) will, from time to time, exploit others' work and claim it as their own. The reason LLMs do these things is that they have been trained (for the overwhelming part) on human output.

Comment Boy genius turns out to be ... (Score 2) 93

actually not a genius. Who knew? Who could have predicted this? Who would have guessed that this character is as slapdash as his superiors? Inquiring minds want to know. The chances that he gets pulled up on this, let alone admonished, are very low. Creeping authoritarianism; film at 11.
