Re:non-deterministic outputs mean you can't predic (Score 1)
Actually, you don't always get the same result (or at least not the exact same response to the same question). I've tried this with LLMs running locally (using Ollama), making sure to restart the engine from scratch every time, so there is some randomness going on.
According to ChatGPT, Ollama does make use of a random number generator for some reason, presumably for sampling the next token.
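
For what it's worth, the randomness should be coming from the sampling options, and Ollama's REST API lets you pin those down. A minimal sketch, assuming Ollama is listening on the default localhost:11434 and a model named "llama3" has been pulled ("llama3" is just a placeholder):

import requests

# Ask the same question twice with temperature 0 and a fixed seed,
# so the sampler has (in principle) no randomness left.
def ask(prompt):
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # placeholder model name
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": 0, "seed": 42},
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

a = ask("Why is the sky blue?")
b = ask("Why is the sky blue?")
print(a == b)  # usually True with a pinned seed

Even with the seed pinned, GPU kernels can still introduce small numerical differences between runs, so it isn't an absolute guarantee.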