Uh, yeah, it does.
I don't think they do.
There are specific circuits active for fiction, as distinct from the circuits for reality.
With the caveat "in some cases". I mean sure, if you start asking it about some very obvious piece of fiction, it can identify it.
However, they are still horrendously prone to hallucinations (I wasted a bunch of time yesterday trying to get help with jaxtyping, and it turned out the model I tried was simply inventing a capability that sounded plausible). And there's no chance I was exceeding the context window. If it can hallucinate so easily about reality, then it cannot reliably distinguish fiction from reality.
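(Incidentally, the only reliable fix I've found for this is to check the model's claim against the actual package rather than trusting it. A minimal sketch, using `importlib` from the standard library; the function name `capability_exists` is mine, not anything from jaxtyping:)

```python
import importlib

def capability_exists(module_name: str, attr: str):
    """Return True/False if the package exposes the claimed name,
    or None if the package isn't even installed."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return None
    return hasattr(mod, attr)

# Example with a stdlib module, since any package name the model
# mentions can be checked the same way:
print(capability_exists("json", "dumps"))            # a real function
print(capability_exists("json", "merge_documents"))  # an invented one
```

Thirty seconds of this would have saved me the wasted time.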
Try going to Gemini right now and insisting in all seriousness that Dracula is right outside your door and see what sort of response you get.
Yeah, but now do it about some bloke with a facemask and a hoodie pulled up. Still fiction (I hope!).
It is possible for a person to interact with a model for so long in what sounds like a fiction-roleplaying manner that it "forgets" (due to long contexts / context compaction) that the person on the other end is being serious, not just roleplaying a story.
That kind of contradicts your claim that it does know fantasy from reality. Basically it doesn't, but it can approximate it reasonably well in many cases.
According to Google,
They've implemented some guardrails, and there are various mechanisms that attempt to detect such things, but they are, and always will be with this tech, flaky. It can in one line hand out the number for the Samaritans and then flip right back to playing along.