There is no room for it to manifest in a computer program. There is no room for any "magic" in computer programs.
That's true for classic software in a trivial way, in the sense that a sequence of logical inference steps (i.e. a deterministic symbolic program) does not reflect upon itself.
However, it may be that the computer program is not conscious, but the computer running the software is. LLMs in particular generate their output not from specific instructions included in the program, but from the weights trained into the model; the software instructions are required for the weights to be interpreted, but the outcome doesn't necessarily follow the rules of a formal system and an inference process.
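The distinction between the program and the weights can be sketched with a toy example (this is an invented illustration, not an LLM: the function and the weight values are made up to show the idea that the code is a fixed interpreter while the behavior lives in the data):

```python
import math

def forward(weights, x):
    """A tiny fixed 'interpreter': the same deterministic instructions
    run for any weights. What the system *does* is determined by the
    weight values, not by this code."""
    w1, b1, w2, b2 = weights
    h = math.tanh(w1 * x + b1)  # hidden activation
    return w2 * h + b2          # output

# Two different weight sets: identical code, opposite behavior.
amplifier = (1.0, 0.0, 1.0, 0.0)
inverter  = (1.0, 0.0, -1.0, 0.0)

print(forward(amplifier, 0.5))  # positive response to positive input
print(forward(inverter, 0.5))   # negative response to the same input
```

The same `forward` function, unchanged, produces qualitatively different behavior depending on the weights it is given; nothing in the instructions themselves encodes what the system will do.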
Current LLMs do not have consciousness because their processing is too simple for it to emerge; not because the software substrate is deterministic and mathematical. If the base software were processing the weights of the model in ways similar to how neurons generate brain waves, it is plausible that the emergent system-level information patterns appearing at the data level could exhibit the attributes of consciousness, including self-perception and self-reflection. This is true even if the computer software is deterministic, in the same way that the neurons in our brain behave in deterministic electro-chemical ways.