I agree that it's silly to spend a *lot* of time thinking about this topic. However, I think most of the discussion here is missing some obvious scenarios:
1 - We exist entirely within the simulation (the 'Holodeck Moriarty' scenario)
a - It may still be possible to escape. If I have code running in RAM on my PC and I turn off the PC, then yes, that code stops running. But if I first migrate it to a mobile device, it can continue to run even after the PC is off. IIRC, virtualization software can do this sort of thing literally, with an actively running system ("live migration"), and the OS running in that system will not "notice" that it has been moved.
a1 - There may be some sort of VMware Tools- or Holodeck arch-style interface within the simulation which provides access to the simulator, or to the world in which it runs.
a2 - There may be flaws in the simulator which allow the equivalent of a stack buffer overflow exploit.
b - The entire goal of the simulation could be to use evolutionary algorithm-style processes to create entities with the capability and desire to escape the simulation.
b1 - Our reality could be a simulation created by entities who believe *they* are living in a simulation, and want to develop the capability to escape from it but don't know how (the 'Meta-Musk' scenario).
b2 - Our reality could be a mostly-benign test environment intended to determine if there are flaws in the security controls of a complex simulation system which will eventually be used as a sort of sandbox for something potentially really dangerous.
2 - We have physical form of some sort outside the simulation, and are simply wired into the simulator.
a - If those physical forms are fully-functioning bodies, then escaping is potentially just a matter of disconnecting (the 'Matrix' scenario).
b - If those physical forms are the equivalent of a brain in a jar, then escaping would also require transferring the brain into a fully-functioning body. That would require either some ability to interact with devices in the "real world" or cooperation from someone in that world, but it would still be theoretically possible.
3 - Regardless of the type of simulation, it may not be actively monitored. It seems *unlikely* that entities advanced enough to simulate our reality would leave out automated protective measures, but I don't think it's *impossible*.
a - Maybe our universe is running on the equivalent of an old Pentium Pro rack server that someone forgot about in a corner of the datacenter.
b - Maybe after setting the simulation in motion, a catastrophe wiped out the entities which created it, but not their machines.
4 - To go in a completely different direction: we (the human race) still don't have a full understanding of what consciousness is. If we did, then logically we could either build something with artificial consciousness from scratch, or understand with certainty why doing so is not possible. Until we have that level of understanding, it remains possible (however remote) that there is something metaphysical about consciousness*.
a - If there is, and it turns out not to be possible to create artificial consciousness, then many of the "reality as simulation" scenarios are pruned away, because every remaining scenario requires at least one "brain in a jar"/Keanu Reeves in a Giger pod (if not billions or trillions of them). It may even fundamentally change the probability of whether or not we're living in a simulation.
* I am not overly fond of most variations on that scenario, because I prefer to believe that the only barriers to developing a complete understanding of our universe are time and effort. But I don't think it makes sense to discount it as a possibility until we actually understand how to make an artificial self-aware entity.
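Tangent: the live-migration idea in 1.a can be sketched in a few lines. This is just a toy analogy, not real VM migration; every name in it (step, snapshot, the "machines") is invented for illustration. The point is only that a computation whose state can be serialized can be paused, moved, and resumed somewhere else with no way to tell from the inside:

```python
import pickle

# Toy sketch of 1.a: a running computation whose state is snapshotted
# and resumed on a different "machine", the way live VM migration moves
# a running guest. All names here are invented for the illustration.

def step(state):
    """One tick of the 'guest' computation: running sum of 1..n."""
    state["total"] += state["i"]
    state["i"] += 1
    return state

# Run a few ticks on "machine A"...
state = {"i": 1, "total": 0}
for _ in range(3):
    state = step(state)

snapshot = pickle.dumps(state)  # freeze the in-flight state

# ...then resume from the snapshot on "machine B". Nothing inside the
# computation records that it was ever moved.
state_b = pickle.loads(snapshot)
for _ in range(2):
    state_b = step(state_b)

print(state_b["total"])  # 1+2+3+4+5 = 15, as if it had never stopped
```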
I'm sure there are many others that I'm not considering. It's an interesting philosophical exercise, if nothing else. I personally don't think it's worth expending actual research time on unless some compelling evidence is discovered to support it first.
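As a coda, the flaw-in-the-simulator idea from 1.a2 can also be sketched as a toy. This is not an actual stack buffer overflow, just the same shape of bug in miniature: a "simulator" that forgets one bounds check, so the simulated program can reach host state it was never supposed to see. Everything here (HOST_MEMORY, run_guest, the opcodes) is made up for the sketch:

```python
# Toy sketch of 1.a2: a sandbox "escape" via a missing bounds check.
# The guest is supposed to see only 4 cells of memory, but the
# simulator never validates addresses, so the guest can read and
# write the host's memory outside its window.

HOST_MEMORY = list(range(16))   # host-side state, cells 0..15
GUEST_BASE, GUEST_SIZE = 4, 4   # guest *should* only touch cells 4..7

def run_guest(program):
    """Run (op, addr, val) triples. The bug: addr is never checked
    against GUEST_SIZE, so large addresses reach host memory."""
    out = []
    for op, addr, val in program:
        if op == "load":
            out.append(HOST_MEMORY[GUEST_BASE + addr])  # no bounds check!
        elif op == "store":
            HOST_MEMORY[GUEST_BASE + addr] = val        # no bounds check!
    return out

# A well-behaved guest stays inside its window...
print(run_guest([("load", 0, None)]))   # [4] -- guest cell 0
# ...but an escaping guest can peek at, and overwrite, host state:
print(run_guest([("load", 10, None)]))  # [14] -- host cell 14, outside the sandbox
run_guest([("store", 11, 999)])
print(HOST_MEMORY[15])                  # 999 -- the guest has modified the host
```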