It's possible that the summary is missing an important qualification; but wouldn't it only be possible, even in principle, to conclude that something could or couldn't be a simulation on a specific type of computer rather than in general?
 
If, say, you were able to demonstrate that you had an actual RNG, not a pseudorandom number generator, you'd know that it wasn't being simulated on a Turing machine, because those are strictly deterministic. In practice, however, we build computers with what we think are RNGs all the time, because connecting the deterministic finite state machine to a peripheral that's full of thermal noise or radioisotopes or lava lamps or whatever is a totally doable design decision. Were someone in one of our simulations to conclude that non-deterministic behavior falsified the simulation theory, they'd simply be wrong: the reality in which we construct our computers is (it appears) a little stingy when it comes to things like infinite state storage, but reasonably helpful on high-quality entropy.
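To make the determinism point concrete, here's a minimal Python sketch (the seed and byte counts are just illustrative): a seeded PRNG is perfectly reproducible, while the OS entropy pool, fed by exactly the sort of noisy peripheral described above, is not.

```python
import os
import random

# A PRNG is a pure state machine: same seed in, same sequence out, forever.
rng_a = random.Random(42)
rng_b = random.Random(42)
assert [rng_a.random() for _ in range(5)] == [rng_b.random() for _ in range(5)]

# os.urandom() draws from the OS entropy pool, which on most hardware mixes
# in noise from peripherals -- the "non-deterministic widget bolted onto the
# deterministic machine" pattern described above.
print(os.urandom(8).hex())  # different on every run
```

Nothing inside the deterministic machine distinguishes the two cases; the difference only exists because of the peripheral.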
 
In the case of this 'non-algorithmic understanding', it sounds like you may be successfully demonstrating that the simulation would only be viable on a somewhat more exotic machine; but basically just one that has a lookup table attached that it can use to check whether an unprovable statement is true or not. I would not want to be the one stuck building such a device; but it doesn't sound any more exotic than quite a few of the various 'oracle machines' that are supposed, for purposes of theorizing about computability and complexity, to have a black box capable of solving certain classes of problems.
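A hypothetical sketch of that "lookup table attached to an otherwise ordinary machine" idea (all names and table contents are made up for illustration):

```python
# An oracle machine is just ordinary, deterministic computation plus a
# black box that answers questions the computation itself cannot decide.
# Here the black box is literally a lookup table.
def make_oracle(truth_table):
    def oracle(statement):
        # No algorithm in here: the answer is simply "known".
        return truth_table[statement]
    return oracle

# Illustrative contents: statements the simulated machine can neither
# prove nor refute, pre-answered by the simulator's exotic hardware.
oracle = make_oracle({"G": True, "not G": False})

def machine_step(statement, oracle):
    # Normal computation happens out here; undecidable questions get
    # deferred to the attached oracle.
    return oracle(statement)

print(machine_step("G", oracle))
```

From the inside, queries to the table are indistinguishable from any other peripheral access; the exoticness is entirely in how the table got filled in.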
 
We even interact with a much humbler example of an analogous thing more or less all the time: the reason we bother with storage devices is that there's no way to know what a given series of bits "should" be. It's absolutely trivial (assuming sufficient time and RAM) to go through all possibilities for what it might be; but there's no way to decide between the possibilities. So we suck it up, plug in our flash drive, and copy off the cat picture that we actually want. Essentially, a block device is an oracle that answers the otherwise algorithmically impossible question of "what is the state of those n bits?".
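A toy version of that block-device-as-oracle point (the size and bit values are made up): enumerating every candidate state is trivial; only the read settles which one is real.

```python
from itertools import product

n = 4  # tiny for illustration; the enumeration scales (badly) as 2**n

# Going through all possibilities is the easy, purely algorithmic part --
# the "sufficient time and RAM" clause above. It just can't tell you
# which candidate is the cat picture.
candidates = list(product((0, 1), repeat=n))
assert len(candidates) == 2 ** n

def read_block(device):
    # Stand-in for an actual I/O read: the one "oracle query" that
    # decides a question no amount of pure computation could.
    return device

stored_bits = (1, 0, 1, 1)  # illustrative state sitting on the flash drive
assert read_block(stored_bits) in candidates
```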
 
I don't say this out of any particular affinity for, or belief in, 'simulationism'; and I further acknowledge that the authors may have made a meaningful (but rather narrower) statement by formalizing certain requirements for what a simulator would need to be capable of. I'm just unclear on how you could claim to have disproved simulation in general unless you managed to come up with something that could not be implemented as an oracle even in principle, which it doesn't sound like they have.
 
It does seem to at least suggest the possibility of excluding 'trivial' nesting of simulations: someone simulating us would appear to need hardware that we would not be able to implement under the rules we are provided, just as someone in a deterministic simulation wouldn't be able to implement an RNG, which we at least appear to be able to do (at least, if ours are secretly PRNGs, they hide their state somewhere very cryptic). So if there is anyone out there who thinks that it's totally possible that, like, the universe is just big 486s all the way down, man, it would appear that they are on thin theoretical ice, with at least some details suggesting that the simulator would need to be fundamentally more capable, rather than just bigger, than any system that can be implemented within the simulation. That said, my impression is that any serious consideration of trivially nested simulations had already foundered purely on the size problem among all but the densest Roko's basilisk bros.