Hmm. I suppose that can be true in an iterative setting (needing to store some data from every iteration), & that the only way to avoid it is to rewrite the whole loop to be fully reversible so it does not consume space on every iteration. (It cannot take more space than linear in the run time, at any rate.) I was imagining recursive functions with stack allocation for each call, but I should know better since I use tail recursion all the time. So I guess I was only right about iteration- & tail-recursion-free code.
On the other hand, it should not require more than an exponential increase (hah, only exponential) in space for any terminating & non-interactive computation, since with exponentially more space you could store every possible state of the original irreversible machine. For non-terminating computation, it is at worst linear in the runtime, as aforementioned.
B=A XOR B (leaving A unchanged) is a reversible operation & is what I meant. More generally, B=f(A) XOR B is reversible (in fact, self-inverse), where f can be any (even irreversible) function.
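A tiny sketch of why B = f(A) XOR B is self-inverse (Python; the squaring f here is a made-up many-to-one function, chosen precisely because it is irreversible on its own):

```python
def xor_step(a, b, f):
    """One reversible step: B := f(A) XOR B, leaving A unchanged.
    f may be any function, even an irreversible one."""
    return a, f(a) ^ b

# deliberately irreversible (many-to-one) example function
f = lambda x: (x * x) % 16

a, b = 7, 5
a, b = xor_step(a, b, f)   # apply once
a, b = xor_step(a, b, f)   # apply again: self-inverse, so (a, b) is restored
assert (a, b) == (7, 5)
```

Since XORing the same value twice cancels out, applying the step a second time undoes it, regardless of whether f itself can be inverted.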
Sure, you need to save the input to otherwise-irreversible steps, but the point is that you can erase a known value, & since there was some method to compute the intermediate values in the first place, they can be removed from memory in reverse order. (This is a known method—I did not come up with it.) Then you only need enough memory to store the maximum intermediate storage size (which is not all intermediate results unless the computation is a single list of originally-irreversible steps with no subroutines & such), & you can eventually end up with just the answer (& any inputs) remaining in memory.
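A sketch of that erase-in-reverse-order method (often called uncomputation) in Python. The pipeline of irreversible steps g here is made up for illustration; each step XORs its result into a fresh zeroed register, and the reverse pass recomputes each intermediate to XOR it away:

```python
# hypothetical pipeline of irreversible steps
g = [lambda x: x & 0b1010, lambda x: (x + 3) % 16, lambda x: x | 1]

def run_reversibly(x):
    regs = [0] * (len(g) + 1)       # zero-initialized scratch registers
    regs[0] = x
    # forward pass: XOR each step's result into a fresh zeroed register
    for i, gi in enumerate(g):
        regs[i + 1] ^= gi(regs[i])
    out = 0
    out ^= regs[-1]                 # copy the answer (XOR into a zero register)
    # reverse pass: recompute each intermediate & XOR it away,
    # erasing the known values in reverse order
    for i in reversed(range(len(g))):
        regs[i + 1] ^= g[i](regs[i])
    assert regs[1:] == [0] * len(g)  # scratch space is clean again
    return regs[0], out              # only the input & the answer remain

print(run_reversibly(9))  # (9, 11)
```

Note the peak memory use is the full register set during the forward pass, matching the point above: you pay for the maximum intermediate storage, not for keeping every intermediate result forever.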
Did they try finding an entropy source on-calculator like Linux uses for
For that matter... what about the slight bias of actual physical dice rolled by humans? Even assuming perfectly fair dice, 72d12 only yields ~258 bits; any real-world bias drops that below the 256 bits needed, so you need extra rolls (enough to clear 256 bits with sufficiently high probability), plus some strategy (hashing?) to mix the slightly spread-out entropy into a maximum-entropy key.
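A rough sketch of the accounting (Python). The 10%-heavier face is a made-up example bias, min-entropy is used as the conservative per-roll measure, & SHA-256 stands in for the "hashing?" mixing step:

```python
import hashlib
import math

def min_entropy_per_roll(probs):
    # min-entropy, the conservative measure for key material:
    # -log2 of the probability of the most likely face
    return -math.log2(max(probs))

def rolls_needed(probs, key_bits=256):
    return math.ceil(key_bits / min_entropy_per_roll(probs))

fair = [1 / 12] * 12
# hypothetical bias: one face 10% more likely than a fair face
biased = [1.1 / 12] + [10.9 / (11 * 12)] * 11

def distill_key(rolls):
    # mix the spread-out entropy into a uniform-looking 256-bit key;
    # the separator keeps the encoding unambiguous ("1,12" vs "11,2")
    data = ",".join(str(r) for r in rolls).encode()
    return hashlib.sha256(data).digest()

print(rolls_needed(fair))    # 72 rolls suffice for a perfectly fair d12
print(rolls_needed(biased))  # 75 rolls under this example bias
```

Hashing all the rolls together this way spreads whatever entropy is present across the whole 256-bit output, though a proper randomness extractor would be the rigorous version of this step.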