## Journal: A

A Reddit comment inspired an interesting thought.

The question is: if you had a fast Turing machine (FAST_TM) with a little bit of paper tape, and a slow Turing machine (SLOW_TM) with a lot of paper tape, could you simulate FAST_TM on SLOW_TM? Could you trade memory for speed to get a (close to real-time, even) representation of FAST_TM on SLOW_TM?

I think this gets interesting if you ask for SLOW_TM to have an infinite (|ℕ|) paper tape. As long as you could 'read state' into those tape cells fast enough, you could keep a history of state going back as far as the last time FAST_TM was in a stable/idle state. During an idle state you could start to play catch-up. Between idle-state points you would be at some indeterminate point behind the last idle state where you could catch up; in effect you'd have a whole stack of possible state points (what is the computational complexity of catching up on this stack, as a function of its size STACKSIZE?). So the load percentage of FAST_TM would determine the average lag between it and your successful model of it.
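The buffer-then-catch-up idea above can be sketched as a toy discrete simulation. Everything here is an assumption for illustration: the fast machine's activity is modelled as a trace of state snapshots (with `None` for idle ticks), and the slow machine gets a fractional processing budget per tick rather than actually running TM transitions.

```python
from collections import deque

def simulate_lag(fast_steps, slow_speed):
    """Toy model: FAST_TM emits one state snapshot per busy tick (None
    means idle); the slow simulator can process `slow_speed` snapshots
    per tick (< 1.0 means it falls behind while FAST_TM is busy).
    Returns the backlog size after each tick, i.e. the 'lag'."""
    backlog = deque()   # recorded states not yet replayed by SLOW_TM
    budget = 0.0        # accumulated fractional processing capacity
    lags = []
    for step in fast_steps:
        if step is not None:          # FAST_TM did work: record its state
            backlog.append(step)
        budget += slow_speed          # SLOW_TM's processing budget this tick
        while backlog and budget >= 1.0:
            backlog.popleft()         # catch up on one recorded state
            budget -= 1.0
        lags.append(len(backlog))
    return lags

# A busy stretch followed by idle time: lag grows, then drains to zero.
trace = [1, 2, 3, 4, None, None, None, None, None, None]
print(simulate_lag(trace, 0.5))  # → [1, 1, 2, 2, 2, 1, 1, 0, 0, 0]
```

The drain-to-zero behaviour during the idle tail is exactly the "play catch-up" phase; at 100% load (no `None` ticks) with `slow_speed < 1.0`, the backlog grows without bound, which is why an infinite tape matters.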

This suggests that you could get a probabilistic chance of modelling FAST_TM by just adding memory of (O(S) + O(T)) · t, where S is the memory required to record one state, t is the elapsed time, and T is the amount of memory you spend transitioning from state to state. I'm guessing this would be hard to do on CPUs with little memory, i.e. there's a high constant factor, but once you're past this constant factor it gets relatively easy to do... but then again, maybe it doesn't? I think the known lower bounds for Turing machines of a given memory capacity are very small, and we don't have many proofs for them.
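Dropping the big-O constants, the memory bound above is just arithmetic; a minimal sketch (the function name and the sample numbers are made up for illustration):

```python
def backlog_memory(S, T, t):
    """Tape cells needed to buffer t time-steps of FAST_TM history:
    S cells to record each state plus T cells of transition scratch
    per step -- the (O(S) + O(T)) * t bound, constants dropped."""
    return (S + T) * t

# E.g. an 8-cell state with 2 cells of transition scratch, buffered
# for 100 steps of lag:
print(backlog_memory(8, 2, 100))  # → 1000
```

The point of the linear form is that the buffer grows with lag time t, not with the number of reachable states, which is what makes the trade plausible at all.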

Now here's an idea for a new kind of machine: a FAST_TM simulated on a new kind of SLOW_TM, i.e. one where SLOW_TM has a bounded amount of memory proportional to some percentage of possible outcomes. Let's call this SIM_TM_LB. SIM_TM_LB is going to have a lower busy-beaver-like number than a regular TM of its size, because a certain percentage of possible outcomes cannot be simulated (those that overflow the allowed stack). There's going to be some lower limit L with 1 ≤ L(M) ≤ BB(M) on the largest program the simulated fast CPU can run. Proving what that is would be interesting, because as you expand M (again in relation to the load-average ratio between the two machines, the state-transition memory footprint T, and the FAST_TM state size S) you're also defining L(M) ≤ BB(M), which means you're defining a new kind of number, say ψ, that seems to be related to Chaitin's Ω: ψ = Σ_i 1/L(i). Why is ψ important?
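The real L(M) is sandwiched under BB(M) and so presumably uncomputable, but the shape of ψ's partial sums can still be illustrated. Everything here is a stand-in: `toy_L` is an invented fast-growing sequence substituted for the genuine L, purely to show that ψ = Σ_i 1/L(i) converges quickly when L grows quickly.

```python
from fractions import Fraction

def toy_L(i):
    """Hypothetical stand-in for L(i): a fast-growing sequence
    (2^(2^i)). The real L is not computable like this."""
    return 2 ** (2 ** i)

def psi_partial(n):
    """Partial sum psi_n = sum_{i=1..n} 1/L(i), kept exact
    with rationals."""
    return sum(Fraction(1, toy_L(i)) for i in range(1, n + 1))

for n in range(1, 5):
    print(n, float(psi_partial(n)))
```

As with Ω, each extra term refines the same real number; the difference here is that the terms come from the simulation-bounded limit L rather than from program halting probabilities.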

It's yet another way of looking at problems where you're dealing with something smarter than you are. It's where you're playing a game with god. Where you're having an argument with an oracle. It expresses all information that you can possibly acquire rather than what your opponent can possibly know. Or does it?

This suggests ψ's parameters are T and S. Are there any others?

Also: what happens if we start allowing stack overflows to traverse from TM to TM? This seems to build yet another kind of machine, one with potentially really weird properties.