I'm crazy enough to believe I have found a path to unification that is actually quite simple: add a new relativity principle stating that the laws of physics must be the same irrespective of the measurement instrument we use. Here is a parallel:

- Special relativity states that the laws of physics must be the same irrespective of your state of motion. So a complete description of an experiment must include which reference frame you are using. There is no absolute space, no absolute time, no aether. And we need new transformation laws to go from one frame to the next: the Lorentz transformations.
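To make the parallel concrete, here is a minimal sketch of a Lorentz boost along one axis (the standard textbook formulas, in units where we can set c = 1):

```python
import math

def lorentz_boost(t, x, v, c=1.0):
    """Transform event coordinates (t, x) into a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c ** 2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

# The spacetime interval t^2 - x^2 (with c = 1) is the same in both frames,
# even though t and x individually change.
t, x = 3.0, 1.0
tp, xp = lorentz_boost(t, x, v=0.6)
print(t ** 2 - x ** 2, tp ** 2 - xp ** 2)  # equal up to rounding
```

The invariance of the interval is exactly the sense in which "the laws are the same" even though the coordinates are not.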

- General relativity states that the laws of physics must be the same irrespective of acceleration. So a complete description of an experiment must include accelerations, including gravitation. There is no flat space-time anymore, but something that is curved by gravitational fields. So we need new transformations from one curved space-time to another, using tensor math, covariant and contravariant four-vectors, etc.

- My still-incomplete theory of incomplete measurements (TIM) states that the laws of physics must be the same irrespective of the measurement instruments used. So a complete description of an experiment must include which instruments were used, including their calibration and range. The fact that two instruments are calibrated to coincide over a given range does not allow us to postulate that they match at every scale. Space, time, mass and other measurements are no longer continuous but discrete (because all our physical instruments give discrete results). And we need new transformations when going from one physical instrument to another, which correspond almost exactly to renormalisation in quantum mechanics, but with an explanation as to their origin.

The TIM focuses on what I learn about a system using a physical measurement instrument. This starts by defining what an instrument is:

- It's a portion of the universe (i.e. it's not "outside the matrix")

- which has an input and an output (e.g. the probe and the display of a voltmeter)

- where changes in the state at the input yield a change in the state of the output (changes in voltage result in changes on the display)

- which ideally depends only on the input (the voltmeter picks up the voltage at the probe, not somewhere else)

- and which changes the output (nothing being said about changes to the input, since even macro-scale experiments can be destructive)

- the change in the output being mapped to a mathematical representation (often a real number) through a calibration
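The bullet points above can be sketched in code. This is a toy model under my own naming (`Instrument`, `response`, `calibration` are illustrative labels, not part of the TIM formalism):

```python
class Instrument:
    """A toy instrument: maps an input state to one of finitely many
    output states, then to a number through a calibration table."""

    def __init__(self, response, calibration):
        self.response = response        # input state -> discrete output state
        self.calibration = calibration  # output state -> mathematical value

    def measure(self, input_state):
        output_state = self.response(input_state)  # output depends only on input
        return self.calibration[output_state]      # calibrated reading

# A crude 3-state "voltmeter": it can only ever display low / mid / high.
def response(voltage):
    if voltage < 1.0:
        return "low"
    elif voltage < 2.0:
        return "mid"
    return "high"

voltmeter = Instrument(response, {"low": 0.5, "mid": 1.5, "high": 2.5})
print(voltmeter.measure(1.2))  # a discrete reading, not the exact input
```

Note that `measure(1.2)` and `measure(1.7)` give the same reading: the discreteness of the output is built into the instrument, which is the point of the definition.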

The instrument gives me knowledge about the state at its input. Since the instrument has a limited number of output states, my knowledge of the system through this instrument at any given time is described by a probability for each of the possible states. If I have N states, the probabilities p_1...p_N are all non-negative and sum to 1. Taking the amplitudes a_i = sqrt(p_i), the squares a_i^2 sum to 1, so the knowledge state can be represented by a unit vector in dimension N.
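As a sketch of that representation (the square-root step is the standard way to turn a probability distribution, which sums to 1, into a vector of unit length):

```python
import math

def knowledge_state(probabilities):
    """Map a discrete probability distribution to a unit vector of amplitudes."""
    if not math.isclose(sum(probabilities), 1.0):
        raise ValueError("probabilities must sum to 1")
    return [math.sqrt(p) for p in probabilities]

# Three possible instrument readings with probabilities 1/2, 1/4, 1/4.
state = knowledge_state([0.5, 0.25, 0.25])
norm = math.sqrt(sum(a * a for a in state))
print(norm)  # the knowledge state has unit length
```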

For example, if I care about "is there a particle here", the possible measurements are "yes" and "no". The knowledge state can therefore be represented by a unit complex number, one real component per outcome. If you now want to answer that question on a plate with 1 million possible positions, you have a field of 1 million complex numbers, with the additional constraint that the particle is at only one position (expressed as the probabilities of all the "yes" outcomes summing to 1). That field is remarkably similar to the wave function, and this reasoning explains why it is complex-valued, why it encodes a probability of presence, and why it collapses when you learn where the particle is.
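Here is a toy version of that construction, with a small plate of 4 positions instead of 1 million (the encoding of "yes"/"no" amplitudes into the real and imaginary parts is one concrete choice among several; the names are mine):

```python
import math

def two_state(p_yes):
    """Encode two-outcome knowledge ('yes'/'no') as a unit complex number:
    the real part carries sqrt(p_yes), the imaginary part sqrt(p_no)."""
    return complex(math.sqrt(p_yes), math.sqrt(1.0 - p_yes))

z = two_state(0.36)
print(abs(z))  # unit modulus for any p_yes in [0, 1]

# A toy "plate" with 4 positions: one complex number per position, the
# squared "yes" amplitudes summing to 1 (the particle is somewhere).
field = [two_state(p) for p in (0.1, 0.2, 0.3, 0.4)]

def collapse(field, position):
    """Once an instrument reports the particle at `position`, p_yes
    becomes 1 there and 0 everywhere else: the field collapses."""
    return [two_state(1.0 if i == position else 0.0)
            for i in range(len(field))]

field = collapse(field, 2)
```

After the collapse, the normalisation constraint still holds: all the "yes" probability is concentrated at position 2.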

But the primary difference from QM and GTR is that space-time is no longer continuous. It is discrete, and the discretization depends on the instrument being used. Because it is discrete, the sums you compute never contain theoretical infinities (these infinities being the reason why QM and GTR are considered fundamentally incompatible).
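A toy illustration of why discreteness removes the infinities (this is only an analogy with a divergent integral, not the actual renormalisation machinery): the continuum integral of 1/x from 0 to 1 diverges, but an instrument with a smallest measurable step turns it into a sum that is always finite, growing only like the logarithm of the resolution.

```python
import math

def resolved_sum(resolution):
    """Riemann sum of 1/x over (0, 1] on a grid with a smallest step.
    The continuum integral diverges at 0; any finite-resolution
    instrument yields a finite sum (~ log(1/resolution))."""
    n = int(round(1.0 / resolution))
    return sum((1.0 / (k * resolution)) * resolution for k in range(1, n + 1))

for res in (1e-2, 1e-4, 1e-6):
    print(res, resolved_sum(res))  # finite for every finite resolution
```

Refining the instrument (shrinking the step) makes the sum grow, but it never becomes infinite; only the idealised, instrument-free continuum limit does.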

Here is a layman's view of the incompatibility between QM and GTR. Imagine ants trying to define the laws of physics on Earth. They set up rules, e.g. their anthill is at only one place in the universe, so the sum of the probabilities of finding the anthill over all of space-time is 1. But if they then realise that the Earth's surface is not flat but curved, the method above no longer works. If you integrate out to infinity along the surface of the Earth, you "count" the anthill multiple times, so your integral, instead of being normalised, diverges to infinity. It is only an analogy, but it is an interesting one.