Suggesting that the purpose of intelligence in this man's random musings might be to increase the background levels of entropy for your own benefit.
That's close, I think. I am not a physicist and I skimmed the equations, but here's my take on what they're proposing. Physical systems have states, which can be described by a state vector. The state of these systems evolves according to some set of rules that describes how the state vector changes over time. They've built a simulator in which the probability of a certain state transition is computed by looking at how many different paths (in state space, i.e. future histories of the system) are possible from the new state, in such a way that the system tries to maximise the number of possibilities for the future. In one example, they have a particle that moves towards the centre of a box, because from there it can move in more directions than when it's close to a wall.
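If I've understood that right, the core loop is something like the toy below: a one-dimensional particle in a box that, at each step, moves to whichever neighbouring cell leaves the largest number of distinct future paths open. This is only a minimal sketch of the idea as I read it, not their actual algorithm; the grid size, the horizon, and the greedy move rule are all my own inventions.

    # Toy version of the idea as I read it (not their actual algorithm):
    # a particle on a 1-D grid picks each move by counting how many
    # distinct in-box paths of length HORIZON start from the candidate
    # position, and goes wherever that count is largest.

    from functools import lru_cache

    N = 11        # grid cells 0..N-1, the walls of the "box"
    HORIZON = 5   # how many steps ahead we count paths

    @lru_cache(maxsize=None)
    def num_paths(pos, steps):
        """Number of distinct in-box paths of length `steps` from `pos`."""
        if steps == 0:
            return 1
        return sum(num_paths(p, steps - 1)
                   for p in (pos - 1, pos, pos + 1)
                   if 0 <= p < N)

    def step(pos):
        """Greedily move to the neighbour with the most open futures."""
        candidates = [p for p in (pos - 1, pos, pos + 1) if 0 <= p < N]
        return max(candidates, key=lambda p: num_paths(p, HORIZON))

    pos = 1                       # start next to a wall
    for _ in range(10):
        pos = step(pos)
    print(pos)                    # ends up at 5, the centre of the box

Even in this toy, the particle drifts away from the walls exactly as in their example, simply because more futures are open from the middle.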
They then set up two simple models mimicking two basic intelligence tests, and find that their simulator solves them correctly. One is a cart with a pendulum suspended from it, which the algorithm swings into an upright position, because from there it's easiest (cheapest energetically, I gather) to reach any other given state. The other is an animal intelligence test, in which food is placed in a space too narrow for the animal to reach into, along with a tool with which the food can be extracted. In their simulation, the "food" is indeed successfully moved out of the enclosed space, because it's easier to do various things with an object when it's close at hand than when it's in a box. However, in neither case does the algorithm "know" the goal of the exercise. So what they've shown is that they've invented a search algorithm that can solve two particular problems, problems which are often considered tests of intelligence, without knowing the goal.
Then, they use this to support the hypothesis that intelligence essentially means maximising future possibilities. Another way of saying this, I think, is that an intelligent creature will seek to maximise the amount of power it has over its environment, and they've translated that concept into the language of physics. That's an intriguing idea, touching on liberty, power struggles between people at all scales, scientific and technological progress, and so on. I can't imagine the idea itself being new, though, so everything hinges on whether this simulation adds anything new to that discussion.
On the face of it, not much. You might as well say that they've found two tests for which the solution happens to coincide with the state that maximises the number of possible future histories. The only surprising thing, then, is that their stochastically greedy search algorithm (without having looked at the details, I wouldn't be surprised if it turned out to be yet another variation of Metropolis-Hastings with a particular objective function; see the sketch below) finds the global solution without getting stuck in a local optimum, which could be entirely down to coincidence. It's easy to think of a problem their algorithm won't solve: suppose the goal were to put the "food" into the box rather than to take it out. Their algorithm will never do that, because that would increase the future effort needed to do anything with it. Of course, you might consider that pretty intelligent, and many young humans would certainly agree, although their parents might not. It would be interesting to see how many boxed objects you need before the algorithm considers it more efficient to leave them neatly packaged rather than strewn randomly about the floor, if that ever happens at all.
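To make that conjecture concrete, here is a minimal sketch of the kind of sampler I have in mind: a plain Metropolis-Hastings step whose objective is the log of the path count, reusing num_paths(), N and HORIZON from the earlier sketch. This is purely my guess at the shape of their machinery, not anything taken from the paper.

    import math, random

    def metropolis_step(pos, objective, propose):
        """One MH step, biased toward states with a higher objective."""
        cand = propose(pos)
        delta = objective(cand) - objective(pos)
        # Accept with probability min(1, exp(delta)); the clamp to 0.0
        # just avoids overflow in exp() for large positive deltas.
        if random.random() < math.exp(min(0.0, delta)):
            return cand
        return pos

    # Objective: log of the number of open futures (my hypothetical choice).
    objective = lambda p: math.log(num_paths(p, HORIZON))
    # Proposal: a random neighbour, clamped to stay inside the box.
    propose = lambda p: max(0, min(N - 1, p + random.choice((-1, 1))))

    pos = 1
    for _ in range(1000):
        pos = metropolis_step(pos, objective, propose)
    # The chain spends most of its time near the middle of the box.

A sampler like this finds the middle of the box for the same reason the greedy toy does, but with no guarantee of avoiding local optima in a less forgiving state space, which is exactly the worry above.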
There's another issue: the examples are laughably simple. Standing upright may let you do more different things, but no one spends their life standing up, because constantly correcting for random disturbances in the environment costs energy. The model ignores this completely. Similarly, you could argue that since in the simulation (unlike in the actual animal experiment) there is no reward for using the object, expending the energy to get it out of its box is not very intelligent at all.
Conclusion: an interesting idea, but in its present state, not much more than that.