Oh, it's practically a given that the non-linearity won't be the same at all scales. But complex behaviour can be produced by very simple non-linear systems - the Mandelbrot Set being the best-known example - so the presence of non-linearity merely creates a problem of practical computability rather than one of mathematical computability.
(Remember, to be computable in the mathematical sense, the algorithm has to complete in finite time. Which can mean anywhere from a few picoseconds to an hour after the heat-death of the Universe. To be practical, though, the model must produce results within the time the results are useful. To be commercially practical, it also has to produce results faster than other methods of getting those results.)
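The Mandelbrot example above can be made concrete. The iteration rule is a single line of non-linearity, yet the boundary it traces is infinitely complex; the iteration cap is exactly the practical-computability knob the parenthetical describes (the maths permits unbounded iteration, the time budget does not). A minimal sketch, with parameter names of my own choosing:

```python
# Escape-time test for the Mandelbrot set: iterate z -> z^2 + c.
# max_iter is the practical budget: points truly inside the set
# would iterate forever, so we declare "inside" after max_iter steps.

def escape_time(c: complex, max_iter: int = 100) -> int:
    """Return the iteration at which |z| exceeds 2, or max_iter if it never does."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

# A point inside the set exhausts the whole budget...
print(escape_time(0j))      # 100
# ...while a point well outside escapes almost immediately.
print(escape_time(2 + 2j))  # 1
```

The escape radius of 2 is the standard bound (once |z| > 2, divergence is guaranteed), so the only approximation here is the finite iteration cap - the practical, not the mathematical, limit.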
So we're looking at nested non-linear systems, no matter what starting point we're using.
Let's start with a bottom-up approach.
In the biological world, each cell has multiple mechanisms running in parallel, where each mechanism is non-linear. The cell itself is a non-linear construct of these. There are different types of interconnect, and these are also non-linear, so any network of cells is a non-linear construction of non-linear components. The brain has topological constraints, but unless there are grounds for believing those constraints fundamentally alter the maths, the maths should be independent of implementation details.
This says we're looking at a nesting 3 deep: a chaotic system in which potentially every parameter is itself a chaotic system, and every parameter of those is potentially a chaotic system in turn.
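The 3-deep nesting can be sketched directly. The construction below is mine, not a biological model: each level is a logistic map (the textbook minimal chaotic system), and the output of each inner level modulates the growth parameter of the level above it, so the parameters of the outer chaotic system are themselves chaotic trajectories.

```python
# "Chaotic system whose parameters are chaotic systems", nested 3 deep.
# Level 0 might stand for an intracellular mechanism, level 1 for the
# cell, level 2 for the network -- purely illustrative labels.

def logistic(x: float, r: float) -> float:
    return r * x * (1.0 - x)

def nested_step(state: list[float]) -> list[float]:
    """state = [mechanism, cell, network]; innermost level first."""
    x0, x1, x2 = state
    x0 = logistic(x0, 3.9)             # innermost: fixed chaotic regime
    x1 = logistic(x1, 3.6 + 0.4 * x0)  # its parameter wobbles chaotically
    x2 = logistic(x2, 3.6 + 0.4 * x1)  # and so does the next level's
    return [x0, x1, x2]

state = [0.3, 0.4, 0.5]
for _ in range(10):
    state = nested_step(state)
print(state)  # three coupled chaotic trajectories, all within (0, 1)
```

Because the modulated parameter stays in [3.6, 4.0], every level remains in the chaotic band of the logistic map while staying bounded in (0, 1) - simple terms, complex joint behaviour, which is the point of the paragraph above.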
What else do we know? We know that the lowest-level systems are fundamentally unchanged from how they were 3.5 billion years ago, when cellular life first arose. They may be chaotic, but the building-blocks are all very simple. The only real internal changes have been in the organization of the building-blocks. All other changes within cells deal with interactions, and mathematically, interactions are on a different level.
Most of those lowest-level systems are common to heart cells, skin cells and brain cells. Now, this will include communication mechanisms, and those we DO have to consider. Basic housekeeping that is a product only of the cell being biological can be ignored. Systems specifically activated in neurons and NOT common across all cells also have to be considered, even if they are housekeeping, since neurons persist state by means of such housekeeping.
Now, the mechanics of these functions aren't what's important. What's important is what they do to the logic of a neuron to make it capable of data processing.
The cell itself is a network of these. In standard computer network terms, you're looking at the equivalent of a moderately sized, multicast-capable, routing-capable ad-hoc network. This is just for a single neuron; we're not even up to networking these things yet. Actually, strictly speaking, it's multiple such networks: in biological cells, you have independent chemical and electrical paths, with different latencies and different bandwidths.
Unless there is firm evidence that this is an implementation detail that does not alter the specification, I believe that it is wisest to assume it DOES alter the specification, that signal delays and other signal characteristics are important. Some variables from iteration X of the system are fed into iteration X+1, but others are fed into iteration X+N (where N can't be guaranteed to be a constant). This is what makes it a chaotic system of chaotic systems rather than merely a bigger chaotic system.
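The X versus X+N point above can be made concrete with a delay buffer. The construction is mine: one fast path feeds the update immediately, while a slower path delivers a copy of the state from N iterations ago, standing in for the slower chemical channel alongside the faster electrical one. With N > 1, the effective state space grows by N dimensions, which is what separates "a bigger chaotic system" from "a chaotic system of chaotic systems".

```python
# Delayed feedback: the update at step t uses x from step t-1 (fast
# path) and x from step t-N (slow path).  All coefficients are
# illustrative choices, not measured values.

from collections import deque

def run(delay: int, steps: int = 50) -> float:
    x = 0.4
    buf = deque([x] * delay, maxlen=delay)  # holds x from `delay` steps ago
    for _ in range(steps):
        delayed = buf[0]                    # signal from iteration t - delay
        buf.append(x)                       # current x enters the slow path
        x = 3.9 * x * (1.0 - x) * (0.75 + 0.25 * delayed)
    return x

# The identical rule with different delays follows different trajectories:
print(run(delay=1), run(delay=5))
```

Note that making N variable (as the paragraph above allows) would only require pushing onto buffers of differing lengths; the sketch keeps N fixed per run for clarity.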
Now, the network of cells is basically more multi-path networking where again different types of interconnect have different properties. Further, not only are the nodes in the network effectively mobile and multicast, but the number of nodes is variable.
(We can ignore the number of connections a given neuron has by looking at the superset of functions exhibited by all types of cell in the brain, whether neuron, axon, or whatever, and by treating data as multicast to interested parties in a group rather than as point-to-point traffic.)
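That multicast abstraction can be sketched as a toy model (the naming and structure are mine): instead of tracking per-neuron point-to-point links, a signal is published to a group, and every currently subscribed node receives it. A variable node count then reduces to editing group membership, which is exactly why the parenthetical above lets us ignore per-neuron connection counts.

```python
# Toy multicast fabric: nodes subscribe to groups; a publish is
# delivered to every current member.  No claim that the brain works
# this way mechanically -- this is the modelling abstraction only.

from collections import defaultdict

class MulticastFabric:
    def __init__(self) -> None:
        self.groups: dict[str, set[str]] = defaultdict(set)
        self.inbox: dict[str, list[float]] = defaultdict(list)

    def subscribe(self, group: str, node: str) -> None:
        self.groups[group].add(node)

    def unsubscribe(self, group: str, node: str) -> None:
        self.groups[group].discard(node)

    def publish(self, group: str, signal: float) -> None:
        for node in self.groups[group]:
            self.inbox[node].append(signal)

fabric = MulticastFabric()
for n in ("n1", "n2", "n3"):
    fabric.subscribe("layer-A", n)
fabric.publish("layer-A", 0.7)       # all three receive it
fabric.unsubscribe("layer-A", "n3")  # the node count is variable
fabric.publish("layer-A", 0.2)       # only n1 and n2 receive this one
print(fabric.inbox["n1"], fabric.inbox["n3"])  # [0.7, 0.2] [0.7]
```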
Since a network of networks is essentially just a network, there may be optimizations you can make. It may also turn out that some of the functions that are unique to "brain cells" really are just implementation details, that the same result can be produced with a simpler model.
Ok, that's starting from the lowest level and working up. What happens if we start at the highest level and work down (the classic comp sci approach)?
Well, the brain consumes and generates far more data than the senses can produce or the muscles can use. Therefore, really, the I/O has much more to do with synchronizing with an external reality than with what the brain is doing.
As James Burke pointed out in his original Connections series, what the brain believes it has perceived and what the senses are actually recording can be very different.
Ergo, at the highest level, it is reasonable to regard the brain as a virtual world simulator in which the brain has a point-of-presence in that simulation on which it bases its actions. This fits with what is known of mental disorders.
Disorders that reduce the connectivity of the brain (such as the two hemispheres being isolated from each other) produce multiple points of presence (and therefore multiple viewpoints).
Synaesthesia doesn't just produce an appearance of misdirected data (which would be the case if it were merely a switching error to the wrong processing unit in the brain); it produces something the person cannot distinguish from reality. The tidiest way to represent this is that the person is aware of a virtual reality that has been altered in this way, rather than being directly aware of anything external.
But what is this virtual reality made of? It's made of smaller units in the brain processing I/O, where the bandwidth between units is improbably low - far too low for raw sense data to be shipped around, so each unit must maintain its own internal model. Thus, each smaller unit is the same in kind as the whole. (Self-similarity in action.) Each component is a VR, and the larger VR is built from the interactions of those VRs. However, the total bandwidth between components is greater than the total bandwidth between the larger VR and the outside world. (Thus, it exhibits a property of fractals, in which reducing the scale increases the complexity.)
So we're looking at a system that exhibits self-similarity and some interesting fractal properties, so it's safe to say it's a chaotic system. But because the non-linearity varies between layers, it's a chaotic system of chaotic systems. And since the same properties hold for the regions of the brain and for the cells, the nesting is again 3 deep, with the possibility of optimizing to 2 deep.
(Since very simple unicellular creatures react to a stimulus that is indirectly processed internally, cells are themselves VR systems.)
So we get the same result with both approaches, which is good. We can also show that the simplest units are relatively simple systems that produce very complex results. This is also good, as it means that whilst the brain might require a nested system of time-delayed equations, each with a few hundred billion terms, to represent it mathematically even in this nested form, the terms are all very simple and many are very similar.
Mathematically, that's not going to be hard to formalize. Computationally, it would likely be easy enough to code. Practically, although this model requires nothing a Turing Machine cannot do in finite time, this model is useless unless you don't care that one second of brain function will take years (more likely decades) to compute on anything in the Top500 list today.
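A back-of-envelope version of that practicality claim: every number below is an explicit, loudly-flagged assumption (placeholders, not data) - the point is only how the wall-clock cost scales against the simulated time. If each of ~1e11 terms is itself a nested chaotic subsystem costing on the order of a million operations per timestep, and the delayed, irregular communication pattern keeps sustained throughput far below peak FLOPS, the cost lands in the "years per simulated second" regime the paragraph describes.

```python
# Cost model: (terms) x (ops per term per step) x (steps per simulated
# second) / (sustained useful ops per second).  All inputs are assumed.

def wallclock_per_sim_second(terms: float, ops_per_term: float,
                             steps_per_sim_second: float,
                             sustained_ops_per_second: float) -> float:
    """Seconds of real time needed to simulate one second of brain time."""
    return terms * ops_per_term * steps_per_sim_second / sustained_ops_per_second

# Assumed: 1e11 terms, 1e6 ops each per step (each term is a nested
# subsystem, not a scalar), 10 kHz stepping, and ~1e13 sustained useful
# ops/s once communication overhead is paid.
secs = wallclock_per_sim_second(1e11, 1e6, 1e4, 1e13)
print(secs / (3600 * 24 * 365), "years per simulated second")  # a few years
```

Change any assumed input by a couple of orders of magnitude and the answer swings from hours to centuries, which is precisely why the model is mathematically computable but commercially hopeless.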