It's an interesting idea, and one I've given a little thought to. (It would enable a very fault-tolerant computer architecture.) However, unless you implement highly redundant interconnects/busses, you still have the problem of N devices fighting for a shared resource.
If you assert that all nodes have a private direct connection to all other nodes, and thus eliminate the bottleneck that way, you now have to decide how to gracefully handle a downed private link.
I suppose a hybrid might work: fully dedicated links, plus one shared bus. When a dedicated link fails, communicate over the shared bus.
Scaling such a design would become prohibitively costly, though. A 200-node design would have orders of magnitude more dedicated links.
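The dedicated-link count grows quadratically: a full mesh of n nodes needs n(n-1)/2 point-to-point links. A quick back-of-envelope check (Python, just for the arithmetic):

```python
def mesh_links(n: int) -> int:
    """Point-to-point links in a fully connected mesh of n nodes."""
    return n * (n - 1) // 2

# A 5-node mesh is cheap; 200 nodes is not.
print(mesh_links(5))    # 10 links
print(mesh_links(200))  # 19900 links
```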
My idea for experimenting with this was to use some cheap wired home routers: set up private VLANs on the 5 or so Ethernet ports each has, run private patch cables between those ports, then put all the WAN ports on a dumb hub.
The local copy of Linux on each system can handle management of local device resources, and a daemon running on each node then handles listening and responding on each interface.
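A minimal sketch of that per-node daemon, assuming one UDP socket per link multiplexed with Python's selectors module. Distinct loopback ports stand in for the real per-VLAN interface addresses here, and all the names are invented:

```python
import selectors
import socket

# Each entry stands in for one private interface; on the real routers the
# daemon would bind to the address of each VLAN port instead. Distinct
# loopback ports keep the sketch runnable on one machine.
LINKS = {9001: "link-a", 9002: "link-b", 9003: "link-c"}

def make_listener():
    """One UDP socket per link, multiplexed with a selector."""
    sel = selectors.DefaultSelector()
    for port, name in LINKS.items():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("127.0.0.1", port))
        sock.setblocking(False)
        # Tag each socket with its link name so replies can key off it.
        sel.register(sock, selectors.EVENT_READ, data=name)
    return sel

def poll_once(sel, timeout=1.0):
    """Drain whichever links have traffic; return (link, payload) pairs."""
    return [(key.data, key.fileobj.recvfrom(4096)[0])
            for key, _ in sel.select(timeout)]
```

A real daemon would loop on poll_once and act on each (link, payload) pair; keeping the receiving link attached to every message is what lets behavior depend on which interface the data arrived on.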
Just what such a thing would be good at doing escapes me, though. To be really useful, you would need some way to have nodes specialize, then cooperate, without a central authority.
That way, should we decide to use this network to process live video: one node decodes the input stream and dispatches portions of the decoded stream to peer devices; those peers do whatever processing is requested, then send the processed streams to yet another peer, which reassembles them and shuttles the result to the endpoint node, which re-encodes the stream and writes it to the output device. (Or some similarly cellular process.)
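The decode/dispatch/process/assemble flow can be sketched in a single process; the important part is tagging chunks with their position, since the assembler needs that to reorder results. Everything here (round-robin splitting, uppercasing as the stand-in for "processing") is illustrative only:

```python
def decode_and_dispatch(stream, n_workers):
    """Split the decoded stream into tagged chunks, round-robin to peers."""
    return [(i, stream[i::n_workers]) for i in range(n_workers)]

def process(chunk):
    """Stand-in for whatever per-peer work is requested (here: uppercase)."""
    return chunk.upper()

def assemble(tagged_chunks, total_len):
    """Reorder processed chunks back into a single output stream."""
    out = [None] * total_len
    for i, chunk in tagged_chunks:
        out[i::len(tagged_chunks)] = chunk
    return "".join(out)

stream = "abcdefgh"
work = decode_and_dispatch(stream, 2)          # dispatch to 2 peers
done = [(i, process(c)) for i, c in work]      # peers work independently
print(assemble(done, len(stream)))             # ABCDEFGH
```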
I suppose this is kinda similar to how a neural column works, where locally interconnected nets are restricted in the number of true local peers they have, and then communicate collectively with other neural columns by dedicated interconnects. (The video input source above could be from a camera, but it could also be from another network's output stream.)
The major logical tasks are:
Role selection in the assigned task for each local node.
How to issue instructions to the mesh nodes in a decentralized manner.
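For the role-selection task, one decentralized trick is to have every node run the same deterministic computation over shared knowledge (the member list plus the task name), so they all arrive at the same assignment with no coordinator. A sketch under that assumption; the role list and hashing scheme are invented, not a worked-out protocol:

```python
import hashlib

# Hypothetical roles for the video pipeline described above.
ROLES = ["decoder", "worker", "worker", "assembler", "encoder"]

def pick_role(node_id, all_node_ids, task):
    """Deterministic role assignment: every node sorts the member list by
    a task-salted hash, so all nodes agree on the ordering (and therefore
    on who does what) without any central authority."""
    def rank(nid):
        return hashlib.sha256(f"{task}:{nid}".encode()).hexdigest()
    ordered = sorted(all_node_ids, key=rank)
    return ROLES[ordered.index(node_id) % len(ROLES)]
```

Salting the hash with the task name means a different task reshuffles the roles, without anyone issuing instructions.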
Depending on how far you wanted to extrapolate this, each mesh could be treated as a logical unit, where each such unit is then part of another, higher-level node of similar topology: each mesh has a direct connection to each other mesh inside its higher-order node, plus one communal link all meshes can talk on inside that node.
E.g., if I make five 5-node networks out of such routers, I need 6 ports on each router: 4 for direct links to the other local nodes, 1 for the local shared connect, and 1 for a direct connection to another 5-node group. Clever use of subnetting and routing on the shared net would enable a dumb gateway device to let the shared higher link function. In the scaled-up version, each 5-node network is connected to every other 5-node network.
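The per-router port budget falls out of the group size: a direct link to each local peer, one shared-bus port, and one uplink toward a peer group. A quick check of that arithmetic:

```python
def ports_per_router(group_size):
    """Ports one router needs in the hierarchical layout: direct links to
    each of the other local nodes, plus the shared bus, plus one uplink."""
    direct_local = group_size - 1
    shared_local = 1
    uplink = 1
    return direct_local + shared_local + uplink

print(ports_per_router(5))  # 6 ports per router for 5-node groups
```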
Decisions on how to process incoming data might be tied to which interface received it, or any number of other methods.
Spying on the state of the whole system should be possible through the shared-link infrastructure, though ideally any node you use to interact with the system should be a proper peer in it, and not something sitting on the shared net only.
The drawback of such a design will be signal propagation latency, and keeping all the subnodes, at all levels, synchronized. The human brain uses a support network of astrocytes and other glial cells to guide the physical routing of dedicated links, and to tune propagation delay between neural columns through selective myelination of trunk bundles.
You could probably fake it with introduced waitstates.
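Faking that tuning with waitstates could amount to padding every link out to the latency of the slowest one, so messages sent together arrive together. The latency numbers below are made up:

```python
# Measured one-way latency per dedicated link, in ms (invented values).
link_latency_ms = {"a": 2.0, "b": 5.5, "c": 3.1}

# Pad each link with a waitstate so every path matches the slowest link.
target = max(link_latency_ms.values())
waitstate_ms = {link: round(target - lat, 3)
                for link, lat in link_latency_ms.items()}
print(waitstate_ms)  # the slowest link gets zero added delay
```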
At some point, though, the behavior of the whole will revolve around the basic logic baked inside each physical compute unit. Ideally, that is universal and consistent across the system. The magic happens based not only on the message sent, but on which interface received it, etc. The OS kernel would have to be baked in at that level. All other results would be totally emergent.
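A toy version of "same logic everywhere, behavior keyed on the receiving interface" is just a dispatch table indexed by (interface, message type). The interface names, message kinds, and handlers here are all invented:

```python
# Identical on every node; only the inputs differ per node.
HANDLERS = {
    ("shared", "status"): lambda msg: f"broadcast-ack:{msg}",
    ("direct", "status"): lambda msg: f"peer-ack:{msg}",
    ("direct", "work"):   lambda msg: f"result:{msg.upper()}",
}

def dispatch(interface, kind, msg):
    """Same code on all nodes; the receiving link participates in dispatch,
    so e.g. 'work' is only accepted over a direct peer link."""
    handler = HANDLERS.get((interface, kind))
    return handler(msg) if handler else "drop"

print(dispatch("direct", "work", "frame42"))   # result:FRAME42
print(dispatch("shared", "work", "frame42"))   # drop
```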
The parallels with neural networks shouldn't be overlooked imo. There's a huge body of work to draw from on that subject.