The kindest thing to say about the biological connections is that NNs
were initially inspired by toy abstractions of neurons, a point
repeated pointlessly and ad nauseam by successive researchers with no
actual domain knowledge in the introductions of their papers and later
books.
Which is why they worked in relative obscurity for so many years,
particularly after the rise of the more practical theory. However,
history might show that their toy models captured something essential,
particularly with things like convolutional neural networks. After
all, that really is the essence of a good model.
Beyond the dubious value of intriguing a new reader, biological
analogies were neither useful nor actually used in most papers in the
field of neural networks.
What is useful is an architecture that works. And most papers are not
useful.
The truth about ML is that it has almost nothing to do with mimicking
brains or intelligence, and almost everything to do with statistical
models and likelihood maximization scaled to large datasets.
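To make "likelihood maximization" concrete, here is a minimal sketch of the idea, using a hypothetical toy dataset of coin flips: the Bernoulli log-likelihood is maximized by gradient ascent, and the estimate converges to the closed-form answer (the sample mean). The data, learning rate, and iteration count are all illustrative assumptions, not from the original text.

```python
import math

# Hypothetical toy data: coin flips (1 = heads). The maximum-likelihood
# estimate of the Bernoulli parameter p maximizes
#   L(p) = sum_i [ x_i*log(p) + (1 - x_i)*log(1 - p) ],
# whose closed-form solution is p* = mean(x).
flips = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

def log_likelihood(p, xs):
    return sum(x * math.log(p) + (1 - x) * math.log(1 - p) for x in xs)

# Maximize by gradient ascent: dL/dp = heads/p - tails/(1 - p).
heads = sum(flips)
tails = len(flips) - heads
p = 0.5  # arbitrary starting guess
for _ in range(2000):
    grad = heads / p - tails / (1 - p)
    p += 1e-3 * grad
    p = min(max(p, 1e-6), 1 - 1e-6)  # keep p in (0, 1)

print(round(p, 3))  # converges to mean(flips) = 0.7
```

Deep learning does the same thing at scale: minimizing cross-entropy loss on a large dataset is exactly maximizing this kind of log-likelihood, just with a neural network producing the probabilities.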
Those statistical approaches are mathematically tractable, which is
partly why they are studied. That makes them good for papers that
look significant. However, the current strong empirical results are
based on deep learning and are not so tractable. In fact, they work
so well that neuroscientists are now studying the connections between
real brains and deep learning, though it is controversial.
This is no criticism of ML; that foundation is sound and represents
the best scientific guess at a collection of rational decision-making
mechanisms over the last 100 years.
The foundation is not sound. The theory doesn't explain why these
neural networks work so well. I suppose we're lucky these NN
researchers stuck with it. Hopefully the math will catch up.