I would not know why both cannot be expressed with a set of small functions.
Well, what would those small functions be? Suppose we've got a data processing pipeline that looks something like this:
    valueA, valueB = step1(inputA, inputB, inputC)
    valueC = step2(valueA, inputD)
    valueD, valueE = step3(valueB, valueC)
    # ... and so on, until ...
    resultA, resultB = step20(valueX, valueY, valueZ)
It probably makes sense to break out each individual step into its own function, but if your algorithm has 20 steps, each of which might need some input data from various earlier steps, what advantage is there in breaking this function down any more? It's just fundamentally a long sequence of operations.
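To make that shape concrete, here is a minimal Python sketch of such a driver (the step bodies and data are invented purely for illustration): each step is already its own function, yet the driver remains an irreducibly linear sequence of calls whose intermediate values feed later steps.

```python
# Hypothetical pipeline: each step is its own small function, but the
# driver is inherently a long, linear sequence of dependent calls.

def step1(a, b, c):
    # Placeholder computation standing in for real work.
    return a + b, b * c

def step2(value_a, d):
    return value_a - d

def step3(value_b, value_c):
    return value_b + value_c, value_b - value_c

def run_pipeline(input_a, input_b, input_c, input_d):
    value_a, value_b = step1(input_a, input_b, input_c)
    value_c = step2(value_a, input_d)
    value_d, value_e = step3(value_b, value_c)
    # Imagine steps 4 through 20 here, each consuming values produced
    # by various earlier steps; splitting run_pipeline itself further
    # would just scatter that dependency chain across more functions.
    return value_d, value_e

print(run_pipeline(1, 2, 3, 4))
```

The point of the sketch is that `run_pipeline` can only shrink by hiding the data flow, not by removing it.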
The other example I gave is a slightly different case, but a similar conclusion. The general subgraph isomorphism problem is NP-complete, but you probably know more about your underlying data in realistic situations, so matching algorithms often wind up being some sort of deeply nested search with the order of tests determined heuristically. The similarity to the previous example is that again each level of the search might need context from various outer levels. So again, what is gained by breaking out the inner loops into their own functions here? They have no inherent meaning without that context, so it's not as if they're going to be reusable or they're going to represent some useful abstraction with a meaningful name. It's just splitting up code that is fundamentally related to avoid some dogma about deep nesting being bad.
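As a rough illustration of what such a search looks like, here is a toy backtracking matcher in Python (the graph representation and names are my own invention, and a real implementation would use heuristics to order the candidates): each level of the search only makes sense relative to the partial mapping accumulated by all the outer levels.

```python
# Toy subgraph-matching search (illustrative only). Each recursion level
# extends a partial mapping built by the outer levels, so the inner code
# has no standalone meaning without that accumulated context.

def find_embedding(pattern_edges, pattern_nodes, target_adj):
    """Map every pattern node onto a distinct target node so that every
    pattern edge lands on an edge of the target graph, or return None."""
    mapping = {}

    def consistent(p, t):
        # Check the candidate assignment p -> t against the partial mapping.
        for (u, v) in pattern_edges:
            if u == p and v in mapping and mapping[v] not in target_adj[t]:
                return False
            if v == p and u in mapping and t not in target_adj[mapping[u]]:
                return False
        return True

    def search(i):
        if i == len(pattern_nodes):
            return dict(mapping)  # complete embedding found
        p = pattern_nodes[i]
        for t in target_adj:
            if t not in mapping.values() and consistent(p, t):
                mapping[p] = t
                result = search(i + 1)
                if result is not None:
                    return result
                del mapping[p]  # backtrack
        return None

    return search(0)

# Find a triangle inside a 4-node target graph (adjacency as sets).
target = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
pattern = [("a", "b"), ("b", "c"), ("a", "c")]
print(find_embedding(pattern, ["a", "b", "c"], target))
```

Pulling `search` or `consistent` out into top-level functions would mean threading `mapping`, `pattern_edges`, and `target_adj` through every call, without making either piece independently meaningful.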
I don't know what your background is or whether you've ever worked in the kinds of fields where these sorts of situations tend to come up. A lot of programs only need very simple data structures and algorithms, and if that's the kind of code you tend to work on then maybe what I'm saying just looks like contrived examples. All I can say is that I've worked on many projects over the years that do need more sophisticated data structures and algorithms than, say, typical business management software, and I've seen plenty of code where having functions of 50-100 lines or longer is entirely reasonable. Obviously I'm not talking about all of the functions, or probably even most of them, but I would argue that the important point is whether each function provides some clean, coherent packet of functionality, not whether it's 5 lines long or 105.
Perhaps your mind works differently than mine. For me it is obvious that a set of smaller functions is easier to understand and maintain than a big one.
From the discussions so far, I suspect we've just worked mostly on different types of software. In some cases, I do write a lot of quite small functions. But I don't do that because being small is inherently good; I do it when only a few lines of code happen to be needed to represent whatever concepts I'm working with at the time.
When I see people like Robert Martin making sweeping generalisations about keeping functions very short or keeping them under N parameters or whatever, and particularly when those people also imply that anyone whose code doesn't follow their rules is somehow a bad programmer or unprofessional or whatever insult they're hurling around this week, I feel a bit like a guy modelling the aerodynamics of a supersonic jet whose ten-year-old kid thinks wing design doesn't really matter because he can get enough lift already with his paper plane.