And if I've misrepresented rockmuelle, or misunderstood your question, qpqp, it's because I don't have an exact model of what you're saying.
Come now, don't blame everything on me!
What I meant by an "exact model" is, of course, a predictable and in some sense deterministic process, insofar as that is possible for the given case.
Even with machine learning you create a representation of the surveyed system, but that model is (currently, and in most cases) only an approximation.
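To make that concrete, here's a minimal sketch (Python, purely illustrative, every name made up): we can only observe a "true" process through noisy samples, so the fitted parameters land near the real ones but never exactly on them.

```python
import random

# Hypothetical "true" system we can only observe through noisy samples.
def true_system(x):
    return 3.0 * x + 1.0

# Collect noisy observations (our "survey" of the system).
random.seed(42)
xs = [i / 10 for i in range(100)]
data = [(x, true_system(x) + random.gauss(0, 0.5)) for x in xs]

# Fit the simplest possible model: a least-squares line y = a*x + b.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# The learned parameters are close to the true ones, but not equal:
# the model approximates the surveyed system, it doesn't copy it.
print(f"learned: y = {a:.3f}x + {b:.3f}   (true: y = 3.000x + 1.000)")
```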
By mapping concepts, their (often ambiguous) meanings, usage scenarios, and other relations from different areas onto each other, supported by these approximations, it should in time be possible to work around that fuzziness and build a truly smart and adaptive system.
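A rough sketch of the kind of mapping I have in mind (again Python, and all the names and numbers are invented for illustration): each concept carries several candidate meanings with approximate confidences, plus confidence-weighted links into other areas, and a crude context step picks the most plausible sense.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    # Ambiguous meanings, each tagged with an approximate confidence (0..1)
    # that would come from the learned models mentioned above.
    senses: dict = field(default_factory=dict)
    # Relations to concepts from other areas: (relation, target, confidence).
    links: list = field(default_factory=list)

# Toy example: "bank" means different things in finance and geography.
bank = Concept("bank", senses={"financial institution": 0.7, "river edge": 0.3})
account = Concept("account", senses={"financial record": 0.9})

# Map a usage scenario from one area onto another, keeping the uncertainty.
bank.links.append(("used_with", account, 0.8))

def resolve(concept, context_terms):
    """Crude stand-in for the approximate, learned disambiguation step:
    boost senses whose keywords overlap with the surrounding context."""
    context_hints = {
        "financial institution": {"money", "loan", "account", "interest"},
        "river edge": {"river", "water", "fishing"},
    }
    def score(sense):
        overlap = len(context_hints.get(sense, set()) & set(context_terms))
        return concept.senses[sense] + overlap
    return max(concept.senses, key=score)

print(resolve(bank, ["loan", "interest"]))   # -> financial institution
print(resolve(bank, ["river", "fishing"]))   # -> river edge
```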
Of course, our universe (as far as we know) is (inherently?) non-deterministic. And if that is so, you'd obviously have to cheat somehow (e.g. observe our universe from more than the four dimensions we can perceive) to get a truly exact model, assuming that some (reachable) level of abstraction is deterministic at all.
What I'm suggesting is that, with some effort, it should be possible for us to come up with something capable of understanding (like you did with my question, despite lacking an exact model ;) ). And while ML is quite crude, more like a sledgehammer, an accurate definition is more like a chisel. At least with respect to the model(s).
Assuming such a system is created, it will have limitations similar to ours when it comes to understanding, since we don't know everything either, as far as I'm aware.
But anyway, the librarians didn't have the technical capability to create the kind of multi-dimensional mess we currently can, so maybe the things we're talking about simply have their own math, and we just need to figure out its rules. It's all metadata anyway, but for now, I guess the closest thing to an exact model is in the hands of the NSA...