This has to do with Dijkstra's rejection of learning new concepts by analogy to concepts we already know. I'd read an essay of his on the same subject some years ago. From the linked Dijkstra essay:
"The above tried to capture the most common way in which we seem to cope with novelty: when faced with something new and unfamiliar we try to relate it to what we are familiar with. In the course of the process we invent the analogies that enable us to do so.
It is clear that the above way of trying to understand does not work too well when we are faced with something so radically new, so without precedent, that all analogies we can come up with are too weak and too shallow to be of great help. A radically new technology can create such circumstances and the wide-spread misunderstanding about programming strongly suggests this has happened with the advent of the automatic computer.
There is another way of approaching novelty but it is practised much more rarely. Apparently it does not come "naturally" since its application seems to require a lot of training. In the other way one does not try to relate something new to one's past experience - aware of the fact that that experience, largely collected by accident, could well be inadequate. [...] To ease that process of liberation it might be illuminating to identify the most common metaphors and analogies and to see why they are so misleading.
I think anthropomorphism is the worst of all. [...]
I skip the numerous confusions created by calling programming formalisms "languages", except a few examples. [...]
And now we have the fad of making all sorts of systems and their components "intelligent" or "smart". It often boils down to designing a wooly man-machine interface that makes the machine as unlike a computer as possible: the computer's greatest strength - the efficient embodiment of a formal system - has to be disguised at great cost."