When Dijkstra suggested that "It [anthropomorphizing] invites the programmer to identify himself with the execution of the program" he was a bit confused about the notion of anthropomorphization. To attribute human behaviors to objects, i.e., to anthropomorphize, is very different from projecting oneself into an object. Papert called this kind of projection, e.g., programming a virtual or physical turtle, body syntonicity. There certainly is evidence that this kind of projection can be useful when writing or debugging programs.
I fail to see the relevance of the example provided by the recent article for or against OO. The code in both cases is essentially the same. Just because there is no explicit teacher class does not mean that example #1 is not OO. There really are cases in which OO does lead towards implementation approaches that are inefficient or overly complex for no good reason. Search for "Antiobjects" to find examples where OO would suggest putting certain behavior into certain classes in ways that may result in very complex code. The antiobject approach, in contrast, can lead to a very simple solution. The two approaches differ not only in perspective and in where the code goes but also in the actual code.

An example would be a concurrent search, e.g., multiple ghosts tracking down a Pac-Man. In the traditional OO approach one would be tempted to put the complex, e.g., A*-based, "AI" into the ghosts. In the antiobject approach one would put the tracking code into the background, e.g., into the tiles and walls of the maze, to implement, say, a Collaborative Diffusion approach. Collaborative Diffusion is not only trivial to implement but also results in sophisticated collaboration patterns that would be much more difficult to match with approaches favored by traditional OO design.
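To make the contrast concrete, here is a minimal sketch of the Collaborative Diffusion idea on a small grid. This is an illustrative assumption, not code from the Antiobjects work: the maze tiles carry a "scent" value that diffuses outward from the Pac-Man, and a ghost simply hill-climbs to the neighboring tile with the highest scent. Names like `diffuse` and `step_ghost`, the grid layout, and the diffusion coefficient are all hypothetical choices for this sketch.

```python
GOAL_SCENT = 1000.0  # scent emitted by the Pac-Man tile (arbitrary value)
D = 0.25             # diffusion coefficient (arbitrary; must damp the sum)

def diffuse(scent, walls, goal, iterations=60):
    """Spread the goal's scent across open tiles; walls stay at 0.

    The "AI" lives in the background tiles: each open tile just averages
    scent in from its four neighbors, scaled by D. Walls absorb scent,
    so the gradient automatically flows around obstacles.
    """
    rows, cols = len(scent), len(scent[0])
    for _ in range(iterations):
        nxt = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if (r, c) in walls:
                    continue  # walls absorb the scent (stay at 0)
                if (r, c) == goal:
                    nxt[r][c] = GOAL_SCENT  # goal re-emits at full strength
                    continue
                total = 0.0
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        total += scent[nr][nc]
                nxt[r][c] = D * total
        scent = nxt
    return scent

def step_ghost(scent, pos):
    """A ghost needs no pathfinding: it moves to the neighbor with the
    highest scent, i.e., it hill-climbs the diffusion gradient."""
    r, c = pos
    rows, cols = len(scent), len(scent[0])
    best = pos
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            if scent[nr][nc] > scent[best[0]][best[1]]:
                best = (nr, nc)
    return best
```

Note what is missing: there is no A*, no search, no ghost class with tracking logic. The collaboration patterns mentioned above arise when a tile occupied by a ghost is also treated as a scent absorber (like a wall), so other ghosts are pushed onto alternate paths and end up flanking the target; that refinement is omitted here to keep the sketch short.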