When Dijkstra suggested that "It [anthropomorphizing] invites the programmer to identify himself with the execution of the program," he was a bit confused about the notion of anthropomorphization. Attributing human behaviors to objects, i.e., anthropomorphizing, is very different from projecting oneself into an object. Papert called this kind of projection, e.g. when programming a virtual or physical turtle, body syntonicity. There is certainly evidence that this can be a useful thing to do when writing or debugging programs.
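To illustrate what that projection looks like in practice, here is a minimal sketch (my own toy example using Python's standard turtle module, not anything from Papert): to draw a square you "play turtle" and narrate your own walk.

import turtle

t = turtle.Turtle()
for _ in range(4):       # walk the square as if you were the turtle
    t.forward(100)       # "I take 100 steps forward"
    t.left(90)           # "I turn left a quarter turn"

turtle.done()

The anthropomorphic reading would be to ascribe intentions to the turtle; the syntonic reading is that the programmer imagines walking the path herself.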
I fail to see the relevance of the example provided by the recent article for or against OO. The code in both cases is essentially the same; just because there is no explicit teacher class does not mean that example #1 is not OO.

There really are cases in which OO does lead towards implementation approaches that are inefficient or overly complex for no good reason. Search for "Antiobjects" to find examples where OO would suggest putting certain behavior into certain classes in ways that may result in very complex code. The antiobject approach, in contrast, can lead to a very simple solution. The two approaches differ not only in perspective and in where the code goes, but also in the actual code. An example would be a concurrent search, e.g., multiple ghosts tracking down a pac-man. In the traditional OO approach one would be tempted to put the complex, e.g. A*-based, "AI" into the ghosts. In the antiobject approach one would put the tracking code into the background, e.g. the tiles and walls of the maze, to implement, say, a Collaborative Diffusion approach. Collaborative Diffusion is not only trivial to implement but also results in sophisticated collaboration patterns that would be much more difficult to match with approaches favored by traditional OO design.
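To make the contrast concrete, here is a rough toy sketch of the Collaborative Diffusion idea in Python (my own simplification for illustration, not the published implementation): pac-man's tile keeps emitting a "scent" that the floor tiles diffuse, and each ghost does nothing smarter than hill-climbing to its highest-valued neighboring tile.

GOAL_SCENT = 1000.0
DIFFUSION = 0.25

def diffuse(scent, walls):
    # Each open tile takes on a fraction of the scent of its open neighbors;
    # walls stay at zero and therefore block the scent.
    rows, cols = len(scent), len(scent[0])
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if walls[r][c]:
                continue
            nbrs = [scent[r + dr][c + dc]
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols
                    and not walls[r + dr][c + dc]]
            new[r][c] = DIFFUSION * sum(nbrs)
    return new

def ghost_step(pos, scent, walls):
    # The entire ghost "AI": move to (or stay on) the highest-scent open tile.
    r, c = pos
    candidates = [(r, c)] + [(r + dr, c + dc)
                             for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    candidates = [(nr, nc) for nr, nc in candidates
                  if 0 <= nr < len(scent) and 0 <= nc < len(scent[0])
                  and not walls[nr][nc]]
    return max(candidates, key=lambda p: scent[p[0]][p[1]])

walls = [[False] * 5 for _ in range(5)]     # toy 5x5 maze with no interior walls
scent = [[0.0] * 5 for _ in range(5)]
ghost = (4, 4)
for _ in range(30):
    scent[0][0] = GOAL_SCENT                # pac-man sits at (0, 0) and keeps re-emitting scent
    scent = diffuse(scent, walls)
    scent[0][0] = GOAL_SCENT
    ghost = ghost_step(ghost, scent, walls)
print(ghost)                                # the ghost has climbed the gradient to (0, 0)

In the full approach the ghosts themselves also absorb scent, which is what makes several ghosts spread out and flank instead of piling onto the same path; the point of the sketch is simply that all of the pathfinding lives in the background tiles, not in the ghost objects.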
I was lucky enough to gather some parallel programming experience on the Connection Machine CM2, a 64k-CPU (yes, that is 65,536 CPUs), 12-dimensional hypercube, a long time ago. The CM2 ultimately failed, but we did get many great insights into parallel programming; at the time it was just not feasible for low-cost, on-your-desktop computing. It is NO problem to keep massive numbers of cores busy doing interesting computing. OK, how best to use the 12 dimensions is less clear. At any rate, to claim that there is no need for 100 cores or more is really small-minded, because unlike the time when silly "the world does not need more than 5 computers" kinds of comments were made, we already have evidence that there are powerful ways to employ massively parallel computing that can use thousands or even millions of cores.
Just because we are caught in a sequential programming mindset does not mean that there is no room for parallel programming. If you are looking at a two-dimensional array of data and think of a nested loop, you ARE caught in a sequential programming mindset. Additionally, famous people, including Dijkstra, have pooh-poohed some algorithms that are inefficient when executed sequentially, to the point where researchers, or programmers, are no longer even looking for good parallel formulations. Take bubble sort. I am not sure it was Dijkstra, but somebody suggested forbidding it. Yes, on a sequential computer bubble sort is indeed inefficient, but guess what: if communication does matter and you are using a massively parallel architecture (i.e., not 4 cores), bubble sort becomes quite efficient because you only need to talk to your data neighbors. Likewise there are AI algorithms that can be shown to behave really well when conceptualized and executed in parallel. Collaborative Diffusion is an example: http://www.cs.colorado.edu/~ra...
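To make the bubble sort point concrete: the standard parallel formulation is odd-even transposition sort, where each of the n positions only ever compare-exchanges with an immediate neighbor, and all pairs within a phase are independent, so with n cores each phase takes constant time and the whole sort finishes in n phases. A minimal sequential simulation of those phases (my own sketch, in Python):

def odd_even_transposition_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2                 # even phases: pairs (0,1),(2,3),...; odd phases: (1,2),(3,4),...
        for i in range(start, n - 1, 2):  # these pairs are disjoint, so a parallel machine
            if a[i] > a[i + 1]:           # could do every compare-exchange of a phase at once
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 3]))   # [0, 1, 2, 3, 4, 5, 8]

On a mesh or hypercube this maps onto nearest-neighbor links, which is exactly the "only talk to your data neighbors" property mentioned above.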