While it might have been to encourage platform lock-in and exclusives, or just to be evil in the Sony tradition, it's possible to explain it from a technology standpoint.
Warning: pure speculation follows, based on a very brief time working in the games industry.
The PS2 was notoriously difficult to utilise compared to the PS1 and the Dreamcast, but over time it managed to hold its own against the more powerful GameCube and Xbox. At the risk of hugely oversimplifying: what let the PS1 hold on for so long was its dedicated vector coprocessor (the GTE), which meant the faster processors in the competition (N64, Saturn) mattered much less. (The N64 leant on the main CPU for just about everything, which made its 90-odd MHz MIPS much less impressive.) The PS2's architecture was an evolution of the PS1's, adding more dedicated vector units (VU0 and VU1) rather than going down the hardware T&L GPU route that was just about to hit the big time.
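To make the vector-unit point concrete, here's a minimal sketch using modern x86 SSE intrinsics as a stand-in (this is not PS1 or PS2 code; the GTE and VUs had their own instruction sets): transforming a vertex by a 4x4 matrix collapses into a handful of 4-wide instructions, which is exactly the kind of work the dedicated units took off the main CPU.

```c
#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics */

/* Both versions take a column-major 4x4 matrix. */

/* Scalar version: 16 multiplies and 12 adds, one lane at a time. */
static void transform_scalar(const float m[16], const float v[4], float out[4]) {
    for (int row = 0; row < 4; ++row)
        out[row] = m[0*4+row]*v[0] + m[1*4+row]*v[1]
                 + m[2*4+row]*v[2] + m[3*4+row]*v[3];
}

/* SIMD version: each matrix column is scaled by one vector component,
 * four lanes per instruction. */
static void transform_simd(const float m[16], const float v[4], float out[4]) {
    __m128 r = _mm_mul_ps(_mm_loadu_ps(m + 0), _mm_set1_ps(v[0]));
    r = _mm_add_ps(r, _mm_mul_ps(_mm_loadu_ps(m + 4),  _mm_set1_ps(v[1])));
    r = _mm_add_ps(r, _mm_mul_ps(_mm_loadu_ps(m + 8),  _mm_set1_ps(v[2])));
    r = _mm_add_ps(r, _mm_mul_ps(_mm_loadu_ps(m + 12), _mm_set1_ps(v[3])));
    _mm_storeu_ps(out, r);
}

int main(void) {
    const float m[16] = {2,0,0,0, 0,2,0,0, 0,0,2,0, 1,1,1,1}; /* scale+translate */
    const float v[4]  = {1, 2, 3, 1};
    float a[4], b[4];
    transform_scalar(m, v, a);
    transform_simd(m, v, b);
    printf("scalar: %g %g %g %g\nsimd:   %g %g %g %g\n",
           a[0], a[1], a[2], a[3], b[0], b[1], b[2], b[3]);
    return 0;
}
```

With that per-vertex work handed to a coprocessor, the main CPU's clock speed matters far less for geometry throughput, which is roughly the trade the PS1 and PS2 designs were making.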
The PS3 swapped those vector processors for the Cell, which was an obvious next step given that lineage. However, all ATI and nVidia GPUs have vector processing capabilities of their own, and I'd imagine the cost of developing a custom PS3 GPU that gave proper emphasis to the Cell was HIGH, when the intention must have been a CHEAPER GPU. So the Cell became half redundant. And with all the compromises made to get the costs down, the PS3 wound up with too much power in one narrow field, little memory bandwidth, no unified memory, and a weaker GPU than the 360's.
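As a rough illustration of why that narrow power was hard to reach, here's a schematic sketch in plain C (not real Cell code; the structure is the point): each of the Cell's SPEs had only a 256 KB local store with no cache, so every byte had to be explicitly staged in, crunched, and staged back out. Below, memcpy stands in for the MFC DMA engine, and the function names are hypothetical.

```c
/* Schematic only: plain C mimicking the Cell SPE programming model.
 * memcpy() stands in for the MFC DMA engine; real SPE code would use
 * the libspe/MFC interfaces and is considerably more involved. */
#include <stdio.h>
#include <string.h>

#define LS_BYTES  (256 * 1024)                 /* one SPE's local store   */
#define CHUNK     (LS_BYTES / sizeof(float))   /* floats per staging pass */

static float local_store[CHUNK];               /* stand-in scratchpad     */

/* The "kernel" the SPE runs on whatever is currently in local store. */
static void kernel(float *buf, size_t n) {
    for (size_t i = 0; i < n; ++i)
        buf[i] = buf[i] * 0.5f + 1.0f;         /* arbitrary vector math   */
}

/* Stream a big array through the tiny scratchpad, chunk by chunk:
 * the in/compute/out choreography the programmer scheduled by hand. */
static void process_stream(float *data, size_t n) {
    for (size_t off = 0; off < n; off += CHUNK) {
        size_t len = (n - off < CHUNK) ? (n - off) : CHUNK;
        memcpy(local_store, data + off, len * sizeof(float)); /* "DMA in"  */
        kernel(local_store, len);                             /* compute   */
        memcpy(data + off, local_store, len * sizeof(float)); /* "DMA out" */
    }
}

int main(void) {
    static float data[4 * CHUNK];              /* larger than local store */
    size_t n = sizeof(data) / sizeof(data[0]);
    for (size_t i = 0; i < n; ++i)
        data[i] = (float)i;
    process_stream(data, n);
    printf("data[0] = %g, data[%zu] = %g\n", data[0], n - 1, data[n - 1]);
    return 0;
}
```

Real SPE code would also double-buffer the transfers against the compute to hide latency, and it's precisely that hand-scheduling that made the chip's theoretical throughput so hard to extract compared with the 360's conventional cores.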
The 360 used a plain architecture that could be leveraged relatively easily from the get-go, but one with less potential for hidden magic. The PS3 was designed to have the potential to blow it out of the water, but the reality is that no one has found any hidden stores of power. Much like the Itanium, it was only better in theory; in practice it was a struggle even to match the competition. The end result is that developers have to work harder on the PS3 just to match the 360, except for a small number of rendering effects that are easier on the PS3.