As someone with some game development experience, let me throw in a few observations (based on the specs mentioned here).
The 3.2 GHz PowerPC CPUs in the Xbox 360 and PS3 were in-order execution units. As I remember, typical code on the 360 executed at about 0.2 IPC (instructions per cycle), sometimes worse. The very best hand-optimized assembly doing tasks like video decoding could hit about 0.9 IPC once properly cached and unrolled.
AMD and Intel now have decades of R&D invested in out-of-order x86 execution (the x86/x64 opcodes are translated to internal micro-ops), which is a major factor in their performance. Even the PowerPC G5 devoted a good chunk of its silicon to out-of-order execution. The 360 and PS3 CPUs - designed almost 10 years ago - traded out-of-order execution for die size and clock speed.
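To make the in-order vs. out-of-order difference concrete, here's a toy C sketch (the function and names are mine, purely for illustration). Pointer-chasing code like this is the classic worst case on an in-order core: every iteration waits on a load.

    /* Toy illustration: a latency-bound pointer chase.
       On a cache miss, head->next can take hundreds of cycles to arrive.
       An out-of-order core can keep running the independent checksum
       work while the load is in flight; an in-order core (like the
       360/PS3 PowerPC cores) simply stalls, which is how you end up
       around 0.2 IPC in typical game code. */
    #include <stddef.h>

    struct node { int value; struct node *next; };

    int sum_list(const struct node *head, const int *data, size_t n) {
        int list_sum = 0, checksum = 0;
        size_t i = 0;
        while (head) {
            list_sum += head->value;   /* depends on the pending load      */
            head = head->next;         /* serialized, latency-bound chain  */
            if (i < n)
                checksum += data[i++]; /* independent work an out-of-order
                                          core can overlap with the loads  */
        }
        return list_sum ^ checksum;
    }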
The specs say the 1.6 GHz CPUs can issue up to 2 instructions per cycle. If real-world performance works out to an IPC of 1.2 to 1.6, which seems very doable, then you will see a 3x to 4x increase in the real-world rate of instructions being executed (0.2 IPC @ 3.2 GHz is the same instruction rate as 0.4 IPC @ 1.6 GHz, so anything above 0.4 IPC is a net win). This doesn't take into account any efficiency gains due to the instruction set, cache, etc.
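Spelling out that back-of-the-envelope math in C, using only the estimates above (the 1.2 to 1.6 IPC figures are my guesses, not measured numbers):

    /* Effective instruction throughput = clock rate * IPC. */
    #include <stdio.h>

    int main(void) {
        double old_rate  = 3.2e9 * 0.2;     /* 360/PS3: ~0.64 G instr/s    */
        double breakeven = old_rate / 1.6e9; /* IPC at 1.6 GHz to match: 0.4 */
        double new_low   = 1.6e9 * 1.2;     /* guessed real-world range    */
        double new_high  = 1.6e9 * 1.6;

        printf("old: %.2f G instr/s\n", old_rate / 1e9);       /* 0.64      */
        printf("break-even IPC at 1.6 GHz: %.1f\n", breakeven); /* 0.4       */
        printf("new: %.2f-%.2f G instr/s (%.1fx-%.1fx)\n",
               new_low / 1e9, new_high / 1e9,
               new_low / old_rate, new_high / old_rate);        /* 3.0x-4.0x */
        return 0;
    }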
And at the same time, I would imagine it's a whole lot easier to deal with everything else on the chipset at 1.6 GHz than at 3.2 GHz (mature tech and all that).