No, but at the same time, a given chip running at a higher clock speed WILL outperform the same chip at a lower clock speed.
Depends on the kinds of operations you're throwing at it. If it's simple integer math, then yes, every single time. If it's more complicated floating-point math, then it'll depend on how efficiently the chip implements those operations (which is why a 2.8GHz i3 will smoke even a P4 overclocked to 5GHz on almost every benchmark). If it's very large array math (such as most graphics computations and AI), then it'll depend on how parallel your code is and how many threads you can execute simultaneously. You can take modern Intel chips and clock an i7 at the same speed as an i3: for some types of operations they'll score essentially the same on benchmarks, and for others the i7 will score about 4x better (twice the cores, with hyperthreading enabled, = 4x the threads).
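To make the thread-scaling point concrete, here's a minimal sketch (plain C++, untuned, and not a rigorous benchmark; the workload and numbers are invented) that times the same integer-heavy job split across 1, 2, 4, ... threads. On a parallel-friendly workload like this, wall-clock time drops roughly in proportion to thread count until you run out of hardware threads; a purely serial workload would see no benefit at all:

```cpp
// Minimal sketch, not a rigorous benchmark: run the same integer-heavy
// workload split across 1, 2, 4, ... threads and time each run.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

// Invented workload: sum a half-open range of integers (pure integer math).
uint64_t sum_range(uint64_t lo, uint64_t hi) {
    uint64_t s = 0;
    for (uint64_t i = lo; i < hi; ++i) s += i;
    return s;
}

int main() {
    const uint64_t N = 400'000'000;  // total amount of work, arbitrary
    const unsigned hw = std::max(1u, std::thread::hardware_concurrency());

    for (unsigned threads = 1; threads <= hw; threads *= 2) {
        std::vector<std::thread> pool;
        std::vector<uint64_t> partial(threads, 0);
        const uint64_t chunk = N / threads;  // ignores the remainder; it's a sketch

        auto t0 = std::chrono::steady_clock::now();
        for (unsigned t = 0; t < threads; ++t)
            pool.emplace_back([&partial, t, chunk] {
                partial[t] = sum_range(t * chunk, (t + 1) * chunk);
            });
        for (auto& th : pool) th.join();
        auto t1 = std::chrono::steady_clock::now();

        uint64_t total = 0;
        for (uint64_t p : partial) total += p;  // checksum so the work isn't optimized away

        std::cout << threads << " thread(s): "
                  << std::chrono::duration<double>(t1 - t0).count()
                  << " s (checksum " << total << ")\n";
    }
}
```

Compile with -pthread and you can watch the scaling flatten out once the thread count passes your physical core count, which is exactly the hyperthreading caveat above.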
There's a reason that Nvidia and AMD are competing on stream processor counts more than on clock speed: modern graphics processing is embarrassingly parallel, so performance scales roughly linearly with the number of processors, while you see diminishing returns with clock speed.
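For a feel of what "embarrassingly parallel" means in the graphics case: every output pixel depends only on its own input, so there's nothing to synchronize. Here's a toy per-pixel operation (plain C++ standing in for what a GPU would run as one shader invocation per pixel; the operation itself is made up):

```cpp
// Toy example of why per-pixel work is "embarrassingly parallel":
// each output pixel depends only on its own input pixel, so every
// iteration of this loop could run on a different stream processor
// with zero coordination between them.
#include <cstdint>
#include <cstddef>
#include <vector>

// Invented per-pixel operation: scale brightness by 1.5x, clamped to 255.
void brighten(std::vector<uint8_t>& pixels) {
    for (std::size_t i = 0; i < pixels.size(); ++i) {
        unsigned v = pixels[i] * 3u / 2u;  // reads only pixel i,
        pixels[i] = v > 255u ? 255u : v;   // writes only pixel i
    }
}
```

Double the stream processors and you can split the loop in half with no extra work, which is the linear scaling in question; doubling the clock instead costs far more power for the same throughput.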
As for gaming, and why they went with a lower clock speed... very little in modern games actually depends on having a high clock speed. Almost everything games do is bound by graphics, which is a completely different problem, and what's left (AI, object tracking, and the like) benefits more from parallelization than from an increased clock speed. They also need to worry about Energy Star certification, and about a consumer base that is increasingly aware of the power consumption of its electronic devices. Money is not infinite for their customers, and they get better economy throwing a many-core, low-speed processor at the problem than they would throwing a high-speed processor with a low core count: dynamic power scales with voltage squared times frequency, and voltage has to rise with clock speed, so one fast core burns disproportionately more power than several slower cores delivering the same total throughput.
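That economy claim is easy to sanity-check with the textbook dynamic-power relation (power scales with capacitance times voltage squared times frequency). Assuming, purely for illustration, that voltage has to climb linearly with clock speed, power grows with roughly the cube of frequency, and the many-core option wins by a wide margin. Every number below is invented:

```cpp
// Back-of-the-envelope sketch of the power argument, using the standard
// dynamic-power approximation P ~ C * V^2 * f, plus the simplifying
// assumption that supply voltage rises linearly with clock speed
// (so P grows roughly with f^3). All numbers are made up.
#include <cstdio>

// Relative dynamic power of one core at freq_ghz, with V assumed ~ f.
double relative_power(double freq_ghz) {
    double volts = 0.25 * freq_ghz;   // assumed V-f scaling, arbitrary constant
    return volts * volts * freq_ghz;  // P ~ V^2 * f (capacitance folded in)
}

int main() {
    // Same aggregate throughput on perfectly parallel work:
    // one 4 GHz core vs two 2 GHz cores.
    double one_fast = relative_power(4.0);
    double two_slow = 2.0 * relative_power(2.0);
    std::printf("one 4 GHz core:  %.2f (relative power)\n", one_fast);
    std::printf("two 2 GHz cores: %.2f (relative power)\n", two_slow);
    // Prints 4.00 vs 1.00: the slower cores deliver the same parallel
    // throughput at roughly a quarter of the dynamic power.
}
```

Real chips don't follow that voltage curve exactly, and leakage complicates the picture, but the cube-ish scaling is why high clocks hit diminishing returns so hard on a power budget.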