I would have to agree with you; usage patterns will play into this a lot. But let's look at this another way.
Google takes the best chips, and those chips are run hotter than Intel recommends for the rest of the batch. If Google didn't do this, the cooling required would be whatever Intel's recommendation dictates. By demanding a subset of chips that can tolerate running hotter, Google creates a set of chips that require less cooling than Intel's recommendation, so the net cooling required is lower than if everyone just followed Intel's spec.
In the purest sense the GP makes sense, but in the real world, where cooling cost is based on Intel's specs rather than on what a given chip can actually do, Google is lowering the overall cooling cost, assuming 100% load at all times. Factor in usage patterns and they are lowering the cost for a big segment that will run at 100% load, while most other chips will not be running at 100%.
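To make the arithmetic concrete, here's a toy sketch of the effect. All numbers (TDP, temperature limits) are made up for illustration; the only idea taken from the argument above is that chips binned to tolerate a higher die temperature need less aggressive cooling for the same heat load.

```python
# Toy model: cooling effort to remove a fixed heat load, taken as
# inversely proportional to the allowed temperature rise above ambient.
# All figures below are hypothetical, not real Intel or Google specs.

def relative_cooling_effort(tdp_watts, t_max_c, t_ambient_c=25.0):
    """Crude proxy: removing tdp_watts gets easier as the allowed
    die temperature t_max_c rises further above ambient."""
    return tdp_watts / (t_max_c - t_ambient_c)

spec_effort = relative_cooling_effort(130, 80)    # cooled to a spec'd 80 C limit
binned_effort = relative_cooling_effort(130, 95)  # hotter-tolerant binned chips

print(f"spec:    {spec_effort:.2f}")
print(f"binned:  {binned_effort:.2f}")
print(f"savings: {1 - binned_effort / spec_effort:.0%}")
```

Under these made-up numbers the binned chips need roughly a fifth less cooling effort, which is the "net cooling required is lower" point in miniature.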