In short, there will need to be a serious collaborative effort between vendors and the scientists (most of whom are not computer scientists) to take advantage of new technologies. GPUs, Intel MIC, and the like are only great if you can write code that actually exploits these accelerators. When you consider that the vast majority of parallel science codes are MPI-only, that is a real problem. Retrofitting these legacy codes effectively is nontrivial, if it is even possible.
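To make the retrofit problem concrete, here is a minimal sketch (in C, with made-up array names and sizes, so purely illustrative and not anyone's production code) of what "MPI-only" looks like next to the extra layer a hybrid MPI + OpenMP offload port would need. Every rank still does its MPI communication, but now each rank also has to move its data to a device and express its loops so an accelerator can run them:

    /* Illustrative sketch only: a toy MPI kernel plus the OpenMP 4.5+
     * "target" directives a hybrid accelerator port would add.        */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000   /* made-up problem size per rank */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        for (int i = 0; i < N; i++) { a[i] = rank + i; b[i] = 1.0; }

        /* The legacy code stops at plain loops like this, one MPI rank
         * per core.  To use an accelerator, each rank must also offload
         * the loop and manage the data movement explicitly:            */
    #pragma omp target teams distribute parallel for \
            map(to: a[0:N]) map(tofrom: b[0:N])
        for (int i = 0; i < N; i++)
            b[i] = 2.0 * a[i] + b[i];

        /* Back on the host, the familiar MPI pattern is unchanged. */
        double local = 0.0, global = 0.0;
        for (int i = 0; i < N; i++) local += b[i];
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("sum = %f\n", global);

        free(a); free(b);
        MPI_Finalize();
        return 0;
    }

A couple of directives look harmless in a toy example; spread across two million lines of Fortran, with all the data-layout and load-balancing decisions hiding behind them, it is anything but.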
Cray holds workshops where scientists can learn about these new architectures and some of the programming tricks for using them. But that is only a tiny step toward using them effectively. I'm not picking on Cray; they're doing what they can. But I would posit that the next supercomputer should be designed with input from the scientists who will be using it. There are scarce few people with both the deep physics background and the computer science background to do the heavy lifting.
In my opinion we may need to start from the ground up with many codes. But it is a Herculean effort. Why would I want to discard my two million lines of MPI-only F95 code that only ten years ago was serial F77? The current code works "well enough" to get science done.
The power problem is outside my domain; I wish the hardware manufacturers all the luck in the world, because it is a very real problem. There will be a limit to how much power any future supercomputer is allowed to consume.
Finally, compilers will not save us. They can only do so much; they cannot write better code for us or redesign what is already there. Code translators hold promise, but they are themselves very complex.