Here's another, somewhat pessimistic piece they posted in 2008 - a digest of a DARPA report that went into significant technical detail.
The biggest hurdle is power, and the biggest driver of that isn't the actual computation (i.e., the energy to perform some number of FLOPS), but rather moving the data around (between cores, to/from RAM, across a PCB, and among servers). Other hurdles include managing so many cores, keeping them working (nearly) concurrently, handling hardware failures (which will be frequent given the sheer amount of hardware), and writing software that can exploit such a machine in anything approaching an optimal fashion.
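To see why data movement dominates, here's a quick back-of-the-envelope sketch. The energy figures below are illustrative assumptions in the ballpark of numbers often quoted in the exascale literature of that era, not values taken from the report itself:

```python
# Back-of-the-envelope: energy to move a byte vs. energy to compute a FLOP.
# All pJ figures are illustrative assumptions, not measurements.

PJ_PER_FLOP = 20.0  # assumed cost of one double-precision FLOP, in picojoules

PJ_PER_BYTE = {
    "on-chip SRAM": 2.0,
    "cross-chip wire": 10.0,
    "off-chip DRAM": 160.0,     # roughly ~1.3 nJ per 64-bit word
    "PCB / network hop": 500.0,
}

def flops_per_byte_moved(pj_per_byte: float) -> float:
    """How many FLOPs of energy it costs to move one byte."""
    return pj_per_byte / PJ_PER_FLOP

for level, pj in PJ_PER_BYTE.items():
    print(f"moving 1 B via {level:<16} ~ {flops_per_byte_moved(pj):6.1f} FLOPs worth of energy")
```

Under these assumptions, fetching a byte from DRAM costs as much energy as several FLOPs, and a trip across the board costs tens of FLOPs; that's why locality, not raw arithmetic, sets the power budget.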
Not to say it's impossible, merely hard given the present state of things and projecting a bit into the future. But as we know, "it is difficult to make predictions, especially about the future." [source?]