This stuff has been around for a while, and I have the following problems with it:
1. We already pretty much have CCN. They're called URLs, and companies like Akamai do a great job of dynamically pointing you to whatever server you should be talking to using DNS, HTTP redirects, etc. When I type www.slashdot.org, I already don't care what server it lives on. When I type https://www.slashdot.org/ I still don't care what server it is on, and I have at least some indication that the content is from someone authorized to speak on behalf of www.slashdot.org (PKI crap aside). See the first sketch after this list.
2. The article mentions that this tech would be used to relieve load at the core -- which I'm not sure I buy. The core is well known to be overprovisioned, and a recentish survey http://techcrunch.com/2011/05/17/netflix-largest-internet-traffic/ showed that Netflix and YouTube consume 40% of downstream bytes -- both services already served by major CDNs that push at least some of that traffic away from the core.
3. I'm unclear on the value proposition of redesigning every router to be, effectively, an HTTP proxy cache (see the second sketch after this list for what that amounts to). Proxy caches are well studied, and even if we got a higher cache hit rate using CCN, I'm not convinced it would help anything. After all, we are doing just fine.
4. I think this approach is, in the end, fundamentally wrong. Regardless of how much magic we use to figure out which machine to get data from, we will always be transferring data from one computer to another (a caching router is effectively a computer). It seems to me that until we no longer need to move packets from some machine A to some other machine B, it makes sense to have host-centric primitives and build our abstractions on top of them. That's what we've been doing, and it's been working pretty well -- the third sketch after this list shows the layering I mean.
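To make point 1 concrete, here's a minimal Python sketch (standard library only; the Slashdot URL is just the example from above) of how a URL already acts like a content name: the client names the content, and DNS plus HTTP redirects decide which machine actually serves it.

```python
# Sketch: the client asks for a name (URL); DNS and redirects pick the host.
import socket
from urllib.parse import urlparse
from urllib.request import urlopen

url = "https://www.slashdot.org/"
host = urlparse(url).hostname

# DNS may hand back different addresses per query or region -- the client never cares.
addresses = {info[4][0] for info in socket.getaddrinfo(host, 443)}
print(f"{host} currently resolves to: {addresses}")

# urlopen transparently follows HTTP redirects, so even the final serving host
# can differ from the one we named; we only ever asked for the content.
with urlopen(url) as resp:
    print("final URL after redirects:", resp.geturl())
    print("bytes received:", len(resp.read()))
```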
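For point 3, this is roughly all a per-router "content store" amounts to: a bounded LRU cache keyed by content name, consulted before forwarding a request upstream. This is a toy sketch, not how any real router or CCN implementation is written; fetch_upstream is a hypothetical stand-in for the forwarding path.

```python
from collections import OrderedDict

class ContentStore:
    """Toy LRU content store keyed by name -- i.e., an HTTP proxy cache in router clothing."""
    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.cache: OrderedDict[str, bytes] = OrderedDict()

    def get(self, name: str, fetch_upstream) -> bytes:
        if name in self.cache:
            self.cache.move_to_end(name)       # cache hit: refresh LRU position
            return self.cache[name]
        data = fetch_upstream(name)            # cache miss: forward upstream
        self.cache[name] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least-recently-used entry
        return data

# Tiny usage example: the second request for the same name never leaves the "router".
fetches = []
def upstream(name: str) -> bytes:
    fetches.append(name)                       # record that we had to go upstream
    return name.encode()

store = ContentStore(capacity=2)
store.get("/videos/cat.mpg", upstream)
store.get("/videos/cat.mpg", upstream)
print("upstream fetches:", fetches)            # -> ['/videos/cat.mpg']: one miss, one hit
```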
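And for point 4, a sketch of the layering I mean: a name-based get built on top of a plain host-to-host connection. NAME_TABLE is a hypothetical stand-in for whatever resolution "magic" (DNS, redirects, a CDN) maps a name to a location; the primitive underneath is still machine A talking to machine B.

```python
import http.client

# Hypothetical name -> (host, path) table; in reality DNS, redirects, and CDNs
# play this role, as in point 1.
NAME_TABLE = {
    "slashdot/frontpage": ("www.slashdot.org", "/"),
}

def get_by_name(content_name: str) -> bytes:
    host, path = NAME_TABLE[content_name]       # the abstraction: map a name to a location
    conn = http.client.HTTPSConnection(host)    # the host-centric primitive underneath
    try:
        conn.request("GET", path)
        return conn.getresponse().read()        # bare-bones: redirects not followed here
    finally:
        conn.close()

print(len(get_by_name("slashdot/frontpage")), "bytes fetched by name")
```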