So back in the day we had a thing called the MBone, multicast infrastructure that was supposed to help with streaming live content from a single sender to many receivers. It was a bit ahead of its time, I think: streaming video just wasn't that common in the 1990s, and it only really worked for actually-simultaneous streams, which, by the time streaming video did become common, wasn't what people were watching.
The contemporary solution is for big content providers to co-locate caches in telco data centers, so while you still send multiple separate streams of unsynchronized, high-demand content, each one travels a relatively short distance over relatively fat pipes, and the last mile only has to carry one copy anyway. For low-demand streaming content you don't need to cache; it's only a few copies, and the regular internet mostly works. It can fall over when a previously low-demand stream suddenly becomes high-demand, like Sunday night when NASA TV started to get slow, but it mostly works.
TFA (I know, I know...) doesn't address moving data around, but it seems like this is something that a new scheme could offer -- if the co-located caches were populated based purely on demand, rather than on demand plus ownership, then all content would be on the same footing, and it could lead to a better web experience for info consumers. That's a neat idea, but I think we already know how both the telcos and commercial streaming content owners feel about demand-based dynamic copy creation...
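To make the "purely on demand" part concrete, here's a toy sketch (not from TFA; the class, names, and thresholds are all made up): an edge cache that admits whatever object crosses a local request-count threshold, with no notion of who owns the content or whether they've paid for rack space.

```python
from collections import Counter

class DemandBasedEdgeCache:
    """Toy model: admit an object into the edge cache once local demand
    crosses a threshold, regardless of who owns the content."""

    def __init__(self, capacity=100, admit_after=3):
        self.capacity = capacity        # max objects held at this edge site
        self.admit_after = admit_after  # requests seen before we bother caching
        self.requests = Counter()       # per-object demand counter
        self.cache = {}                 # object_id -> content

    def get(self, object_id, fetch_from_origin):
        self.requests[object_id] += 1
        if object_id in self.cache:
            return self.cache[object_id]        # served from the edge
        content = fetch_from_origin(object_id)  # long haul back to the origin
        if (self.requests[object_id] >= self.admit_after
                and len(self.cache) < self.capacity):
            self.cache[object_id] = content     # demand alone decides admission
        return content
```

The point isn't the code, it's that nothing in the admission decision asks "is this a Netflix box or a YouTube box" -- which is exactly the property the telcos and content owners would have opinions about.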