I do high-performance computing for a living, and Moore's Law has been on its last gasps for a while now.
Until around 2006, the smaller you made a transistor, the faster you could clock it without blowing the power budget. This was called Dennard scaling. But once transistors shrink below a certain size, current leakage and thermal issues prevent you from clocking them any faster.
While chipmakers can't drive transistors any faster, smaller process nodes still allow them to put *more* transistors on a chip. This is why we've gone from single-core to multi-core to multi-core with GPU compute on a die.
Despite all the complaints that "CPUs haven't gotten much faster since Nehalem", they *have* gotten quite a bit faster. You just have to rewrite/optimize/recompile your program to take advantage of multi-core, GPU compute, and SIMD instructions like AVX2.
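To give a flavor of what that rewriting looks like, here's a minimal sketch (the function names and the integer-add workload are mine, purely for illustration): the same loop written as plain scalar code and as AVX2 intrinsics doing eight additions per instruction. Whether you see anything close to an 8x speed-up depends on memory bandwidth and whether the compiler already auto-vectorized the scalar version.

```c
#include <immintrin.h>   /* AVX2 intrinsics */
#include <stdint.h>
#include <stddef.h>

/* Scalar version: one 32-bit add per loop iteration. */
void add_scalar(const int32_t *a, const int32_t *b, int32_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* AVX2 version: eight 32-bit adds per loop iteration.
 * Assumes n is a multiple of 8; a real implementation would also
 * handle the leftover tail elements.
 * Build with something like: gcc -O2 -mavx2 add.c */
void add_avx2(const int32_t *a, const int32_t *b, int32_t *out, size_t n)
{
    for (size_t i = 0; i < n; i += 8) {
        __m256i va = _mm256_loadu_si256((const __m256i *)&a[i]);
        __m256i vb = _mm256_loadu_si256((const __m256i *)&b[i]);
        _mm256_storeu_si256((__m256i *)&out[i], _mm256_add_epi32(va, vb));
    }
}
```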
This is the primary reason programs aren't running much faster than before. Silicon isn't getting any faster, and rewriting programs to scale isn't easy and sometimes isn't worth it, so many people don't bother. Moore's Law no longer delivers "free", "easy" speed-ups.
CPUs for the next few years are looking pretty incremental. I'd expect a one-off moderate increase in single-core performance once Intel moves off silicon onto III-V semiconductors (10 or 7 nm?), but past that you will likely be waiting several years for your graphene/nanotube/topological insulator/spintronics overlords to deliver something substantially faster.