It's mostly symbolic. It's a milestone. It's not difficult to exceed one exaflops (the S stands for "per second" — floating-point operations per second — it's not a plural) once you've reached, say, 0.99 exaflops. Scientists like to talk in orders of magnitude. Right now we are in the tens of petaflops, but haven't yet reached the hundreds. Tianhe-2 peaks at about 55 petaflops, but its sustained speed is a bit more than half of that.
The problem is much more about how to get there. It's not just the machinery; it's how to actually write and debug programs at that scale. Since we cannot make individual cores much faster than they are today, the only way forward is to add more cores.
The added cores increase the stress on the network and make programming such a machine much more difficult. Good luck debugging a race condition across one million processes.
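To see what kind of bug that is, here's a minimal sketch (plain pthreads, just two threads, nothing to do with any real supercomputer code): two threads increment a shared counter without synchronization, and the result is usually wrong and different on every run. Now imagine hunting that kind of nondeterminism across a million processes instead of two threads.

```c
/* Minimal race condition sketch: two threads do an unprotected
 * read-modify-write on a shared counter, so updates get lost.
 * Compile with: cc -pthread race.c */
#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000L

static long counter = 0;           /* shared, deliberately unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        counter++;                 /* not atomic: load, add, store */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Usually prints something less than 2000000, and a different
     * value each run -- that nondeterminism is what makes it painful. */
    printf("expected %ld, got %ld\n", 2 * ITERS, counter);
    return 0;
}
```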
Other problems arise from things as mundane as equipment breaking. If a single memory chip fails during the execution of a program, the whole computation is either compromised or simply lost. And with millions of cores come enormous numbers of motherboards, power supplies, I/O systems, storage devices, all kinds of electronic components that are subject to failure.
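A back-of-the-envelope sketch of why this bites so hard (the 5-year per-node MTBF is an assumption for illustration, not a real spec): if failures are independent, the mean time between failures of the whole system is roughly the per-node MTBF divided by the node count, so it collapses as you scale out.

```c
/* Rough failure arithmetic: assumed per-node MTBF of 5 years,
 * independent failures, identical nodes. System MTBF ~ node MTBF / N. */
#include <stdio.h>

int main(void)
{
    const double node_mtbf_hours = 5.0 * 365 * 24;   /* assumed: 5 years per node */
    const long   node_counts[] = { 1000L, 10000L, 100000L, 1000000L };

    for (int i = 0; i < 4; i++) {
        double system_mtbf = node_mtbf_hours / (double)node_counts[i];
        printf("%7ld nodes -> something breaks about every %.2f hours\n",
               node_counts[i], system_mtbf);
    }
    return 0;
}
```

Under those made-up numbers, a million-node machine would see a component failure every couple of minutes, which is why exascale designs have to treat failure as the normal case rather than the exception.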
So while, technically, it's not a barrier per se, this huge number of variables, which makes things exponentially more complex than what we have today, is indeed a barrier. As someone asked here, we cannot just build a cluster of Tianhe-2s. The thing would be breaking all the time, eating so much electricity and maintenance manpower that its uptime would be worse than an unpatched Windows 98 machine connected to an open network.