> The reality is that the tech industry has reached a dead end with the death of Moore's Law.
Is the problem really processing power, though? For a system like this, it seems like there are other problems bound to creep up:
* AFAIK, we still don't have AI that's good enough to work out the spatial 3D layout of a scene from visual input. I know it's still being worked on and there's been progress, but placing virtual objects in the real world in this kind of augmented reality requires that the computer understand how the real 3D objects around it are arranged.
* Even if you can render the graphics and place them appropriately in the world, there's still the problem of designing the UI. You need to create the visual look of the interface and decide which gestures map to which controls. The interface needs to be easy and intuitive, and it needs to give clear feedback in response to user input.
* You also need gestures that the computer can read reliably. That is, if I'm supposed to do a specific hand motion to activate a feature, the computer has to recognize that motion almost every time I perform it, and the motion has to be distinct both from other control gestures and from natural, incidental movements. Basically, people need to be able to control these systems without constantly triggering controls by accident.
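That last point, "recognize it almost every time, but don't fire on anything else," is essentially a rejection problem. Here's a minimal sketch of one common way to handle it: only accept a gesture when the recognizer is both confident and unambiguous. The gesture names, scores, and thresholds are all made up for illustration; a real system would get the scores from some upstream hand-tracking model.

```python
def classify_gesture(scores, min_confidence=0.85, min_margin=0.3):
    """Return the gesture name only when the top score is high AND
    clearly separated from the runner-up; otherwise reject it as
    ambiguous (treat as "no command" rather than risk a misfire)."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (best, best_score), (_, second_score) = ranked[0], ranked[1]
    if best_score >= min_confidence and best_score - second_score >= min_margin:
        return best
    return None

# A crisp, unambiguous pinch is accepted:
print(classify_gesture({"pinch": 0.95, "swipe": 0.30, "wave": 0.10}))  # pinch
# A motion that looks like two gestures at once is rejected:
print(classify_gesture({"pinch": 0.90, "swipe": 0.75, "wave": 0.10}))  # None
```

The margin check is what encodes "distinct from other gestures": two gestures that routinely score close together will simply never fire, which pushes the designer to pick motions that the recognizer can actually tell apart.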
These are genuinely difficult problems, and as far as I know, they're not really problems of insufficient computing power. That is, it's not as if we've already developed code that can do these things and a UI that works well, and we just need a computer 5x as powerful to run it in real time. The problem is that we don't yet have the design or the code to do it at all.