I have considered combining AR with the Emotiv EEG controller for some years now.
The EEG input device allows full hands-free operation of the embedded platform (though it still has outstanding bugs related to signal noise and user training). This means Google Glass could be used without having to, for example, touch the eyepiece to take a picture or start recording video. Regions of the AR image could also be enhanced or manipulated based on where the user's attention is focused.
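To make the hands-free idea concrete, here's a minimal sketch of the gating logic such a system would need: EEG classifier output is noisy, so instead of firing on a single reading, you smooth confidence values over a window and only trigger when the average clears a threshold. Everything here is hypothetical — `EegTrigger`, the threshold, and the window size are my assumptions, not the Emotiv SDK.

```python
from collections import deque

class EegTrigger:
    """Fire an action only when smoothed classifier confidence clears a threshold.

    This is a hypothetical sketch, not real Emotiv API code. A real system
    would also need per-user training and much better noise handling.
    """

    def __init__(self, action, threshold=0.8, window=5):
        self.action = action          # callable to run, e.g. take a picture
        self.threshold = threshold    # tune per user
        self.window = deque(maxlen=window)

    def update(self, confidence):
        """Feed one confidence reading; returns True if the action fired."""
        self.window.append(confidence)
        avg = sum(self.window) / len(self.window)
        # Require a full window so one noisy spike can't trigger the action.
        if len(self.window) == self.window.maxlen and avg >= self.threshold:
            self.window.clear()  # debounce: demand a fresh run before re-firing
            self.action()
            return True
        return False
```

The debounce-after-fire is the important design choice: without it, a sustained "take picture" thought would machine-gun the shutter on every sample.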
A low-cost synthesis of these could enable all kinds of useful applications. ($300 is not low cost, unless you live in some hyper-inflated local economy. Yes, Silicon Valley, you are a hyper-inflated economy.) One example: AR-assisted night driving with bright IR LED headlights and computer processing. IR illumination does not blind other drivers, gives the computer plenty of light for image capture, and the resulting presentation requires no physical input method. Not even "pinch to zoom".
To take off, though, the fully integrated product needs to approach the $100 price point. That includes hardware and software.
We aren't there yet.
(Other, highly lucrative applications: soldiers with AR targeting; limited upper-body exo-harnesses that collect EEG motor-area data and correct body posture accordingly for precision sniping; and so on. Hollywood already thought of this shit years ago. The tech is just catching up.)