Depending on whose numbers you use, somewhere between 60,000 and 100,000 keyboard users are injured every year. Some of those injuries are temporary, some permanent. In time, almost all keyboard users will have trouble with typing and with using many, if not all, mobile computing devices.
My question to you is simple: given that some form of disability is almost inevitable, what's keeping you from volunteering and working with geeks who are already disabled? Spending time now to build the interfaces and tools that enable them to use computers more easily helps ensure your own ability to use them in the future.
This question is aimed at the kind of disability we are all susceptible to, and that I have been living with for the past 15 years. Even though we have speech recognition, it doesn't solve any problem except writing text. There have been a couple of attempts at making speech recognition more useful to programmers[0], but they have failed.
The starting needs are clear.
A working full-vocabulary continuous recognition system on Linux.
An application-independent framework enabling dictation into any application.
Tools that don't expect you to "speak the keyboard".
Tools that let you edit code as well as create it.
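To make the second requirement concrete, here is a minimal sketch of what an application-independent dictation framework might look like: a thin adapter interface that any application can implement so the recognizer never needs to know what it is dictating into. All class and method names here are hypothetical illustrations, not part of any existing project.

```python
# Hypothetical sketch of an application-independent dictation layer.
# A recognizer talks only to the DictationTarget interface; editors,
# terminals, and browser fields each provide their own implementation.
from abc import ABC, abstractmethod


class DictationTarget(ABC):
    """Anything that can receive dictated text."""

    @abstractmethod
    def insert_text(self, text: str) -> None:
        """Append recognized text at the insertion point."""

    @abstractmethod
    def select_phrase(self, phrase: str) -> bool:
        """Find and select previously dictated text so it can be corrected.
        Returns False if the phrase is not present."""


class StringBuffer(DictationTarget):
    """Minimal in-memory target, standing in for a real application."""

    def __init__(self) -> None:
        self.text = ""
        self.selection = None  # (start, end) of the current selection

    def insert_text(self, text: str) -> None:
        self.text += text

    def select_phrase(self, phrase: str) -> bool:
        start = self.text.find(phrase)
        if start == -1:
            return False
        self.selection = (start, start + len(phrase))
        return True


target = StringBuffer()
target.insert_text("speech recognition solves only dictation")
print(target.select_phrase("dictation"))  # True: phrase found and selected
```

The point of the interface split is that correction support (select_phrase) is a first-class operation, not an afterthought bolted onto raw text insertion.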
So why don't more geeks work on securing their own future or, at the very least, help their fellow geeks stay on the economic ladder?
VoiceCode and vr-mode: VoiceCode is an amazing piece of work. It makes it possible for a disabled programmer to generate Python code very quickly. Unfortunately, it does not solve the editing problem. Even more unfortunately, it's fairly complicated to set up and get working. vr-mode makes it possible to use NaturallySpeaking's Select-and-Say mode in Emacs. That is, if you can get it to work; it seems to have drifted into non-functionality as Emacs has moved forward.
NaturallySpeaking works well, can be cheap, and works somewhat under Wine today. If we can make it work under Wine reliably, it solves the Linux desktop speech recognition problem in months rather than decades. Other tools such as Sphinx 1-4 are great IVR systems if you have a vocabulary and grammar under 15,000 words. In contrast, NaturallySpeaking's working vocabulary is in the hundred-thousand-word range. Any disabled user will choose NaturallySpeaking because it works so much better than the nearest alternative. We have people who are injured now and need these tools. They can't afford to wait 10 years or more for an open-source solution. In this case, functionality trumps politics.
"Speaking the keyboard" refers to speech user interfaces developed by people who don't use speech recognition. They expect you to say too much, which creates a vocal form of RSI. Listen to what disabled users do, not what you think they should speak.
See VoiceCode above; unfortunately, it's only for writing code, not correcting code. Code correction is a very different process and must be spoken in a different way, such as "change index" instead of "search forward left bracket leave mark search forward right bracket copy region". The latter is also an example of "speaking the keyboard".
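The difference between the two phrasings above can be sketched in a few lines: a semantic command like "change index" maps one short utterance to a whole edit, while the keystroke-level phrasing forces the speaker to narrate individual editor motions. The command name and dispatcher below are purely illustrative assumptions, not part of VoiceCode or any real grammar.

```python
# Illustrative sketch: one spoken utterance -> one semantic edit,
# instead of making the speaker narrate keystrokes one at a time.
import re


def change_symbol(buffer: str, old: str, new: str) -> str:
    """Implements a hypothetical 'change <old> to <new>' command by
    replacing every whole-word occurrence of the identifier."""
    return re.sub(rf"\b{re.escape(old)}\b", new, buffer)


# A tiny command table; a real grammar would be far richer.
COMMANDS = {"change": change_symbol}

code = "for index in range(10): total += values[index]"
code = COMMANDS["change"](code, "index", "i")
print(code)  # for i in range(10): total += values[i]
```

One utterance of three or four words replaces a dozen keystroke-level utterances, which is exactly the vocal-load reduction the note argues for.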