Disclaimer: yes, I work for Microsoft. No, not on these projects.
This was demoed live in front of 30K MSFT employees at our annual company meeting. It nearly brought me to tears. Yes, I can see through demoware, and yes, it's highly imperfect, but honestly it was the single most impressive use of technology I've ever seen. It was both novel and simple. It combined hardware, algorithms, user experience, and cloud scale. I don't know if it will ever go anywhere, though I expect that it will.

The key point here is that these are off-the-shelf components: Kinect and gesture APIs combined with machine translation and text-to-speech. It's important that these are all, or nearly all, public production APIs. Ten years ago such a system, even if possible, would never have made it to market because of the tiny user base. Today we can build such apps for the 0.01% of the population that needs Mandarin Sign Language translated to English, and we can do it cost-effectively. That is the point: technology being used to address real problems for underserved communities. So yes, maybe people researched automated sign language recognition years ago, but bringing it to market and enabling a scenario for real people is a wholly different beast.