Kinect connects sign-language users with their computers

The 2013 Microsoft Research Faculty Summit, which started Monday and concludes today, brings more than 400 elite academics to Redmond, and some are sharing innovative projects such as Sign Language Recognition and Translation with Kinect.

The Inside Microsoft Research blog goes into detail about this project, a collaboration between researchers from Microsoft Research Asia and colleagues from the Institute of Computing Technology at the Chinese Academy of Sciences (CAS) to explore how Kinect’s body-tracking abilities can be applied to the problem of sign-language recognition.

Attendees can check out the project during the summit’s DemoFest. Inside Microsoft Research sums up the project: “Hand tracking leads to a process of 3-D motion-trajectory alignment and matching for individual words in sign language. The words are generated via hand tracking by the Kinect for Windows software and then normalized, and matching scores are computed to identify the most relevant candidates when a signed word is analyzed.”
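To make that pipeline concrete, here is a minimal sketch of how normalized 3-D trajectories might be matched against a word gallery. The post does not say which alignment algorithm the researchers use, so this example substitutes dynamic time warping, a standard technique for comparing motion paths of different lengths; the function names, the normalization scheme, and the candidate-ranking step are illustrative assumptions, not Microsoft’s implementation.

```python
import numpy as np

def normalize(trajectory):
    """Center a 3-D hand trajectory on its centroid and scale it to unit size,
    so matching is insensitive to where and how large the sign was made."""
    traj = np.array(trajectory, dtype=float)    # (N, 3) hand positions
    traj -= traj.mean(axis=0)                   # translation invariance
    scale = np.abs(traj).max()
    return traj / scale if scale > 0 else traj  # scale invariance

def dtw_score(a, b):
    """Align two normalized trajectories with dynamic time warping and return
    the cumulative Euclidean distance (lower means a better match)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a query frame
                                 cost[i, j - 1],      # skip a gallery frame
                                 cost[i - 1, j - 1])  # match frames
    return cost[n, m]

def best_candidates(query, gallery, k=3):
    """Rank the k vocabulary words whose stored trajectories best match the query."""
    q = normalize(query)
    scores = [(word, dtw_score(q, normalize(t))) for word, t in gallery.items()]
    return sorted(scores, key=lambda s: s[1])[:k]
```

In a live recognizer, the gallery would hold per-word trajectories recorded from the Kinect skeleton stream, and the lowest-scoring candidates would feed the text or speech output.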

The algorithm for 3-D trajectory matching has led to a system that translates sign language into text or speech and enables communication between a hearing person and a deaf or hard-of-hearing person via an avatar.

Read more about it on Inside Microsoft Research and about the Faculty Summit on the Microsoft Research Connections Blog. If you missed Bill Gates’ keynote, you can watch it, along with other sessions, on the Virtual Faculty Summit 2013 site.

Follow along with the rest of the summit by keeping an eye on The Fire Hose, the Microsoft Research Connections Blog, the Inside Microsoft Research Blog and Socl. You can also join the conversation on Twitter by following @MSFTResearch and using the hashtag #FacSumm.


Athima Chansanchai
Microsoft News Center Staff