We already know Kinect’s abilities with voice and facial recognition, but what we didn’t know was that Microsoft is catering to the hard of hearing. A patent has revealed that Microsoft’s tech is capable of understanding American Sign Language (ASL), will be able to lip-read, and can accurately track toe movement.
“Where the user is unable to speak, he may be prevented from joining in the voice chat,” explains the patent. “Even though he would be able to type input, this may be a laborious and slow process to someone fluent in ASL. Under the present system, he could make ASL gestures to convey his thoughts, which would then be transmitted to the other users for auditory display. The user’s input could be converted to voice locally, or by each remote computer.
“In this situation, for example, when the user kills another user’s character, that victorious, though speechless, user would be able to tell the other user that he had been ‘PWNED’. In another embodiment, a user may be able to speak or make the facial motions corresponding to speaking words. The system may then parse those facial motions to determine the user’s intended words and process them according to the context under which they were inputted to the system.”
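The flow the patent describes — recognise a sign, convert it to text, transmit it, then render it as speech on each player’s machine — could be sketched roughly like this. Every name below is hypothetical for illustration; none of it comes from the patent or any Kinect SDK:

```python
# Hypothetical sketch of the gesture-to-voice chat flow the patent describes.
# All names here are invented for illustration, not Microsoft APIs.

ASL_GESTURES = {
    "fist_raised": "PWNED",  # the patent's own example phrase
    "wave": "hello",
}

def recognise_gesture(skeleton_frame):
    """Map a tracked pose to a known ASL sign (stub recogniser)."""
    return ASL_GESTURES.get(skeleton_frame.get("pose"))

def to_voice(text):
    """Stand-in for text-to-speech; the patent notes this conversion
    could happen locally or on each remote player's machine."""
    return f"<audio:{text}>"

def handle_frame(frame, remote_players):
    """Recognise a sign in one frame and fan the spoken result out
    to every remote player for 'auditory display'."""
    text = recognise_gesture(frame)
    if text is None:
        return []
    return [(player, to_voice(text)) for player in remote_players]

sent = handle_frame({"pose": "fist_raised"}, ["Player2"])
```

In this sketch, `handle_frame({"pose": "fist_raised"}, ["Player2"])` yields the ‘PWNED’ message as audio addressed to Player2, mirroring the patent’s example.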
It’s the right step forward for disabled gamers, and an extremely beneficial one. It shows that Microsoft cares about all types of gamers and is willing to adapt the technology so everyone can experience the sensation that is Kinect. But just how accurately can Kinect track body parts? The patent goes into further detail.
“[Within the skeletal mapping system] a variety of joints and bones are identified: each hand, each forearm, each elbow, each bicep, each shoulder, each hip, each thigh, each knee, each foreleg, each foot, the head, the torso, the top and bottom of the spine, and the waist. Where more points are tracked, additional features may be identified, such as the bones and joints of the fingers or toes, or individual features of the face, such as the nose and eyes.”
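To make that joint list concrete, here is a minimal sketch of what the skeletal map might look like as data. The joint names are my own shorthand for the parts the patent lists, not actual Kinect SDK identifiers:

```python
# Hypothetical skeleton model built from the joints the patent lists;
# these names are illustrative, not real Kinect SDK identifiers.

SIDED = ["hand", "forearm", "elbow", "bicep", "shoulder",
         "hip", "thigh", "knee", "foreleg", "foot"]
CENTRAL = ["head", "torso", "spine_top", "spine_bottom", "waist"]

# Each sided part appears once per side of the body.
JOINTS = [f"{side}_{part}" for part in SIDED for side in ("left", "right")]
JOINTS += CENTRAL

# A tracked frame could then map each joint to a 3D position.
frame = {joint: (0.0, 0.0, 0.0) for joint in JOINTS}
```

That gives 25 tracked points; as the patent notes, a higher-resolution system could extend the same structure with finger and toe joints or facial features like the nose and eyes.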