A new 30-page patent, filed by Wayne Westerman and John Elias, co-founders of FingerWorks (acquired by Apple), sheds some light on what could be Apple's future plans for multi-touch devices.
Wayne Westerman and John Elias note in the patent that while the fingertip chord and movement data generated by today's multi-touch input devices are useful on their own, fusing in additional information from other sensing modalities can significantly enhance a device's performance and improve the user experience. The filing details fusion with voice commands, finger identification, gaze vectors, biometrics, and facial expressions.
Finger Identification and Biometrics
According to the patent filing, FingerWorks hopes to let today's multi-touch devices resize and rotate mechanical drawings with a simple finger touch. To simplify this task, a built-in camera could be used to identify which finger is touching the display, distinguishing the index finger from the middle finger, for example, so that each could trigger a different action. Less specific tasks, such as changing a color, could be handled with voice commands.
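The finger-identification idea can be sketched as a simple dispatch: a camera labels each touch with the finger it believes made it, and the handler maps fingers to different actions. All names and the rotate/resize mapping below are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of finger-identification fusion: the camera pipeline
# (not shown) tags each touch event with a finger label, and the handler
# chooses an action based on that label.

def handle_touch(finger_label):
    """Dispatch a touch event based on which finger the camera identified."""
    actions = {
        "index": "rotate",   # index-finger drag rotates the drawing
        "middle": "resize",  # middle-finger drag resizes it
    }
    # Unidentified fingers fall back to a neutral action.
    return actions.get(finger_label, "pan")

print(handle_touch("index"))   # rotate
print(handle_touch("middle"))  # resize
```

The point of the fusion is that the same physical gesture (a one-finger drag) can mean different things depending on which finger performs it, without requiring extra on-screen controls.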
User information could be protected with biometric features. Fingerprints or handprints could serve as a personalized password to prevent unauthorized access to sensitive information. The feature covers the documentation and recognition of hand size, fingerprint input, body temperature, heart rate, skin impedance, and pupil size. Typical applications that might benefit from the fusion of biometric data include games, security, and fitness-related activities. Although some might worry that hand characteristics alone would not provide a sufficient level of identity verification, they could be the first door through which a user must pass before other security measures are applied.
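That "first door" idea amounts to staged verification: a coarse hand-geometry check rejects obvious mismatches cheaply before a stronger biometric runs. The sketch below assumes a stored hand-size profile and a tolerance threshold; both are invented for illustration.

```python
# Illustrative two-stage unlock: a hand-geometry screen, then a stronger check.
# The 5% tolerance and the single hand-size measurement are assumptions.

def hand_matches(profile, measured_cm, tolerance=0.05):
    """Return True if the measured hand size is within tolerance of the profile."""
    return abs(measured_cm - profile["hand_size"]) / profile["hand_size"] <= tolerance

def unlock(profile, measured_hand_size, fingerprint_ok):
    # Stage 1: cheap hand-geometry screen rejects obvious mismatches.
    if not hand_matches(profile, measured_hand_size):
        return "denied"
    # Stage 2: a stronger biometric (e.g. a fingerprint) confirms identity.
    return "unlocked" if fingerprint_ok else "denied"

user = {"hand_size": 18.5}  # cm, recorded at enrollment
print(unlock(user, 18.4, fingerprint_ok=True))   # unlocked
print(unlock(user, 21.0, fingerprint_ok=True))   # denied at stage 1
```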
Gaze Vector and Facial Expression
Another use of a built-in camera could assist users more directly. With gaze-vector fusion, the camera tracks the position of the user's head and eyes to determine which part of the screen he or she is looking at. This could be used, for instance, to select a window at a particular location on screen: just by looking at it, the user could select that window or bring it to the front.
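Gaze-based window selection reduces to a hit test: given an on-screen gaze point, find the frontmost window whose bounds contain it. The sketch below assumes the camera pipeline already yields an (x, y) gaze point; the window list and geometry are made up.

```python
# Sketch of gaze-driven window selection. Windows are assumed to be ordered
# front-to-back, so the first hit is the one the user sees on top.

def window_at(gaze, windows):
    """Return the title of the topmost window containing the gaze point."""
    x, y = gaze
    for win in windows:
        wx, wy, w, h = win["bounds"]  # (left, top, width, height)
        if wx <= x < wx + w and wy <= y < wy + h:
            return win["title"]
    return None  # gaze fell outside every window

windows = [
    {"title": "Mail",    "bounds": (0, 0, 800, 600)},
    {"title": "Browser", "bounds": (600, 400, 800, 600)},
]
print(window_at((700, 500), windows))   # Mail (frontmost window wins)
print(window_at((1200, 800), windows))  # Browser
```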
Furthermore, a built-in camera could identify the user's face and put that information to use in many different applications. One example is expression recognition: the computer could detect a frustrated or confused look on the user's face and proactively offer help. Built into future multi-touch devices, these ideas could revolutionize the input experience. Whether any of them will appear in the near future, however, is still a matter of speculation.