(a) AudioTouch is a micro-gesture recognition approach based on active bio-acoustic sensing that requires no instrumentation on the user's fingers or palm. (b) It recognizes micro-gestures that differ only subtly from one another. (c+d) It also discriminates levels of touch-force, further expanding the interaction vocabulary. (e) The approach enables compelling application scenarios such as device-free input in mobile settings.
Title: AudioTouch: Minimally Invasive Sensing of Micro-Gestures via Active Bio-Acoustic Sensing
Authors: Yuki Kubo, Yuto Koguchi, Buntarou Shizuki, Shin Takahashi, and Otmar Hilliges
Conference: MobileHCI 2019, ACM SIGCHI
We present AudioTouch, a minimally invasive approach for sensing micro-gestures via active bio-acoustic sensing. It requires only two piezo-electric elements, acting as a surface-mounted speaker and microphone, attached to the back of the hand. Because no instrumentation is placed on the palm or fingers, it does not encumber interaction with physical objects. The acquired signal is rich enough to detect small differences between micro-gestures using standard machine-learning classifiers. The approach also allows discrimination of different levels of touch-force, further expanding the interaction vocabulary. We conducted four experiments to evaluate the performance of AudioTouch: a user study measuring gesture recognition accuracy, a follow-up study investigating the ability to discriminate levels of touch-force, an experiment assessing cross-session robustness, and a systematic evaluation of the effect of sensor placement on the back of the hand.
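The abstract's pipeline (emit an acoustic signal through one piezo element, capture the body-modulated response with the other, then classify spectral features) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the chirp band, feature extraction, and the nearest-centroid classifier standing in for the paper's "standard machine-learning classifiers" are all assumptions, and the hand's acoustic effect is simulated with simple filters.

```python
import numpy as np

def sweep(f0=20_000.0, f1=40_000.0, duration=0.1, sr=96_000):
    """Linear chirp used as the active probe signal (band/rate are hypothetical)."""
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * duration) * t ** 2))

def spectral_features(received, n_bins=64):
    """Pool the log-magnitude spectrum of the received signal into n_bins bands."""
    mag = np.abs(np.fft.rfft(received))
    bands = np.array_split(mag, n_bins)
    return np.log1p(np.array([b.mean() for b in bands]))

class NearestCentroid:
    """Tiny stand-in classifier; the paper would use a standard ML classifier."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {
            c: np.mean([x for x, lab in zip(X, y) if lab == c], axis=0)
            for c in self.labels_
        }
        return self

    def predict(self, X):
        return [
            min(self.labels_, key=lambda c: np.linalg.norm(x - self.centroids_[c]))
            for x in X
        ]

# Simulate two micro-gestures as distinct frequency-dependent modulations of
# the probe chirp (a low-pass-like and a high-pass-like response), plus noise.
rng = np.random.default_rng(0)
s = sweep()
kernels = {"tap": [1.0, 0.4], "pinch": [1.0, -0.4]}
X, y = [], []
for _ in range(10):
    for label, k in kernels.items():
        rx = np.convolve(s, k, mode="same") + 0.01 * rng.standard_normal(s.size)
        X.append(spectral_features(rx))
        y.append(label)

clf = NearestCentroid().fit(X, y)
probe = spectral_features(np.convolve(s, kernels["tap"], mode="same"))
print(clf.predict([probe]))
```

The key idea the sketch illustrates is that each micro-gesture changes how the hand filters the probe signal, so a coarse spectral fingerprint of the received sweep is enough for a simple classifier to separate gestures.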
This work was partially supported by Japan Science and Technology Agency (JST) ACT-I Grant Number JPMJPR16UA.