2004 Conference on Computer Vision and Pattern Recognition Workshop

Abstract

We present a novel 3D gesture recognition scheme that combines the 3D appearance of the hand and the motion dynamics of the gesture to classify manipulative and controlling gestures. Our method does not directly track the hand. Instead, we take an object-centered approach that efficiently computes the 3D appearance using a region-based coarse stereo matching algorithm in a volume around the hand. The motion cue is captured by temporally differentiating the appearance features. An unsupervised learning scheme captures the cluster structure of these feature volumes, and the image sequence of a gesture is then converted into a series of symbols that indicate the cluster identity of each image pair. Two schemes (forward HMMs and neural networks) are used to model the dynamics of the gestures. We implemented a real-time system and performed numerous gesture recognition experiments to analyze the performance of different combinations of the appearance and motion features. The system achieves a recognition accuracy of over 96% using both the proposed appearance and motion cues.
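
For illustration, the sketch below shows one possible realization of the symbol-sequence pipeline the abstract describes: per-frame appearance/motion feature vectors are clustered without supervision into a discrete alphabet, each frame is replaced by its cluster label, and a per-gesture discrete HMM scored with the forward algorithm picks the most likely class. It is a minimal sketch, not the authors' implementation; the use of k-means as the clustering scheme, the function names, and all parameters are assumptions, and the feature extraction itself (region-based coarse stereo over a volume around the hand) is represented here only by precomputed vectors.

```python
# Minimal sketch (assumed implementation, not from the paper) of the
# cluster-then-HMM classification pipeline described in the abstract.
import numpy as np
from sklearn.cluster import KMeans


def train_codebook(feature_vectors, n_symbols=32, seed=0):
    """Cluster per-frame appearance/motion feature vectors into a symbol alphabet."""
    km = KMeans(n_clusters=n_symbols, n_init=10, random_state=seed)
    km.fit(feature_vectors)
    return km


def to_symbols(codebook, sequence_features):
    """Convert one gesture's frame features into a sequence of cluster identities."""
    return codebook.predict(sequence_features)


def log_forward(symbols, log_pi, log_A, log_B):
    """Log-likelihood of a discrete symbol sequence under an HMM (forward algorithm).

    log_pi: (n_states,) initial-state log probabilities
    log_A:  (n_states, n_states) transition log probabilities
    log_B:  (n_states, n_symbols) emission log probabilities
    """
    alpha = log_pi + log_B[:, symbols[0]]
    for s in symbols[1:]:
        # Sum (in log space) over previous states, then emit the next symbol.
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, s]
    return np.logaddexp.reduce(alpha)


def classify(symbols, gesture_hmms):
    """Pick the gesture class whose HMM assigns the highest likelihood."""
    scores = {name: log_forward(symbols, *params) for name, params in gesture_hmms.items()}
    return max(scores, key=scores.get)
```

A left-to-right ("forward") HMM structure would correspond to constraining `log_A` so that transitions only move to the same or a later state; training the per-gesture models (e.g. with Baum-Welch) is omitted here.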