2014 22nd International Conference on Pattern Recognition (ICPR)

Abstract

Gesture recognition using RGB-D sensors currently plays an important role in many fields, such as human-computer interfaces, robotics control, and sign language recognition. However, the recognition of hand gestures under natural conditions with low spatial resolution and strong motion blur remains an open research question. In this paper we propose an online gesture recognition method for multimodal RGB-D data. We extract multiple hand features from the RGB and depth frames with the assistance of body and hand masks, and full-body features from the skeleton data. These features are classified at the frame level by multiple Extreme Learning Machines. The classifier outputs are then modeled at the sequence level and fused to produce the final gesture classification. We evaluate our method on the ChaLearn 2013 gesture dataset, which consists of natural signs with hand diameters of around 10-20 pixels in the images. Our method achieves 85% recognition accuracy over 20 gesture classes and performs recognition in real time.
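The frame-level classifiers in the abstract are Extreme Learning Machines: single-hidden-layer networks whose input weights are random and fixed, so only the output weights are solved in closed form by least squares. The following is a minimal sketch of that standard ELM formulation, not the paper's implementation; the class name, hidden size, and activation choice are illustrative assumptions.

```python
import numpy as np

class ELM:
    """Sketch of an Extreme Learning Machine classifier (assumed setup:
    random fixed hidden layer, least-squares readout via pseudoinverse)."""

    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        n_classes = int(y.max()) + 1
        # Random input weights and biases are drawn once and never trained.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # hidden-layer activations
        T = np.eye(n_classes)[y]           # one-hot target matrix
        # Output weights: minimum-norm least-squares solution H beta = T.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict_scores(self, X):
        # Per-class scores; in a fusion pipeline these (not the argmax)
        # would be combined across modalities at the sequence level.
        return np.tanh(X @ self.W + self.b) @ self.beta

    def predict(self, X):
        return np.argmax(self.predict_scores(X), axis=1)
```

Because training reduces to one pseudoinverse, ELMs are fast to fit and evaluate, which is consistent with the real-time constraint stated in the abstract.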
