ICPR 2008 19th International Conference on Pattern Recognition

Abstract

We present a system that enables pointing-based, unconstrained interaction with a smart conference room using an arbitrary multi-camera setup. For each individual camera stream, areas exhibiting strong motion are identified. In these areas, face and hand hypotheses are detected. The detections from multiple cameras are then combined into 3D hypotheses, from which deictic gestures are identified and a pointing direction is derived. This direction is then used to identify the pointed-at objects in the scene. Since we use a combination of simple yet effective techniques, the system runs in real time and is very responsive. We present evaluation results on realistic data that demonstrate the capabilities of the presented approach.
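The abstract does not specify how the 2D detections are fused into 3D hypotheses or how the pointing ray is computed; a common approach is linear (DLT) triangulation from calibrated cameras, with the pointing direction approximated by the head-to-hand line. The sketch below illustrates that idea only; the function names, the DLT formulation, and the head-to-hand heuristic are assumptions, not the authors' implementation.

```python
import numpy as np

def triangulate_point(projection_matrices, image_points):
    """Linearly triangulate one 3D point from 2D detections in several
    calibrated cameras (standard DLT least squares; illustrative only)."""
    rows = []
    for P, (u, v) in zip(projection_matrices, image_points):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Homogeneous least-squares solution: right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def pointing_direction(head_3d, hand_3d):
    """Approximate the deictic pointing ray by the head-to-hand direction
    (one plausible heuristic, assumed here for illustration)."""
    d = hand_3d - head_3d
    return d / np.linalg.norm(d)
```

Given the resulting 3D ray anchored at the hand position, pointed-at objects could then be found by intersecting the ray with known object locations in the room model.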