Abstract
Human pose detection is the foundation of many applications, particularly those using gestures as part of a natural user interface. We introduce a novel task-based control method for human pose detection that encodes specialist knowledge in a descriptive abstraction, allowing non-experts such as developers, artists and students to apply it. The abstraction hides the details of a set of algorithms, each of which specialises in a different estimation of pose (e.g. articulated, body part) or in different conditions (e.g. occlusion, clutter). Users describe the conditions of their problem, and this description is used to select the most suitable algorithm and to automatically configure its parameters. The task-based control is evaluated on images described using the abstraction. Expected outcomes are compared against the actual results, demonstrating that describing the conditions is sufficient for the abstraction to produce the required result.