Abstract
Image exploitation systems that employ Automatic Target Recognition (ATR) technology generally require a human-in-the-loop to validate the ATR results, i.e., assisted target recognition. To evaluate the benefits of assisted target recognition, one must first understand the human's interaction with ATR algorithms as well as the many factors that influence both imagery analyst (IA) and ATR performance. Thus, any assessment of assisted target recognition must be designed so that performance differences due to ATR assistance are isolated from other factors that affect image exploitation. An Aided Target Acquisition Perception Testing (ATAPT) Demonstration created procedures for assessing assisted and unassisted image exploitation, validating the methodology, metrics, and software architecture. This paper describes the ATAPT Demonstration and discusses the insights gained from this exercise, which will improve future evaluations of systems that use ATR-assisted image exploitation.