Abstract
Localizing the user against a feature database of a scene is a fundamental prerequisite for presenting localized augmented reality (AR) content. Commonly, such a database captures only a single appearance of the scene, owing to the time and effort required to prepare it. To account for appearance changes under different lighting, we propose to generate the feature database from simulated appearances of the scene model under a number of different lighting conditions. We further propose to extend the feature descriptors used for localization with a parametric representation of how they change under varying lighting. We compare our method against a standard representation with L2-norm matching, in both simulation and real-world experiments. Our results show that our approach achieves a higher localization rate with fewer feature points and lower processing cost.