Abstract
Canonical correlation analysis (CCA) is a representative method for joint dimension reduction of multimodal data. However, CCA fails to capture the nonlinear discriminant structures hidden in the original high-dimensional multimodal data. To address this issue, we propose a novel unsupervised joint dimension reduction method called discriminant-sensitive locality canonical correlation analysis (DLCCA). The method embeds locality-based discriminant structures into both the between-modal correlation and the within-modal scatter. In this way, DLCCA extracts low-dimensional nonlinear correlation features with strong discriminative power, even in the unsupervised setting. Experiments on face and handwriting recognition demonstrate the effectiveness and robustness of DLCCA.