Abstract
This paper describes a multiresolution-based method for face recognition under illumination variation. The use of the double-density dual-tree complex wavelet transform (DD-DTCWT) for illumination-invariant face recognition is motivated by the structure of the DD-DTCWT: in addition to shift invariance and directional selectivity, the transform provides a greater number of wavelets at each decomposition level. Assuming that an input image can be regarded as a combination of illumination and reflectance components, we apply a tunable logarithmic function to obtain a representative image. The image is then decomposed into several frequency subbands via the DD-DTCWT. Because illumination variation lies mostly in the low-frequency part of the image, the high-frequency subbands are thresholded to construct a mask. Principal component analysis (PCA) and the extreme learning machine (ELM) are used for dimensionality reduction and classification, respectively. Experimental results are presented to illustrate the effectiveness of the proposed method.
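The preprocessing pipeline summarized above (logarithmic normalization, subband decomposition, thresholding of high-frequency subbands) can be sketched as follows. This is a minimal illustration only: it substitutes a single-level 2-D Haar transform for the DD-DTCWT, and the tuning parameter `alpha` and threshold `t` are hypothetical values, not those of the paper.

```python
import numpy as np

def log_normalize(img, alpha=10.0):
    # Tunable logarithmic mapping to [0, 1]; alpha is a hypothetical
    # tuning parameter controlling the compression of bright regions.
    return np.log1p(alpha * img) / np.log1p(alpha)

def haar2(img):
    # One level of a 2-D Haar transform, used here as a simple stand-in
    # for the DD-DTCWT: returns the low-frequency subband (ll) and three
    # high-frequency subbands (lh, hl, hh).
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

rng = np.random.default_rng(0)
img = rng.random((8, 8))                    # toy grayscale "face" in [0, 1]
norm = log_normalize(img)
ll, lh, hl, hh = haar2(norm)

# Threshold the high-frequency subbands to build a binary mask;
# t is a hypothetical threshold value.
t = 0.05
mask = (np.abs(lh) > t) | (np.abs(hl) > t) | (np.abs(hh) > t)
print(mask.shape)
```

In the actual method, the masked subband coefficients would then be vectorized and passed through PCA for dimensionality reduction before ELM classification.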