Please use this identifier to cite or link to this item:
Title: High speed and robust illumination invariant face recognition techniques
Authors: Lian, Zhichao
Keywords: DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation
Issue Date: 2013
Source: Lian, Z. (2013). High speed and robust illumination invariant face recognition techniques. Doctoral thesis, Nanyang Technological University, Singapore.

Abstract:
Variation in illumination remains a challenging issue in automatic face recognition, although considerable progress has been achieved in controlled environments. It has been shown that differences between various lighting conditions can be much greater than differences between individuals. Although numerous approaches have been proposed in recent years to address this issue, their performance is still unsatisfactory. In this thesis, several high-speed approaches for face recognition under varying illumination are developed.

Firstly, a novel illumination normalization approach with low computational complexity is proposed, based on the Walsh-Hadamard transform (WHT). By discarding an appropriate number of low-frequency coefficients in the block-wise WHT domain, the effects of illumination variation are removed. The proposed method is validated on the Yale B and Extended Yale B databases. In addition, both analytical proof and experimental results demonstrate that Principal Component Analysis (PCA) and Null-space-based Linear Discriminant Analysis (NLDA) can be implemented directly in the WHT domain, without the inverse WHT, to further reduce the computational burden.

Secondly, a novel illumination normalization approach for face recognition under varying illumination is proposed, in which the illumination is estimated in two steps.
First, low-frequency Discrete Cosine Transform (DCT) coefficients computed over local areas in the logarithm domain are used to estimate the illumination coarsely, rather than estimating it globally. Then, a refinement step based on a mean operator estimates the illumination at each pixel more precisely. Experimental results demonstrate that the method outperforms existing methods. Furthermore, a simplified version of the method is proposed; both theoretical analysis and experimental results demonstrate its validity and high computational efficiency. The performance of the proposed methods under different parameter values is also discussed.

In addition to the aforementioned illumination normalization methods, an illumination-invariant facial feature, the local relation map (LRM), is derived from local properties of human faces. A face model under varying illumination is investigated that includes an additive noise term alongside the common multiplicative illumination term. High-frequency DCT coefficients are set to zero to remove this noise. Experimental results validate the proposed face model and the assumption on the noise.

Contrary to the common assumption, illumination and reflectance cannot be well approximated by low- and high-frequency components alone, respectively. This thesis therefore proposes an adaptive illumination normalization approach based on a data-driven soft-thresholding denoising technique. The method models each DCT coefficient except the DC component with a generalized Gaussian distribution (GGD). More of the reflectance information in the low-frequency band is preserved, while illumination variations in the high-frequency band are removed. Moreover, the key parameters are determined adaptively, without any prior information.
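As a rough illustration of the soft-thresholding idea above, the sketch below shrinks the AC coefficients of a log-domain DCT toward zero while keeping the DC term intact. The thesis's GGD-based adaptive threshold is not reproduced here; the median-based estimate used instead is purely an illustrative assumption.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are cosine basis vectors).
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0] /= np.sqrt(2)
    return c * np.sqrt(2.0 / n)

def soft_threshold(x, t):
    # Soft-thresholding: shrink each coefficient toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def normalize_illumination(img, thresh=None):
    """Soft-threshold the 2-D DCT coefficients of the log image.

    `thresh` stands in for the thesis's GGD-derived adaptive
    threshold; when omitted, a simple median-of-|AC| estimate
    is used (an assumption, not the thesis's estimator).
    """
    log_img = np.log1p(np.asarray(img, dtype=float))
    n0, n1 = log_img.shape
    C0, C1 = dct_matrix(n0), dct_matrix(n1)
    coeffs = C0 @ log_img @ C1.T          # forward 2-D DCT-II
    dc = coeffs[0, 0]                     # DC component stays untouched
    if thresh is None:
        ac = coeffs.copy()
        ac[0, 0] = 0.0                    # exclude DC from the estimate
        nz = np.abs(ac[ac != 0])
        thresh = np.median(nz) if nz.size else 0.0
    coeffs = soft_threshold(coeffs, thresh)
    coeffs[0, 0] = dc                     # restore the DC component
    rec = C0.T @ coeffs @ C1              # inverse 2-D DCT
    return np.expm1(rec)                  # back from the log domain
```

With `thresh=0.0` the pipeline is a pure round trip (forward DCT, no shrinkage, inverse DCT), which is a convenient sanity check that the transform pair is implemented correctly.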
Finally, a novel robust face descriptor named Local Line Derivative Pattern (LLDP) is presented for face recognition, addressing not only illumination variations but also expression and aging variations. High-order derivative images in two directions are obtained by convolving the original images with Sobel masks. In the LLDP, an improved binary coding function and three rules for arranging the weights are proposed, and a novel distance measure capturing both pixel-level and global-level information is introduced. Promising experimental results are obtained on various face recognition databases.

URI: https://hdl.handle.net/10356/54910
DOI: 10.32657/10356/54910
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Theses
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.