Title: Sparse representation based classification for face recognition
Authors: Lai, Jian
Keywords: DRNTU::Engineering::Computer science and engineering
DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2015
Source: Lai, J. (2015). Sparse representation based classification for face recognition. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: Building a computer as intelligent as, or more intelligent than, a human is the ultimate goal of machine learning. The pursuit of this ability in computer vision has inspired numerous interesting and useful research areas and applications. Face recognition (FR), one of the most challenging problems in machine learning and computer vision, provides a good test bed for how intelligent a machine can be. Moreover, it shows huge potential for commercial and surveillance applications, such as access control, human-computer interaction and counter-terrorism. FR remains a hot research topic even though it has been studied for more than three decades. One important reason is that a large gap still exists between human and machine performance in recognizing the identity of a face image under extreme illumination, expression and pose variations, disguise, occlusion, etc. Recently, sparse representation based classification (SRC) has been introduced and applied to FR to deal with some of these challenging scenarios. Unlike traditional classifiers, which represent the query image by each individual class, SRC represents the query image as a sparse linear combination of the training samples of all subjects. Optimizing the sparse coefficients lets every training sample compete against the others to win its share in representing the query image, which is a highly discriminative process. These advantages of SRC have led to encouraging and impressive FR results. Although SRC has shown its merit in handling query images with sparse corruption (e.g., occlusion or disguise), many problems remain to be addressed. In this thesis, we propose several FR algorithms that tackle the limitations of SRC, and give comprehensive results to validate the proposed feature extraction and classification frameworks. Very recently, SRC has also been applied to feature extraction and dimensionality reduction.
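The SRC pipeline described above (code the query over all training samples, then classify by the smallest class-wise residual) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the l1-regularized coding step is solved with a plain ISTA loop, and all names and parameters are my own.

```python
import numpy as np

def src_classify(A, labels, y, lam=0.01, n_iter=500):
    """Sketch of sparse representation based classification (SRC).
    A: (d, n) dictionary whose columns are training samples; labels: (n,)
    class label per column; y: (d,) query. The l1 coding step uses a
    simple ISTA solver (a stand-in for the solvers used in the thesis)."""
    A = A / np.linalg.norm(A, axis=0)          # l2-normalize the atoms
    y = y / np.linalg.norm(y)
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size 1/L (L = ||A||_2^2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):                    # ISTA: gradient step + soft threshold
        g = x - t * (A.T @ (A @ x - y))
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)
    # classify by the class whose coefficients alone best reconstruct y
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)     # keep only class-c coefficients
        res = np.linalg.norm(y - A @ xc)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

The class-wise residual step is what makes the representation discriminative: every subject's samples compete during coding, but only the winning class reconstructs the query with small error.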
We first analyze the limitation of the unsupervised procedure of SRC in dimensionality reduction. To address this problem, we propose a supervised sparse representation (SSR) that utilizes training label information. By promoting the coefficients of the correct class and suppressing those of the others, the proposed SSR learns a more discriminative representation than SRC does. Furthermore, we propose a dimensionality reduction method named discriminative sparsity preserving embedding (DSPE). The proposed DSPE captures the discriminative sparse structure of labelled training data based on the proposed SSR, and finds a low-dimensional subspace that reduces within-class distances while preserving between-class distances. We then develop a modular weighted global sparse representation (WGSR) framework to handle severely corrupted query images. In the proposed WGSR, an image is first divided into modules, and each module is processed separately to determine its reliability; we propose to use the modular sparsity and residual jointly for this purpose. An image reconstructed from the modules, weighted by their reliability, is then formed for robust recognition. The proposed framework advances both the modular and the global sparse representation approaches, especially in dealing with query images with disguise, large illumination variations or expression changes. The third contribution of this thesis is the introduction of a class-wise sparse representation with collaborative patches (CSR-CP). To alleviate the problem of unsupervised optimization in sparse representation, we propose a class-wise sparse representation (CSR) that employs class label information. The proposed CSR seeks an optimum representation of the query image that simultaneously minimizes the reconstruction error and maximizes class-wise sparsity.
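The modular weighting idea in WGSR (downweight modules that code badly, e.g. occluded regions) can be loosely sketched as below. The particular reliability score, combining a sparsity-concentration measure with an exponential of the residual, is a hypothetical stand-in for the thesis's actual formulation, chosen only to illustrate the joint use of modular sparsity and residual.

```python
import numpy as np

def modular_weights(residuals, sparsities, alpha=1.0):
    """Hypothetical WGSR-style reliability weights for image modules.
    residuals: per-module reconstruction error (lower = more reliable);
    sparsities: per-module sparsity concentration in [0, 1]
    (higher = coefficients concentrated on one class = more reliable)."""
    residuals = np.asarray(residuals, dtype=float)
    sparsities = np.asarray(sparsities, dtype=float)
    w = sparsities * np.exp(-alpha * residuals)  # joint reliability score
    return w / w.max()                           # normalize so the best module has weight 1
```

A corrupted module (large residual, diffuse coefficients) thus contributes little to the weighted reconstruction used for the final global coding.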
To tackle the problem of insufficient training data, we further propose a collaborative patch framework, combined with CSR, named CSR-CP. Unlike patch-based methods, which optimize each patch independently, the proposed CSR-CP reconstructs the whole query sample from the sub-images of all subjects across all patches within one framework, and searches for a patch-wise and class-wise sparse representation. As a result, a more reliable and discriminative result can be attained. Finally, we address two fundamental problems of SRC by proposing a sparse- and dense-hybrid representation (SDR) via a supervised low-rank (SLR) dictionary decomposition approach. With sufficient, representative and well-controlled training data, SRC has proven successful in some challenging scenarios, but a violation of these conditions leads to poor performance. FR, however, is an application with a large number of subjects in which sufficient, representative and uncorrupted training images cannot be guaranteed for every subject. To alleviate these problems, the SDR framework is proposed, and we further propose an SLR dictionary decomposition procedure to facilitate it; the SLR decomposition also alleviates the problem of corrupted training data. The application of the proposed SDR-SLR approach to face recognition verifies its effectiveness and its advancement of the field.
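The dictionary-decomposition idea behind SLR (split the training matrix into a class-wise low-rank "clean" part plus a corruption part) can be illustrated with a deliberately simplified sketch. The thesis's SLR is a supervised joint optimization; a per-class truncated SVD, as used here, is only an assumed stand-in that shows the shape of the decomposition.

```python
import numpy as np

def slr_decompose(A, labels, rank=2):
    """Simplified sketch of a supervised low-rank dictionary decomposition.
    A: (d, n) training matrix, columns grouped by class via labels: (n,).
    Returns B (per-class low-rank 'clean' dictionary) and E = A - B
    (treated as corruption). Truncated SVD per class is an illustrative
    stand-in for the thesis's actual SLR optimization."""
    A = np.asarray(A, dtype=float)
    B = np.zeros_like(A)
    for c in np.unique(labels):
        idx = labels == c
        U, s, Vt = np.linalg.svd(A[:, idx], full_matrices=False)
        r = min(rank, len(s))
        B[:, idx] = (U[:, :r] * s[:r]) @ Vt[:r]  # best rank-r fit for class c
    return B, A - B
```

Coding a query against B rather than the raw, possibly corrupted A is what gives the hybrid representation its robustness to corrupted training samples.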
DOI: 10.32657/10356/65460
Schools: School of Electrical and Electronic Engineering 
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: Laijian_Thesis.pdf (Main article, 1.81 MB, Adobe PDF)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.