Title: Human gait analysis and recognition
Authors: Liu, Nini
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2012
Source: Liu, N. (2012). Human gait analysis and recognition. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: This thesis presents research on human gait analysis and recognition. Gait recognition remains an active research area owing to the great demand for automatic human identification at a distance in many security-sensitive environments. To date, most existing methods concentrate on gait recognition under controlled environments. Although some researchers have addressed gait recognition under varying walking conditions, such as changes in viewing angle, clothing, or carrying condition, the performance still leaves much to be desired. In this thesis, we therefore aim to improve gait recognition results under these varying walking conditions and thus facilitate the use of gait recognition in real applications. First, we address the problem of gait recognition under clothing or carrying condition variations and propose two simple and effective gait feature descriptors to enhance performance. It has been reported that either selecting relevant dynamic features from the popular gait appearance feature, the Gait Energy Image (GEI), or incorporating temporal information into the GEI alleviates the effects of clothing and carrying variations. We therefore first propose a new feature, called the Dynamic GEI (DGEI), which selects the most relevant spatial dynamic features from the GEI for recognition. We then propose a spatial-temporal dynamic gait image (STDGI) that further incorporates temporal information into the GEI through color interpolation. Second, we focus on the problem of gait recognition across varying views and propose a new joint subspace learning (JSL) solution. Our method is inspired by the observation that if a three-dimensional (3-D) object can be well represented by a linear combination of a small number of prototypes from the same view, then the representation coefficients with respect to the corresponding prototypes remain fairly similar across different views.
We propose to use these representation coefficients as view-invariant features for recognition. To obtain the coefficients, we first perform JSL to learn the prototypes under different views; we then represent each sample as a linear combination of the corresponding prototypes and extract the coefficients for recognition. Third, we investigate the view recognition problem for human gait sequences in videos. To our knowledge, little previous work has focused on this problem, yet it is important because most existing view-invariant gait recognition methods assume that the views of the testing gait sequences are known before recognition, an assumption that may not hold in many practical applications. To address this problem, we propose a new subspace learning method, adaptive discriminant analysis with enhanced multiple kernel learning (ADA-EMKL), which combines multiple gait descriptors and extracts low-dimensional features for view recognition. In our method, we generate high-order kernels to model the non-linear relationships between base kernels and apply pairwise data correlation to reduce confusion among samples that are easily misclassified. Finally, we develop a new Multiview Subspace Representation (MSR) method that can handle multiple intra-subject variations during walking. By assuming that the gait data from different views of the same subject lie in a low-dimensional linear subspace, we propose to use the subspace basis to represent the data set. Extensive experiments on popular gait databases demonstrate the effectiveness and robustness of all the proposed methods.
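Three of the building blocks mentioned in the abstract can be sketched in a few lines of numpy. This is a minimal illustration of the general ideas only, not the thesis's actual algorithms: the function names and toy dimensions are invented, the GEI is simply the per-pixel mean of aligned binary silhouettes over a gait cycle, the representation coefficients are obtained by ordinary least squares against a matrix of view-specific prototypes (standing in for the learned JSL prototypes), and the subspace basis is taken from a truncated SVD (standing in for the MSR representation).

```python
import numpy as np


def gait_energy_image(silhouettes):
    """Average a gait cycle of aligned binary silhouettes into a GEI.

    `silhouettes`: (T, H, W) array of 0/1 frames; the GEI is the
    per-pixel mean intensity over the T frames of the cycle.
    """
    frames = np.asarray(silhouettes, dtype=float)
    return frames.mean(axis=0)


def representation_coefficients(sample, prototypes):
    """Least-squares coefficients of `sample` over prototype columns.

    `prototypes`: (D, K) matrix whose K columns are prototype feature
    vectors for one view. If the same linear view transform maps both
    the sample and the prototypes to another view, these coefficients
    are unchanged, which is the view-invariance intuition behind JSL.
    """
    coeffs, *_ = np.linalg.lstsq(prototypes, sample, rcond=None)
    return coeffs


def subspace_basis(samples, dim):
    """Orthonormal basis of the best rank-`dim` subspace of the data.

    `samples`: (D, N) matrix with one feature vector per column, e.g.
    gait data of one subject under several views assumed to lie near a
    low-dimensional linear subspace.
    """
    u, _, _ = np.linalg.svd(np.asarray(samples, dtype=float),
                            full_matrices=False)
    return u[:, :dim]
```

As a quick sanity check of the invariance idea: if `x = P @ c` for prototypes `P` and an invertible view transform `A` is applied to both, then `representation_coefficients(A @ x, A @ P)` recovers the same `c` as `representation_coefficients(x, P)`.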
DOI: 10.32657/10356/50745
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: TeG0702008L.pdf
Description: main article
Size: 3.43 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.