Please use this identifier to cite or link to this item:
Title: Learning modality-invariant features for heterogeneous face recognition
Authors: Huang, Likun; Tan, Yap Peng
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2012
Abstract: This paper addresses the problem of heterogeneous face recognition, where the gallery and probe face samples are captured from two different modalities. Owing to the large discrepancies yet weak relationships across heterogeneous face image sets, most existing face recognition algorithms perform poorly in this application scenario. To address this problem, we propose to learn modality-invariant features (MIF) for heterogeneous face recognition. In the proposed method, a pair of heterogeneous face datasets is used as generic training sets, and the relationships between the gallery and probe samples and these generic training sets are computed as modality-invariant features for matching heterogeneous face images. The rationale of our method is motivated by the fact that the local geometrical information of each pair of heterogeneous face samples is usually similar in the corresponding generic training sets. Experimental results are presented to show the efficacy of the proposed method.
URI: https://hdl.handle.net/10356/99421
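The abstract's core idea can be sketched in code: instead of comparing raw features across modalities, each sample is represented by its relationships to a generic training set of its own modality, so that gallery and probe samples end up in a shared representation space. This is a minimal illustrative sketch only; the similarity measure (cosine), the random toy features, and all variable names below are assumptions, not the paper's actual formulation.

```python
import numpy as np

def mif_representation(x, generic_set):
    # Represent sample x by its cosine similarities to every sample
    # in a generic training set of the SAME modality as x.
    # (Illustrative stand-in for the paper's relationship computation.)
    g = generic_set / np.linalg.norm(generic_set, axis=1, keepdims=True)
    return g @ (x / np.linalg.norm(x))

# Toy data: random placeholder features, not real face descriptors.
rng = np.random.default_rng(0)
d = 32            # raw feature dimension (assumed)
n_generic = 50    # generic training samples per modality (assumed)
generic_vis = rng.normal(size=(n_generic, d))  # e.g., visible-light generic set
generic_nir = rng.normal(size=(n_generic, d))  # e.g., near-infrared generic set

gallery_sample = rng.normal(size=d)  # visible-light gallery face
probe_sample = rng.normal(size=d)    # near-infrared probe face

# Both samples map into the same n_generic-dimensional MIF space,
# so they can be matched directly despite coming from different modalities.
mif_gallery = mif_representation(gallery_sample, generic_vis)
mif_probe = mif_representation(probe_sample, generic_nir)
score = float(mif_gallery @ mif_probe
              / (np.linalg.norm(mif_gallery) * np.linalg.norm(mif_probe)))
```

The key property illustrated here is that the MIF dimension is set by the generic training set size, not by the modality-specific raw features, which is what makes cross-modality matching possible.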
Appears in Collections: EEE Conference Papers
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.