Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/161899
Title: GMFAD: Towards generalized visual recognition via multilayer feature alignment and disentanglement
Authors: Li, Haoliang
Wang, Shiqi
Wan, Renjie
Kot, Alex Chichung
Keywords: Engineering::Electrical and electronic engineering
Engineering::Computer science and engineering
Issue Date: 2020
Source: Li, H., Wang, S., Wan, R. & Kot, A. C. (2020). GMFAD: Towards generalized visual recognition via multilayer feature alignment and disentanglement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3), 1289-1303. https://dx.doi.org/10.1109/TPAMI.2020.3020554
Project: 206-A017023
206-A018001
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Abstract: Deep learning approaches, which have repeatedly been shown to benefit visual recognition tasks, usually rest on the strong assumption that training and test data are drawn from similar feature spaces and distributions. This assumption, however, may not hold in many practical visual recognition scenarios. Inspired by the hierarchical organization of deep feature representations, which yields progressively more abstract features at higher layers, we propose GMFAD, a novel feature learning framework with better generalization capability that operates in a multilayer perceptron manner. We first learn feature representations at the shallow layer, where underlying factors shared across domains (a subset of which may be relevant to each particular domain) can be explored. In particular, we propose to align the divergence between domain pair(s) by considering both inter-dimension and inter-sample correlations, which many cross-domain visual recognition methods have largely ignored. Subsequently, to learn more abstract information that further benefits transferability, we conduct feature disentanglement at the deep feature layer. Extensive experiments on different visual recognition tasks demonstrate that our framework learns more transferable feature representations than state-of-the-art baselines.
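Since no full text is available through this record, the following is only a minimal Python (PyTorch) sketch of the two ideas the abstract names: aligning shallow-layer features across domains via both inter-dimension and inter-sample correlations, and splitting the deep-layer code into shared and domain-specific parts as a stand-in for feature disentanglement. All names, layer sizes, and loss forms below (GMFADSketch, shallow_dim, the covariance/Gram losses) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

def covariance(x: torch.Tensor) -> torch.Tensor:
    # Inter-dimension (feature) covariance of a batch, shape (d, d).
    xc = x - x.mean(dim=0, keepdim=True)
    return xc.t() @ xc / (x.shape[0] - 1)

def gram(x: torch.Tensor) -> torch.Tensor:
    # Inter-sample similarity (Gram) matrix of a batch, shape (n, n).
    xc = x - x.mean(dim=0, keepdim=True)
    return xc @ xc.t() / x.shape[1]

def alignment_loss(src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
    # Match second-order statistics across domains along both axes.
    inter_dim = (covariance(src) - covariance(tgt)).pow(2).mean()
    # The inter-sample term assumes equal source/target batch sizes.
    inter_sample = (gram(src) - gram(tgt)).pow(2).mean()
    return inter_dim + inter_sample

class GMFADSketch(nn.Module):
    # Hypothetical two-layer MLP: a shallow layer whose outputs are aligned
    # across domains, and a deep layer whose code is split into a shared
    # half (used for classification) and a domain-specific half.
    def __init__(self, in_dim=2048, shallow_dim=512, deep_dim=256, n_classes=31):
        super().__init__()
        self.shallow = nn.Sequential(nn.Linear(in_dim, shallow_dim), nn.ReLU())
        self.deep = nn.Sequential(nn.Linear(shallow_dim, deep_dim), nn.ReLU())
        self.classifier = nn.Linear(deep_dim // 2, n_classes)

    def forward(self, x):
        h = self.shallow(x)  # shallow features, aligned across domains
        z_shared, z_domain = self.deep(h).chunk(2, dim=1)  # crude disentanglement split
        return self.classifier(z_shared), h, z_domain

# Illustrative training step on random stand-in data:
model = GMFADSketch()
src_x, tgt_x = torch.randn(32, 2048), torch.randn(32, 2048)
src_y = torch.randint(0, 31, (32,))
logits, h_src, _ = model(src_x)
_, h_tgt, _ = model(tgt_x)
loss = nn.functional.cross_entropy(logits, src_y) + alignment_loss(h_src, h_tgt)
loss.backward()

Here the alignment loss stands in for the inter-dimension/inter-sample alignment at the shallow layer; the paper's actual objectives, including how disentanglement is enforced at the deep layer, may differ and cannot be verified from this record.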
URI: https://hdl.handle.net/10356/161899
ISSN: 0162-8828
DOI: 10.1109/TPAMI.2020.3020554
Rights: © 2020 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:EEE Journal Articles
