Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/160950
Full metadata record
DC Field | Value | Language
dc.contributor.author | Li, Haoliang | en_US
dc.contributor.author | Wan, Renjie | en_US
dc.contributor.author | Wang, Shiqi | en_US
dc.contributor.author | Kot, Alex Chichung | en_US
dc.date.accessioned | 2022-08-08T07:15:42Z | -
dc.date.available | 2022-08-08T07:15:42Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Li, H., Wan, R., Wang, S. & Kot, A. C. (2021). Unsupervised domain adaptation in the wild via disentangling representation learning. International Journal of Computer Vision, 129(2), 267-283. https://dx.doi.org/10.1007/s11263-020-01364-5 | en_US
dc.identifier.issn | 0920-5691 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/160950 | -
dc.description.abstract | Most recently proposed unsupervised domain adaptation algorithms attempt to learn domain-invariant features by confusing a domain classifier through adversarial training. In this paper, we argue that this may not be an optimal solution in the real-world setting (a.k.a. in the wild), as the difference in label information between domains has been largely ignored. Because labeled instances are not available in the target domain in unsupervised domain adaptation tasks, it is difficult to explicitly capture the label difference between domains. To address this issue, we propose to learn a disentangled latent representation based on implicit autoencoders. In particular, the latent representation is disentangled into a global code and a local code. The global code captures category information via an encoder with a prior, and the local code, which captures "style"-related information via an implicit decoder, is transferable across domains. Experimental results on digit recognition, object recognition and semantic segmentation demonstrate the effectiveness of our proposed method. | en_US
dc.description.sponsorship | Nanyang Technological University | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | International Journal of Computer Vision | en_US
dc.rights | © 2020 Springer Science+Business Media, LLC, part of Springer Nature. All rights reserved. | en_US
dc.subject | Engineering::Electrical and electronic engineering | en_US
dc.title | Unsupervised domain adaptation in the wild via disentangling representation learning | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.contributor.school | Interdisciplinary Graduate School (IGS) | en_US
dc.contributor.research | Rapid-Rich Object Search (ROSE) Lab | en_US
dc.identifier.doi | 10.1007/s11263-020-01364-5 | -
dc.identifier.scopus | 2-s2.0-85089294127 | -
dc.identifier.issue | 2 | en_US
dc.identifier.volume | 129 | en_US
dc.identifier.spage | 267 | en_US
dc.identifier.epage | 283 | en_US
dc.subject.keywords | In the Wild | en_US
dc.subject.keywords | Cross-Domain | en_US
dc.description.acknowledgement | This research is supported in part by the Wallenberg-NTU Presidential Postdoctoral Fellowship, the NTU-PKU Joint Research Institute, a collaboration between the Nanyang Technological University and Peking University that is sponsored by a donation from the Ng Teng Fong Charitable Foundation, and the Science and Technology Foundation of Guangzhou Huangpu Development District under Grant 201902010028. | en_US
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
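The abstract above describes an architecture that splits each latent representation into a global (category-related) code and a local (style-related) code. The following is only a minimal, hypothetical PyTorch-style sketch of that global/local split using a plain autoencoder; it is not the authors' implementation, and the paper's implicit decoder, prior on the global code, and adversarial components are omitted. All module names, dimensions, and losses below are illustrative assumptions.

import torch
import torch.nn as nn

class DisentangledAE(nn.Module):
    """Toy autoencoder whose latent code is split into a global and a local part."""
    def __init__(self, in_dim=784, hidden=256, global_dim=10, local_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.to_global = nn.Linear(hidden, global_dim)  # intended to carry category information
        self.to_local = nn.Linear(hidden, local_dim)    # intended to carry "style" information
        self.decoder = nn.Sequential(
            nn.Linear(global_dim + local_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.backbone(x)
        z_global, z_local = self.to_global(h), self.to_local(h)
        recon = self.decoder(torch.cat([z_global, z_local], dim=1))
        return recon, z_global, z_local

# One illustrative training step: reconstruct both domains, and apply a supervised
# classification loss on labeled source data to the global code only, so that
# category information is steered into the global part of the representation.
model = DisentangledAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_src, y_src = torch.rand(32, 784), torch.randint(0, 10, (32,))  # labeled source batch (dummy data)
x_tgt = torch.rand(32, 784)                                      # unlabeled target batch (dummy data)

recon_s, g_s, _ = model(x_src)
recon_t, _, _ = model(x_tgt)
loss = (nn.functional.mse_loss(recon_s, x_src)
        + nn.functional.mse_loss(recon_t, x_tgt)
        + nn.functional.cross_entropy(g_s, y_src))
opt.zero_grad()
loss.backward()
opt.step()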
Appears in Collections: EEE Journal Articles
IGS Journal Articles

Citation metrics (as captured on the record page):
SCOPUS™ citations: 12 (updated on Nov 26, 2023)
Web of Science™ citations: 9 (updated on Oct 25, 2023)
Page view(s): 79 (updated on Dec 1, 2023)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.