Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/160950
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, Haoliang | en_US |
dc.contributor.author | Wan, Renjie | en_US |
dc.contributor.author | Wang, Shiqi | en_US |
dc.contributor.author | Kot, Alex Chichung | en_US |
dc.date.accessioned | 2022-08-08T07:15:42Z | - |
dc.date.available | 2022-08-08T07:15:42Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Li, H., Wan, R., Wang, S. & Kot, A. C. (2021). Unsupervised domain adaptation in the wild via disentangling representation learning. International Journal of Computer Vision, 129(2), 267-283. https://dx.doi.org/10.1007/s11263-020-01364-5 | en_US |
dc.identifier.issn | 0920-5691 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/160950 | - |
dc.description.abstract | Most recently proposed unsupervised domain adaptation algorithms attempt to learn domain-invariant features by confusing a domain classifier through adversarial training. In this paper, we argue that this may not be an optimal solution in the real-world setting (a.k.a. in the wild), as the difference in label information between domains has been largely ignored. Because labeled instances are not available in the target domain in unsupervised domain adaptation tasks, it is difficult to explicitly capture the label difference between domains. To address this issue, we propose to learn a disentangled latent representation based on implicit autoencoders. In particular, a latent representation is disentangled into a global code and a local code. The global code captures category information via an encoder with a prior, while the local code, which captures the “style”-related information via an implicit decoder, is transferable across domains. Experimental results on digit recognition, object recognition, and semantic segmentation demonstrate the effectiveness of our proposed method. | en_US |
dc.description.sponsorship | Nanyang Technological University | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | International Journal of Computer Vision | en_US |
dc.rights | © 2020 Springer Science+Business Media, LLC, part of Springer Nature. All rights reserved. | en_US |
dc.subject | Engineering::Electrical and electronic engineering | en_US |
dc.title | Unsupervised domain adaptation in the wild via disentangling representation learning | en_US |
dc.type | Journal Article | en |
dc.contributor.school | School of Electrical and Electronic Engineering | en_US |
dc.contributor.school | Interdisciplinary Graduate School (IGS) | en_US |
dc.contributor.research | Rapid-Rich Object Search (ROSE) Lab | en_US |
dc.identifier.doi | 10.1007/s11263-020-01364-5 | - |
dc.identifier.scopus | 2-s2.0-85089294127 | - |
dc.identifier.issue | 2 | en_US |
dc.identifier.volume | 129 | en_US |
dc.identifier.spage | 267 | en_US |
dc.identifier.epage | 283 | en_US |
dc.subject.keywords | In the Wild | en_US |
dc.subject.keywords | Cross-Domain | en_US |
dc.description.acknowledgement | This research is supported in part by the Wallenberg-NTU Presidential Postdoctoral Fellowship, the NTU-PKU Joint Research Institute, a collaboration between the Nanyang Technological University and Peking University that is sponsored by a donation from the Ng Teng Fong Charitable Foundation, and the Science and Technology Foundation of Guangzhou Huangpu Development District under Grant 201902010028. | en_US |
item.fulltext | No Fulltext | - |
item.grantfulltext | none | - |
Appears in Collections: | EEE Journal Articles; IGS Journal Articles |
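The abstract above describes disentangling a latent representation into a global (category) code under a prior and a local (style) code. The following is a minimal NumPy sketch of that split, purely for illustration: all dimensions, weight names, and the use of a softmax as the categorical prior are assumptions made here, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_g, W_l):
    """Split the input into a global (category) code and a local (style) code.

    Hypothetical sketch: the softmax stands in for the paper's prior on the
    global code; the local code is meant to carry domain-transferable style.
    """
    h = np.tanh(x @ W_g["hidden"])                     # shared hidden layer
    logits = h @ W_g["out"]                            # category logits
    g = np.exp(logits - logits.max(axis=1, keepdims=True))
    g /= g.sum(axis=1, keepdims=True)                  # softmax "prior"
    loc = np.tanh(x @ W_l)                             # local / style code
    return g, loc

def decoder(g, loc, W_d):
    """Reconstruct the input from the concatenated disentangled codes."""
    z = np.concatenate([g, loc], axis=1)
    return z @ W_d

# Toy dimensions (illustrative, not from the paper).
d_in, d_hid, n_cls, d_style = 16, 32, 10, 8
W_g = {"hidden": rng.normal(size=(d_in, d_hid)) * 0.1,
       "out": rng.normal(size=(d_hid, n_cls)) * 0.1}
W_l = rng.normal(size=(d_in, d_style)) * 0.1
W_d = rng.normal(size=(n_cls + d_style, d_in)) * 0.1

x = rng.normal(size=(4, d_in))       # a small batch of inputs
g, loc = encoder(x, W_g, W_l)
x_hat = decoder(g, loc, W_d)         # reconstruction from both codes
```

In the paper's setting the two codes would be trained (with an adversarial/implicit decoder objective) so that only the local code varies across domains; here the forward pass merely shows the shape of the factorization.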
SCOPUS™ Citations: 20 / 12 (Updated on Nov 26, 2023)
Web of Science™ Citations: 20 / 9 (Updated on Oct 25, 2023)
Page view(s): 79 (Updated on Dec 1, 2023)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.