Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/164573
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Huang, Jiaxing [en_US]
dc.date.accessioned: 2023-02-06T04:22:32Z
dc.date.available: 2023-02-06T04:22:32Z
dc.date.issued: 2023
dc.identifier.citation: Huang, J. (2023). Transductive transfer learning for visual recognition. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/164573 [en_US]
dc.identifier.uri: https://hdl.handle.net/10356/164573
dc.description.abstract: In recent years, deep neural networks (DNNs) have brought great advances to various computer vision tasks, such as image classification, object detection, and semantic segmentation. However, these considerable successes come at the cost of large amounts of densely labeled training images, which are extremely costly and laborious to collect. One way of circumventing this limitation is to utilize annotated images from existing related datasets (the "source domain") in network training. Unfortunately, DNNs trained on source domains often undergo a drastic performance degradation when applied to the "target domain" due to the distribution mismatch between the two. In such scenarios, transfer learning (also called knowledge transfer) between domains is desirable and necessary. In this thesis, we explore transductive transfer learning for visual recognition, where the data distributions of the labeled source domain and the unlabeled target domain are different while the source and target tasks are the same. More specifically, we investigate three representative types of transductive transfer learning: domain generalization, unsupervised domain adaptation, and source-free unsupervised domain adaptation.

In domain generalization, given labeled source-domain data, the goal is to learn a generalized visual recognition model that performs well on unseen target-domain data. In other words, domain generalization aims to learn domain-invariant (transferable) features without requiring target-domain data in training. In this thesis, we propose a novel domain generalization approach that randomizes source-domain images in frequency space, which encourages DNNs to learn style-invariant visual features that generalize well to unseen target domains.
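As an illustrative aside, the sketch below shows one common way to realize frequency-space randomization: perturb the low-frequency amplitude of an image's Fourier spectrum (which mostly carries style) while keeping the phase (which mostly carries structure). The band fraction beta, the scaling range, and the function name are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def frequency_randomize(img, scale_range=(0.5, 1.5), beta=0.1):
    """Randomize an image's style by rescaling the low-frequency
    amplitude of its Fourier spectrum while keeping the phase.

    img: float array of shape (H, W, C), values in [0, 1].
    beta: fraction of the spectrum around the centre treated as
          low-frequency "style" content (an illustrative choice).
    """
    h, w, _ = img.shape
    # 2-D FFT per channel; shift the zero frequency to the centre.
    fft = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
    amplitude, phase = np.abs(fft), np.angle(fft)

    # Rescale the central (low-frequency) band by a random factor.
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    amplitude[ch - bh:ch + bh, cw - bw:cw + bw] *= np.random.uniform(*scale_range)

    # Recombine the randomized amplitude with the original phase,
    # which preserves the semantic structure of the image.
    fft_new = np.fft.ifftshift(amplitude * np.exp(1j * phase), axes=(0, 1))
    out = np.real(np.fft.ifft2(fft_new, axes=(0, 1)))
    return np.clip(out, 0.0, 1.0)
```

Training on many such randomized copies exposes the network to varied synthetic "styles" of the same content, which is what pushes it toward style-invariant features.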
In unsupervised domain adaptation, given labeled source-domain data and unlabeled target-domain data, the goal is to learn an adaptive visual recognition model that performs well on target-domain data. Different from domain generalization, in the unsupervised domain adaptation setup the unlabeled target-domain data is accessible during training, so unsupervised domain adaptation largely focuses on exploiting that data to improve network performance. In this thesis, we develop four novel unsupervised domain adaptation techniques that effectively transfer knowledge from labeled source domains to the unlabeled target domain. More specifically, we design different unsupervised losses on unlabeled target-domain data to learn a model that performs well in the target domain.

In source-free unsupervised domain adaptation, given a source-trained model and unlabeled target-domain data, the goal is to adapt the source-trained model to perform well on the unlabeled target-domain data. Different from unsupervised domain adaptation, in the source-free setup the labeled source-domain data is not accessible during training: we aim to adapt source-trained models to fit the target data distribution without accessing any source-domain data. Under such a transfer learning setup, the only information carried forward is a portable source-trained model, which largely alleviates concerns about data privacy, data portability, and data transmission efficiency. To this end, we propose a novel source-free unsupervised domain adaptation approach that exploits the historical source hypothesis to make up for the absence of source-domain data in this setup.

Experimental results on various visual recognition benchmarks show that our proposed transfer learning approaches achieve superior performance, enabling the transfer of DNNs across different domains. [en_US]
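As a concrete (and deliberately simplified) illustration of the last two setups, the PyTorch sketch below adapts a model on unlabeled target data by combining an unsupervised loss (entropy minimization) with a regularizer toward a frozen copy of the source-trained model, i.e., a "historical" source hypothesis. The specific loss combination, the KL regularizer, and the weight lam are assumptions for illustration, not the thesis's actual objective.

```python
import torch
import torch.nn.functional as F

def source_free_adapt_step(model, source_model, target_batch,
                           optimizer, lam=0.1):
    """One adaptation step on an unlabeled target batch (illustrative).

    model:        the network being adapted to the target domain.
    source_model: a frozen copy of the source-trained model, the only
                  source knowledge available in the source-free setup.
    """
    model.train()
    logits = model(target_batch)
    probs = F.softmax(logits, dim=1)

    # Unsupervised loss: entropy minimization sharpens target predictions.
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()

    # Historical regularizer (assumed form): keep predictions close to
    # the frozen source hypothesis so adaptation does not drift into noise.
    with torch.no_grad():
        src_probs = F.softmax(source_model(target_batch), dim=1)
    consistency = F.kl_div(torch.log(probs + 1e-8), src_probs,
                           reduction='batchmean')

    loss = entropy + lam * consistency
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that only the source-trained weights, never the source data, enter this loop, which is precisely what makes the setup source-free.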
dc.language.iso: en [en_US]
dc.publisher: Nanyang Technological University [en_US]
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). [en_US]
dc.subject: Engineering::Computer science and engineering [en_US]
dc.title: Transductive transfer learning for visual recognition [en_US]
dc.type: Thesis-Doctor of Philosophy [en_US]
dc.contributor.supervisor: Lu Shijian [en_US]
dc.contributor.school: School of Computer Science and Engineering [en_US]
dc.description.degree: Doctor of Philosophy [en_US]
dc.identifier.doi: 10.32657/10356/164573
dc.contributor.supervisoremail: Shijian.Lu@ntu.edu.sg [en_US]
item.fulltext: With Fulltext
item.grantfulltext: open
Appears in Collections: SCSE Theses
Files in This Item:
File: NTU_Thesis.pdf (13.32 MB, Adobe PDF)

Page view(s): 362 (updated on Jun 19, 2024)
Download(s): 374 (updated on Jun 19, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.