Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/151969
Title: Feature analysis of marginalized stacked denoising autoenconder for unsupervised domain adaptation
Authors: Wei, Pengfei
Ke, Yiping
Goh, Chi Keong
Keywords: Engineering::Computer science and engineering
Issue Date: 2018
Source: Wei, P., Ke, Y. & Goh, C. K. (2018). Feature analysis of marginalized stacked denoising autoenconder for unsupervised domain adaptation. IEEE Transactions on Neural Networks and Learning Systems, 30(5), 1321-1334. https://dx.doi.org/10.1109/TNNLS.2018.2868709
Project: RG135/14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Abstract: The marginalized stacked denoising autoencoder (mSDA) has recently emerged with demonstrated effectiveness in domain adaptation. In this paper, we investigate why mSDA benefits domain adaptation tasks from the perspective of adaptive regularization. Our investigation focuses on two types of feature corruption noise: Gaussian noise (mSDA_g) and Bernoulli dropout noise (mSDA_bd). Both theoretical and empirical results demonstrate that mSDA_bd successfully boosts the adaptation performance, whereas mSDA_g fails to do so. We then propose a new mSDA with data-dependent multinomial dropout noise (mSDA_md) that overcomes the limitations of mSDA_bd and further improves the adaptation performance. Our mSDA_md is based on a more realistic assumption: different features are correlated and thus should be corrupted with different probabilities. Experimental results demonstrate the superiority of mSDA_md over mSDA_bd in both adaptation performance and convergence speed. Finally, we propose a deep transferable feature coding (DTFC) framework for unsupervised domain adaptation. The motivation of DTFC is that mSDA fails to consider the distribution discrepancy across domains in the feature learning process. We introduce a new element to mSDA: domain divergence minimization by maximum mean discrepancy (MMD). This element is essential for domain adaptation, as it ensures that the extracted deep features have a small distribution discrepancy across domains. The effectiveness of DTFC is verified by extensive experiments on three benchmark data sets, for both Bernoulli dropout noise and multinomial dropout noise.
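Illustrative sketch (not the authors' code): the snippet below assumes the standard closed-form marginalized denoising layer of Chen et al. (2012), on which mSDA_bd builds, together with a standard RBF-kernel estimate of the squared maximum mean discrepancy of the kind DTFC minimizes. The function names, the ridge term, and the kernel bandwidth are hypothetical choices made here for readability, not details taken from the paper.

import numpy as np

def msda_bd_layer(X, p):
    # One marginalized denoising layer with Bernoulli dropout noise (mSDA_bd).
    # X: d x n feature matrix, p: corruption probability.
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])   # bias feature, never corrupted
    S = Xb @ Xb.T                          # scatter matrix of the augmented input
    q = np.full(d + 1, 1.0 - p)            # per-feature keep probabilities
    q[-1] = 1.0                            # the bias feature is always kept
    # Marginalize the corruption analytically:
    # E[Q]_ij = S_ij q_i q_j (i != j), E[Q]_ii = S_ii q_i, E[P]_ij = S_ij q_j.
    EQ = S * np.outer(q, q)
    np.fill_diagonal(EQ, np.diag(S) * q)
    EP = S[:d, :] * q
    # Solve W E[Q] = E[P]; the small ridge term is only for numerical stability.
    W = np.linalg.solve(EQ + 1e-5 * np.eye(d + 1), EP.T).T
    H = np.tanh(W @ Xb)                    # nonlinear hidden representation (d x n)
    return W, H

def mmd2_rbf(Xs, Xt, gamma=1.0):
    # Biased estimate of the squared maximum mean discrepancy with an RBF kernel,
    # the kind of domain-divergence term DTFC adds to the feature learning.
    def rbf(A, B):
        sq = (A ** 2).sum(0)[:, None] + (B ** 2).sum(0)[None, :] - 2.0 * A.T @ B
        return np.exp(-gamma * sq)
    return rbf(Xs, Xs).mean() + rbf(Xt, Xt).mean() - 2.0 * rbf(Xs, Xt).mean()

In this sketch, stacking amounts to calling msda_bd_layer again on the hidden output of the previous layer, with source and target data pooled since no target labels are used; the multinomial-noise variant mSDA_md described in the abstract would replace the uniform keep probabilities q with data-dependent, per-feature probabilities.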
URI: https://hdl.handle.net/10356/151969
ISSN: 2162-2388
DOI: 10.1109/TNNLS.2018.2868709
Schools: School of Computer Science and Engineering 
Rights: © 2018 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles

Scopus™ Citations: 35 (updated on May 4, 2025)
Web of Science™ Citations: 23 (updated on Oct 28, 2023)
Page view(s): 238 (updated on May 4, 2025)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.