Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/82372
Title: Speech dereverberation for enhancement and recognition using dynamic features constrained deep neural networks and feature adaptation
Authors: Xiao, Xiong
Zhao, Shengkui
Nguyen, Duc Hoang Ha
Zhong, Xionghu
Jones, Douglas L.
Chng, Eng Siong
Li, Haizhou
Keywords: Speech enhancement
Deep neural networks
Dynamic features
Feature adaptation
Robust speech recognition
Reverberation challenge
Beamforming
Issue Date: 2016
Source: Xiao, X., Zhao, S., Nguyen, D. H. H., Zhong, X., Jones, D. L., Chng, E. S., et al. (2016). Speech dereverberation for enhancement and recognition using dynamic features constrained deep neural networks and feature adaptation. EURASIP Journal on Advances in Signal Processing, 2016, 4.
Series/Report no.: EURASIP Journal on Advances in Signal Processing
Abstract: This paper investigates deep neural network (DNN)-based nonlinear feature mapping and statistical linear feature adaptation approaches for reducing reverberation in speech signals. In the nonlinear feature mapping approach, a DNN is trained from a parallel clean/distorted speech corpus to map reverberant and noisy speech coefficients (such as the log magnitude spectrum) to the underlying clean speech coefficients. The constraint imposed by dynamic features (i.e., the time derivatives of the speech coefficients) is used to enhance the smoothness of the predicted coefficient trajectories in two ways. One is to obtain the enhanced speech coefficients with a least squares estimation from the coefficients and dynamic features predicted by the DNN. The other is to incorporate the constraint of dynamic features directly into the DNN training process using a sequential cost function. In the linear feature adaptation approach, a sparse linear transform, called the cross transform, is used to transform multiple frames of speech coefficients to a new feature space. The transform is estimated to maximize the likelihood of the transformed coefficients given a model of clean speech coefficients. Unlike the DNN approach, no parallel corpus is used and no assumption on distortion types is made. The two approaches are evaluated on the REVERB Challenge 2014 tasks. Both speech enhancement and automatic speech recognition (ASR) results show that the DNN-based mappings significantly reduce the reverberation in speech and improve both speech quality and ASR performance. For the speech enhancement task, the proposed dynamic feature constraint helps to improve the cepstral distance, frequency-weighted segmental signal-to-noise ratio (SNR), and log likelihood ratio metrics, while moderately degrading the speech-to-reverberation modulation energy ratio. In addition, the cross transform feature adaptation significantly improves ASR performance for clean-condition trained acoustic models.
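Illustration: the least-squares use of the dynamic-feature constraint mentioned in the abstract can be sketched as follows. This is not the authors' code; it only assumes the general formulation described above, in which the DNN predicts both static coefficients and their deltas, and the enhanced static trajectory is the least-squares solution consistent with both predictions. The delta window, edge handling, and names (delta_matrix, ls_smooth) are illustrative assumptions.

    # Minimal sketch of least-squares trajectory smoothing with a dynamic-feature
    # constraint, assuming the DNN outputs static coefficients and their deltas.
    import numpy as np

    def delta_matrix(num_frames, win=(-0.5, 0.0, 0.5)):
        """Linear operator W that computes delta features from a static
        trajectory (simple 3-point regression window assumed)."""
        W = np.zeros((num_frames, num_frames))
        for t in range(num_frames):
            for k, w in enumerate(win, start=-1):
                idx = min(max(t + k, 0), num_frames - 1)  # replicate edges
                W[t, idx] += w
        return W

    def ls_smooth(static_pred, delta_pred):
        """Least-squares estimate of the static trajectory y that best explains
        both the predicted statics and the predicted deltas:
            y* = argmin_y ||y - static_pred||^2 + ||W y - delta_pred||^2
               = (I + W^T W)^{-1} (static_pred + W^T delta_pred)
        static_pred, delta_pred: arrays of shape (T, D), T frames, D coefficients."""
        T = static_pred.shape[0]
        W = delta_matrix(T)
        A = np.eye(T) + W.T @ W
        b = static_pred + W.T @ delta_pred
        return np.linalg.solve(A, b)

    if __name__ == "__main__":
        # Toy usage: noisy "predicted" statics plus consistent deltas; the
        # least-squares fit pulls the trajectory toward the delta constraint.
        rng = np.random.default_rng(0)
        T, D = 100, 8
        clean = np.linspace(0.0, 1.0, T)[:, None] * np.ones((1, D))
        static_pred = clean + 0.1 * rng.standard_normal((T, D))
        delta_pred = delta_matrix(T) @ clean
        enhanced = ls_smooth(static_pred, delta_pred)
        print("RMSE before:", np.sqrt(np.mean((static_pred - clean) ** 2)))
        print("RMSE after :", np.sqrt(np.mean((enhanced - clean) ** 2)))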
URI: https://hdl.handle.net/10356/82372
http://hdl.handle.net/10220/39943
ISSN: 1687-6172
DOI: 10.1186/s13634-015-0300-4
Schools: School of Computer Engineering 
Research Centres: Temasek Laboratories 
Rights: © 2016 Xiao et al. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Journal Articles
TL Journal Articles

SCOPUS™ Citations: 35 (updated on Jul 16, 2024)
Web of Science™ Citations: 31 (updated on Oct 26, 2023)
Page view(s): 704 (updated on Jul 20, 2024)
Download(s): 251 (updated on Jul 20, 2024)
