Feature Adaptation Using Linear Spectro-Temporal Transform for Robust Speech Recognition
Nguyen, Duc Hoang Ha
Chng, Eng Siong
Date of Issue: 2016
School of Computer Science and Engineering
Spectral information represents short-term speech information within a frame of a few tens of milliseconds, while temporal information captures the evolution of speech statistics over consecutive frames. Motivated by findings that human speech comprehension relies on the integrity of both the spectral content and the temporal envelope of the speech signal, we study a spectro-temporal transform framework that adapts run-time speech features to minimize the mismatch between run-time and training data; its implementations include a cross transform and a cascaded transform. A Kullback-Leibler divergence based cost function is proposed to estimate the transform parameters. We conducted experiments on the REVERB Challenge 2014 task, where clean and multi-condition trained acoustic models are tested with real reverberant and noisy speech. We found that temporal information is important for reverberant speech recognition and that the simultaneous use of spectral and temporal information for feature adaptation is effective. We also investigated the combination of the cross transform with fMLLR; the combination of batch, utterance, and speaker mode adaptation; and multi-condition adaptive training using the proposed transforms. All experiments consistently report significant word error rate reductions.
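To make the idea of a linear spectro-temporal transform concrete, the sketch below shows one plausible form of a "cross" transform: a feature matrix X (frames × spectral dimensions) is adapted by a temporal transform on the left and a spectral transform on the right. The function name `cross_transform`, the matrix names A and B, and the identity initialization are illustrative assumptions for this sketch, not the paper's exact parameterization or its KL-divergence-based estimation procedure.

```python
import numpy as np

def cross_transform(X, A, B):
    """Apply a temporal transform A (left) and a spectral transform B
    (right) to a feature matrix X of shape (frames, spectral_dims):
        Y = A @ X @ B
    Assumed form for illustration; the paper may parameterize differently."""
    return A @ X @ B

rng = np.random.default_rng(0)
T, D = 100, 13                 # frames, spectral feature dimension (e.g. MFCCs)
X = rng.standard_normal((T, D))

A = np.eye(T)                  # temporal transform; identity = no adaptation across frames
B = np.eye(D)                  # spectral transform; identity = no adaptation within a frame

Y = cross_transform(X, A, B)
assert Y.shape == (T, D)
assert np.allclose(Y, X)       # identity transforms leave the features unchanged
```

In an actual adaptation setting, A and B would be estimated at run time (here, per the abstract, by minimizing a KL divergence between run-time and training feature statistics) rather than fixed to the identity.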
IEEE/ACM Transactions on Audio, Speech, and Language Processing
© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: [http://dx.doi.org/10.1109/TASLP.2016.2522646].