Title: Detecting synthetic speech using long term magnitude and phase information
Authors: Tian, Xiaohai; Chng, Eng Siong
Subject: DRNTU::Engineering::Computer science and engineering
Issue Date: 2015
Source: Tian, X., Du, S., Xiao, X., Xu, H., Chng, E. S., & Li, H. (2015). Detecting synthetic speech using long term magnitude and phase information. 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP), 611-615. doi:10.1109/ChinaSIP.2015.7230476
Abstract: Synthetic speech refers to speech signals generated by text-to-speech (TTS) and voice conversion (VC) techniques. Such signals pose a threat to speaker verification (SV) systems, as an attacker may use TTS or VC to synthesize a speaker's voice and cheat the SV system. To address this challenge, we study the detection of synthetic speech using long term magnitude and phase information of speech. As most TTS and VC techniques rely on vocoders for speech analysis and synthesis, we focus on differentiating speech signals generated by vocoders from natural speech. The log magnitude spectrum and two phase-based features, instantaneous frequency derivation and modified group delay, were studied in this work. We conducted experiments on the CMU-ARCTIC database using various speech features and a neural network classifier. During training, synthetic speech detection is formulated as a 2-class classification problem and the neural network is trained to differentiate synthetic speech from natural speech. During testing, the posterior scores generated by the neural network are used for the detection of synthetic speech. The synthetic speech used in training and testing is generated by different types of vocoders and VC methods. Experimental results show that long term information up to 0.3 s is important for synthetic speech detection. In addition, the high dimensional log magnitude spectrum features significantly outperform the low dimensional MFCC features, showing that it is important to retain detailed spectral information for detecting synthetic speech. Furthermore, the two phase-based features perform well and are complementary to the log magnitude spectrum features. The fusion of these features produces an equal error rate (EER) of 0.09%.
URI: https://hdl.handle.net/10356/89638
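For readers unfamiliar with the modified group delay feature named in the abstract, the sketch below shows one common way to compute it for a single windowed frame: the group delay numerator is formed from the DFTs of x[n] and n·x[n], the denominator is a cepstrally smoothed magnitude spectrum, and the result is compressed by exponents alpha and gamma. The default values of alpha, gamma and the liftering length here are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def modified_group_delay(frame, n_fft=512, alpha=0.4, gamma=0.9, lifter=30):
    """Modified group delay spectrum of one windowed frame (a sketch).

    alpha, gamma and the liftering length `lifter` are illustrative
    defaults only; the paper's actual settings may differ.
    """
    eps = 1e-10
    frame = np.asarray(frame, dtype=float)
    n = np.arange(len(frame))

    # DFTs of x[n] and n*x[n]; their cross terms give the group delay numerator.
    X = np.fft.rfft(frame, n_fft)
    Y = np.fft.rfft(n * frame, n_fft)

    # Cepstrally smoothed magnitude spectrum stabilises the denominator:
    # raw |X|^2 has near-zero values that make plain group delay spiky.
    cep = np.fft.irfft(np.log(np.abs(X) + eps), n_fft)
    cep[lifter:n_fft - lifter] = 0.0          # keep only low-quefrency terms
    S = np.exp(np.fft.rfft(cep, n_fft).real)  # smoothed spectral envelope

    tau = (X.real * Y.real + X.imag * Y.imag) / (S ** (2 * gamma) + eps)
    return np.sign(tau) * np.abs(tau) ** alpha
```

Per the abstract's finding that context up to about 0.3 s matters, such per-frame features would then be stacked over neighbouring frames before being fed to the neural network classifier.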
DOI: 10.1109/ChinaSIP.2015.7230476
Rights: © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: http://dx.doi.org/10.1109/ChinaSIP.2015.7230476
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Conference Papers
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.