Full metadata record
DC Field | Value | Language
dc.contributor.author | Panda, Ashish | en
dc.identifier.citation | Panda, A. (2011). Robust text-independent speaker verification in environmental noise. Doctoral thesis, Nanyang Technological University, Singapore. | en
dc.description.abstract | Automatic speaker verification has many potential applications in security, surveillance and access control. In many of these applications, the speaker must be verified from a short, noise-degraded speech utterance. This thesis addresses the problem of robust speaker verification in environmental noise by introducing novel, computationally efficient techniques suited to realistic conditions, and it applies psychoacoustics to realize an adaptive model compensation technique. The probabilistic spectral subtraction (PSS) technique was investigated in detail and subsequently extended, through a novel training scheme, to accommodate noisy training utterances. The proposed training scheme has been shown to reduce the equal error rate, on average, by 20% relative to the conventional procedure. The parallel model combination technique was investigated next owing to its inherent computational efficiency. While it provided a further reduction in the equal error rate compared to PSS, it suffered from an inaccurate noise corruption function and a reliance on accurate noise estimation. To address the inaccuracy of the noise corruption function, the max function, a non-linear function, was evaluated as an alternative. This led to the development of a new generalized compensation scheme that efficiently estimates the transformed model parameters for non-linear noise corruption functions. Experimental evaluations demonstrate that the proposed max-function-based compensation scheme yields a larger performance gain in white noise conditions, whereas the additive function performs better in pink noise conditions.
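As background for the spectral subtraction work mentioned in the abstract, the following is a minimal sketch of classical magnitude-domain spectral subtraction, the technique PSS builds on; it is not the thesis's probabilistic variant. The function name and the over-subtraction factor `alpha` and spectral floor `beta` are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def spectral_subtraction(noisy_mag, noise_mag, alpha=1.0, beta=0.02):
    """Classical magnitude spectral subtraction (illustrative sketch).

    Subtracts an (optionally over-estimated) noise magnitude spectrum
    from the noisy-speech magnitude spectrum, then applies a spectral
    floor so no bin goes negative.
    """
    clean_est = noisy_mag - alpha * noise_mag
    # Spectral floor: retain a small fraction of the noisy magnitude
    # instead of clipping to zero, which reduces "musical noise".
    return np.maximum(clean_est, beta * noisy_mag)

# Toy example: one frame of a 4-bin magnitude spectrum
noisy = np.array([1.0, 0.5, 0.8, 0.2])
noise = np.array([0.3, 0.3, 0.3, 0.3])
print(spectral_subtraction(noisy, noise))
```

In the last bin the subtraction would go negative (0.2 - 0.3), so the floor `beta * noisy_mag` takes over; the first three bins pass through as plain differences.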
To overcome the limitation that neither the max function nor the additive function performs effectively across different types of noise, a novel psychoacoustic noise corruption function is proposed by exploiting the masking properties of noise and speech signals. The psychoacoustic noise corruption function and the generalized compensation scheme were then combined into a psychoacoustic model compensation technique that performs effectively across different types of noise. Experimental evaluations demonstrate that the proposed technique provides superior performance in both white and pink noise conditions, outperforming parallel model combination by 36% and max-function-based model compensation by 24%. A new multi-conditioning approach, based on the psychoacoustic model compensation, has also been proposed to deal with realistic and complex noise conditions. | en
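The two noise corruption functions contrasted in the abstract can be sketched in the log-spectral domain as follows. This is a minimal illustration under stated assumptions: the function names are invented for the example, and actual model compensation transforms the parameters of Gaussian mixture models rather than single feature vectors.

```python
import numpy as np

def additive_corruption(speech_log, noise_log):
    """Additive corruption: speech and noise add in the linear power
    domain, so in the log domain the corrupted value is
    log(exp(s) + exp(n)), computed stably with logaddexp."""
    return np.logaddexp(speech_log, noise_log)

def max_corruption(speech_log, noise_log):
    """Max approximation: each log-spectral channel is dominated by
    whichever of speech or noise is larger."""
    return np.maximum(speech_log, noise_log)

# Toy log-spectral vectors: one channel speech-dominated, one
# noise-dominated, one strongly speech-dominated.
s = np.array([2.0, -1.0, 3.0])
n = np.array([0.5, 0.5, 0.5])
print(additive_corruption(s, n))
print(max_corruption(s, n))
```

The additive result always exceeds the max result, and the two nearly coincide when one source strongly dominates the other; the gap is largest when speech and noise levels are comparable, which is where the choice of corruption function matters most.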
dc.format.extent | 137 p. | en
dc.subject | DRNTU::Engineering::Electrical and electronic engineering::Electronic systems::Signal processing | en
dc.subject | DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition | en
dc.subject | DRNTU::Engineering::Electrical and electronic engineering::Electronic systems::Biometrics | en
dc.title | Robust text-independent speaker verification in environmental noise | en
dc.contributor.supervisor | Thambipillai Srikanthan | en
dc.contributor.school | School of Computer Engineering | en
dc.description.degree | DOCTOR OF PHILOSOPHY (SCE) | en
dc.contributor.research | Centre for High Performance Embedded Systems | en
item.fulltext | With Fulltext | -
Appears in Collections: SCSE Theses
Files in This Item:
File | Description | Size | Format
SCEG0501859H.pdf | | 1.53 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.