Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/90922
Full metadata record
DC Field | Value | Language
dc.contributor.author | Nwe, Tin Lay | en
dc.contributor.author | Foo, Say Wei | en
dc.contributor.author | De Silva, Liyanage C. | en
dc.date.accessioned | 2009-07-31T06:50:13Z | en
dc.date.accessioned | 2019-12-06T17:56:29Z | -
dc.date.available | 2009-07-31T06:50:13Z | en
dc.date.available | 2019-12-06T17:56:29Z | -
dc.date.copyright | 2003 | en
dc.date.issued | 2003 | en
dc.identifier.citation | Nwe, T. L., Foo, S. W., & De Silva, L. C. (2003). Classification of stress in speech using linear and nonlinear features. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2003). Hong Kong: IEEE. | en
dc.identifier.uri | https://hdl.handle.net/10356/90922 | -
dc.identifier.uri | http://hdl.handle.net/10220/5964 | en
dc.description.abstract | In this paper, three systems for the classification of stress in speech are proposed. The first makes use of linear short-time Log Frequency Power Coefficients (LFPC), the second employs Teager Energy Operator (TEO) based Nonlinear Frequency Domain LFPC features (NFD-LFPC), and the third uses TEO based Nonlinear Time Domain LFPC features (NTD-LFPC). The systems were tested using the SUSAS (Speech Under Simulated and Actual Stress) database to categorize five stress conditions individually. Results show that the system using LFPC gives the highest accuracy, followed by the system using NFD-LFPC features, while the system using NTD-LFPC features gives the worst performance. For the system using linear LFPC features, an average accuracy of 84% is achieved. | en
dc.format.extent | 4 p. | en
dc.language.iso | en | en
dc.rights | IEEE International Conference on Acoustics, Speech, and Signal Processing © 2003 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder. http://www.ieee.org/portal/site. | en
dc.subject | DRNTU::Engineering::Electrical and electronic engineering::Electronic systems::Signal processing | en
dc.title | Classification of stress in speech using linear and nonlinear features | en
dc.type | Conference Paper | en
dc.contributor.school | School of Electrical and Electronic Engineering | en
dc.contributor.conference | IEEE International Conference on Acoustics, Speech and Signal Processing (2003 : Hong Kong) | en
dc.identifier.doi | http://dx.doi.org/10.1109/ICASSP.2003.1202281 | en
dc.description.version | Accepted version | en
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
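
The Teager Energy Operator named in the abstract is, in its standard discrete form, psi[x(n)] = x(n)^2 - x(n-1)*x(n+1). The sketch below illustrates only that generic operator, not the paper's full NFD/NTD-LFPC feature pipeline; the function name and example signal are my own.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager Energy Operator: psi[x(n)] = x(n)^2 - x(n-1)*x(n+1).

    Defined for interior samples only, so the output is two samples
    shorter than the input.
    """
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure sinusoid A*cos(w*n) the operator yields the constant
# A^2 * sin(w)^2, which is why it is read as a nonlinear energy measure
# reflecting both amplitude and frequency.
n = np.arange(1000)
A, w = 2.0, 0.3
psi = teager_energy(A * np.cos(w * n))
```

In TEO-based stress features, an operator of this kind is applied to the speech signal (or to band-passed components) before the power measurements that produce the LFPC-style coefficients.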
Appears in Collections:EEE Conference Papers
Files in This Item:
File | Description | Size | Format
C68-02-00009-ICASSP03-TLN1.pdf | Accepted | 139.54 kB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.