Full metadata record
DC Field | Value | Language
dc.contributor.author | Chakraborty, Debsubhra | en_US
dc.contributor.author | Yang, Zixu | en_US
dc.contributor.author | Tahir, Yasir | en_US
dc.contributor.author | Maszczyk, Tomasz | en_US
dc.contributor.author | Dauwels, Justin | en_US
dc.contributor.author | Thalmann, Nadia | en_US
dc.contributor.author | Zheng, Jianmin | en_US
dc.contributor.author | Maniam, Yogeswary | en_US
dc.contributor.author | Nur Amirah | en_US
dc.contributor.author | Tan, Bhing-Leet | en_US
dc.contributor.author | Lee, Jimmy Chee Keong | en_US
dc.identifier.citation | Chakraborty, D., Yang, Z., Tahir, Y., Maszczyk, T., Dauwels, J., Thalmann, N., . . ., Lee, J. C. K. (2018). Prediction of negative symptoms of schizophrenia from emotion related low-level speech signals. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6024-6028. doi:10.1109/ICASSP.2018.8462102 | en_US
dc.description.abstract | Negative symptoms of schizophrenia are often associated with blunted emotional affect, which seriously impedes patients' daily functioning. Affective prosody is almost always adversely affected in such cases and manifests in the low-level acoustic signals of speech. To automate and simplify the assessment of the severity of emotion-related symptoms of schizophrenia, we used these low-level acoustic signals to predict the subjective ratings assigned by a trained psychologist during an interview with each patient. Specifically, we extracted emotion-related acoustic features from the audio recordings of the interviews using the openSMILE toolkit. We analysed the interviews of 78 paid participants (52 patients and 26 healthy controls) in this study. The subjective ratings could be predicted from the objective openSMILE acoustic features with an accuracy of 61-85% using machine-learning algorithms with leave-one-out cross-validation. Furthermore, these objective measures can be reliably used to distinguish between the patient and healthy groups, as supervised learning methods can classify the two groups with 79-86% accuracy. | en_US
dc.description.sponsorship | NRF (Natl Research Foundation, S’pore) | en_US
dc.description.sponsorship | NMRC (Natl Medical Research Council, S’pore) | en_US
dc.rights | © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: |
dc.subject | Engineering::Electrical and electronic engineering | en_US
dc.title | Prediction of negative symptoms of schizophrenia from emotion related low-level speech signals | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.contributor.conference | 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) | en_US
dc.contributor.research | Institute for Media Innovation (IMI) | en_US
dc.description.version | Accepted version | en_US
dc.subject.keywords | Affective Prosody | en_US
dc.citation.conferencelocation | Calgary, AB, Canada | en_US
item.fulltext | With Fulltext | -
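The leave-one-out cross-validation protocol mentioned in the abstract can be illustrated with a minimal sketch: each participant is held out once, a model is trained on the rest, and the held-out prediction is scored. The toy nearest-centroid classifier and the one-dimensional "feature" values below are hypothetical stand-ins for the paper's openSMILE features and machine-learning models, not the actual data or method.

```python
# Sketch of leave-one-out cross-validation (LOOCV), assuming a toy
# nearest-centroid classifier on made-up 1-D acoustic "features".
# Labels: 1 = "patient", 0 = "control" (illustrative only).

def nearest_centroid_predict(train, test_x):
    # Compute the mean feature value per class over the training samples,
    # then assign the test point to the class with the closest mean.
    sums, counts = {}, {}
    for x, y in train:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: sums[y] / counts[y] for y in sums}
    return min(centroids, key=lambda y: abs(centroids[y] - test_x))

def loocv_accuracy(data):
    # Hold out each sample once, train on all the others, and count
    # how many held-out samples are classified correctly.
    correct = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        correct += nearest_centroid_predict(train, x) == y
    return correct / len(data)

# Hypothetical, well-separated toy data for six "participants".
data = [(0.2, 0), (0.3, 0), (0.25, 0), (0.8, 1), (0.9, 1), (0.85, 1)]
print(loocv_accuracy(data))  # → 1.0
```

With 78 participants, as in the study, the same loop would simply run 78 times, which is why LOOCV is a common choice for small clinical samples: every participant contributes to both training and evaluation without ever being tested on data seen in training.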
Appears in Collections: EEE Conference Papers
Files in This Item:
File | Description | Size | Format
ICASSP_2018_Deb.pdf | | 200.27 kB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.