Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/53290
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Wong, Yi Ben.
dc.date.accessioned: 2013-05-31T04:01:49Z
dc.date.available: 2013-05-31T04:01:49Z
dc.date.copyright: 2013 (en_US)
dc.date.issued: 2013
dc.identifier.uri: http://hdl.handle.net/10356/53290
dc.description.abstract: Mood recognition through vocal prosody aims to predict a person's mood from speech. Existing vocal-interface applications such as Microsoft's "Speech to Text", iOS's "Siri" and Android's "S Voice" execute commands given by users, but they do not perform mood recognition. This project therefore seeks to develop a software package for recognizing mood from human speech. Speaker-dependent and speaker-independent modes were investigated to develop a real-time emotion recognition system. Speech databases were collected and studied to improve the system, since the speech database is one of the factors that determines the quality of the emotion recognition model. A process for handling the speech database was proposed to improve accuracy, and several experiments were completed toward this improvement. The speech database was reviewed by other users to verify the quality of the recordings in terms of expressing moods. Experimental results showed that the speaker-dependent mode provides higher accuracy than the speaker-independent mode, and similar research was found to support this finding. In addition, the number of emotions used in the emotion recognition system affects the accuracy of recognizing mood from speech. Emotion-basis data division was found to give better accuracy than speaker-basis data division when processing the speech database to train the emotion recognition model. The human review of the speech database showed that listeners are less accurate at predicting others' emotions across cultural backgrounds. (en_US)
dc.format.extent: 51 p. (en_US)
dc.language.iso: en (en_US)
dc.rights: Nanyang Technological University
dc.subject: DRNTU::Engineering::Mechanical engineering::Mechatronics (en_US)
dc.title: Mood recognition through vocal prosody recognition (en_US)
dc.type: Final Year Project (FYP) (en_US)
dc.contributor.supervisor: Seet Gim Lee, Gerald (en_US)
dc.contributor.school: School of Mechanical and Aerospace Engineering (en_US)
dc.description.degree: Bachelor of Engineering (Mechanical Engineering) (en_US)
dc.contributor.research: Robotics Research Centre (en_US)
item.fulltext: With Fulltext
item.grantfulltext: restricted
Appears in Collections:MAE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File: mC023.pdf (Restricted Access)
Description: Experiment data
Size: 1.19 MB
Format: Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.