Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/40345
Full metadata record
DC Field | Value | Language
dc.contributor.author | Cui, Min. | -
dc.date.accessioned | 2010-06-15T01:22:26Z | -
dc.date.available | 2010-06-15T01:22:26Z | -
dc.date.copyright | 2010 | en_US
dc.date.issued | 2010 | -
dc.identifier.uri | http://hdl.handle.net/10356/40345 | -
dc.description.abstract | Traditional music resource management requires extensive manual effort and is time-consuming, so a new technique for classifying music resources is needed to search for and manage music pieces more effectively. Automatic classification based on music mood has recently attracted considerable research attention, since musical content is a good representation of human emotion. The emotion plane proposed by Thayer, which defines emotion classes dimensionally in terms of arousal and valence, is commonly adopted to avoid ambiguity in describing musical emotion. Other investigations of music mood perception show that humans perceive musical emotions in a way that most closely matches Hevner's categorization of music mood. Empirical studies also show that the main features influencing human perception of musical emotion are tempo, pitch and articulation; these three audio features are therefore extracted in order to classify a piece of music into the correct emotional category. The proposed mood classification approach is implemented with the MIR Toolbox in MATLAB, a toolbox dedicated to extracting musical features such as tonality and rhythm from audio files. A database of music pieces covering all eight clusters of Hevner's categorization was constructed; the training set was used for feature extraction, while the testing set was used for classification and accuracy testing. The results of this automatic music mood classification project are satisfactory, as the categorization outcomes are reasonably accurate. However, owing to limited time and the author's knowledge, some other emotion-related features such as tonality were not extracted, which may reduce classification accuracy; further effort is needed to develop a better categorization approach in the future. (An illustrative feature-extraction sketch follows the metadata record below.) | en_US
dc.format.extent | 76 p. | en_US
dc.language.iso | en | en_US
dc.rights | Nanyang Technological University | -
dc.subject | DRNTU::Engineering | en_US
dc.title | Automatic music mood classification | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Wan Chunru | en_US
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.description.degree | Bachelor of Engineering | en_US
item.grantfulltext | restricted | -
item.fulltext | With Fulltext | -
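The abstract above describes extracting tempo, pitch and articulation from audio and classifying pieces into Hevner's eight mood clusters, with the actual implementation built on the MIR Toolbox in MATLAB. The snippet below is a minimal, hypothetical Python sketch of that kind of workflow, using librosa and scikit-learn as stand-ins for the MIR Toolbox; the file paths, labels and the onset-rate articulation proxy are illustrative assumptions, not the report's code.

```python
# Hypothetical sketch (not the report's MATLAB / MIR Toolbox implementation):
# extract tempo, pitch and a rough articulation proxy per clip, then train a
# simple k-NN classifier on Hevner-style mood labels. Paths and labels are
# placeholders.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mood_features(path):
    """Return [tempo (BPM), median pitch (Hz), onset rate (onsets/s)] for one clip."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # music tempo
    f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                     fmax=librosa.note_to_hz("C7"), sr=sr)    # pitch track
    onsets = librosa.onset.onset_detect(y=y, sr=sr)           # note onsets
    onset_rate = len(onsets) / (len(y) / sr)                  # rough articulation proxy
    return [float(np.atleast_1d(tempo)[0]), float(np.median(f0)), onset_rate]

# Placeholder training set spanning Hevner's eight clusters (labels 1..8).
train_files = ["train/cluster1_example.wav", "train/cluster5_example.wav"]
train_labels = [1, 5]
X_train = np.array([mood_features(f) for f in train_files])

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, train_labels)

# Classify an unseen piece from the testing set.
print(clf.predict([mood_features("test/unknown_piece.wav")]))
```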
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
E3191-091.pdf | Restricted Access | 1.81 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.