Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/40345
Title: Automatic music mood classification
Authors: Cui, Min.
Keywords: DRNTU::Engineering
Issue Date: 2010
Abstract: A new technology for classifying music resources is needed to search for and manage music pieces more effectively, as traditional music resource management requires extensive manual work and is too time-consuming. In recent years, many researchers have studied automatic classification based on music mood; such work is significant because musical content is a good representation of human emotion. The emotion plane proposed by Thayer, which defines emotion classes dimensionally in terms of arousal and valence, is commonly adopted to avoid ambiguity in describing musical emotion. Another investigation of music mood perception shows that humans perceive musical emotions in a way most similar to Hevner's categorization of music mood. Empirical studies also show that the main features influencing human perception of musical emotion are tempo, pitch, and articulation. Hence, these three audio features are extracted in order to classify a piece of music into the correct emotional-expression category. The proposed approach to music mood classification is implemented with MIRtoolbox, a MATLAB toolbox dedicated to extracting musical features such as tonality and rhythm from audio files. A database containing music pieces from all eight clusters of emotional expression was constructed according to Hevner's categorization; its training set was used for feature extraction, while its testing set was used for classification and accuracy testing. The results of this automatic music mood classification project are satisfactory, as the categorization outcomes are reasonably accurate. However, owing to limited time and the author's knowledge, some other emotion-related features, such as tonality, were not extracted, which may reduce classification accuracy. Hence, further work is needed to develop a better categorization approach.
URI: http://hdl.handle.net/10356/40345
Schools: School of Electrical and Electronic Engineering
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
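The abstract names MIRtoolbox within MATLAB as the extraction framework but shows no code. Below is a minimal sketch of how the three named features (tempo, pitch, articulation) might be pulled from a single audio file with standard MIRtoolbox calls; the file name `music.wav` is a placeholder, and the use of attack time as an articulation proxy is an assumption, not taken from the report.

```matlab
% Sketch of feature extraction with MIRtoolbox (assumed pipeline; the
% report's exact steps are not shown in the abstract).
a = miraudio('music.wav');            % load the audio file (placeholder name)

tempo = mirgetdata(mirtempo(a));      % tempo estimate in BPM
pitch = mirgetdata(mirpitch(a));      % pitch estimates in Hz
                                      % (unvoiced frames may yield NaN)

% Articulation is not a single MIRtoolbox feature; attack time is one
% common proxy (short attacks suggest staccato, long attacks legato).
artic = mirgetdata(mirattacktime(a));

% Collapse each feature to a scalar summary for classification.
features = [mean(tempo), mean(pitch(:)), mean(artic)];
```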
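The abstract describes a training/testing split over Hevner's eight clusters but does not name the classifier. Purely to illustrate that step, here is a nearest-centroid sketch; `Xtrain`, `ytrain`, `xtest`, the normalisation, and the choice of classifier are all assumptions, not the report's method.

```matlab
% Illustrative nearest-centroid classification into Hevner's 8 clusters.
% Xtrain : Ntrain-by-3 matrix of [tempo, pitch, articulation] features
% ytrain : Ntrain-by-1 vector of cluster labels in 1..8
% xtest  : 1-by-3 feature vector of the piece to classify

mu = mean(Xtrain);  sd = std(Xtrain);                   % z-score so tempo
Z  = bsxfun(@rdivide, bsxfun(@minus, Xtrain, mu), sd);  % (BPM) does not
z  = (xtest - mu) ./ sd;                                % dominate pitch (Hz)

centroids = zeros(8, 3);
for c = 1:8
    centroids(c, :) = mean(Z(ytrain == c, :), 1);       % per-cluster mean
end
[~, label] = min(sum(bsxfun(@minus, centroids, z).^2, 2));  % nearest cluster
```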
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:

File | Description | Size | Format
---|---|---|---
E3191-091.pdf (Restricted Access) | | 1.81 MB | Adobe PDF