Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/175211
Full metadata record

DC Field: Value (Language)
dc.contributor.author: Quah, Joey (en_US)
dc.date.accessioned: 2024-04-21T10:35:29Z
dc.date.available: 2024-04-21T10:35:29Z
dc.date.issued: 2024
dc.identifier.citation: Quah, J. (2024). Music recommender system based on emotions from facial expression. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175211 (en_US)
dc.identifier.uri: https://hdl.handle.net/10356/175211
dc.description.abstract: Music classification algorithms have become an important component of musical systems. Although current research has had some success in using audio features to classify music, there is a lack of analysis of other crucial musical components, such as the lyrics of a song. Song lyrics can reveal the artist's intention, which may not be fully conveyed through audio features alone. Hence, this paper explores the extent to which song lyrics can further improve the accuracy of emotion-based music classification. The dataset was created by scraping song lyrics from Genius and extracting audio features using the Spotify API. The songs are split into four basic emotion categories: angry, calm, happy, and sad. Both deep learning and transfer learning approaches were employed to build models capable of predicting the emotion from song lyrics and audio features. Results showed an improvement in accuracy when combining both models' predictions. Furthermore, given the deterioration in mental health worldwide, music recommender systems can benefit from an enhanced classification model to recommend music that can improve people's mood. As such, a simple desktop application was also developed to recommend music to users based on their facial emotions detected in real time. The application integrated the combined model predictions for music recommendation and utilised the Spotify API to generate playlists. (en_US)
dc.language.iso: en (en_US)
dc.publisher: Nanyang Technological University (en_US)
dc.subject: Computer and Information Science (en_US)
dc.title: Music recommender system based on emotions from facial expression (en_US)
dc.type: Final Year Project (FYP) (en_US)
dc.contributor.supervisor: Owen Noel Newton Fernando (en_US)
dc.contributor.school: School of Computer Science and Engineering (en_US)
dc.description.degree: Bachelor's degree (en_US)
dc.contributor.supervisoremail: OFernando@ntu.edu.sg (en_US)
dc.subject.keywords: Music classification (en_US)
dc.subject.keywords: Music recommender system (en_US)
dc.subject.keywords: Deep learning (en_US)
item.grantfulltext: restricted
item.fulltext: With Fulltext
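The abstract notes that accuracy improved "when combining both models' predictions" (the lyrics model and the audio-features model), but the record does not specify how the combination was done. A common technique for this is late fusion, i.e. a weighted average of the two models' per-class probabilities; the sketch below is a hypothetical illustration of that idea only, not the author's actual method. The function names, the equal weighting, and the example probability vectors are all assumptions.

```python
# Hypothetical late-fusion sketch for combining a lyrics-based and an
# audio-based emotion classifier. The abstract's four emotion classes:
EMOTIONS = ["angry", "calm", "happy", "sad"]

def fuse_predictions(lyrics_probs, audio_probs, weight=0.5):
    """Weighted average of per-class probabilities from the two models.

    `weight` is the share given to the lyrics model; the remainder goes
    to the audio model. Both inputs are probability vectors over EMOTIONS.
    """
    return [weight * l + (1.0 - weight) * a
            for l, a in zip(lyrics_probs, audio_probs)]

def predict_emotion(lyrics_probs, audio_probs, weight=0.5):
    """Return the emotion label with the highest fused probability."""
    fused = fuse_predictions(lyrics_probs, audio_probs, weight)
    return EMOTIONS[fused.index(max(fused))]

# Example (made-up numbers): the lyrics model leans "sad", the audio
# model leans "calm"; the fused prediction is "sad".
print(predict_emotion([0.1, 0.2, 0.1, 0.6], [0.1, 0.5, 0.2, 0.2]))
```

In practice the fusion weight would be tuned on a validation set; equal weighting is used here only to keep the sketch minimal.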
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
Joey Quah_FYP Report.pdf (Restricted Access), 1.49 MB, Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.