Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/175148
Title: Static visualisations of music mood using deep learning
Authors: Ang, Justin Teng Hng
Keywords: Computer and Information Science
Issue Date: 2024
Publisher: Nanyang Technological University
Source: Ang, J. T. H. (2024). Static visualisations of music mood using deep learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175148
Project: SCSE23-0039
Abstract: Of the many aspects of music, including pitch, volume, tempo, and modality, mood is one of the less frequently visualised, as it is harder to quantify and rather subjective. Additionally, much of today's work on music visualisation focuses on animated representations intended to be viewed while listening along. There is thus a gap for static visualisations of music mood, which can give viewers a quick overview of the overall ambience of a piece of music. A model is proposed that combines the MuLan model for audio embedding with Stable Diffusion-XL Turbo for image generation, producing images from audio files with the aim of visualising the mood of music. The model is trained on a dataset of classical music pieces paired with corresponding images generated using DALL-E. The generated images are analysed, and the model undergoes user testing to evaluate its effectiveness.
URI: https://hdl.handle.net/10356/175148
Schools: School of Computer Science and Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Ang_Teng_Hng_Justin_FYP.pdf (Restricted Access)
Size: 5.31 MB
Format: Adobe PDF

Page view(s): 84 (updated on May 7, 2025)
Download(s): 7 (updated on May 7, 2025)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.