Title: Music visualisation with deep learning
Authors: Chong, Kyrin Sethel
Keywords: Engineering::Computer science and engineering::Computer applications::Arts and humanities
Issue Date: 2023
Publisher: Nanyang Technological University
Source: Chong, K. S. (2023). Music visualisation with deep learning. Final Year Project (FYP), Nanyang Technological University, Singapore.
Abstract: Music visualisation has become an integral part of music performance, appreciation and study. Even before computers, people tried to visualise different aspects of music, from Kandinsky's abstract paintings to Oskar Fischinger's animated films. Music-visual association is an innate sensory response for a small percentage of the population, known as "synaesthetes". Even individuals without synaesthesia associate music with colours consistently enough to reach general agreement. Music visualisation can draw on a wide variety of musical characteristics, of which timbre is among the least visualised. Moreover, timbre is difficult to quantify and categorise, as it is commonly labelled with semantic descriptors that vary from person to person. As such, this project explores an algorithm for a standard timbre-to-colour conversion that is both widely acceptable to the general public and invertible, so that, given a colour, the timbre from which it was generated can be identified.
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)
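The abstract describes a timbre-to-colour conversion. The thesis full text is restricted, so the following is only an illustrative sketch, not the author's method: it maps one common timbre proxy, the spectral centroid, to a hue. The function names and the centroid-to-hue scaling are assumptions made for illustration.

```python
import colorsys
import numpy as np

def spectral_centroid(signal, sr):
    """Amplitude-weighted mean frequency of the spectrum (a common timbre proxy)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

def centroid_to_rgb(centroid, sr):
    """Map centroid in [0, Nyquist] to a hue: dull timbres -> red, bright -> violet.

    The 0.8 factor keeps the hue below the red wrap-around point of the HSV wheel.
    """
    hue = 0.8 * min(centroid / (sr / 2), 1.0)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

# Synthetic examples: a pure tone versus the same tone with a strong high partial.
sr = 22050
t = np.arange(sr) / sr
dull = np.sin(2 * np.pi * 220 * t)
bright = dull + 0.8 * np.sin(2 * np.pi * 4400 * t)

c_dull = spectral_centroid(dull, sr)      # low centroid -> reddish hue
c_bright = spectral_centroid(bright, sr)  # higher centroid -> shifted hue
```

A real system would use richer timbre features (e.g. MFCCs or spectral contrast) and learn the mapping from human colour-association data rather than hard-coding it.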

Files in This Item:
File: Restricted Access (3.18 MB, Adobe PDF)

Page view(s)

Updated on Sep 26, 2023

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.