Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/153200
Title: Controllable music : supervised learning of disentangled representations for music generation
Authors: Watcharasupat, Karn N.
Keywords: Engineering::Electrical and electronic engineering::Electronic systems::Signal processing; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Watcharasupat, K. N. (2021). Controllable music : supervised learning of disentangled representations for music generation. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/153200
Project: CY3001-211
Abstract: Controllability, despite being a much-desired property of a generative model, remains an ill-defined concept that is difficult to measure. In the context of neural music generation, a controllable system often implies an intuitive interaction between human agents and the neural model, allowing the relatively opaque neural model to be steered by a human in a semantically understandable manner. In this work, we aim to tackle controllable music generation in the raw audio domain, which is significantly less explored than the symbolic domain. Specifically, we focus on controlling multiple continuous, potentially interdependent timbral attributes of a musical note using a variational autoencoder (VAE) framework, along with the groundwork research needed to support this goal. This work consists of three main parts. The first formulates the concept of *controllability* and how to evaluate the latent manifold of a deep generative model in the presence of multiple interdependent attributes. The second focuses on the development of a composite latent space architecture for the VAE, in order to allow encoding of interdependent attributes while retaining an easily sampled, disentangled prior. Proof-of-concept work for the second part was performed on several standard vision disentanglement learning datasets. Finally, the last part applies the composite latent space model to music generation in the raw audio domain and discusses the evaluation of the model against the criteria defined in the first part of this project. All in all, given the relatively uncharted nature of controllable generation in the raw audio domain, this project provides foundational work for the evaluation of controllable generation as a whole, and a promising proof of concept for musical audio generation with timbral control using variational autoencoders.
URI: https://hdl.handle.net/10356/153200
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
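The abstract's core mechanism, a VAE latent vector split into attribute-tied dimensions and free dimensions so that supervised attributes can be controlled while the rest of the prior stays easy to sample, can be sketched in a few lines. This is a minimal illustrative sketch only: the dimension counts, the attribute names, and the simple one-dimension-per-attribute squared-error regularizer below are assumptions for illustration, not the architecture or loss actually used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen for illustration only.
N_ATTRS = 3            # e.g. three timbral attributes of a note
N_FREE = 13            # unregularized ("free") latent dimensions
LATENT = N_ATTRS + N_FREE

def reparameterize(mu, log_var):
    """Standard VAE reparameterization: z = mu + sigma * eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def attribute_regularization(z, attrs):
    """Toy supervised term tying the first N_ATTRS latent
    dimensions to measured attribute values, one per dimension.
    (An assumed stand-in for whatever supervision the thesis uses.)"""
    return float(np.mean((z[:, :N_ATTRS] - attrs) ** 2))

# Fake encoder outputs for a batch of 4 notes.
mu = rng.standard_normal((4, LATENT))
log_var = 0.1 * rng.standard_normal((4, LATENT))
attrs = rng.standard_normal((4, N_ATTRS))

z = reparameterize(mu, log_var)
reg = attribute_regularization(z, attrs)
print(z.shape, reg)
```

At generation time, control would amount to overwriting `z[:, :N_ATTRS]` with the desired attribute values before decoding, while sampling the free dimensions from the prior.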

###### Files in This Item:
File: OFYP_Final_Report.pdf (Restricted Access)
