Title: Music generation with deep learning techniques
Authors: Lee, Daniel Yu Sheng
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Lee, D. Y. S. (2021). Music generation with deep learning techniques. Final Year Project (FYP), Nanyang Technological University, Singapore.
Abstract: This report demonstrated the use of conditioning inputs, together with an appropriate model architecture, to improve the structure of music generated through deep learning. Existing challenges in generating music with deep learning, in particular capturing structure, were reviewed. The use of a bar counter, the occurrence of repeated motifs, and the form of a piece as conditioning inputs was hypothesized to capture the long-term structure of music. The proposed model was then designed using Bidirectional Long Short-Term Memory (Bi-LSTM) and attention layers to take in these conditioning inputs. To evaluate the performance of the proposed model, quantitative analysis was performed on the proposed model, the same model without conditioning inputs, and a baseline LSTM model. A user study was then conducted to compare music samples generated by the three models. Evaluation results verified that, by utilising the three conditioning inputs, the proposed model generated more pleasant-sounding and structurally coherent music.
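The abstract describes conditioning a sequence model on a bar counter, a repeated-motif indicator, and the form of a piece. The report's own implementation is not available here; the following is a minimal numpy sketch, under assumed encodings (normalised bar counter, binary motif flag, one-hot form label), of how such conditioning inputs could be concatenated onto per-timestep note features, together with a simple dot-product self-attention pool of the kind an attention layer would apply over recurrent hidden states. All names and sizes are illustrative, not from the report.

```python
import numpy as np

# Hypothetical sketch (not the author's code): concatenating the three
# conditioning inputs from the abstract onto per-timestep note features,
# plus a dot-product self-attention over the resulting sequence.

rng = np.random.default_rng(0)

VOCAB = 16    # toy pitch vocabulary size (assumed)
EMBED = 8     # toy note-embedding size (assumed)
N_FORMS = 3   # e.g. sections A / B / A in an assumed form encoding
T = 12        # timesteps in the toy sequence

embedding = rng.normal(size=(VOCAB, EMBED))

def conditioned_features(notes, bar_counter, motif_flag, form_ids):
    """Concatenate note embeddings with the three conditioning inputs."""
    note_emb = embedding[notes]                        # (T, EMBED)
    bars = (bar_counter / bar_counter.max())[:, None]  # normalised bar counter
    motif = motif_flag[:, None].astype(float)          # 1 where a motif repeats
    form = np.eye(N_FORMS)[form_ids]                   # one-hot form label
    return np.concatenate([note_emb, bars, motif, form], axis=1)

def attention_pool(states):
    """Scaled dot-product self-attention over a sequence of states."""
    scores = states @ states.T / np.sqrt(states.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # rows sum to 1
    return weights @ states                            # attention-weighted mix

notes = rng.integers(0, VOCAB, size=T)
bar_counter = np.repeat(np.arange(1, 4), 4)             # 3 bars of 4 steps
motif_flag = np.array([0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0])  # motif recurs
form_ids = np.repeat([0, 1, 0], 4)                      # A-B-A form

x = conditioned_features(notes, bar_counter, motif_flag, form_ids)
ctx = attention_pool(x)
print(x.shape, ctx.shape)  # -> (12, 13) (12, 13)
```

In the full model described by the abstract, features like `x` would feed a Bi-LSTM, with attention applied over its hidden states rather than the raw inputs as done here for brevity.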
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP Final Report Lee Yu Sheng Daniel.pdf (Restricted Access)
Size: 1.32 MB
Format: Adobe PDF

Page view(s)

Updated on Jun 29, 2022





Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.