Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/138858
Title: Deep learning techniques to derive descriptions from audio signals
Authors: Wu, Mengkai
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2020
Publisher: Nanyang Technological University
Project: PSCSE18-0064
Abstract: With the rapid growth of the Internet, the amount of video and audio data is increasing sharply, and with the development of big data and artificial intelligence, audio analysis and recognition technology is becoming more important. As the demand for audio classification grows, many methods have been introduced to classify audio and generate descriptions for it. This project uses machine learning to achieve that goal by building models with Convolutional Neural Networks and other neural networks, such as Recurrent Neural Networks, to categorize audio and generate descriptions for it. This report presents the research I have done on generating audio descriptions using different neural network models and approaches, covering audio data downloading, feature extraction, image generation, and classifier training, through to the design and implementation of the final audio description. In this project, after comparing several types of deep neural networks, we found that deep convolutional neural networks achieve the best overall accuracy.
URI: https://hdl.handle.net/10356/138858
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
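
The abstract above describes a pipeline of feature extraction, spectrogram image generation, and CNN-based classification. Since the report's full text is restricted, the following is only a minimal sketch of that kind of pipeline, assuming librosa for mel-spectrogram extraction and TensorFlow/Keras for the classifier; the function names, file paths, input shape, and class count are illustrative placeholders, not details taken from the report.

# Illustrative sketch only: assumes librosa and TensorFlow/Keras are installed.
# Paths, label count, and spectrogram shape are hypothetical placeholders.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers

def audio_to_melspectrogram(path, sr=22050, n_mels=128):
    """Load an audio clip and convert it to a log-scaled mel-spectrogram 'image'."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

def build_cnn_classifier(input_shape=(128, 431, 1), num_classes=10):
    """A small convolutional classifier over spectrogram images."""
    model = tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch: turn a clip into a fixed-size spectrogram, then train the CNN.
# spec = audio_to_melspectrogram("clip.wav")      # hypothetical audio file
# X = spec[np.newaxis, ..., np.newaxis]           # shape (1, n_mels, frames, 1)
# model = build_cnn_classifier(input_shape=X.shape[1:], num_classes=10)
# model.fit(X, np.array([0]), epochs=1)

A description for the audio can then be produced from the predicted class labels, which is consistent with the classify-then-describe approach the abstract outlines.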

Files in This Item:
Final Year Project Report.pdf (Restricted Access, 1.4 MB, Adobe PDF)

Page view(s): 496 (updated on Mar 9, 2025)
Download(s): 51 (updated on Mar 9, 2025)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.