Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/137926
Title: Adversarial attacks on RNN-based deep learning systems
Authors: Loi, Chii Lek
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Engineering::Computer science and engineering::Computing methodologies::Simulation and modeling
Issue Date: 2020
Publisher: Nanyang Technological University
Project: SCSE 19-0319
Abstract: Automatic Speech Recognition (ASR) systems have grown in prevalence alongside advances in deep learning. Built into many Intelligent Voice Control (IVC) systems such as Alexa, Siri and Google Assistant, ASR has become an attractive target for adversarial attacks. The objective of this research project is to create a black-box over-the-air (OTA) attack system that mutates an audio sample into an adversarial version with imperceptible differences, such that it is transcribed as the targeted word by the ASR. In this paper, we demonstrate the feasibility and effectiveness of such an attack system in generating adversarial perturbations for the DeepSpeech ASR. (An illustrative sketch of a generic black-box perturbation loop appears after this record.)
URI: https://hdl.handle.net/10356/137926
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
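
Illustrative sketch: the full report is restricted, so its actual attack algorithm is not reproduced here. As a rough illustration of the black-box setting described in the abstract, the Python sketch below shows a generic random-search perturbation loop. It treats the ASR as an opaque transcribe(audio) callable and keeps a candidate perturbation only if it moves the transcription closer to the target phrase under edit distance. All names and parameters (transcribe, eps, iters) are illustrative assumptions, not the author's implementation, and a real over-the-air attack would additionally need to account for speaker, room, and microphone distortion.

import numpy as np

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings (used as a fitness signal)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def black_box_attack(audio, transcribe, target, eps=200, iters=1000, seed=0):
    """Random-search sketch: look for a small perturbation of 16-bit PCM
    `audio` that makes the black-box `transcribe` callable output `target`.
    `eps` caps the per-sample perturbation as a crude imperceptibility proxy."""
    rng = np.random.default_rng(seed)
    best = audio.astype(np.int16)
    best_score = edit_distance(transcribe(best), target)
    for _ in range(iters):
        noise = rng.integers(-eps, eps + 1, size=audio.shape)
        candidate = np.clip(audio.astype(np.int32) + noise,
                            -32768, 32767).astype(np.int16)
        score = edit_distance(transcribe(candidate), target)
        if score < best_score:                   # keep only candidates that
            best, best_score = candidate, score  # move the transcription closer
        if best_score == 0:                      # exact target reached
            break
    return best, best_score

With Mozilla's DeepSpeech Python bindings, transcribe could plausibly be a wrapper around a loaded deepspeech.Model's stt() method, although the report's own evaluation setup is not visible from this record.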

Files in This Item:
Final Report.pdf (Restricted Access), 1.04 MB, Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.