Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/136558
Title: Investigating robustness of deep learning against adversarial examples
Authors: Chua, Shan Jing
Keywords: Engineering
Engineering::Computer science and engineering
Issue Date: 2019
Publisher: Nanyang Technological University
Abstract: Deep learning has achieved unprecedented performance in many fields, such as computer vision. Deep neural networks have shown impressive results on complex problems, yet they remain vulnerable to adversarial attacks: subtle, often imperceptible perturbations that, when added to the inputs, can cause models to predict incorrectly. In this report, we present the effects of adversarial perturbations restricted to their low-frequency subspace, using the MNIST and CIFAR-10 datasets. We also experimented with generating a universal perturbation restricted to its low-frequency subspace. The generated image-agnostic perturbation was then tested against a common adversarial defense, JPEG compression, to observe the effectiveness of such defenses against the perturbation. (A minimal code sketch illustrating these techniques appears after the item record below.)
URI: https://hdl.handle.net/10356/136558
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)
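The full text of this report is restricted, so its actual code is not reproduced here. The following is a minimal, illustrative sketch of the two techniques the abstract names: restricting a perturbation to its low-frequency subspace (interpreted here as masking high-frequency 2-D DCT coefficients, a common construction) and round-tripping an adversarial image through JPEG compression as a defense. The cutoff k, the 28x28 grayscale input, and the function names are assumptions made for illustration, not details taken from the report.

# Hypothetical sketch, not the report's code: low-frequency perturbations
# and a JPEG-compression defense, assuming grayscale MNIST-sized inputs.
import io
import numpy as np
from scipy.fft import dctn, idctn
from PIL import Image

def low_freq_project(perturbation, k):
    """Keep only the k x k lowest-frequency 2-D DCT coefficients (assumed cutoff)."""
    coeffs = dctn(perturbation, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:k, :k] = 1.0  # low frequencies occupy the top-left corner of the DCT grid
    return idctn(coeffs * mask, norm="ortho")

def jpeg_compress(image, quality=75):
    """Round-trip a [0, 1] grayscale image through JPEG at the given quality."""
    buf = io.BytesIO()
    Image.fromarray((image * 255).clip(0, 255).astype(np.uint8)).save(
        buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf), dtype=np.float32) / 255.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((28, 28)).astype(np.float32)  # stand-in for an MNIST image
    delta = 0.1 * rng.standard_normal((28, 28)).astype(np.float32)
    delta_lf = low_freq_project(delta, k=8)          # restrict to low-frequency subspace
    adversarial = np.clip(clean + delta_lf, 0.0, 1.0)
    defended = jpeg_compress(adversarial)            # the defense the report evaluates
    print("mean |defended - clean|:", float(np.abs(defended - clean).mean()))

In the report's setting, the perturbation would come from an attack (for example, a universal perturbation optimized across many images) rather than random noise; the sketch only shows where the low-frequency restriction and the JPEG defense fit into the pipeline.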

Files in This Item:
File: INVESTIGATING ROBUSTNESS OF DEEP LEARNING AGAINST ADVERSARIAL EXAMPLES.pdf (Restricted Access)
Size: 915.43 kB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.