Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/137946
Title: Demystifying adversarial attacks on neural networks
Authors: Yip, Lionell En Zhi
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2020
Publisher: Nanyang Technological University
Project: SCSE19-0306
Abstract: The prevalent use of neural networks for classification tasks has drawn attention to the security and integrity of the neural networks on which industries rely. Adversarial examples remain easily recognizable to humans, yet neural networks struggle to correctly classify images containing adversarial perturbations. I introduce a framework for understanding how neural networks perceive inputs and how this relates to adversarial attack methods. I demonstrate that there is no correlation between the region of importance and the region of attack, and that adversarial examples within a class of a dataset share a frequently perturbed region. I attempt to improve classification performance by exploiting the differences between clean inputs and adversarial attacks, and I demonstrate a novel augmentation method for improving prediction performance on adversarial samples.
URI: https://hdl.handle.net/10356/137946
Schools: School of Computer Science and Engineering
Research Centres: Parallel and Distributed Computing Centre
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
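The abstract refers to adversarial perturbations without specifying how they are generated; as an illustration only, a minimal sketch of one common attack (the Fast Gradient Sign Method) is shown below. The model, epsilon value, and input shapes are hypothetical placeholders, not taken from the report.

```python
# Illustrative sketch only: the report does not state its attack method here.
# FGSM is shown as one common way to produce adversarial perturbations.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x (pixel values in [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Hypothetical toy classifier on 28x28 grayscale inputs.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)       # batch of 4 random "images"
    y = torch.randint(0, 10, (4,))     # random labels
    x_adv = fgsm_perturb(model, x, y)
    print((x_adv - x).abs().max())     # per-pixel change bounded by epsilon
```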
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| NTU_SCSE19-0306-U1721954J.pdf (Restricted Access) | | 2.4 MB | Adobe PDF |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.