Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/138719
Title: Detecting adversarial samples for deep neural networks through mutation testing
Authors: Tan, Kye Yen
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Engineering::Electrical and electronic engineering
Issue Date: 2020
Publisher: Nanyang Technological University
Project: A2040-191
Abstract: Deep Neural Networks (DNNs) are adept at many tasks; image recognition in particular is commonly performed with a subset of DNNs called Convolutional Neural Networks (CNNs). However, DNNs are vulnerable to adversarial attacks: malicious modifications to input samples that cause the network to fail at its task. In image recognition, the focus of this project, adversarial attacks cause the CNN to misclassify images. The attacks are carried out by deliberately adding perturbations, imperceptible to humans, to images before they are fed into the CNN. This is a serious security weakness of CNNs and may lead to disastrous consequences in security-reliant applications, so finding a defence mechanism against these attacks is imperative for the safe operation of CNNs. The first line of defence is the detection of adversarial images. Detection has been studied intensively with the twin goals of high accuracy and real-time operation, but at present a high detection rate is computationally intensive, which increases the time needed to detect adversaries. In this final year project, two methods were therefore proposed to detect adversarial images with lower computational effort. The first method employs the concept of network prediction inconsistency, which has shown that adversarial inputs are more sensitive to model mutation than natural inputs. It improves on previous mutation-testing methods by applying partial mutations to the statistically determined most distinguishable areas of the CNN, instead of blindly applying random mutations; these targeted mutations change the output prediction for adversarial inputs and thereby identify them. The second method uses the difference in the layer-wise firing-neuron-rate distribution between adversarial and normal images to build a decision tree for adversarial detection. Both methods showed reasonable detection rates. (Illustrative sketches of both detection ideas appear after the item record below.)
URI: https://hdl.handle.net/10356/138719
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
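
The full report is under restricted access, so the two detection ideas summarised in the abstract are illustrated here only as minimal, hypothetical sketches in Python (PyTorch assumed). None of the code below is taken from the report: the function names, mutation rate, noise scale, number of mutants, decision threshold, choice of layers, and decision-tree settings are all illustrative assumptions. In particular, the report's first method mutates statistically determined, most distinguishable areas of the CNN, whereas this sketch perturbs a uniformly random subset of weights purely to keep the example short.

# Hypothetical sketch of the mutation-testing idea: mutate a small fraction of
# a CNN's weights and flag an input as adversarial if its predicted label
# changes under mutation more often than a threshold, since adversarial inputs
# are more sensitive to model mutation than natural ones.
import copy
import torch

def mutate_model(model, mutation_rate=0.01, sigma=0.05):
    # Return a copy of the model with a random subset of weights perturbed by
    # Gaussian noise. (The report targets statistically chosen areas of the
    # CNN; uniform random selection is used here only for brevity.)
    mutant = copy.deepcopy(model)
    with torch.no_grad():
        for param in mutant.parameters():
            mask = (torch.rand_like(param) < mutation_rate).float()
            param.add_(mask * sigma * torch.randn_like(param))
    return mutant

def label_change_rate(model, x, n_mutants=20):
    # Fraction of mutated models whose prediction for input batch x differs
    # from the unmutated model's prediction.
    model.eval()
    with torch.no_grad():
        base = model(x).argmax(dim=1)
        changes = 0
        for _ in range(n_mutants):
            mutant = mutate_model(model)
            mutant.eval()
            if not torch.equal(mutant(x).argmax(dim=1), base):
                changes += 1
    return changes / n_mutants

def looks_adversarial(model, x, threshold=0.3):
    # The threshold is an illustrative assumption, not a value from the report.
    return label_change_rate(model, x) > threshold

The second sketch illustrates the layer-wise firing-rate idea: one feature per chosen layer (the fraction of neurons with positive activations), with a scikit-learn decision tree assumed as the classifier on labelled normal and adversarial examples.

# Hypothetical sketch of the second method: extract layer-wise firing rates via
# forward hooks (suitable for conv/linear/ReLU layers whose outputs are plain
# tensors), then train a decision tree on these features.
import torch
from sklearn.tree import DecisionTreeClassifier

def layerwise_firing_rates(model, x, layers):
    # One firing-rate feature per listed layer for input batch x.
    rates = []
    def hook(module, inputs, output):
        rates.append((output > 0).float().mean().item())
    handles = [layer.register_forward_hook(hook) for layer in layers]
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return rates

# Assumed usage: X holds one row of firing rates per training image,
# y holds 0 for normal and 1 for adversarial images.
# detector = DecisionTreeClassifier(max_depth=5).fit(X, y)
# detector.predict([layerwise_firing_rates(model, image, layers)])
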

Files in This Item:
File: FYP Final Report.pdf (Restricted Access)
Size: 2.41 MB
Format: Adobe PDF

