Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/138719
Full metadata record
DC Field | Value | Language
dc.contributor.author | Tan, Kye Yen | en_US
dc.date.accessioned | 2020-05-12T04:04:01Z | -
dc.date.available | 2020-05-12T04:04:01Z | -
dc.date.issued | 2020 | -
dc.identifier.uri | https://hdl.handle.net/10356/138719 | -
dc.description.abstract | Deep Neural Networks (DNNs) excel at many tasks; the well-known task of image recognition is handled by a subset of DNNs called Convolutional Neural Networks (CNNs). However, DNNs are vulnerable to adversarial attacks: malicious modifications to input samples that cause the network to fail at its task. In image recognition, which is the focus of this project, adversarial attacks cause the CNN to misclassify images. These attacks are mounted by deliberately adding perturbations, imperceptible to humans, to images before they are fed into the CNN. This is a serious security weakness of CNNs that may lead to disastrous consequences in security-reliant applications, so finding a defence mechanism against these attacks is imperative to ensure the safe operation of CNNs. The first line of defence against adversarial attacks is the detection of adversarial images. Detection has been studied intensively with the aim of achieving both high accuracy and real-time operation; at present, high detection rates are computationally intensive, which increases the time needed to detect adversaries. Therefore, in this final year project, two methods are proposed to detect adversarial images with lower computational effort. The first method employs the network prediction inconsistency concept, which has shown that adversarial inputs are more sensitive to model mutation than natural inputs. It improves on previous mutation testing methods by applying partial mutation to the statistically determined most distinguishable areas of the CNN, instead of blindly applying random mutations; the resulting changes in the output prediction are used to decide whether an input is adversarial (a code sketch of this idea follows the metadata record below). The second method uses the difference in the layer-wise firing-neuron-rate distribution between adversarial and normal images to build a decision tree for adversarial detection. Both methods have shown reasonable detection rates. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.relation | A2040-191 | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence | en_US
dc.subject | Engineering::Electrical and electronic engineering | en_US
dc.title | Detecting adversarial samples for deep neural networks through mutation testing | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Chang Chip Hong | en_US
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.description.degree | Bachelor of Engineering (Electrical and Electronic Engineering) | en_US
dc.contributor.supervisoremail | echchang@ntu.edu.sg | en_US
item.grantfulltext | restricted | -
item.fulltext | With Fulltext | -
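The abstract's first method rests on the observation that an adversarial input flips its predicted label more readily than a natural input when the model is slightly mutated. Below is a minimal, NumPy-only sketch of that label-change-rate idea, assuming a toy linear classifier and uniformly random partial mutations; the function names, mutation parameters, and flagging threshold are illustrative and do not reproduce the report's actual procedure, which selects mutation sites statistically rather than at random.

```python
# Hedged sketch: label-change-rate detection under partial model mutation.
# All names, parameters, and thresholds are illustrative assumptions,
# not the report's implementation.
import numpy as np

rng = np.random.default_rng(0)

def predict(weights, x):
    """Toy linear classifier: returns the argmax class for input x."""
    return int(np.argmax(weights @ x))

def label_change_rate(weights, x, n_mutants=50, mutate_frac=0.1, sigma=0.05):
    """Fraction of mutated models whose prediction differs from the original.

    Only a random `mutate_frac` subset of weights is perturbed ("partial
    mutation"); the report chooses this subset statistically instead of
    uniformly at random.
    """
    base = predict(weights, x)
    flips = 0
    for _ in range(n_mutants):
        mutant = weights.copy()
        mask = rng.random(mutant.shape) < mutate_frac      # weights to mutate
        mutant[mask] += rng.normal(0.0, sigma, size=mask.sum())
        if predict(mutant, x) != base:
            flips += 1
    return flips / n_mutants

if __name__ == "__main__":
    n_classes, n_features = 10, 32
    W = rng.normal(size=(n_classes, n_features))

    x_clean = rng.normal(size=n_features)                  # stand-in for a natural input
    x_adv = x_clean + 0.3 * rng.normal(size=n_features)    # stand-in for a perturbed input

    for name, x in [("clean", x_clean), ("adversarial", x_adv)]:
        lcr = label_change_rate(W, x)
        # A higher label-change rate suggests the input sits close to a decision
        # boundary, which is the cue used to flag it as adversarial.
        print(f"{name}: label change rate = {lcr:.2f} -> {'flag' if lcr > 0.2 else 'pass'}")
```

In practice the statistic would be computed over mutants of a trained CNN, and the flagging threshold calibrated on held-out natural inputs.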
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
FYP Final Report.pdf | Restricted Access | 2.41 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.