Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/138719
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tan, Kye Yen | en_US |
dc.date.accessioned | 2020-05-12T04:04:01Z | - |
dc.date.available | 2020-05-12T04:04:01Z | - |
dc.date.issued | 2020 | - |
dc.identifier.uri | https://hdl.handle.net/10356/138719 | - |
dc.description.abstract | Deep Neural Networks (DNNs) perform well on many tasks; image recognition, the best-known example, typically uses a subset of DNNs called Convolutional Neural Networks (CNNs). However, DNNs are vulnerable to adversarial attacks: malicious modifications of input samples that cause the network to fail at its task. In image recognition, the focus of this project, adversarial attacks cause the CNN to misclassify images. The attacks are mounted by deliberately adding perturbations, imperceptible to humans, to images before they are fed into the CNN. This is a serious security weakness of CNNs and may lead to disastrous consequences in security-reliant applications, so finding a defence mechanism against these attacks is imperative for the safe operation of CNNs. The first line of defence is the detection of adversarial images. Detection methods have been scrutinised for both high accuracy and real-time operation, but a high detection rate currently demands heavy computation, which increases the time needed to detect adversaries. In this final year project, two methods were therefore proposed to detect adversarial images with lower computational effort. The first method builds on the network prediction inconsistency concept, which has shown that adversarial inputs are more sensitive to model mutation than natural inputs. It improves on previous mutation testing methods by applying partial mutation to the statistically determined most distinguishable areas of the CNN, instead of blindly applying random mutations. These targeted mutations change the output prediction more often for adversarial inputs, which is used to flag them as adversarial. The second method uses the difference in layer-wise firing-neuron-rate distributions between adversarial and normal images to build a decision tree for adversarial detection. Both methods achieved reasonable detection rates. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Nanyang Technological University | en_US |
dc.relation | A2040-191 | en_US |
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence | en_US |
dc.subject | Engineering::Electrical and electronic engineering | en_US |
dc.title | Detecting adversarial samples for deep neural networks through mutation testing | en_US |
dc.type | Final Year Project (FYP) | en_US |
dc.contributor.supervisor | Chang Chip Hong | en_US |
dc.contributor.school | School of Electrical and Electronic Engineering | en_US |
dc.description.degree | Bachelor of Engineering (Electrical and Electronic Engineering) | en_US |
dc.contributor.supervisoremail | echchang@ntu.edu.sg | en_US |
item.fulltext | With Fulltext | - |
item.grantfulltext | restricted | - |
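The abstract above describes the two detection techniques only in prose, and the report itself is under restricted access, so the following are hedged, illustrative sketches rather than the author's implementation. Both assume PyTorch (plus scikit-learn for the decision tree), and every name in them, including `mutate_model`, `sensitive_layers`, `firing_rates`, the noise scale `sigma`, the mutant count and the decision threshold, is a hypothetical choice made for illustration, not taken from the project.

A minimal sketch of the first method (partial model mutation testing), assuming Gaussian weight noise as the mutation operator: mutate only the layers identified as most distinguishable, and flag an input as adversarial if the mutated models change its predicted label unusually often.

```python
# Hedged sketch of mutation-based adversarial detection (method 1).
# Assumed, not from the record: PyTorch, a pretrained CNN `model`,
# Gaussian weight noise as the mutation operator, and a hypothetical
# list `sensitive_layers` naming the most distinguishable layers.
import copy
import torch


def mutate_model(model, sensitive_layers, sigma=0.01):
    """Return a copy of `model` with Gaussian noise added to the
    parameters of the selected layers only (partial mutation)."""
    mutant = copy.deepcopy(model)
    with torch.no_grad():
        for name, param in mutant.named_parameters():
            if any(name.startswith(layer) for layer in sensitive_layers):
                param.add_(sigma * torch.randn_like(param))
    return mutant


def label_change_rate(model, x, sensitive_layers, n_mutants=20, sigma=0.01):
    """Fraction of mutated models whose prediction on input `x`
    (shape 1xCxHxW) differs from the unmutated model's prediction."""
    model.eval()
    with torch.no_grad():
        base_label = model(x).argmax(dim=1).item()
        changes = 0
        for _ in range(n_mutants):
            mutant = mutate_model(model, sensitive_layers, sigma)
            mutant.eval()
            if mutant(x).argmax(dim=1).item() != base_label:
                changes += 1
    return changes / n_mutants


def is_adversarial(model, x, sensitive_layers, threshold=0.3, **kwargs):
    """Flag `x` as adversarial if it is unusually sensitive to mutation."""
    return label_change_rate(model, x, sensitive_layers, **kwargs) > threshold
```

A minimal sketch of the second method (layer-wise firing-rate features fed to a decision tree), assuming the firing rate of a layer is the fraction of positive activations after its ReLU and that the network uses a distinct ReLU module per layer.

```python
# Hedged sketch of method 2: layer-wise firing-rate features classified
# by a decision tree. Assumed, not from the record: PyTorch and
# scikit-learn; "firing rate" taken as the fraction of positive
# activations after each ReLU module.
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier


def firing_rates(model, x):
    """Return one firing-rate value per ReLU layer for a single input."""
    rates = []
    hooks = []

    def hook(_module, _inp, out):
        rates.append((out > 0).float().mean().item())

    for module in model.modules():
        if isinstance(module, nn.ReLU):
            hooks.append(module.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return rates


# Training the detector on labelled examples (0 = natural, 1 = adversarial):
# X = [firing_rates(model, img) for img in images]
# detector = DecisionTreeClassifier(max_depth=5).fit(X, labels)
# detector.predict([firing_rates(model, new_img)])
```

In both sketches the noise scale, decision threshold and tree depth would have to be tuned on a validation set of natural and adversarial images; the abstract does not specify the values used in the project.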
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
FYP Final Report.pdf (Restricted Access) | | 2.41 MB | Adobe PDF | View/Open |
Page view(s): 301 (updated on Mar 28, 2024)
Download(s): 22 (updated on Mar 28, 2024)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.