Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/140900
Title: Statistical diagnosis system for adversarial examples
Authors: Wu, Yuting
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2020
Publisher: Nanyang Technological University
Abstract: Deep Neural Networks (DNNs) are powerful for classification tasks, finding potential links within datasets with high accuracy and speed. However, DNNs are also fragile to intentionally crafted adversarial attacks, especially in the field of image analysis, where the concept of adversarial examples first emerged. These adversarial perturbations are designed to be quasi-imperceptible to human vision but can easily fool deep models with high confidence. This situation has aroused great research interest in the detection of and defense against adversarial examples, to improve the reliability of deep neural networks, which will play an important role in future safety and security systems. In view of that, this work first gives a brief overview of common attacks on the MNIST and CIFAR-10 datasets, along with a general introduction to what adversarial examples are and how they are generated. After that, different kinds of defense methods are introduced, with a main focus on statistical defenses. Experiments are conducted to evaluate the merits and demerits of these existing defense methods. In chapter 4, an improvement on the Principal Component Analysis with Gaussian Mixture Model method is proposed, which enables it to detect adversarial examples in datasets attacked by the C&W attack. In chapter 5, this dissertation proposes an improved Kernel Density Estimation detection method based on Deep Graph Infomax. We assume that with a simple modification of the loss function, adding an extra term that maximizes the mutual information between images and their deep representations, DNN models can extract more key information from input images into their deep feature maps. This modification helps the model capture the distinctive features of adversarial examples and improves the detection result. The experiment in chapter 5 verifies this assumption.
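The PCA-with-GMM detection idea summarized in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the dissertation's implementation: the synthetic data, component counts, and 5th-percentile threshold are all assumptions, standing in for real MNIST/CIFAR-10 images and C&W perturbations.

```python
# Hedged sketch: flag inputs whose PCA-projected features have low likelihood
# under a Gaussian Mixture Model fitted on clean data. All data and
# hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-ins for flattened clean images and adversarially perturbed ones.
clean = rng.normal(0.0, 1.0, size=(500, 64))
adversarial = clean[:100] + rng.normal(0.0, 1.5, size=(100, 64))

# 1) Project clean data onto its leading principal components.
pca = PCA(n_components=10).fit(clean)
z_clean = pca.transform(clean)

# 2) Model the clean low-dimensional distribution with a GMM.
gmm = GaussianMixture(n_components=3, random_state=0).fit(z_clean)

# 3) Score new inputs; low log-likelihood under the clean model is suspicious.
threshold = np.percentile(gmm.score_samples(z_clean), 5)  # assumed cutoff
scores = gmm.score_samples(pca.transform(adversarial))
flagged = scores < threshold
print(f"flagged {flagged.sum()} of {len(adversarial)} perturbed inputs")
```

In this toy setting the perturbed inputs score far below the clean 5th-percentile cutoff, so most are flagged; against a real C&W attack, which minimizes perturbation size, the margin is much smaller, which is presumably what motivates the chapter-4 improvement.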
URI: https://hdl.handle.net/10356/140900
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: STATISTICAL DIAGNOSIS SYSTEM FOR ADVERSARIAL EXAMPLES.pdf
Description: Restricted Access
Size: 5.56 MB
Format: Adobe PDF

Page view(s): 215 (updated on Feb 4, 2023)
Download(s): 7 (updated on Feb 4, 2023)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.