Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/171835
Title: Evaluation of adversarial attacks against deep learning models
Authors: Chua, Jonathan Wen Rong
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2023
Publisher: Nanyang Technological University
Source: Chua, J. W. R. (2023). Evaluation of adversarial attacks against deep learning models. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/171835
Project: SCSE22-0758
Abstract: Machine learning models have become increasingly prevalent in our day-to-day lives, performing tasks in fields such as computer vision and natural language processing. However, they are also increasingly targeted by adversaries who aim to degrade their effectiveness, rendering them unreliable and unpredictable. Hence, there is a need to improve the robustness of current machine learning models to deter adversarial attacks. Existing defences have proven useful against known attacks such as the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Carlini & Wagner (C&W). In recent times, however, adaptive attacks such as Backward Pass Differentiable Approximation (BPDA) and AutoAttack (AA) have been able to counteract existing defence techniques, rendering them ineffective. This project focuses on adversarial defences in computer vision. In our experiments, we employed various input-preprocessing defences, such as JPEG compression, Total Variance Minimization (TVM), spatial smoothing, bit-depth reduction, Principal Component Analysis (PCA), and pixel deflection, to remove adversarial perturbations from input data. These defences were evaluated on ResNet-20 and ResNet-56 networks trained on the CIFAR-10 and CIFAR-100 datasets, with image inputs adversarially perturbed using several known attacks, including C&W, PGD, and AA.
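The abstract pairs gradient-based attacks (e.g. FGSM) with input-preprocessing defences (e.g. bit-depth reduction). As a rough illustration of how such an attack-defence pair fits together, here is a minimal PyTorch sketch, assuming a classifier with inputs scaled to [0, 1]; the function names fgsm_attack and bit_depth_reduction and the eps/bits settings are illustrative choices, not taken from the report.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=8 / 255):
        # One-step FGSM: x_adv = x + eps * sign(grad_x of the loss),
        # clipped back to the valid [0, 1] image range.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    def bit_depth_reduction(x, bits=4):
        # Quantise each pixel to 2**bits levels; coarse quantisation
        # can squeeze out small adversarial perturbations before inference.
        levels = 2 ** bits - 1
        return torch.round(x * levels) / levels

Under these assumptions, one would compare model(x_adv) against model(bit_depth_reduction(x_adv)) on CIFAR-10 or CIFAR-100 batches to measure how much accuracy the preprocessing recovers; the report applies the same evaluation pattern with stronger attacks (PGD, C&W, AA) and the other preprocessing defences listed above.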
URI: https://hdl.handle.net/10356/171835
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: SCSE22_0758_Chua_Wen_Rong_Jonathan_Final_Year_Project_Report.pdf (Restricted Access)
Description: Undergraduate project report
Size: 644.42 kB
Format: Adobe PDF

Page view(s): 166 (updated on Jun 18, 2024)
Download(s): 16 (updated on Jun 18, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.