Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/156741
Title: Attack on training effort of deep learning
Authors: Ho, Tony Man Tung
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Ho, T. M. T. (2022). Attack on training effort of deep learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/156741
Abstract: The objective of this project is to extend a previous study of an adversarial rain attack on state-of-the-art deep neural networks (DNNs) that hinders image classification and object detection. DNNs are known to be vulnerable to adversarial attacks, which generally add some form of perturbation to an image in order to fool the network into misclassifying it. While there are many other popular adversarial attacks, such as the Fast Gradient Sign Method (FGSM), Limited-memory BFGS (L-BFGS), and Generative Adversarial Networks (GANs), this project focuses mainly on the adversarial rain attack. Rain is also known to pose a threat to DNN-based perception systems such as video surveillance, autonomous driving, and unmanned aerial vehicles (UAVs), and a misclassification caused by rain-induced perturbation can raise serious safety issues for the user. An attack script that uses factor-aware rain generation was employed to render rain streaks on the individual frames of a video, which were then used for the adversarial rain attack. A comparison of the detection confidence on the frames before and after the attack was then made, allowing the effects of the attack to be clearly visualized. The attack script performed as expected and succeeded in reducing the overall recognition confidence. While some objects in certain frames were still detected after the attack by a Faster R-CNN model with a VGG16 backbone, their confidence scores were lowered, showing that the attack was at least partially successful. This work can serve as a baseline for future research into similar attacks and for devising better defensive countermeasures against them.
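To make the rain-rendering step concrete, the following is a minimal illustrative sketch of overlaying synthetic rain streaks on a video frame. It is not the factor-aware rain generation used in the project; it assumes a simple additive model in which each streak is a short, slanted bright line, and all parameter names (`num_streaks`, `length`, `angle_deg`, `intensity`) are hypothetical.

```python
import numpy as np

def render_rain(frame, num_streaks=200, length=15, angle_deg=15.0,
                intensity=0.8, seed=0):
    """Overlay simple linear rain streaks on an H x W x 3 float image in [0, 1].

    Illustrative additive model only: each streak is a short bright line
    slanted `angle_deg` degrees from vertical, with uniform intensity.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = frame.shape
    rain = np.zeros((h, w), dtype=np.float32)
    # Step direction of a streak (mostly downward, slightly sideways).
    dy = np.cos(np.deg2rad(angle_deg))
    dx = np.sin(np.deg2rad(angle_deg))
    for _ in range(num_streaks):
        y0 = rng.uniform(0, h)
        x0 = rng.uniform(0, w)
        for t in range(length):
            y = int(y0 + t * dy)
            x = int(x0 + t * dx)
            if 0 <= y < h and 0 <= x < w:
                rain[y, x] = intensity
    # Add the rain layer to every colour channel and clip to [0, 1].
    return np.clip(frame + rain[..., None], 0.0, 1.0)

# Example: a dark 120x160 frame before and after the rain overlay.
frame = np.zeros((120, 160, 3), dtype=np.float32)
rainy = render_rain(frame)
```

In an evaluation like the one described above, `frame` and `rainy` would each be passed through the object detector, and the per-object confidence scores compared to quantify the degradation caused by the rain perturbation.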
URI: https://hdl.handle.net/10356/156741
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP_Final_Report_U1922328C_Tony_Ho_Man_Tung.pdf (Restricted Access)
Size/Format: 828.02 kB, Adobe PDF

Page view(s): 19 (updated on May 18, 2022)
Download(s): 1 (updated on May 18, 2022)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.