Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/157249
Title: Attack on prediction confidence of deep learning neural networks
Authors: Ng, Garyl Xuan
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Ng, G. X. (2022). Attack on prediction confidence of deep learning neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/157249
Project: SCSE21-0227 
Abstract: Machine learning has become a prevalent part of everyday life, underpinning tasks that most people may not be aware of, such as in transport, entertainment, and education. Its widespread use has also made it a prime target for cyber-attacks by malicious parties. With machine learning playing such a crucial role in society, it is vital that these threats be extensively researched and studied, to prevent the disruption of essential networks and systems. One common threat is the data poisoning attack, in which an attacker manipulates training data to cause errors in a model. Most research on this attack involves adding "noise" to images, in the form of perturbations that are imperceptible to the human eye. These methods are generally complex to execute and require a high level of mathematical understanding. This project shifts attention to simpler methods of attack that an inexperienced or less knowledgeable malicious party might attempt in their efforts to disrupt a deep learning neural network (DLNN). In doing so, it may support defence against a wider variety of attacks, increasing the versatility and robustness of cybersecurity in the machine learning industry. The objective of this project is to investigate a simple approach to the data poisoning attack by applying basic image adjustments, such as altering the vibrance and saturation of an image, and comparing the results to determine the adjustment type most effective at disrupting a DLNN. A single dataset was altered using image-editing software into multiple subsets, each corresponding to a particular adjustment type. These subsets contained further sets of images with varying degrees of severity, which were then tested on three different DLNNs (VGG-16, EfficientNet and ResNet). The generated results were analysed, and each adjustment type was ranked according to its effectiveness.
The most effective types were then combined and subjected to further testing. The results showed that, of all the image adjustment types, a combination of exposure and offset was the most effective at attacking the prediction confidence of DLNNs, reducing the model's prediction score by 4% for every increase of 1 in the adjustment's factor value.
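To illustrate the kind of basic adjustment the abstract describes, the sketch below applies a combined exposure and offset change to an image array with NumPy. The exact formulas are assumptions for illustration (the report applied adjustments via image-editing software, whose definitions of "exposure" and "offset" may differ); the function name and parameters are hypothetical.

```python
import numpy as np

def adjust_exposure_offset(image, exposure=0.0, offset=0.0):
    """Apply an illustrative exposure (multiplicative) and offset (additive)
    adjustment to an 8-bit RGB image array, clipping to [0, 255].

    Assumed definitions: exposure scales pixel values in photographic stops
    (a factor of 2 per unit); offset shifts brightness uniformly as a
    fraction of full scale. These are stand-ins, not the report's exact
    software settings.
    """
    img = image.astype(np.float32)
    img = img * (2.0 ** exposure)   # exposure: scale brightness in stops
    img = img + offset * 255.0      # offset: uniform brightness shift
    return np.clip(img, 0, 255).astype(np.uint8)

# Example: a mid-grey image pushed one stop brighter with a small offset,
# as one point on a "varying degrees of severity" sweep
grey = np.full((4, 4, 3), 128, dtype=np.uint8)
poisoned = adjust_exposure_offset(grey, exposure=1.0, offset=0.1)
```

Sweeping the factor values and feeding the resulting subsets to each network is how the per-adjustment effectiveness comparison described above could be reproduced.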
URI: https://hdl.handle.net/10356/157249
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
SCSE21-0227 Final Year Project Report by Garyl Ng Xuan.pdf (Restricted Access, 1.45 MB, Adobe PDF)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.