Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/147972
Title: Attack on training effort of deep learning
Authors: Chan, Wen Le
Keywords: Engineering::Computer science and engineering::Computer applications::Life and medical sciences
Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Chan, W. L. (2021). Attack on training effort of deep learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/147972
Project: SCSE20-0190
Abstract: Deep Neural Networks (DNNs) are popular for their efficiency and accuracy across domains, including the medical field. However, medical DNNs are vulnerable to adversarial attacks, a serious limitation on their clinical use. Retinal vessel segmentation is key to the diagnosis of ocular diseases. The task is inherently challenging because 1) vessels have low contrast with the background, 2) vessels vary in width, and 3) other pathological regions can easily be mistaken for vascular structures. Given its high clinical value, many works have constructed DNNs to automate vessel segmentation. However, current approaches have two main limitations. 1) The small available datasets make over-training and over-fitting likely. 2) The datasets contain only specially selected high-quality images, resulting in poor generalisation to low-quality images and to crafted adversarial examples. To illustrate these limitations, two adversarial attack methods are proposed. We did not use noise attacks, as noise is rarely present in retinal images; instead, we leveraged their inherent degradation, uneven illumination, caused by the imperfect image-acquisition process. Firstly, the pixel-wise adversarial attack applies a Light-Enhancement curve iteratively to each pixel's illumination. Secondly, the threshold-based adversarial attack creates non-uniform illumination through disproportionate changes to different regions' illumination. We also applied different constraints to keep the adversarial examples effective while retaining a high level of realism. Validation on the DRIVE dataset against the state-of-the-art DNN SA-Unet achieved superior results compared to noise-based attacks. We reveal the potential threat of non-uniform illumination to DNN-based automated retinal vessel segmentation, in the hope of inspiring the development of approaches robust to uneven illumination.
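The pixel-wise attack described in the abstract iterates a Light-Enhancement curve over each pixel's intensity. A minimal sketch, assuming the quadratic LE curve LE(x; a) = x + a*x*(1 - x) popularised by Zero-DCE; the per-pixel alpha map, iteration count, and toy image are illustrative assumptions, not the thesis's actual attack parameters:

```python
import numpy as np

def le_curve_attack(image, alpha, n_iter=4):
    """Iteratively apply the quadratic Light-Enhancement curve
    LE(x; a) = x + a * x * (1 - x) to an image with values in [0, 1].

    image : float array in [0, 1]
    alpha : scalar or per-pixel map in [-1, 1] controlling the
            illumination shift (positive brightens, negative darkens)
    """
    x = np.clip(image, 0.0, 1.0)
    for _ in range(n_iter):
        x = x + alpha * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)

# Example: brighten the left half and darken the right half of a flat
# grey image, creating uneven illumination while every pixel remains
# a valid intensity in [0, 1].
img = np.full((4, 8), 0.5)
alpha = np.where(np.arange(8) < 4, 0.3, -0.3)
adv = le_curve_attack(img, alpha)
```

Because the curve maps [0, 1] to itself for alpha in [-1, 1], the perturbed image needs no clipping artefacts and stays visually plausible, which matches the abstract's emphasis on realism.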
We proposed a possible solution to this threat by demonstrating the effectiveness of adversarial training in improving the network's generalisation ability. In addition, we proposed DC-Unet, in which DropBlock, batch normalisation and ReLU activation are added to U-Net's convolution block, with dynamic convolution linking the encoder and decoder paths. The proposed architecture achieved competitive performance on both the DRIVE test set and synthesised low-quality images.
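The dynamic convolution used in DC-Unet aggregates several parallel kernels with input-dependent attention before convolving. A minimal 2-D sketch of the general technique, assuming softmax attention computed from a global average pool; the kernel count, kernel contents and the single-layer toy attention are illustrative assumptions, not the thesis's actual configuration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_conv2d(x, kernels, attn_w, attn_b):
    """Blend K kernels with attention weights conditioned on the input,
    then run a single 'valid' 2-D correlation with the blended kernel.

    x       : (H, W) input feature map
    kernels : (K, kh, kw) parallel kernels
    attn_w, attn_b : (K,) toy attention layer acting on the global
        average of x (real dynamic convolution uses a small MLP here)
    """
    pi = softmax(attn_w * x.mean() + attn_b)    # (K,), sums to 1
    kernel = np.tensordot(pi, kernels, axes=1)  # input-dependent kernel
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

# Toy usage: a 3x3 averaging kernel and a zero kernel, blended by an
# attention weight driven by the input's mean intensity.
x = np.random.default_rng(0).random((8, 8))
kernels = np.stack([np.full((3, 3), 1 / 9), np.zeros((3, 3))])
out = dynamic_conv2d(x, kernels,
                     attn_w=np.array([1.0, -1.0]),
                     attn_b=np.zeros(2))
```

Because the attention weights depend on the input, the effective kernel changes per image at negligible parameter cost, which is the property that makes dynamic convolution attractive as a link between encoder and decoder features.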
URI: https://hdl.handle.net/10356/147972
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Chan Wen Le_Attack on Training Effort of Deep Learning.pdf
Description: Restricted Access
Size: 2.03 MB
Format: Adobe PDF



Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.