Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/183707
Title: Harnessing input gradients for generating unrestricted adversarial examples in remote sensing
Authors: Fan, Wei
Keywords: Computer and Information Science
Issue Date: 2025
Publisher: Nanyang Technological University
Source: Fan, W. (2025). Harnessing input gradients for generating unrestricted adversarial examples in remote sensing. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/183707
Abstract: Owing to their high accuracy, deep neural networks (DNNs) are widely used for image classification, recognition, and segmentation. However, these networks remain vulnerable to targeted attacks. This study proposes a novel gradient-based adversarial attack, the Input Gradient Attack (IG), which degrades the recognition ability of deep learning models while preserving the high visual quality of the attacked images. The method systematically analyzes the input gradients (the gradient of the network's loss function with respect to each input pixel) to identify image regions of high sensitivity. Unlike traditional methods that apply a global noise pattern, this technique selectively perturbs only the pixels with large input-gradient magnitudes, keeping the changes minimal but effective. To maintain color fidelity, the study uses a color-aware loss function based on CIEDE2000 that constrains adversarial examples to stay below a threshold of imperceptible color change. Experiments on remote sensing datasets show that the attack significantly degrades the classification performance of DNN-based models without harming the images' visual quality. In addition, the attack transfers across models and also performs well against image segmentation models. Notably, even when defended by a state-of-the-art Denoising Diffusion Probabilistic Model, currently considered the strongest defense mechanism against adversarial attacks, the adversarial examples in this study retain their attack capability after the purification procedure.
URI: https://hdl.handle.net/10356/183707
Schools: School of Electrical and Electronic Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Theses
Files in This Item:
File | Description | Size | Format
---|---|---|---
FanWei_dissertation_modified_signed.pdf (Restricted Access) | | 1.09 MB | Adobe PDF
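The abstract describes two ingredients: perturbing only the pixels whose input-gradient magnitude is largest, and bounding the perceptual color change of the result. Since the full text is restricted, the PyTorch snippet below is only a minimal sketch of that general idea, not the author's IG implementation; the function name `selective_gradient_attack`, the step size, the top-k ratio, and the plain per-pixel clamp that stands in for the CIEDE2000 color constraint are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def selective_gradient_attack(model, image, label, step=2 / 255,
                              eps=8 / 255, topk_ratio=0.1, iters=10):
    """Perturb only the fraction of pixels with the largest |dL/dx|.

    `model` maps a (N, C, H, W) image batch in [0, 1] to class logits;
    all hyperparameters here are illustrative assumptions.
    """
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        (grad,) = torch.autograd.grad(loss, adv)

        # Rank pixels by input-gradient magnitude (summed over channels)
        # and keep only the most sensitive top-k fraction.
        mag = grad.abs().sum(dim=1, keepdim=True)             # (N, 1, H, W)
        k = max(1, int(topk_ratio * mag[0].numel()))
        kth = mag.flatten(1).topk(k, dim=1).values[:, -1]     # k-th largest per sample
        mask = (mag >= kth.view(-1, 1, 1, 1)).float()

        # Signed-gradient ascent step applied only inside the mask.
        adv = adv.detach() + step * grad.sign() * mask

        # The thesis bounds perceptual change with a CIEDE2000 color loss;
        # this sketch substitutes a plain per-pixel clamp as a stand-in.
        adv = torch.min(torch.max(adv, image - eps), image + eps)
        adv = adv.clamp(0, 1).detach()
    return adv
```

With a pretrained classifier one would call `selective_gradient_attack(model, x, y)` on a batch scaled to [0, 1]; raising `topk_ratio` trades visual quality for attack strength, which mirrors the abstract's point that sparse, gradient-guided perturbations stay visually unobtrusive.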