Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/173118
Title: An imperceptible data augmentation based blackbox clean-label backdoor attack on deep neural networks
Authors: Xu, Chaohui; Liu, Wenye; Zheng, Yue; Wang, Si; Chang, Chip Hong
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2023
Source: Xu, C., Liu, W., Zheng, Y., Wang, S. & Chang, C. H. (2023). An imperceptible data augmentation based blackbox clean-label backdoor attack on deep neural networks. IEEE Transactions on Circuits and Systems I: Regular Papers, 70(12), 5011-5024. https://dx.doi.org/10.1109/TCSI.2023.3298802
Project: NRF2018NCR-NCR009-0001; CHFA-GC1-AW01; MOE-T2EP50220-0003
Journal: IEEE Transactions on Circuits and Systems I: Regular Papers
Abstract: Deep neural networks (DNNs) have permeated into many diverse application domains, making them attractive targets of malicious attacks. DNNs are particularly susceptible to data poisoning attacks. Such attacks can be made more venomous and harder to detect by poisoning the training samples without changing their ground-truth labels. Despite its pragmatism, the clean-label requirement imposes a stiff restriction and strong conflict in simultaneous optimization of attack stealth, success rate, and utility of the poisoned model. Attempts to circumvent the pitfalls often lead to a high injection rate, ineffective embedded backdoors, unnatural triggers, low transferability, and/or poor robustness. In this paper, we overcome these constraints by amalgamating different data augmentation techniques for the backdoor trigger. The spatial intensities of the augmentation methods are iteratively adjusted by interpolating the clean sample and its augmented version according to their tolerance to perceptual loss and augmented feature saliency to target class activation. Our proposed attack is comprehensively evaluated on different network models and datasets. Compared with state-of-the-art clean-label backdoor attacks, it has lower injection rate, stealthier poisoned samples, higher attack success rate, and greater backdoor mitigation resistance while preserving high benign accuracy. Similar attack success rates are also demonstrated on the Intel Neural Compute Stick 2 edge AI device implementation of the poisoned model after weight-pruning and quantization.
URI: https://hdl.handle.net/10356/173118
ISSN: 1549-8328
DOI: 10.1109/TCSI.2023.3298802
Schools: School of Electrical and Electronic Engineering
Rights: © 2023 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/TCSI.2023.3298802.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Journal Articles
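The abstract above outlines the core mechanism: each clean sample is blended with an augmented version of itself, and the augmentation intensity is dialled back until the change remains imperceptible while the augmented features still drive the target class. Below is a minimal Python sketch of only that interpolation loop; the `augment` function, the L2 budget used in place of the paper's perceptual-loss and feature-saliency criteria, and all parameter values (`max_l2`, `shrink`, `steps`) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' method): form a clean-label poisoned sample
# by interpolating a clean image with an augmented version of itself, shrinking
# the blend intensity until a simple distortion proxy stays within budget.
import numpy as np

def augment(x: np.ndarray) -> np.ndarray:
    """Hypothetical augmentation: horizontal flip plus a mild brightness boost."""
    return np.clip(x[:, ::-1, :] * 1.1, 0.0, 1.0)

def poison_sample(x: np.ndarray,
                  max_l2: float = 0.03,
                  alpha: float = 1.0,
                  shrink: float = 0.9,
                  steps: int = 50) -> np.ndarray:
    """Blend x with augment(x) at intensity alpha, reducing alpha iteratively
    until the per-pixel RMS difference (a stand-in for perceptual loss) is
    within max_l2. The label of x is left unchanged (clean-label)."""
    x_aug = augment(x)
    blended = x_aug
    for _ in range(steps):
        blended = (1.0 - alpha) * x + alpha * x_aug        # interpolation step
        rms = np.linalg.norm(blended - x) / np.sqrt(x.size)
        if rms <= max_l2:                                   # imperceptibility check
            break
        alpha *= shrink                                     # lower augmentation intensity
    return blended

# Usage: poison a 32x32 RGB image with values in [0, 1].
img = np.random.rand(32, 32, 3).astype(np.float32)
poisoned = poison_sample(img)
```

In the paper this intensity adjustment is driven by perceptual loss and by the saliency of the augmented features to the target class activation; the RMS threshold above is only a self-contained placeholder for those criteria.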
Files in This Item:
File | Description | Size | Format
---|---|---|---
TCAS-I_camera_ready.pdf | | 12.36 MB | Adobe PDF