Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/183876
Title: Evaluation of backdoor attacks on deep neural networks
Authors: Mohamed Nur Hazim Bin Mohamed Ghazali
Keywords: Computer and Information Science
Issue Date: 2025
Publisher: Nanyang Technological University
Source: Mohamed Nur Hazim Bin Mohamed Ghazali (2025). Evaluation of backdoor attacks on deep neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/183876
Project: CCDS24-0093
Abstract: Deep Neural Networks (DNNs) are increasingly deployed in critical applications across organizations, yet the time and cost of training a state-of-the-art model often lead to outsourcing parts of the training pipeline, whether through third-party datasets or pre-trained models, which exposes organizations to backdoor attacks such as data poisoning. This study evaluates the effectiveness of BadNet attacks across multiple DNN models (e.g., WideResNet50, MobileNetV3-Large) and datasets (CIFAR10, TinyImageNet) under real-world constraints: black-box access, non-targeted attacks, and dataset-only manipulation. We analyse key attack parameters, namely poisoning ratio and trigger size, and their impact on a backdoored model’s clean accuracy (ACC) and attack success rate (ASR). Our findings reveal that complex datasets (e.g., TinyImageNet) are more vulnerable, requiring lower poisoning ratios (0.5–2%) to reach a high ASR, while simpler datasets (e.g., CIFAR10) demand higher ratios. Smaller, less complex models (e.g., MobileNetV3-Large) are more susceptible, achieving 100% ASR with minimal poisoning. We further assess three defences, Anti-Backdoor Learning (ABL), Channel Lipschitz Pruning (CLP), and Neural Attention Distillation (NAD), and show that NAD is most effective on complex datasets while CLP performs better on simpler models. However, no single defence mitigates all attack configurations. Our results highlight the need for risk-aware model selection, dataset verification, and appropriate defence selection to protect against backdoor threats.
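
For illustration, the following minimal Python/NumPy sketch shows a BadNet-style poisoning pipeline of the kind the abstract describes: a small square trigger is stamped onto a fraction of the training images (the poisoning ratio), and the attack success rate (ASR) is then measured on triggered test inputs. The sketch uses the common all-to-one targeted variant for concreteness; the function names, corner trigger placement, and all parameter values are illustrative assumptions, not taken from the report.

import numpy as np

def apply_trigger(image, trigger_size=3):
    # Stamp a white square trigger in the bottom-right corner of an HWC image in [0, 1].
    patched = image.copy()
    patched[-trigger_size:, -trigger_size:, :] = 1.0
    return patched

def poison_dataset(images, labels, target_label, poison_ratio=0.01, trigger_size=3, seed=0):
    # Poison a fraction of the training set: add the trigger and relabel to the target class.
    rng = np.random.default_rng(seed)
    poison_idx = rng.choice(len(images), size=int(len(images) * poison_ratio), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in poison_idx:
        images[i] = apply_trigger(images[i], trigger_size)
        labels[i] = target_label
    return images, labels

def attack_success_rate(predict_fn, test_images, test_labels, target_label, trigger_size=3):
    # ASR: fraction of non-target-class test inputs classified as the target class once triggered.
    mask = test_labels != target_label
    triggered = np.stack([apply_trigger(x, trigger_size) for x in test_images[mask]])
    return float(np.mean(predict_fn(triggered) == target_label))

Clean accuracy (ACC) would be measured in the same way on untriggered test inputs, so the two metrics together capture the trade-off the study sweeps over poisoning ratio and trigger size.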
URI: https://hdl.handle.net/10356/183876
Schools: College of Computing and Data Science 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:CCDS Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP Evaluation Of Backdoor Attacks On Deep Neural Networks.pdf (Restricted Access, 2.55 MB, Adobe PDF)

Page view(s): 65 (updated on May 7, 2025)
Download(s): 2 (updated on May 7, 2025)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.