Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/175146
Title: Backdoor attacks in neural networks
Authors: Liew, Sher Yun
Keywords: Computer and Information Science
Issue Date: 2024
Publisher: Nanyang Technological University
Source: Liew, S. Y. (2024). Backdoor attacks in neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175146
Abstract: As artificial intelligence becomes increasingly integrated into our daily lives, neural networks underpin deep-learning applications across a multitude of critical domains, including facial recognition, autonomous vehicular systems, and more. This pervasive integration, while transformative, raises a pressing concern: malicious backdoor attacks on neural networks can have disastrous consequences. To determine the effects and limitations of these attacks, this project conducts a comprehensive examination of two previously proposed backdoor attack strategies, namely Blended and Blind backdoors, along with two previously proposed backdoor defence mechanisms, namely Neural Cleanse and Spectral Signatures. A thorough review of the pertinent research literature was performed, and experiments were carried out to test the effectiveness of these strategies.
URI: https://hdl.handle.net/10356/175146
Schools: School of Computer Science and Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| LiewSherYun_FYPReport.pdf (Restricted Access) | | 1.3 MB | Adobe PDF |
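The Blended backdoor attack named in the abstract works by alpha-blending a fixed trigger pattern into a small fraction of training images and relabelling them to an attacker-chosen target class. Below is a minimal illustrative sketch of that data-poisoning step (function names and parameters are assumptions for illustration, not code from the report):

```python
import numpy as np

def blend_trigger(image, trigger, alpha=0.2):
    """Alpha-blend a trigger pattern into an image (values assumed in [0, 1])."""
    return np.clip((1 - alpha) * image + alpha * trigger, 0.0, 1.0)

def poison_dataset(images, labels, trigger, target_label, rate=0.1, alpha=0.2, seed=0):
    """Poison a fraction `rate` of the dataset: blend in the trigger and
    relabel the poisoned samples to `target_label` (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    poisoned = images.copy()
    new_labels = labels.copy()
    for i in idx:
        poisoned[i] = blend_trigger(images[i], trigger, alpha)
        new_labels[i] = target_label
    return poisoned, new_labels
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger is blended into a test image; defences such as Neural Cleanse attempt to reverse-engineer this trigger, while Spectral Signatures look for the statistical trace the poisoned samples leave in feature space.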
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.