Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/160963
Title: Exploring the vulnerabilities and enhancing the adversarial robustness of deep neural networks
Authors: Bai, Tao
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Bai, T. (2022). Exploring the vulnerabilities and enhancing the adversarial robustness of deep neural networks. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/160963
Abstract: Deep learning, especially deep neural networks (DNNs), is at the heart of the current rise of artificial intelligence, and the major breakthroughs of the last few years have been made by DNNs. Recent work has demonstrated that DNNs are vulnerable to human-crafted adversarial examples, which look normal to human eyes. Such adversarial instances can fool and mislead DNNs into misbehaving as adversaries intend, with serious consequences for the many DNN-based applications in daily life. To this end, this thesis is dedicated to revealing the vulnerabilities of deep learning algorithms and developing defense strategies to combat adversaries effectively. We study current DNNs from the security perspective on two fronts: attack and defense. On the attack front, we explore test-time attacks against DNNs with two types of adversarial examples: adversarial perturbations and adversarial patches. On the defense front, we develop solutions to defend against adversarial examples and investigate robustness-preserving distillation techniques.
URI: https://hdl.handle.net/10356/160963
DOI: 10.32657/10356/160963
Schools: School of Computer Science and Engineering
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
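The abstract's notion of an adversarial perturbation can be illustrated with a minimal sketch. This is not code from the thesis: it applies a one-step fast-gradient-sign (FGSM-style) perturbation to a toy logistic-regression model whose weights and inputs are invented for the example, showing how a small, bounded change to the input flips the model's prediction.

```python
import numpy as np

# Hypothetical toy model: logistic regression with hand-picked weights.
# Nothing here comes from the thesis itself; it only illustrates the
# general idea of an adversarial perturbation.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step fast-gradient-sign perturbation of input x.

    For binary cross-entropy loss on the logit z = w @ x + b, the
    gradient of the loss with respect to x is (sigmoid(z) - y) * w.
    Moving eps in the sign of that gradient increases the loss.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])   # hypothetical weights
b = 0.0
x = np.array([0.3, -0.4, 0.1])   # clean input, predicted class 1
y = 1.0                          # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
p_clean = sigmoid(w @ x + b)     # > 0.5: correctly classified
p_adv = sigmoid(w @ x_adv + b)   # < 0.5: prediction flipped
```

Each coordinate of `x_adv` differs from `x` by at most `eps`, which is why such perturbations can remain imperceptible in high-dimensional inputs like images while still misleading the model.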
Appears in Collections: SCSE Theses
Files in This Item:

File | Description | Size | Format
---|---|---|---
Exploring the Vulnerabilities and Enhancing the Adversarial Robustness of Deep Neural Networks.pdf | | 27.34 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.