Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/165839
Title: Defense on unrestricted adversarial examples
Authors: Sim, Chee Xian
Keywords: Engineering::Computer science and engineering
Issue Date: 2023
Publisher: Nanyang Technological University
Source: Sim, C. X. (2023). Defense on unrestricted adversarial examples. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/165839
Abstract: Deep Neural Networks (DNN) and Deep Learning (DL) have led to advancements in various fields, including learning algorithms such as Reinforcement Learning (RL). These advancements have produced new algorithms like Deep Reinforcement Learning (DRL), which achieves strong performance in areas such as image recognition and playing video games. However, DRL models are vulnerable to adversarial attacks that can lead to catastrophic results. A white-box attack such as the Fast Gradient Sign Method (FGSM) can significantly degrade a model's performance even with small perturbations. The most common defense against such attacks is adversarial training, which produces neural networks that are robust to them. In this paper, we explore the use of Bayesian Neural Networks (BNN) on a Proximal Policy Optimization (PPO) model to defend against adversarial attacks.
URI: https://hdl.handle.net/10356/165839
Schools: School of Computer Science and Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
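For context on the attack named in the abstract: FGSM perturbs an input along the sign of the loss gradient, x_adv = x + ε · sign(∇_x J(θ, x, y)). The snippet below is a minimal illustrative sketch of that step in PyTorch; the function name, `loss_fn`, and `epsilon` are placeholders of our own and are not taken from the report itself.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    # Illustrative FGSM sketch (not the report's code):
    # x_adv = x + epsilon * sign(grad_x loss(model(x), y))
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in the valid [0, 1] range
```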
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
---|---|---|---
Final Report V5.pdf (Restricted Access) | Amended Final Report | 1.86 MB | Adobe PDF
Page view(s): 204 (updated on May 7, 2025)
Download(s): 18 (updated on May 7, 2025)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.