Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/163971
Title: Challenges and countermeasures for adversarial attacks on deep reinforcement learning
Authors: Ilahi, Inaam
Usama, Muhammad
Qadir, Junaid
Janjua, Muhammad Umar
Al-Fuqaha, Ala
Hoang, Dinh Thai
Niyato, Dusit
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Source: Ilahi, I., Usama, M., Qadir, J., Janjua, M. U., Al-Fuqaha, A., Hoang, D. T. & Niyato, D. (2022). Challenges and countermeasures for adversarial attacks on deep reinforcement learning. IEEE Transactions On Artificial Intelligence, 3(2), 90-109. https://dx.doi.org/10.1109/TAI.2021.3111139
Journal: IEEE Transactions on Artificial Intelligence
Abstract: Deep reinforcement learning (DRL) has numerous applications in the real world, thanks to its ability to achieve high performance in a range of environments with little manual oversight. Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications (e.g., smart grids, traffic controls, and autonomous vehicles) unless its vulnerabilities are addressed and mitigated. To address this problem, we provide a comprehensive survey that discusses emerging attacks on DRL-based systems and the potential countermeasures to defend against these attacks. We first review the fundamental background on DRL and present emerging adversarial attacks on machine learning techniques. We then investigate the vulnerabilities that an adversary can exploit to attack DRL along with state-of-the-art countermeasures to prevent such attacks. Finally, we highlight open issues and research challenges for developing solutions to deal with attacks on DRL-based intelligent systems.
URI: https://hdl.handle.net/10356/163971
ISSN: 2691-4581
DOI: 10.1109/TAI.2021.3111139
Rights: © 2021 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: SCSE Journal Articles

