Please use this identifier to cite or link to this item:
Title: Deep-attack over the deep reinforcement learning
Authors: Li, Yang; Pan, Q.; Cambria, E.
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Source: Li, Y., Pan, Q. & Cambria, E. (2022). Deep-attack over the deep reinforcement learning. Knowledge-Based Systems, 250, 108965. https://dx.doi.org/10.1016/j.knosys.2022.108965
Journal: Knowledge-Based Systems
Abstract: Recent developments in adversarial attacks have made reinforcement learning more vulnerable, and several approaches exist for deploying attacks against it; the key question is how to choose the right timing of the attack. Some work designs an attack evaluation function and selects critical points to attack whenever the value exceeds a certain threshold. Without considering the long-term impact, however, this approach makes it difficult to find the right place to deploy an attack. In addition, there is a lack of appropriate indicators for assessing attacks. To make attacks more intelligent and to remedy these problems, we propose a reinforcement learning-based attacking framework that considers effectiveness and stealthiness simultaneously, and we also propose a new metric to evaluate the performance of the attack model on these two aspects. Experimental results show the effectiveness of the proposed model and the soundness of the proposed evaluation metric. Furthermore, we validate the transferability of the model, as well as its robustness under adversarial training.
URI: https://hdl.handle.net/10356/162724
ISSN: 0950-7051
DOI: 10.1016/j.knosys.2022.108965
Rights: © 2022 Elsevier B.V. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
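The abstract describes an attacker that itself uses reinforcement learning to decide *when* to strike, balancing attack effectiveness against stealthiness (few attacks). As a minimal illustrative sketch, not the paper's actual method, the idea can be shown with a tabular Q-learning attacker on a toy line-world victim: the attacker pays a fixed stealth cost each time it flips the victim's action, and is penalized when the victim reaches its goal, so it learns to attack sparsely at the most critical state. All names, the environment, and the reward constants here are assumptions for illustration only.

```python
# Toy sketch (assumed setup, not the authors' code): a Q-learning attacker
# learns the timing of attacks against a scripted victim on a line world.
import random

random.seed(0)

N = 6                # victim starts at position 0, goal at position N
GOAL_PENALTY = 10.0  # attacker loss if the victim reaches the goal
STEALTH_COST = 0.3   # per-attack cost, encouraging sparse (stealthy) attacks
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Attacker Q-table: state = victim position, actions = 0 (idle) / 1 (attack)
Q = [[0.0, 0.0] for _ in range(N + 1)]

def victim_policy(pos):
    """Scripted victim: always step toward the goal."""
    return +1

for episode in range(500):
    pos = 0
    for _ in range(20):
        # Epsilon-greedy choice of the attacker's action
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = int(Q[pos][1] > Q[pos][0])
        move = victim_policy(pos)
        if a == 1:
            move = -move                      # attack: flip the victim's action
        nxt = max(0, min(N, pos + move))
        # Attacker reward: big penalty if victim succeeds, small cost per attack
        r = -(GOAL_PENALTY if nxt == N else 0.0) - STEALTH_COST * a
        Q[pos][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[pos][a])
        pos = nxt
        if pos == N:                          # episode ends at the goal
            break

# States where the learned attacker chooses to strike
attack_at = [s for s in range(N) if Q[s][1] > Q[s][0]]
print("attack states:", attack_at)
```

Under this reward shaping the attacker concentrates its attacks just before the goal, the single state where flipping the victim's action actually prevents success, rather than attacking everywhere; this mirrors the effectiveness-versus-stealthiness trade-off the abstract describes.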
|Appears in Collections:||SCSE Journal Articles|
Updated on Nov 25, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.