Full metadata record
DC Field | Value | Language
dc.contributor.author | Di, Kai | en_US
dc.contributor.author | Yang, Shaofu | en_US
dc.contributor.author | Wang, Wanyuan | en_US
dc.contributor.author | Yan, Fuhan | en_US
dc.contributor.author | Xing, Haokun | en_US
dc.contributor.author | Jiang, Jiuchuan | en_US
dc.contributor.author | Jiang, Yichuan | en_US
dc.identifier.citation | Di, K., Yang, S., Wang, W., Yan, F., Xing, H., Jiang, J. & Jiang, Y. (2019). Optimizing evasive strategies for an evader with imperfect vision capacity. Journal of Intelligent and Robotic Systems, 96(3-4), 419-437.
dc.description.abstract | The multiagent pursuit-evasion problem has attracted considerable interest in recent years, and a common assumption is that the evader has perfect vision capacity. In the real world, however, the evader's vision capacity is often imperfect: it may have noisy observations within a limited field of view. Such imperfect vision causes the evader to sense incomplete and inaccurate information from the environment and thus to make suboptimal decisions. To address this challenge, we decompose the problem into two subproblems: 1) optimizing evasive strategies with a limited field of view, and 2) optimizing evasive strategies with noisy observation. For the evader with a limited field of view, we propose a memory-based 'worst case' algorithm, the idea of which is to store the previously seen locations of the pursuers and estimate the possible region of the pursuers outside the evader's sight. For the evader with noisy observation, we propose a value-based reinforcement learning algorithm that trains the evader offline and applies the learned strategy in the actual environment, aiming to reduce the impact of the uncertainty created by inaccurate information. Furthermore, we combine and trade off the above two algorithms in a memory-based reinforcement learning algorithm that uses the estimated locations to modify the input state set of the reinforcement learning algorithm. Finally, we extensively evaluate our algorithms in simulation, concluding that in this imperfect-vision setting our algorithms significantly improve the escape success rate of the evader. | en_US
dc.relation.ispartof | Journal of Intelligent and Robotic Systems | en_US
dc.rights | © 2019 Springer Nature B.V. All rights reserved. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Optimizing evasive strategies for an evader with imperfect vision capacity | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.subject.keywords | Multiagent Pursuit-evasion Problem | en_US
dc.subject.keywords | Imperfect Vision Capacity | en_US
dc.description.acknowledgement | This work was supported by the National Natural Science Foundation of China (61472079, 61170164, 61807008 and 61806053) and the Natural Science Foundation of Jiangsu Province of China (BK20171363, BK20180356, BK20180369 and BK20170693). | en_US
item.fulltext | No Fulltext | -
Appears in Collections: SCSE Journal Articles
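The abstract's memory-based 'worst case' idea can be illustrated with a minimal sketch. This is not the authors' actual algorithm; it only assumes, for illustration, that a pursuer moves at most `pursuer_max_speed` per time step, so that an unseen pursuer must lie in a disc around its last observed location whose radius grows with the time since it was last seen. All function and parameter names here are hypothetical.

```python
import math

def possible_region(last_seen_pos, steps_since_seen, pursuer_max_speed):
    """Disc (center, radius) that must contain an unseen pursuer,
    assuming it moves at most pursuer_max_speed per step."""
    radius = steps_since_seen * pursuer_max_speed
    return last_seen_pos, radius

def worst_case_distance(evader_pos, last_seen_pos, steps_since_seen,
                        pursuer_max_speed):
    """Worst-case (lower-bound) distance from the evader to the
    unseen pursuer: distance to the disc's center minus its radius,
    clamped at zero when the evader may already be inside the disc."""
    center, radius = possible_region(last_seen_pos, steps_since_seen,
                                     pursuer_max_speed)
    center_dist = math.hypot(evader_pos[0] - center[0],
                             evader_pos[1] - center[1])
    return max(0.0, center_dist - radius)
```

An evader could, for example, pick the move that maximizes the minimum of `worst_case_distance` over all remembered pursuers, which is the 'worst case' flavor the abstract describes.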

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.