Title: iTD3-CLN: learn to navigate in dynamic scene through Deep Reinforcement Learning
Authors: Jiang, Haoge
Esfahani, Mahdi Abolfazli
Wu, Keyu
Wan, Kong-wah
Heng, Kuan-kian
Wang, Han
Jiang, Xudong
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2022
Source: Jiang, H., Esfahani, M. A., Wu, K., Wan, K., Heng, K., Wang, H. & Jiang, X. (2022). iTD3-CLN: learn to navigate in dynamic scene through Deep Reinforcement Learning. Neurocomputing, 503, 118-128.
Project: 192 2500049
Journal: Neurocomputing
Abstract: This paper proposes iTD3-CLN, a Deep Reinforcement Learning (DRL) based low-level motion controller that achieves map-less autonomous navigation in dynamic scenes. We consider three enhancements to the Twin Delayed DDPG (TD3) for the navigation task: N-step returns, Priority Experience Replay, and a channel-based Convolutional Laser Network (CLN) architecture. In contrast to conventional methods such as the DWA, our approach is superior in the following ways: it needs no prior knowledge of the environment or a metric map, relies less on an accurate sensor, learns intuitive emergent behavior in dynamic scenes, and, more remarkably, transfers to the real robot without further fine-tuning. Our extensive studies show that, compared to the original TD3, the proposed approach achieves approximately a 50% reduction in training time to reach the same performance, a 50% higher accumulated reward, and a 30–50% increase in generalization performance when tested in unseen environments. Videos of our experiments are available at (Simulation) and (Real experiment).
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2022.06.102
Rights: © 2022 Elsevier B.V. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:EEE Journal Articles

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.