Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/103342
Title: Learn to steer through deep reinforcement learning
Authors: Wu, Keyu
Esfahani, Mahdi Abolfazli
Yuan, Shenghai
Wang, Han
Keywords: Autonomous Steering
DRNTU::Engineering::Electrical and electronic engineering
Deep Reinforcement Learning
Issue Date: 2018
Source: Wu, K., Esfahani, M. A., Yuan, S., & Wang, H. (2018). Learn to steer through deep reinforcement learning. Sensors, 18(11), 3650. doi:10.3390/s18113650
Series/Report no.: Sensors
Abstract: It is crucial for robots to steer autonomously and safely through complex environments without colliding with obstacles. Compared to conventional methods, deep reinforcement learning-based methods can learn automatically from past experience and generalize better to unseen circumstances. We therefore propose an end-to-end deep reinforcement learning algorithm to improve the performance of autonomous steering in complex environments. By embedding a branching noisy dueling architecture, the proposed model derives steering commands directly from raw depth images with high efficiency. Specifically, our learning-based approach extracts a feature representation from depth inputs through convolutional neural networks and maps it simultaneously to linear and angular velocity commands through different streams of the network. Moreover, the training framework is carefully designed to improve learning efficiency and effectiveness. Notably, because it operates on depth images, the developed system transfers readily from virtual training scenarios to real-world deployment without any fine-tuning. The proposed method is evaluated against a series of baseline methods in various virtual environments. Experimental results demonstrate the superiority of the proposed model in terms of average reward, learning efficiency, success rate, and computational time. A variety of real-world experiments are also conducted, revealing the high adaptability of our model to both static and dynamic obstacle-cluttered environments.
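The abstract's branching dueling idea can be illustrated with a minimal NumPy sketch: a shared state value is combined with a separate advantage stream per action dimension (linear and angular velocity), using the standard dueling aggregation Q(s,a) = V(s) + A(s,a) − mean(A). The random weights, branch sizes, and feature vector below are illustrative stand-ins, not the paper's trained network, and the noisy layers are omitted.

```python
import numpy as np

def dueling_q(value, advantages):
    # Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    # Subtracting the mean advantage makes V and A identifiable.
    return value + advantages - advantages.mean()

rng = np.random.default_rng(0)

# Stand-in for CNN features extracted from a raw depth image.
features = rng.standard_normal(64)

# One shared value head, plus one advantage branch per action dimension
# (illustrative random weights; branch sizes are assumptions).
W_v = rng.standard_normal(64)        # scalar state value V(s)
W_lin = rng.standard_normal((5, 64)) # linear-velocity branch: 5 discrete speeds
W_ang = rng.standard_normal((7, 64)) # angular-velocity branch: 7 discrete turn rates

v = W_v @ features
q_linear = dueling_q(v, W_lin @ features)
q_angular = dueling_q(v, W_ang @ features)

# Each branch selects its own action; together they form the steering command.
linear_action = int(np.argmax(q_linear))
angular_action = int(np.argmax(q_angular))
```

The branching structure keeps the output size additive in the two action dimensions (5 + 7 heads) rather than multiplicative (5 × 7 joint actions), which is what lets the network emit both velocity commands simultaneously.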
URI: https://hdl.handle.net/10356/103342
http://hdl.handle.net/10220/47293
ISSN: 1424-8220
DOI: http://dx.doi.org/10.3390/s18113650
Rights: © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:EEE Journal Articles

Files in This Item:
Learn to Steer through Deep Reinforcement Learning.pdf (4.1 MB, Adobe PDF) — View/Open

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.