Please use this identifier to cite or link to this item:
Full metadata record
DC Field | Value | Language
dc.contributor.author | Jiang, Haoge | en_US
dc.identifier.citation | Jiang, H. (2024). Learning-based robot navigation in dynamic environments: from indoor scenes to human crowd scenes. Doctoral thesis, Nanyang Technological University, Singapore.
dc.description.abstract | The increasing presence of autonomous robots in numerous settings necessitates advanced navigation techniques that can handle complex environments. This doctoral thesis presents our work on autonomous navigation with Deep Reinforcement Learning (DRL) methods in complex and dynamic environments, covering both indoor scenes and outdoor human crowd scenarios. The earlier phase of our work focused on indoor scene navigation with learning-based approaches. We first proposed the dueling twin delayed deep deterministic policy gradient (Dueling-TD3) to address issues of the original TD3, such as inefficient learning and slow convergence, enabling the model to derive better actions for mobile robot navigation. We incorporate the dueling network architecture into the critic network to increase the precision of the Q-value estimate. The results demonstrate that our proposed model outperforms the original model in route planning capability. To handle more dynamic indoor situations, we then proposed iTD3-CLN, a DRL-based low-level motion controller for map-less autonomous navigation in dynamic indoor scenes. By adding N-step returns, Prioritized Experience Replay, and a channel-based Convolutional Laser Network (CLN) architecture to the TD3 algorithm, the proposed method achieves superior navigation performance compared to the traditional Dynamic Window Approach (DWA). Our extensive studies show that, compared to the state-of-the-art DRL model TD3, our model achieves remarkable improvements in training efficiency, accumulated reward, and generalization to unseen environments. Despite the effectiveness of iTD3-CLN in indoor scene navigation, mobile robot applications are expanding from static indoor environments to outdoor environments that are often crowded with humans, which demands a stronger crowd inference ability from the agent. The next important component of this thesis, MP-GatedGCN-RL, also based on the DRL framework, is proposed to address the challenges of navigating in outdoor crowded environments. It models the environment as a graph and employs a Message-Passing Graph Convolutional Network (MP-GCN) with edge-wise gating mechanisms to encode asymmetric human-human and human-robot interactions. This approach demonstrates significant improvements in success rate and navigation time in simulated environments derived from the ETH/UCY pedestrian datasets, compared to both the conventional collision avoidance method ORCA and state-of-the-art DRL-based approaches. In summary, this thesis advances autonomous robot navigation with a series of DRL-based approaches that allow robots to navigate efficiently and safely in a wide range of environments, from dynamic indoor scenes to outdoor human crowds. Experimental results verify the effectiveness of the proposed learning-based navigation methods. | en_US
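Two of the TD3 enhancements named in the abstract can be illustrated in isolation. The following is a minimal sketch of the standard dueling Q-value aggregation and the N-step TD target; these are generic textbook formulations with illustrative names and shapes, not the thesis's actual implementation.

```python
# Generic sketches of two enhancements mentioned in the abstract
# (not the thesis's code; names and shapes are illustrative).

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').

    Subtracting the mean advantage keeps the value and advantage
    streams identifiable and tends to sharpen Q-value estimates.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def n_step_return(rewards, bootstrap_value, gamma):
    """N-step TD target: discounted sum of the next N rewards plus a
    discounted bootstrap from the critic at step t+N."""
    ret = bootstrap_value
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret
```

For example, with rewards [1, 1, 1], a zero bootstrap, and gamma = 0.5, the 3-step target is 1 + 0.5 + 0.25 = 1.75; longer horizons propagate sparse navigation rewards back faster than 1-step TD3 targets, which is the usual motivation for this enhancement.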
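The edge-wise gating idea behind MP-GatedGCN-RL can be sketched as one round of message passing in which each directed edge carries a learned gate; the sigmoid gate, sum aggregation, and weight shapes below are assumptions for illustration, not the thesis's exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_message_passing(h, edges, w_msg, w_gate):
    """One round of message passing with edge-wise gates (illustrative).

    h:      (N, d) node features (robot and pedestrians as graph nodes)
    edges:  directed (src, dst) pairs; directionality is what lets the
            gate treat human->robot and robot->human interactions
            asymmetrically
    w_msg:  (d, d) message transform
    w_gate: (d, 2d) gate transform on the concatenated endpoint features
    """
    out = np.zeros_like(h)
    for src, dst in edges:
        # The gate depends on both endpoints, so the same neighbor can
        # be weighted differently depending on the receiving node.
        gate = sigmoid(w_gate @ np.concatenate([h[src], h[dst]]))
        out[dst] += gate * (w_msg @ h[src])  # gated, summed messages
    return out
```

With zero gate weights every gate is sigmoid(0) = 0.5, so each message is simply halved; trained weights instead learn to amplify influential neighbors (e.g. a pedestrian on a collision course) and suppress irrelevant ones.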
dc.publisher | Nanyang Technological University | en_US
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). | en_US
dc.title | Learning-based robot navigation in dynamic environments: from indoor scenes to human crowd scenes | en_US
dc.type | Thesis-Doctor of Philosophy | en_US
dc.contributor.supervisor | Jiang Xudong | en_US
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.description.degree | Doctor of Philosophy | en_US
item.fulltext | With Fulltext | -
Appears in Collections: EEE Theses
Files in This Item:
File | Description | Size | Format
PhD_Thesis_Jiang_Haoge.pdf | | 11.82 MB | Adobe PDF
Updated on Jul 22, 2024

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.