Full metadata record
DC Field | Value | Language
dc.contributor.author | Singh Gaurav | en_US
dc.identifier.citation | Singh Gaurav (2022). Fast and robust visual SLAM for dynamic environments. Doctoral thesis, Nanyang Technological University, Singapore.
dc.description.abstract | Autonomous mobile robots need to perform self-localization with respect to their environments in order to achieve safe navigation. The self-localization of these autonomous robots is usually jointly addressed as the problem of Simultaneous Localization and Mapping (SLAM). The focus of SLAM has recently shifted towards vision-based approaches, which offer higher robustness as well as the ability to generate semantically rich maps. However, visual SLAM (vSLAM) systems suffer from high computational complexity and are unable to deal with dynamic objects in the scene and changing scene conditions. This thesis aims to develop a vSLAM framework that is robust to such dynamic environments while being able to run efficiently on resource-constrained platforms. We first propose a real-time solution for Visual Odometry (VO) that achieves high pose accuracy. In particular, an efficient feature correspondence setup scheme is introduced to generate high-quality feature matches that are evenly distributed over the image. A new adaptive technique that rapidly and efficiently removes outliers is presented, which overcomes the computational complexity of existing outlier removal schemes. In addition, a new pose optimization step is introduced to mitigate problems associated with far features, which often lead to high residual errors. The proposed VO is evaluated on the popular KITTI dataset by comparing it with top-performing VO and vSLAM systems in terms of speed and accuracy. Results show that the proposed VO achieves the fastest speed compared to all the top-ranked VO and vSLAM systems on the KITTI leaderboard. The proposed VO is 47% faster than the state-of-the-art ORB-SLAM2 with comparable accuracy. Next, studies are undertaken to examine the impact of dynamic objects on the accuracy of pose estimates. Our studies highlight the importance of distinguishing between the motion states of potentially moving objects for vSLAM in highly dynamic environments.
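The evenly distributed feature matches described above are commonly achieved by bucketing matches into grid cells and keeping only the strongest per cell. The following is a minimal illustrative sketch of that idea; the function name, grid size, and per-cell budget are assumptions, not taken from the thesis.

```python
# Hypothetical sketch: spread feature matches evenly over the image by
# bucketing them into a coarse grid and keeping the top-scoring matches
# in each cell. All parameters here are illustrative assumptions.

def bucket_matches(matches, img_w, img_h, grid=8, per_cell=2):
    """matches: list of (x, y, score). Returns an evenly spread subset."""
    cell_w, cell_h = img_w / grid, img_h / grid
    buckets = {}
    for x, y, score in matches:
        key = (min(int(x // cell_w), grid - 1), min(int(y // cell_h), grid - 1))
        buckets.setdefault(key, []).append((x, y, score))
    kept = []
    for cell in buckets.values():
        cell.sort(key=lambda m: m[2], reverse=True)  # strongest matches first
        kept.extend(cell[:per_cell])                 # cap matches per cell
    return kept
```

Capping matches per cell prevents texture-rich regions from dominating the pose estimate, which is one plausible way to obtain the even distribution the abstract refers to.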
We propose a semantic vSLAM framework that increases the robustness of existing vSLAM systems by accurately removing moving objects from the scene so that they do not contribute to pose estimation and mapping. Semantic information is fused with the motion states of the scene via a probability framework, enabling accurate and robust moving object extraction while retaining the useful features for pose estimation and mapping. We performed extensive experiments on well-known datasets to show that the proposed technique outperforms existing vSLAM methods in complex indoor and outdoor environments, under various dynamic scenarios such as crowded scenes. To accelerate our semantic vSLAM framework on embedded platforms, we propose a lightweight keyframe-only semantic generation method. Our approach extracts semantics only on keyframes (i.e., frames with significant changes in image content), and semantic propagation is used to compensate for the changes in the intermediate frames. This is achieved by computing a dense transformation map from the available feature flow vectors. A novel motion state detection algorithm is employed to compensate for the propagated semantics by identifying regions in the scene with a high moving probability. This information is then fused with semantic cues using the previously proposed probability framework to retain the useful features for pose estimation and mapping. We implemented our semantic vSLAM framework on the embedded Jetson TX1 and performed extensive experiments on four well-known datasets to show that it outperforms existing vSLAM methods in complex indoor and outdoor environments under various dynamic scenarios. Finally, we extend our semantic vSLAM framework to long-term localization by enabling it to adapt to varying scene conditions.
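One simple way to fuse a semantic cue (how likely an object class is to move, e.g. "person" vs. "building") with per-frame motion evidence, in the spirit of the probability framework described above, is a binary Bayes update. The fusion rule, function names, and threshold below are assumptions for illustration only, not the thesis's actual formulation.

```python
# Hypothetical sketch: combine a semantic moving-prior with observed
# motion evidence into a posterior moving probability, then keep only
# likely-static features for pose estimation. Threshold is an assumption.

def fuse_moving_probability(semantic_prior, motion_likelihood):
    """Binary Bayes fusion: posterior odds = prior odds * evidence odds."""
    prior_odds = semantic_prior / (1.0 - semantic_prior)
    evidence_odds = motion_likelihood / (1.0 - motion_likelihood)
    odds = prior_odds * evidence_odds
    return odds / (1.0 + odds)

def keep_feature(semantic_prior, motion_likelihood, threshold=0.5):
    """Retain a feature for pose estimation only if it is likely static."""
    return fuse_moving_probability(semantic_prior, motion_likelihood) < threshold
```

Under this sketch, a feature on a "person" (high prior) that also shows motion evidence is rejected, while a feature with a low prior and little motion evidence is retained — matching the abstract's goal of discarding moving objects without throwing away useful static features.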
To achieve this, we increase the robustness of the loop detection (relocalization) task in vSLAM by using global semantic structure descriptors, which are more stable than conventional local features under changing scene conditions. We introduce a novel hierarchical loop detection method that relies on the global semantic structure descriptors to first identify a coarse location, which is then refined using local feature descriptor-based bag-of-words (BOW) matching. In addition, semantic class-wise local BOW vocabulary trees are built to increase the descriptiveness of the vocabulary for within-class words. The experiments demonstrate that the proposed hierarchical loop detection method has significantly lower query times than existing state-of-the-art loop detection methods while achieving enhanced recall rates at 100% precision. Furthermore, the proposed hierarchical loop detection does not require any offline training for vocabularies or places. | en_US
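The coarse-to-fine lookup described above can be sketched as a two-stage search: a global descriptor narrows the database to a few candidates, and a local BOW score picks the best one. This is a minimal illustrative sketch only — the descriptor formats, cosine scoring, and acceptance threshold are assumptions, not the thesis's actual method.

```python
# Hypothetical sketch of hierarchical loop detection: stage 1 ranks the
# database by a global (semantic-structure-like) descriptor; stage 2
# refines the top candidates with a local BOW similarity. All vectors
# and thresholds here are illustrative assumptions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def detect_loop(query_global, query_bow, database, top_k=3, accept=0.8):
    """database: list of (frame_id, global_desc, bow_vec). Returns a frame_id or None."""
    # Stage 1: coarse candidates by global descriptor similarity (cheap filter).
    coarse = sorted(database, key=lambda e: cosine(query_global, e[1]),
                    reverse=True)[:top_k]
    # Stage 2: refine the shortlist with local BOW similarity.
    best_id, best_score = None, 0.0
    for frame_id, _, bow in coarse:
        s = cosine(query_bow, bow)
        if s > best_score:
            best_id, best_score = frame_id, s
    return best_id if best_score >= accept else None
```

Because stage 2 only scores the `top_k` shortlist rather than the whole database, query time drops roughly in proportion to the shortlist size — one plausible reason a hierarchical scheme can report lower query times than flat BOW search.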
dc.publisher | Nanyang Technological University | en_US
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). | en_US
dc.subject | Engineering::Computer science and engineering::Computer applications | en_US
dc.title | Fast and robust visual SLAM for dynamic environments | en_US
dc.type | Thesis-Doctor of Philosophy | en_US
dc.contributor.supervisor | Lam Siew Kei | en_US
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.description.degree | Doctor of Philosophy | en_US
dc.contributor.research | Hardware & Embedded Systems Lab (HESL) | en_US
item.fulltext | With Fulltext | -
Appears in Collections: SCSE Theses
Files in This Item:
File | Description | Size | Format
Final version.pdf |  | 21.16 MB | Adobe PDF
Download(s): 50 (updated on Nov 28, 2023)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.