Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/175555
Full metadata record
DC Field | Value | Language
dc.contributor.author | Halim, Jessica | en_US
dc.date.accessioned | 2024-04-29T07:19:08Z | -
dc.date.available | 2024-04-29T07:19:08Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | Halim, J. (2024). Enhancing robustness and efficiency in visual SLAM through integration of deep learning-based semantic segmentation techniques. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175555 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/175555 | -
dc.description.abstract | Visual SLAM enables robots to traverse new environments without prior information. Using a camera as its main sensor, it performs localisation and mapping from visual information. To counter the vulnerability of visual SLAM to dynamic elements, Semantic SLAM augments visual SLAM systems with semantic information from machine learning models. This Final Year Project investigates Semantic SLAM and its specific challenges. Two main methods, each targeting a separate group of challenges, are conceptualized and evaluated. The first method tackles the inflexible use of semantic labels in Semantic SLAM by developing a combined moving probability that incorporates both semantic and geometric information, giving a more precise estimate of the probability that a feature point is moving within a scene. The method achieved aggregate improvements in both global and local accuracy across low-dynamic scenes, most significantly an overall 12.5% improvement in local translational accuracy over traditional Semantic SLAM. In low-dynamic scenes with faster and less stable camera motion, the combined probability yields improvements of 17 to 22% in local translational and rotational accuracy. These gains are well balanced against performance in high-dynamic scenes, where global accuracy improves by 92.5% over traditional SLAM: the average global error is kept to at most 13 centimetres, compared with 101 centimetres for traditional ORB-SLAM. The method also operates in real time by relocating semantic segmentation to a separate thread. The second method targets the imprecise segmentation boundaries of semantic segmentation models while further reducing execution time for RGB-D Semantic SLAM. It is a clustering-to-classification pipeline that splits the responsibilities of segmentation and classification between a clustering stage and a classification stage, with the aim of replacing the traditional use of semantic segmentation models. Although gains in segmentation and classification are limited, the method shows potential for more specialized use cases. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.relation | SCSE23-0587 | en_US
dc.subject | Computer and Information Science | en_US
dc.title | Enhancing robustness and efficiency in visual SLAM through integration of deep learning-based semantic segmentation techniques | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Loke Yuan Ren | en_US
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.description.degree | Bachelor's degree | en_US
dc.contributor.supervisoremail | yrloke@ntu.edu.sg | en_US
dc.subject.keywords | SLAM | en_US
dc.subject.keywords | VSLAM | en_US
dc.subject.keywords | Semantic SLAM | en_US
dc.subject.keywords | Semantic segmentation | en_US
item.grantfulltext | restricted | -
item.fulltext | With Fulltext | -
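
The two methods summarised in the abstract are not detailed in this record (the full report is under restricted access), but their core ideas can be illustrated. The first sketch below shows one way a combined moving probability could fuse a semantic prior with a geometric cue. This is a minimal sketch under assumed choices, not the project's implementation: `p_semantic` is assumed to come from a segmentation mask of movable classes, the geometric cue is assumed to be an epipolar/reprojection error, and `alpha`, `sigma`, and the 0.5 threshold are illustrative parameters.

```python
import numpy as np

def combined_moving_probability(p_semantic, epipolar_error, alpha=0.5, sigma=1.0):
    """Fuse a semantic prior with geometric evidence into one moving probability.

    p_semantic:     assumed prior probability that the feature belongs to a
                    movable class (e.g. person), read from a segmentation mask.
    epipolar_error: assumed geometric cue -- distance (pixels) of the feature
                    from the epipolar line predicted by the estimated camera
                    motion; large errors suggest the point itself is moving.
    alpha, sigma:   illustrative weighting and error-scale parameters.
    """
    # Geometric moving probability: large epipolar errors imply motion.
    p_geometric = 1.0 - np.exp(-0.5 * (epipolar_error / sigma) ** 2)
    # Weighted combination of semantic and geometric evidence.
    return alpha * p_semantic + (1.0 - alpha) * p_geometric

def keep_feature(p_move, threshold=0.5):
    # Features with a high combined moving probability would be excluded
    # from pose estimation and mapping.
    return p_move < threshold
```

The second sketch illustrates a clustering-to-classification pipeline for RGB-D input: the depth image is back-projected into a point cloud, clustered, and each cluster is handed to a lightweight classifier instead of running a full per-pixel segmentation network. DBSCAN and the `classify_fn` callback are assumptions for illustration; the abstract does not name the clustering algorithm or classifier actually used.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (metres) into an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1), valid

def cluster_then_classify(depth, intrinsics, classify_fn, eps=0.05, min_samples=50):
    """Cluster the back-projected point cloud, then label each cluster with a
    lightweight classifier in place of a per-pixel segmentation model."""
    points, valid = backproject(depth, *intrinsics)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    cluster_classes = {}
    for cid in np.unique(labels):
        if cid == -1:               # DBSCAN marks noise points with -1
            continue
        cluster_classes[cid] = classify_fn(points[labels == cid])
    return labels, cluster_classes, valid
```

Splitting the work this way keeps the per-frame cost dominated by clustering, which can be cheaper than dense network inference, at the price of coarser, per-cluster labels.
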
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
FYP_AmendedFinalReport.pdf (Restricted Access) | - | 2.62 MB | Adobe PDF

Page view(s): 156 (updated on Apr 21, 2025)
Download(s): 15 (updated on Apr 21, 2025)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.