Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/162903
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chong, Shen Rui | en_US
dc.date.accessioned | 2022-11-14T01:34:45Z | -
dc.date.available | 2022-11-14T01:34:45Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Chong, S. R. (2022). Server-edge visual localization system for autonomous agents. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/162903 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/162903 | -
dc.description.abstract | SLAM algorithms are commonly used to generate a map for autonomous robot navigation and obstacle avoidance, with robots simultaneously mapping the environment around themselves and localizing themselves within it. Since SLAM algorithms rely on approximate solutions and are commonly executed on embedded platforms in real time, it is crucial that they be both accurate and efficient. One way of increasing their efficiency is collaborative SLAM, which allows multiple agents to participate in the mapping and localization process concurrently. Ensuring that collaborative SLAM algorithms run properly under real-world conditions, such as when agents connect midway through a session or when agents with different camera characteristics and movement patterns are used, would allow these algorithms to serve a wider variety of applications. In this project, we tested COVINS, a collaborative SLAM framework, with up to two distinct types of agents running visual-inertial odometry through ORB-SLAM3. Specifically, a Tello EDU drone and an Intel RealSense Depth Camera D435i were used as the agents. Calibration was performed before running the framework in the Hardware & Embedded Systems Lab (HESL) at NTU. Both on-the-fly connection and concurrent connection to the framework were tested, and trajectory estimates for each agent and covisibility edges between the agents' keyframes were obtained. It was found that on-the-fly connections were well supported, while agents must first perform well with visual-inertial odometry in order to integrate properly with the COVINS framework. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.relation | SCSE21-0703 | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Server-edge visual localization system for autonomous agents | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Lam Siew Kei | en_US
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.description.degree | Bachelor of Engineering (Computer Science) | en_US
dc.contributor.supervisoremail | ASSKLam@ntu.edu.sg | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | restricted | -
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
Chong Shen Rui - Final FYP Report.pdf (Restricted Access) | | 2.71 MB | Adobe PDF

Page view(s): 81 (updated on Apr 14, 2024)
Download(s): 10 (updated on Apr 14, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.