Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/78537
Title: Visual and inertial odometry for mobile robot
Authors: Leong, Jing Chen
Keywords: DRNTU::Engineering::Mechanical engineering
Issue Date: 2019
Abstract: In recent years, unmanned aerial vehicles (UAVs) have been well received in both consumer and industrial applications owing to their versatility and cost effectiveness. During the Singapore International Water Week, the national water agency (PUB) showcased smart technologies for performing various complex tasks, such as deep tunnel sewerage system inspection [1]. Instead of sending inspection personnel, PUB deployed a UAV to carry out inspections in the deep tunnel sewerage system, an environment hostile to humans. For a UAV to manoeuvre in such an environment, it must be able to localize itself. Most developers incorporate a global positioning system (GPS) receiver for this purpose; however, GPS is ineffective when its signal is weak or absent, such as in indoor urban environments and tunnel networks. An alternative approach is to localize the UAV using on-board sensors such as an inertial measurement unit (IMU), cameras, and lidar. The objective of this project is to localize a UAV with respect to features in the environment for tunnel and indoor tasks using vision and inertial sensors. In a tunnel environment, features can be scarce and poorly illuminated. A camera performs poorly in the dark and at higher flight speeds because of motion blur, while an IMU is prone to drift when held stationary or travelling at low speed because of the higher noise it exhibits when static. Under these conditions, the IMU complements the camera at higher flight velocities, where the IMU is effective, while the camera complements the IMU at lower speeds, where the camera is effective at determining changes in motion. Throughout the project, several challenges were encountered, such as camera and IMU calibration issues and hardware synchronization between the camera and IMU.
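The camera/IMU complementarity described above can be illustrated with a minimal, hypothetical sketch. Note that the project itself uses the tightly coupled VINS-Mono estimator; this loosely coupled speed-weighted blend, with assumed threshold values `v_low` and `v_high` not taken from the thesis, only shows the intuition of each sensor covering the other's weak regime.

```python
import numpy as np

def fuse_velocity(v_cam, v_imu, speed, v_low=0.2, v_high=1.5):
    """Blend camera- and IMU-derived velocity estimates (m/s).

    Illustrative only: below v_low the camera dominates (the IMU drifts when
    near-static); above v_high the IMU dominates (the camera suffers motion
    blur). v_low/v_high are assumed tuning thresholds, not thesis values.
    """
    # Weight on the IMU ramps linearly from 0 to 1 as speed rises
    # from v_low to v_high.
    w_imu = np.clip((speed - v_low) / (v_high - v_low), 0.0, 1.0)
    return w_imu * np.asarray(v_imu, float) + (1.0 - w_imu) * np.asarray(v_cam, float)
```

For example, at 0.05 m/s the fused estimate equals the camera's, at 2.0 m/s it equals the IMU's, and at intermediate speeds it interpolates between the two.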
In this project, a UAV capable of self-localization was set up, integrating an on-board computing unit, a camera, and an IMU. Prior to state estimation, camera calibration, IMU calibration, and hardware synchronization of the camera and IMU were carried out. State estimation was performed with the VINS-Mono estimator. Experimental evaluation was then carried out to compare the UAV system's localization performance with ground-truth data at the Motion Analysis Laboratory of Nanyang Technological University. In addition, a comparison study was made to assess the robustness and reliability of the VINS-Mono estimator and the UAV system across various flight velocities and environment feature settings. Based on the results obtained, it can be concluded that the implementation is practical with a synchronized camera-IMU setup, as it is capable of localizing a UAV in a GPS-denied environment with an average root mean square error kept under 25 cm.
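The reported accuracy figure can be made concrete with a small sketch of the evaluation metric: root mean square error of the estimated trajectory against motion-capture ground truth. This is an assumed formulation (time-aligned N×3 position arrays in metres), not code from the thesis.

```python
import numpy as np

def trajectory_rmse(estimated, ground_truth):
    """RMSE of 3-D position error over a time-aligned trajectory pair.

    Both inputs are assumed to be (N, 3) arrays of positions in metres,
    already associated sample-by-sample with the ground truth.
    """
    err = np.asarray(estimated, float) - np.asarray(ground_truth, float)
    # Root mean square of the per-sample Euclidean error.
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```

Under this metric, a run meeting the figure reported above would satisfy `trajectory_rmse(est, gt) < 0.25`.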
URI: http://hdl.handle.net/10356/78537
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:MAE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: C020_FYP Report_Leong Jing Chen.pdf (Restricted Access)
Description: Main Article
Size: 3.4 MB
Format: Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.