Title: 3D localization for unmanned vehicles using visual inputs
Authors: Mou, Wei
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2017
Source: Mou, W. (2017). 3D localization for unmanned vehicles using visual inputs. Doctoral thesis, Nanyang Technological University, Singapore.

Abstract: This thesis describes our efforts to tackle the 3D localization problem for unmanned vehicles using only visual data. Given the video stream of a camera, we wish to estimate the camera's location accurately in real time. We propose an indoor visual odometry system with an RGB-D camera pointing at the ceiling. The term visual odometry is chosen for its functional similarity to wheel odometry, which incrementally estimates the position of a robot by counting the turns of its wheels over time. Similarly, visual odometry estimates the position of the robot by integrating the motion changes inferred from the images captured by its on-board cameras. The main contribution of this algorithm is the introduction of principal direction detection, which greatly reduces the error accumulation that affects most visual odometry approaches. The proposed approach operates in real time and performs well even under camera disturbances. For robots working in outdoor environments, an efficient visual odometry system is developed. Keypoints are detected using the FAST detector. The proposed feature descriptor is designed so that it is not only invariant to rotation and illumination changes, but the distance between two descriptors can also be computed very efficiently using Intel Streaming SIMD Extensions (SSE) instructions. The feature matching process is accelerated using a prior statistical analysis of maximum and minimum feature displacements. Experimental results show that the proposed system performs accurate visual odometry very efficiently in outdoor environments.
In order to localize distant objects on the sea surface, an automatic self-calibration approach for wide-baseline stereo cameras using sea surface images is introduced. Compared to traditional stereo calibration methods that rely on a calibration pattern, the proposed self-calibration approach automatically estimates the rotation matrices of the stereo rig's cameras using the sea horizon and a point at infinite distance.

URI: http://hdl.handle.net/10356/70516
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Theses
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.