Full metadata record
dc.contributor.author: Karunsekera, H. Hasith Ruchiran (en_US)
dc.identifier.citation: Karunsekera, H. H. R. (2020). Vision based solutions for autonomous navigation. Doctoral thesis, Nanyang Technological University, Singapore. (en_US)
dc.description.abstract: This thesis presents a study on computer vision solutions for autonomous navigation. Among the many functionalities required for autonomous navigation, this study covers obstacle detection and multiple object tracking. Obstacle detection is studied under two sub-categories: positive and negative obstacle detection. Positive obstacles lie on the road surface (e.g., vehicles and pedestrians), while negative obstacles lie below the road surface (e.g., holes and potholes). The first challenge addressed is identifying obstacles on the drivable road surface with depth information. The key contribution of the first part is an efficient stereo-vision framework for understanding the road surface, calculating the road angle, detecting obstacles in a class-agnostic manner, and performing instance segmentation of objects. The proposed framework has been tested in real time with live data on public roads under different weather conditions and has been shown to be effective. The second part of the study addresses negative obstacle detection for safe robot navigation. The key contribution in this part is an energy minimization approach for negative obstacle region detection that uses stereo output, colour information, and saliency detection. The proposed method has been tested in different environments, including concrete-road, tar-road, and corridor scenarios, and comparison with recent work has shown its effectiveness. The third contribution of the thesis is an efficient framework for multiple object tracking that achieves real-time performance with state-of-the-art accuracy. The tracking framework follows the tracking-by-detection architecture.
The matching cost is calculated by combining grid-based colour histogram matching, grid-based structure matching, predicted object motion matching, and predicted size matching. The proposed framework achieves the expected efficiency, running at 150+ fps on KITTI data and at 27.0 fps and 17.1 fps on the MOT17 training and test sets respectively, with accuracy comparable to recent work. The final contribution of the study is learning good features to track as an extension to an existing detection network. Object detection is the first step of the tracking-by-detection architecture, so learning tracking features as an extension of the detection network reduces the computational complexity of the whole pipeline. The learnt features improve tracking accuracy compared to the hand-crafted ones used in the framework above, at the cost of reduced tracking speed. A 3D convolutional network architecture is proposed to capture inter-dependencies in the spatio-temporal domain. (en_US)
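The stereo-based obstacle detection described in the abstract relies on recovering depth from disparity. A minimal sketch of the standard pinhole-stereo relation (Z = f * B / d); the focal length and baseline defaults below are illustrative KITTI-like values, not parameters taken from the thesis:

```python
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.54):
    """Metric depth from pixel disparity via the pinhole stereo relation
    Z = f * B / d. Default focal length and baseline are illustrative
    (roughly KITTI-like), not values from the thesis."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Nearby points produce large disparities, distant points small ones:
near = depth_from_disparity(378.0)   # 700 * 0.54 / 378 = 1.0 m
far = depth_from_disparity(18.9)     # 20x smaller disparity -> 20 m
```

Once per-pixel depth is available, points rising above the fitted road plane can be flagged as positive obstacle candidates without any class-specific detector.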
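The combined matching cost for the tracking-by-detection framework can be sketched as a weighted blend of appearance, predicted-motion, and predicted-size cues. This is a simplified illustration only: the weights, the use of a single whole-object histogram instead of the grid-based matching described above, and all function and field names are assumptions, not the thesis's actual formulation:

```python
import numpy as np

def histogram_similarity(hist_a, hist_b):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return float(np.minimum(hist_a, hist_b).sum())

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def matching_cost(det, track, weights=(0.4, 0.3, 0.3)):
    """Combine appearance, predicted-motion and predicted-size cues into one
    cost (lower = better match). Weights and cue set are illustrative."""
    w_app, w_motion, w_size = weights
    appearance = histogram_similarity(det["hist"], track["hist"])
    motion = iou(det["box"], track["predicted_box"])
    size = min(det["area"], track["predicted_area"]) / max(
        det["area"], track["predicted_area"])
    similarity = w_app * appearance + w_motion * motion + w_size * size
    return 1.0 - similarity
```

With pairwise costs of this form filled into a detections-by-tracks matrix, the per-frame assignment is typically solved with the Hungarian algorithm; keeping every cue cheap to compute is what makes frame rates like those reported above feasible.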
dc.publisher: Nanyang Technological University (en_US)
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). (en_US)
dc.subject: Engineering::Electrical and electronic engineering::Electronic systems::Signal processing (en_US)
dc.title: Vision based solutions for autonomous navigation (en_US)
dc.type: Thesis-Doctor of Philosophy (en_US)
dc.contributor.supervisor: Wang Han (en_US)
dc.contributor.school: School of Electrical and Electronic Engineering (en_US)
dc.description.degree: Doctor of Philosophy (en_US)
dc.contributor.research: ST Engineering-NTU Corporate Lab (en_US)
item.fulltext: With Fulltext
Appears in Collections:EEE Theses
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.