Title: Urban vehicle ego lane detection
Authors: Srinivasan Karunathrinathan Harisudan
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2017
Abstract: Detection of lane markings enhances the capabilities of Advanced Driver Assistance Systems (ADAS), as lane markings carry vital information for navigation. For lane detection, sensors such as radar, laser, and vision systems can be used; the vision system is the most cost-efficient and is the principal approach. Cameras mounted at the front of the vehicle capture real-time images of the road scene, and these images undergo several processing steps so that the lanes can be detected. In this thesis, a simple heuristic method for ego-lane detection on urban roads has been developed, a modified version of an existing research method. The ego lane is the lane in which the vehicle is currently being driven. In this modified heuristic approach, a region of interest is first selected from the image to capture the road area, and filtering techniques are applied to identify the lane boundaries. The Hough Transform is then applied to obtain lane positions, and a clustering method groups the detected points into left and right lanes. To discard outliers and pick the best estimate of the inliers, RANSAC (RANdom SAmple Consensus) is used. The points on the lanes are identified from this best estimate, and the lane in which the vehicle is being driven is drawn. As part of the thesis, a classical pixel-based performance evaluation is carried out. Input road images from numerous urban road scenes are taken from the KITTI and CALTECH road data sets, on which the methods are simulated; the data sets and benchmark cover urban road scenarios. Using the ground-truth data available in the KITTI benchmark, the results are compared and the performance evaluated. The proposed modified heuristic approach achieved an accuracy of 74.88%, improving on the existing algorithm used in the library module.
URI: http://hdl.handle.net/10356/69507
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Theses
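The RANSAC step named in the abstract can be illustrated with a small sketch. This is a toy implementation, not the thesis code: it assumes a simple y = m*x + c line model for the candidate lane points, and the function and parameter names (`ransac_line`, `n_iters`, `inlier_tol`) are hypothetical.

```python
import random

def ransac_line(points, n_iters=200, inlier_tol=1.0, seed=0):
    """Illustrative RANSAC line fit (toy, not the thesis implementation).

    Repeatedly samples two points, fits a candidate line y = m*x + c,
    counts the points within inlier_tol of it, and keeps the candidate
    with the most inliers. Returns ((m, c), inliers).
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical pairs; this toy model cannot represent them
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for (x, y) in points
                   if abs(y - (m * x + c)) <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers: RANSAC recovers the
# line from the consensus set and rejects the outliers.
points = [(x, 2 * x + 1) for x in range(10)] + [(3, 30), (7, -5)]
(m, c), inliers = ransac_line(points)
```

In the thesis pipeline the input points would come from the Hough Transform and clustering stages rather than being synthetic, and one such fit would be run per lane side.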
Updated on Oct 17, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.