Title: Dynamic object removal in point clouds for efficient SLAM
Authors: Nithish, Muthuchamy Selvaraj
Keywords: DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Issue Date: 2019
Abstract: Autonomous cars are one of the greatest technological advancements of this decade and a giant leap for the transportation industry and mobile robotics. Autonomous cars face various challenges on the path to Level 5 autonomy, and one of these challenges is finding fast and reliable algorithms for simultaneous localisation and mapping (SLAM) in real-time environments. SLAM algorithms enable an autonomous car to perceive its environment and identify its position relative to it. A major limitation of SLAM algorithms, especially during map building, is the assumption of static environmental features, i.e. an environment without any dynamic or moving objects. Research on SLAM over the past years has produced state-of-the-art algorithms, but virtually all of them assume the environment to be static. In real-world environments, however, autonomous cars encounter many moving objects such as pedestrians, cyclists, and pets. This problem is not only associated with autonomous cars; it is common to all mobile robots. To enable research progress, human effort is invested in manually identifying and removing the dynamic objects before proceeding with SLAM research. This approach is time-consuming, labour-intensive, less reliable, and does not provide a permanent solution. In this dissertation, a novel algorithm is proposed that can identify and remove dynamic objects in the point clouds obtained from a Light Detection and Ranging (LiDAR) sensor and reconstruct a static scene. The algorithm acts as a pre-processing stage, supplying a static scene to traditional SLAM algorithms, and is tailored for autonomous vehicles with low computational complexity. Experiments were performed using the KITTI Vision Benchmark Suite dataset, which contains real LiDAR data obtained from cars driving on the streets of Karlsruhe, Germany. The algorithm effectively removes the dynamic objects and reconstructs a static scene.
This dissertation is a small step in the journey to make autonomous cars a reality, and its applications are not limited to autonomous cars but extend to mobile robots in general. It makes traditional SLAM algorithms more robust and reliable.
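The abstract describes removing dynamic objects from LiDAR point clouds before the scans reach a SLAM pipeline, but does not detail the method. A minimal sketch of one common approach is shown below: voxelize successive scans and keep only points whose voxels are occupied across all frames, treating transiently occupied voxels as dynamic. All names (`voxel_key`, `remove_dynamic_points`), the voxel size, and the frame-count heuristic are illustrative assumptions, not the dissertation's actual algorithm.

```python
def voxel_key(pt, size=0.5):
    """Map a 3-D point to its integer voxel index (assumed voxel size)."""
    return tuple(int(c // size) for c in pt)

def remove_dynamic_points(scans, size=0.5, min_hits=None):
    """Keep points from the latest scan whose voxel is occupied in
    at least ``min_hits`` scans (default: all of them).

    Voxels seen in every frame are treated as static structure;
    voxels that appear in only some frames are assumed to contain
    moving objects and are dropped. This is a toy heuristic, not
    the dissertation's method.
    """
    if min_hits is None:
        min_hits = len(scans)
    counts = {}
    for scan in scans:
        # Count each voxel at most once per scan.
        for key in {voxel_key(p, size) for p in scan}:
            counts[key] = counts.get(key, 0) + 1
    static = {k for k, n in counts.items() if n >= min_hits}
    return [p for p in scans[-1] if voxel_key(p, size) in static]

# Two scans of the same scene: a static wall plus one moving point.
scan_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 0.0)]
scan_b = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (7.0, 5.0, 0.0)]
static_scene = remove_dynamic_points([scan_a, scan_b])
# → [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  (the moving point is dropped)
```

A real pipeline would also compensate for the vehicle's own motion (ego-motion) before comparing scans, since otherwise every point appears to move; that step is omitted here for brevity.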
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: Restricted Access (5.08 MB, Adobe PDF)

Page view(s): 5 (checked on Oct 26, 2020)

Download(s): 5 (checked on Oct 26, 2020)



Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.