Please use this identifier to cite or link to this item:
|Title:||Tracking a person based on RGB-D data from a mobile robot|
|Authors:||Lim, Ying|
|Keywords:||DRNTU::Engineering::Electrical and electronic engineering|
|Issue Date:||2017|
|Abstract:||In recent years, there has been increasing interest in the field of human detection, owing to its importance in many real-life applications such as surveillance and human-robot interaction. To understand human behaviour in different scenarios, the ability to continuously track a person is the next step in this field of research. To keep up with fast human movements, a suitable method is needed to improve the performance and robustness of the detection system; additional features such as depth information can further enhance its efficiency. This project therefore explores the use of RGB-D (RGB-Depth) information taken from RGB-D sensors to help a robot detect and follow a person while the person is moving. One drawback of vision-based tracking methods is that small errors from individual frames accumulate over a long run, because in each frame the background is not segmented and is tracked as well. With RGB-D sensors such as the Kinect, there is the advantage of a large depth difference between the human and the environment, so based on this depth difference it is easier to segment the human from the surrounding background. With RGB-D information, it becomes possible to reduce the small shift errors from each frame by removing the background and re-centring the tracking window based on the segmentation results. The first part of the project is the pre-processing of the images into point cloud data, combining the depth image and the RGB image from the dataset. Each image is transformed into a 3D point cloud in which the human and the ground plane can be clearly differentiated.
Based on the spatial relationship between consecutive frames of the target, candidates for the target person are selected. Both the target person and the whole 3D point cloud are then fed into the tracking algorithm, and the tracking results are compared between two different motion tracking algorithms.|
|URI:||http://hdl.handle.net/10356/70680|
|Rights:||Nanyang Technological University|
|Fulltext Permission:||restricted|
|Fulltext Availability:||With Fulltext|
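The abstract's pre-processing step (back-projecting a depth image into a 3D point cloud, then separating the person from the background by depth) can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) and the `max_depth` threshold are assumed values for a Kinect-style sensor.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an Nx3 point cloud
    using the pinhole camera model. Intrinsics are assumed known,
    e.g. from a Kinect calibration (hypothetical values below)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

def segment_foreground(points, max_depth=3.0):
    """Crude depth-based foreground/background split: keep only points
    nearer than max_depth, exploiting the large depth gap between the
    person and the background described in the abstract."""
    return points[points[:, 2] < max_depth]

# Toy example: a 2x2 depth image with one invalid pixel (depth 0).
depth = np.array([[1.0, 0.0],
                  [2.0, 5.0]])
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
fg = segment_foreground(cloud, max_depth=3.0)
print(len(cloud), len(fg))  # prints "3 2": 3 valid points, 2 within 3 m
```

In a real pipeline the RGB image would be registered to the depth image so that each 3D point also carries a colour, and ground-plane removal (e.g. by plane fitting) would follow before selecting the target candidate.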
|Appears in Collections:||EEE Student Reports (FYP/IA/PA/PI)|
Updated on Jun 23, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.