Title: Human trajectory prediction based on multi-sensor fusion
Authors: Qiu, Wenyuan
Keywords: DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Issue Date: 2019
Abstract: Since the beginning of the 21st century, robotic technology and the autonomous-driving market have developed rapidly. An autonomous vehicle is expected to analyze pedestrian behavior and predict pedestrians' future trajectories in order to plan its own motion safely and efficiently. This dissertation proposes an algorithm for predicting future human positions from historical positions. An unmanned ground vehicle equipped with a stereo camera and a 3D LiDAR is used as the platform. The approach is divided into two steps: human coordinate extraction and future position prediction. In the first step, the human coordinate model combines the human gravity coordinate with depth information. The human gravity coordinate is obtained by averaging the coordinates of six key points produced by a pose estimation algorithm, while the human depth is obtained by averaging all LiDAR depth values falling within the region of the human torso. In the second step, a vector superposition method predicts the pedestrian's future positions. For the experiments, a video dataset was collected containing several scenes of pedestrian movement from a first-person perspective. As a result, this dissertation builds a future position prediction system and a safety distance warning system, which show satisfactory results in general pedestrian scenes.
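The two-step pipeline described in the abstract can be illustrated with a minimal sketch. Note that the function names, the choice of six keypoints, and the mean-displacement form of the "vector superposition" extrapolation are assumptions for illustration; the dissertation's exact formulation may differ.

```python
import numpy as np

def gravity_coordinate(keypoints):
    """Step 1a (sketch): average the image coordinates of six pose
    keypoints (assumed, e.g., shoulders/hips/knees) into one body centre."""
    return np.mean(np.asarray(keypoints, dtype=float), axis=0)

def torso_depth(lidar_depths):
    """Step 1b (sketch): average all LiDAR depth returns that fall
    inside the torso region of the detected pedestrian."""
    return float(np.mean(lidar_depths))

def predict_positions(history, n_future=3):
    """Step 2 (sketch): vector-superposition-style extrapolation --
    advance the last observed position by the mean displacement
    vector of the observed trajectory, once per future step."""
    history = np.asarray(history, dtype=float)
    step = np.mean(np.diff(history, axis=0), axis=0)  # average motion vector
    return [history[-1] + (k + 1) * step for k in range(n_future)]
```

For example, a pedestrian observed at positions (0, 0), (1, 1), (2, 2) would be extrapolated to (3, 3), (4, 4), and so on; a safety warning could then be triggered whenever a predicted position falls within a threshold distance of the vehicle.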
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: Restricted Access (5.13 MB, Adobe PDF)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.