Title: Conditional Bayesian filtering for robot navigation and human tracking
Authors: Liu, Jigang
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Issue Date: 2015
Source: Liu, J. (2017). Conditional Bayesian filtering for robot navigation and human tracking. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: As a model for estimating the state of a dynamical system from noisy measurements, Bayesian filtering (BF) has become one of the fastest-growing research fields of the last decade. Bayesian filtering techniques play an important role in a wide variety of engineering applications, such as navigation, control engineering, finance and aerospace engineering. Despite the remarkable advances made by researchers, existing Bayesian filtering techniques, such as the Kalman filter, the extended Kalman filter (EKF) and particle filters, are still challenged by prediction accuracy.

To improve the prediction accuracy of the EKF, we present a novel model, called the conditional EKF, obtained by introducing a condition with respect to image data. To predict the system state in the prediction step, an image-based dynamic model is built to replace the smooth-motion model commonly used in the EKF. Based on the conditional EKF, a robust visual simultaneous localization and mapping (SLAM) system, named C-SLAM, is presented for both map-building and camera-localization tasks. SLAM is an effective method developed for autonomous robot applications to obtain navigation information. Traditional EKF-based SLAM systems require a predefined or learnt dynamic model to predict the camera state in the prediction step. However, these dynamic models do not necessarily describe complex camera motion in the real world correctly. For example, when the camera experiences a sudden velocity change, or when low-frame-rate cameras are used, the states predicted by these models deviate significantly from the ground truth, leading to growing state uncertainty and even failure to track features. To solve this problem, in the prediction step C-SLAM employs optical flow and epipolar geometry to predict the camera pose, instead of the smooth-motion model widely used in existing SLAM systems.
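For context on the predict/update cycle that the conditional EKF modifies, the following is a minimal NumPy sketch — not the thesis implementation. The toy constant-velocity model stands in for the smooth-motion model that C-SLAM replaces; all variable names and noise levels are illustrative assumptions.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Prediction step: propagate the state mean and covariance
    through the (possibly nonlinear) dynamic model f with Jacobian F."""
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, h, H, R):
    """Update step: correct the prediction with measurement z."""
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: 1D position-velocity state with a smooth
# (constant-velocity) motion model as the dynamic model f.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
f = lambda x: F @ x
H = np.array([[1.0, 0.0]])                  # we observe position only
h = lambda x: H @ x
Q, R = 0.01 * np.eye(2), np.array([[0.1]])

x, P = np.array([0.0, 1.0]), np.eye(2)
x_pred, P_pred = ekf_predict(x, P, f, F, Q)
x_upd, P_upd = ekf_update(x_pred, P_pred, np.array([1.2]), h, H, R)
```

The conditional EKF keeps this cycle but conditions the prediction on the current image: in C-SLAM, `f` is effectively replaced by a pose prediction derived from optical flow and epipolar geometry rather than a fixed motion model.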
The limitations of current particle filter approaches include sensitivity to discontinuous motion caused by a low frame rate or sudden velocity changes. To address these problems, an exemplar-based conditional particle filter (EC-PF) is proposed by including a conditional term with respect to exemplars and image data. Using EC-PF, a 3D human motion tracking algorithm is proposed to recover full-body 3D motion. Within this method, an exemplar-based dynamic model is constructed to guide human motion prediction, so that particles evolve within a reasonable area close to the true state. Furthermore, by adopting shape-context-based exemplar matching, the proposed 3D human motion tracking approach can be achieved effectively with a monocular camera setup.

Both SLAM and human motion tracking are important research topics in the area of robot navigation. In addition to Bayesian filtering techniques, we also present a vision-based cuboid model for corridor robot navigation. The robot control system uses an omni-directional camera that observes the navigation area and provides the required visual information. Two distinct navigation methods are proposed based on the cuboid model. The first is based on the X-shape and the vanishing point, two useful features that can be extracted from captured images: the vanishing point is used to estimate the robot's orientation, and the X-shape helps the robot move along the middle of a corridor. The second navigation method recovers corridor guidelines by tracking vertical lines; with this guideline information, the robot can be localized effectively in corridor environments. Thanks to the wide view of the omni-directional camera used in the proposed navigation algorithm, features such as the X-shape, the vanishing point and vertical lines can be extracted from captured images even when obstacles are close to the robot. The proposed method was tested in different corridor environments.
The results of several experiments show the promising performance of the proposed navigation algorithm.
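Returning to the filtering contribution: the exemplar-guided prediction idea behind EC-PF can be illustrated with a minimal 1D particle filter whose proposal pulls particles toward a matched exemplar state before weighting them against the measurement. This is a rough sketch, not the thesis's model — the blending weight `alpha`, the Gaussian noise levels, and the assumption that a matched exemplar is already given (the thesis uses shape-context-based exemplar matching) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ec_pf_step(particles, z, exemplar, alpha=0.5,
               proc_std=0.3, meas_std=1.0):
    """One filter step: exemplar-guided proposal, Gaussian measurement
    weighting, then systematic resampling."""
    # Proposal: blend each particle with the matched exemplar state,
    # so particles evolve within a reasonable area near the true state.
    proposed = (1.0 - alpha) * particles + alpha * exemplar
    proposed += rng.normal(0.0, proc_std, size=particles.shape)
    # Weight by the likelihood of the measurement z.
    w = np.exp(-0.5 * ((z - proposed) / meas_std) ** 2)
    w /= w.sum()
    # Systematic resampling to keep the particle set well spread.
    u = (np.arange(len(w)) + rng.random()) / len(w)
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), len(w) - 1)
    return proposed[idx]

# Toy run: particles start far from the true state (5.0); the exemplar
# (assumed matched from image data) steers them toward it, mimicking
# recovery from a discontinuous motion that a smooth model would miss.
particles = rng.normal(0.0, 1.0, size=500)
for _ in range(5):
    particles = ec_pf_step(particles, z=5.0, exemplar=5.0)
estimate = particles.mean()
```

A plain bootstrap particle filter with a smooth-motion proposal would need its particles to diffuse across the jump; conditioning the proposal on the exemplar moves the whole set near the true state in a few steps.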
Schools: School of Computer Engineering 
Research Centres: Centre for Computational Intelligence 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Theses

Files in This Item:
File: Liu Jigang Ph.D Thesis Final Version 20150515.pdf (Restricted Access)
Size: 5.71 MB
Format: Adobe PDF

Page view(s): 50 (updated on Sep 23, 2023)



Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.