Title: 3D human motion recovery from a single video using dense spatio-temporal features with exemplar-based approach
Authors: Leong, Mei Chee
Lin, Feng
Lee, Yong Tsui
Keywords: 3D Pose Estimation
Feature Descriptors
Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2019
Source: Leong, M. C., Lin, F., & Lee, Y. T. (2019). 3D human motion recovery from a single video using dense spatio-temporal features with exemplar-based approach. 2019 4th International Conference on Image, Vision and Computing (ICIVC 2019).
Abstract: This study focuses on 3D human motion recovery from a sequence of video frames using an exemplar-based approach. Conventionally, human pose tracking requires two stages: 1) estimating the 3D pose for a single frame, and 2) using the current estimated pose to predict the pose in the next frame. This usually involves generating a set of possible poses in the prediction stage, then optimizing the mapping between the projections of the predicted poses and the 2D image in the subsequent frame. The computational complexity of this approach becomes significant as the dimensionality of the search space increases. In contrast, we propose a robust and efficient approach for direct motion estimation in video frames by extracting dense appearance and motion features in spatio-temporal space. We exploit three robust descriptors: Histograms of Oriented Gradients (HOG), Histograms of Optical Flow (HOF), and Motion Boundary Histograms (MBH), in the context of human pose tracking for 3D motion recovery. We conducted comparative analyses using the individual descriptors as well as a weighted combination of them. We evaluated our approach on the HumanEva-I dataset and present both quantitative comparisons and visual results to demonstrate its advantages. The output is a smooth motion sequence that can be applied in motion retargeting.
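The three descriptors named in the abstract all share one core operation: quantizing a 2D vector field (image gradients for HOG, optical flow for HOF, spatial derivatives of flow for MBH) into magnitude-weighted orientation histograms, which can then be combined with per-descriptor weights. The sketch below is an illustrative simplification of that shared step, not the paper's implementation; the function names, bin count, and weights are assumptions for demonstration.

```python
import numpy as np

def orientation_histogram(gx, gy, n_bins=9):
    """Magnitude-weighted orientation histogram over a 2D vector field.
    This is the common building block of HOG (image gradients), HOF
    (optical-flow vectors), and MBH (flow derivatives). Bin count and
    unsigned-orientation choice are illustrative assumptions."""
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())       # accumulate magnitudes per bin
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def combined_descriptor(hog, hof, mbh, weights=(0.4, 0.3, 0.3)):
    """Weighted concatenation of the three per-frame descriptors.
    The weights here are placeholders, not the values used in the paper."""
    return np.concatenate([w * d for w, d in zip(weights, (hog, hof, mbh))])

# Toy example: gradients of a synthetic 16x16 ramp image standing in for one frame.
img = np.outer(np.arange(16.0), np.ones(16))
gy, gx = np.gradient(img)                            # vertical intensity ramp
hog = orientation_histogram(gx, gy)
desc = combined_descriptor(hog, hog, hog)            # reusing hog as a stand-in
```

In practice the HOF and MBH inputs would come from a dense optical-flow field between consecutive frames rather than from image gradients; the weighted concatenation then serves as the frame's feature vector for matching against exemplars.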
Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Fulltext availability: Open access (With Fulltext)
Appears in Collections:IGS Conference Papers
MAE Conference Papers
SCSE Conference Papers

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.