Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/151523
Title: 3D articulated skeleton extraction using a single consumer-grade depth camera
Authors: Lu, Xuequan; Deng, Zhigang; Luo, Jun; Chen, Wenzhi; Yeung, Sai-Kit; He, Ying
Keywords: Engineering::Computer science and engineering
Issue Date: 2019
Source: Lu, X., Deng, Z., Luo, J., Chen, W., Yeung, S., & He, Y. (2019). 3D articulated skeleton extraction using a single consumer-grade depth camera. Computer Vision and Image Understanding, 188, 102792. https://dx.doi.org/10.1016/j.cviu.2019.102792
Project: MOE2016-T2-2-022; MOE RG26/17
Journal: Computer Vision and Image Understanding
Abstract: Articulated skeleton extraction or learning has been extensively studied for 2D (e.g., images and video) and 3D (e.g., volume sequences, motion capture, and mesh sequences) data. Nevertheless, robustly and accurately learning 3D articulated skeletons from point set sequences captured by a single consumer-grade depth camera remains challenging, since such data are often corrupted with substantial noise and outliers. Relatively few approaches have been proposed to tackle this problem. In this paper, we present a novel unsupervised framework to address this issue. Specifically, we first build one-to-one point correspondences among the point cloud frames in a sequence with our non-rigid point cloud registration algorithm. We then generate a skeleton involving a reasonable number of joints and bones with our skeletal structure extraction algorithm. We lastly present an iterative Linear Blend Skinning based algorithm for accurate joint learning. In the end, our method can learn a quality articulated skeleton from a single 3D point sequence possibly corrupted with noise and outliers. Through qualitative and quantitative evaluations on both publicly available data and in-house Kinect-captured data, we show that our unsupervised approach soundly outperforms state-of-the-art techniques in terms of both quality (i.e., visual) and accuracy (i.e., Euclidean distance error metric). Moreover, the poses of our extracted skeletons are even comparable to those produced by KinectSDK, a well-known supervised pose estimation technique; for example, our method and KinectSDK achieve similar distance errors of 0.0497 and 0.0521, respectively.
URI: https://hdl.handle.net/10356/151523
ISSN: 1077-3142
DOI: 10.1016/j.cviu.2019.102792
Rights: © 2019 Elsevier Inc. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: | SCSE Journal Articles |
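Note: the abstract's third step builds on Linear Blend Skinning (LBS), a standard deformation model in which each point moves by a weighted sum of per-bone rigid transforms, v'_i = Σ_j w_ij (R_j v_i + t_j). The record carries no full text, so the sketch below is a minimal generic illustration of LBS itself, not the authors' implementation; it assumes NumPy, and names such as linear_blend_skinning are illustrative.

```python
# Minimal sketch of Linear Blend Skinning (LBS): deform each rest-pose
# point by a skinning-weighted blend of per-bone rigid transforms.
# Illustrative only; not code from the paper.
import numpy as np

def linear_blend_skinning(points, weights, rotations, translations):
    """Deform rest-pose points with per-bone rigid transforms.

    points:       (N, 3) rest-pose point cloud
    weights:      (N, B) skinning weights, each row summing to 1
    rotations:    (B, 3, 3) rotation matrix per bone
    translations: (B, 3) translation per bone
    returns:      (N, 3) deformed points
    """
    # Apply every bone's rigid transform to every point: result is (B, N, 3).
    transformed = (np.einsum('bij,nj->bni', rotations, points)
                   + translations[:, None, :])
    # Blend the per-bone results with the skinning weights: result is (N, 3).
    return np.einsum('nb,bni->ni', weights, transformed)

if __name__ == "__main__":
    # Toy usage: two bones; bone 0 is the identity, bone 1 rotates
    # 90 degrees about the z-axis. The second point is split 50/50
    # between the two bones, so it lands halfway between the poses.
    pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    w = np.array([[1.0, 0.0], [0.5, 0.5]])
    R = np.stack([np.eye(3),
                  np.array([[0.0, -1.0, 0.0],
                            [1.0,  0.0, 0.0],
                            [0.0,  0.0, 1.0]])])
    t = np.zeros((2, 3))
    print(linear_blend_skinning(pts, w, R, t))
```

In an iterative joint-learning scheme of the kind the abstract describes, a model like this forms the inner forward pass: given current joint estimates, skinning weights and bone transforms are fitted to the registered point correspondences, and the residual drives the next joint update.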
SCOPUS Citations: 20 (updated Jan 29, 2023)
Web of Science Citations: 20 (updated Jan 30, 2023)
Page view(s): 163 (updated Feb 2, 2023)