Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/90230
Full metadata record
DC Field | Value | Language
dc.contributor.author | Leong, Mei Chee | en
dc.contributor.author | Lin, Feng | en
dc.contributor.author | Lee, Yong Tsui | en
dc.date.accessioned | 2019-08-06T02:12:51Z | en
dc.date.accessioned | 2019-12-06T17:43:36Z | -
dc.date.available | 2019-08-06T02:12:51Z | en
dc.date.available | 2019-12-06T17:43:36Z | -
dc.date.issued | 2019 | en
dc.identifier.citation | Leong, M. C., Lin, F., & Lee, Y. T. (2019). 3D human motion recovery from a single video using dense spatio-temporal features with exemplar-based approach. 2019 4th International Conference on Image, Vision and Computing (ICIVC 2019). | en
dc.identifier.uri | https://hdl.handle.net/10356/90230 | -
dc.description.abstract | This study focuses on 3D human motion recovery from a sequence of video frames using the exemplar-based approach. Conventionally, human pose tracking requires two stages: 1) estimating the 3D pose for a single frame, and 2) using the current estimated pose to predict the pose in the next frame. This usually involves generating a set of possible poses in the prediction stage, then optimizing the mapping between the projection of the predicted poses and the 2D image in the subsequent frame. The computational complexity of this approach becomes significant as the dimensionality of the search space increases. In contrast, we propose a robust and efficient approach for direct motion estimation in video frames by extracting dense appearance and motion features in spatio-temporal space. We exploit three robust descriptors - Histograms of Oriented Gradients, Histograms of Optical Flow and Motion Boundary Histograms - in the context of human pose tracking for 3D motion recovery. We conducted comparative analyses using the individual descriptors as well as a weighted combination of them. We evaluated our approach on the HumanEva-I dataset and present both quantitative comparisons and visual results to demonstrate its advantages. The output is a smooth motion that can be applied in motion retargeting. | en
dc.description.sponsorship | MOE (Min. of Education, S’pore) | en
dc.format.extent | 6 p. | en
dc.language.iso | en | en
dc.rights | © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en
dc.subject | 3D Pose Estimation | en
dc.subject | Feature Descriptors | en
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision | en
dc.title | 3D human motion recovery from a single video using dense spatio-temporal features with exemplar-based approach | en
dc.type | Conference Paper | en
dc.contributor.school | School of Computer Science and Engineering | en
dc.contributor.school | School of Mechanical and Aerospace Engineering | en
dc.contributor.school | Interdisciplinary Graduate School (IGS) | en
dc.contributor.conference | 2019 4th International Conference on Image, Vision and Computing (ICIVC 2019) | en
dc.contributor.research | Institute for Media Innovation (IMI) | en
dc.description.version | Accepted version | en
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
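
The abstract above describes extracting dense HOG, HOF and MBH descriptors from video frames and combining them (individually or with weights) for exemplar-based 3D pose recovery. The following is a minimal sketch of that idea, assuming OpenCV and NumPy; the function names (frame_descriptor, nearest_exemplar), the 9-bin histograms, the Farneback optical flow, and the Euclidean nearest-neighbour lookup are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: dense HOG/HOF/MBH descriptors for one frame pair,
# followed by an exemplar-based 3D pose lookup. Not the paper's implementation.
import cv2
import numpy as np

N_BINS = 9  # orientation bins for HOF/MBH (assumed value)

def orientation_histogram(dx, dy, n_bins=N_BINS):
    """Magnitude-weighted histogram of 2D vector orientations, L1-normalised."""
    mag, ang = cv2.cartToPolar(dx.astype(np.float32), dy.astype(np.float32))
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def frame_descriptor(prev_gray, gray, weights=(1.0, 1.0, 1.0)):
    """Weighted concatenation of HOG, HOF and MBH for two consecutive
    grayscale frames (uint8, same size)."""
    # HOG on the current frame, resized to the default 64x128 detection window.
    hog = cv2.HOGDescriptor()
    hog_vec = hog.compute(cv2.resize(gray, (64, 128))).ravel()

    # Dense optical flow between consecutive frames (Farneback).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    fx, fy = flow[..., 0], flow[..., 1]

    # HOF: orientation histogram of the flow field itself.
    hof_vec = orientation_histogram(fx, fy)

    # MBH: orientation histograms of the spatial gradients of each flow
    # component (emphasises motion boundaries, suppresses uniform motion).
    mbh_x = orientation_histogram(cv2.Sobel(fx, cv2.CV_32F, 1, 0),
                                  cv2.Sobel(fx, cv2.CV_32F, 0, 1))
    mbh_y = orientation_histogram(cv2.Sobel(fy, cv2.CV_32F, 1, 0),
                                  cv2.Sobel(fy, cv2.CV_32F, 0, 1))

    w_hog, w_hof, w_mbh = weights
    return np.concatenate([w_hog * hog_vec,
                           w_hof * hof_vec,
                           w_mbh * np.concatenate([mbh_x, mbh_y])])

def nearest_exemplar(descriptor, exemplar_descriptors, exemplar_poses):
    """Exemplar lookup: return the stored 3D pose whose descriptor is
    closest (Euclidean distance) to the query descriptor."""
    dists = np.linalg.norm(exemplar_descriptors - descriptor, axis=1)
    return exemplar_poses[int(np.argmin(dists))]
```

In this simplified sketch the three descriptors have very different lengths, so in practice the weights would also need to compensate for scale; how the paper balances and combines the descriptors is reported in its comparative analysis.
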
Appears in Collections:IGS Conference Papers
MAE Conference Papers
SCSE Conference Papers

Page view(s): 587 (updated on Mar 28, 2024)

Download(s): 133 (updated on Mar 28, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.