Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/90230
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Leong, Mei Chee | en |
dc.contributor.author | Lin, Feng | en |
dc.contributor.author | Lee, Yong Tsui | en |
dc.date.accessioned | 2019-08-06T02:12:51Z | en |
dc.date.accessioned | 2019-12-06T17:43:36Z | - |
dc.date.available | 2019-08-06T02:12:51Z | en |
dc.date.available | 2019-12-06T17:43:36Z | - |
dc.date.issued | 2019 | en |
dc.identifier.citation | Leong, M. C., Lin, F., & Lee, Y. T. (2019). 3D human motion recovery from a single video using dense spatio-temporal features with exemplar-based approach. 2019 4th International Conference on Image, Vision and Computing (ICIVC 2019). | en |
dc.identifier.uri | https://hdl.handle.net/10356/90230 | - |
dc.description.abstract | This study focuses on 3D human motion recovery from a sequence of video frames using the exemplar-based approach. Conventionally, human pose tracking requires two stages: 1) estimating the 3D pose for a single frame, and 2) using the current estimated pose to predict the pose in the next frame. This usually involves generating a set of possible poses in the prediction stage, then optimizing the mapping between the projection of the predicted poses and the 2D image in the subsequent frame. The computational complexity of this approach becomes significant as the search space dimensionality increases. In contrast, we propose a robust and efficient approach for direct motion estimation in video frames by extracting dense appearance and motion features in spatio-temporal space. We exploit three robust descriptors, Histograms of Oriented Gradients (HOG), Histograms of Optical Flow (HOF) and Motion Boundary Histograms (MBH), in the context of human pose tracking for 3D motion recovery. We conducted comparative analyses using individual descriptors as well as a weighted combination of them. We evaluated our approach using the HumanEva-I dataset and presented both quantitative comparisons and visual results to demonstrate the advantages of our approach. The output is a smooth motion that can be applied in motion retargeting. | en |
dc.description.sponsorship | MOE (Min. of Education, S’pore) | en |
dc.format.extent | 6 p. | en |
dc.language.iso | en | en |
dc.rights | © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en |
dc.subject | 3D Pose Estimation | en |
dc.subject | Feature Descriptors | en |
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision | en |
dc.title | 3D human motion recovery from a single video using dense spatio-temporal features with exemplar-based approach | en |
dc.type | Conference Paper | en |
dc.contributor.school | School of Computer Science and Engineering | en |
dc.contributor.school | School of Mechanical and Aerospace Engineering | en |
dc.contributor.school | Interdisciplinary Graduate School (IGS) | en |
dc.contributor.conference | 2019 4th International Conference on Image, Vision and Computing (ICIVC 2019) | en |
dc.contributor.research | Institute for Media Innovation (IMI) | en |
dc.description.version | Accepted version | en |
item.fulltext | With Fulltext | - |
item.grantfulltext | open | - |
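The abstract above mentions matching against exemplars using a weighted combination of HOG, HOF and MBH descriptors. A minimal sketch of that idea in pure Python follows; the descriptor vectors, weights, and function names here are illustrative assumptions, not taken from the paper.

```python
import math

def l2_distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def combined_distance(query, exemplar, weights):
    """Weighted sum of per-descriptor distances.

    `query` and `exemplar` map descriptor names (e.g. "hog", "hof",
    "mbh") to feature vectors; `weights` maps the same names to
    non-negative weights. The weighting scheme is an assumption made
    for illustration, not the paper's exact formulation.
    """
    return sum(weights[name] * l2_distance(query[name], exemplar[name])
               for name in weights)

def best_exemplar(query, exemplars, weights):
    """Return the index of the exemplar closest to the query features."""
    return min(range(len(exemplars)),
               key=lambda i: combined_distance(query, exemplars[i], weights))

# Toy example with 3-D descriptor vectors (real HOG/HOF/MBH vectors
# would be much higher-dimensional).
weights = {"hog": 0.5, "hof": 0.3, "mbh": 0.2}
query = {"hog": [1.0, 0.0, 0.0], "hof": [0.0, 1.0, 0.0], "mbh": [0.0, 0.0, 1.0]}
exemplars = [
    {"hog": [0.0, 1.0, 0.0], "hof": [1.0, 0.0, 0.0], "mbh": [1.0, 0.0, 0.0]},
    {"hog": [1.0, 0.1, 0.0], "hof": [0.0, 0.9, 0.0], "mbh": [0.0, 0.0, 1.0]},
]
print(best_exemplar(query, exemplars, weights))  # prints 1: the second exemplar is closer
```

With exact nearest-neighbour search like this, lookup cost grows linearly in the number of exemplars; approximate search structures are a common mitigation when the exemplar set is large.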
Appears in Collections: | IGS Conference Papers, MAE Conference Papers, SCSE Conference Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
3D Human Motion Recovery From A Single Video Using Dense Spatio-Temporal Features With Exemplar-based Approach.pdf | | 760.71 kB | Adobe PDF | View/Open |
Page view(s): 587 (updated on Mar 28, 2024)
Download(s): 133 (updated on Mar 28, 2024)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.