Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/100602
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Wang, Jiang (en)
dc.contributor.author: Liu, Zicheng (en)
dc.contributor.author: Wu, Ying (en)
dc.contributor.author: Yuan, Junsong (en)
dc.date.accessioned: 2013-11-29T03:20:19Z (en)
dc.date.accessioned: 2019-12-06T20:25:13Z
dc.date.available: 2013-11-29T03:20:19Z (en)
dc.date.available: 2019-12-06T20:25:13Z
dc.date.copyright: 2012 (en)
dc.date.issued: 2012 (en)
dc.identifier.citation: Wang, J., Liu, Z., Wu, Y., & Yuan, J. (2012). Mining actionlet ensemble for action recognition with depth cameras. 2012 IEEE Conference on Computer Vision and Pattern Recognition, 1290-1297. (en)
dc.identifier.uri: https://hdl.handle.net/10356/100602
dc.description.abstract: Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities for dealing with this problem but also present some unique challenges. The depth maps captured by depth cameras are very noisy, and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variation in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignment, and capable of characterizing both human motion and human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and on another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves performance superior to state-of-the-art algorithms. (en)
dc.format.extent: This work was supported in part by National Science Foundation grants IIS-0347877 and IIS-0916607, by the US Army Research Laboratory and the US Army Research Office under grant ARO W911NF-08-1-0504, and by DARPA Award FA 8650-11-1-7149. This work is partially supported by Microsoft Research. (en)
dc.language.iso: en (en)
dc.rights: © 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: http://dx.doi.org/10.1109/CVPR.2012.6247813 (en)
dc.subject: DRNTU::Engineering::Electrical and electronic engineering (en)
dc.title: Mining actionlet ensemble for action recognition with depth cameras (en)
dc.type: Conference Paper (en)
dc.contributor.school: School of Electrical and Electronic Engineering (en)
dc.contributor.conference: IEEE Conference on Computer Vision and Pattern Recognition (2012: Providence, Rhode Island, US) (en)
dc.identifier.doi: 10.1109/CVPR.2012.6247813 (en)
dc.description.version: Accepted version (en)
item.fulltext: With Fulltext
item.grantfulltext: open
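For reference, a Dublin Core record such as the one above can be represented programmatically. The sketch below is a minimal, hypothetical example in Python that serializes a handful of the fields listed here as an `oai_dc` XML fragment (the format DSpace repositories commonly expose over OAI-PMH); the field values are taken from this record, the namespace URIs are the standard OAI/DC ones, and the `to_oai_dc` helper is an illustrative name, not part of any library.

```python
import xml.etree.ElementTree as ET

# Standard namespace URIs for simple Dublin Core and the OAI-PMH oai_dc wrapper.
DC_NS = "http://purl.org/dc/elements/1.1/"
OAI_DC_NS = "http://www.openarchives.org/OAI/2.0/oai_dc/"

# A subset of the fields from the record above, keyed by DC element name.
record = {
    "title": ["Mining actionlet ensemble for action recognition with depth cameras"],
    "creator": ["Wang, Jiang", "Liu, Zicheng", "Wu, Ying", "Yuan, Junsong"],
    "date": ["2012"],
    "identifier": [
        "https://hdl.handle.net/10356/100602",
        "10.1109/CVPR.2012.6247813",
    ],
    "type": ["Conference Paper"],
    "language": ["en"],
}

def to_oai_dc(fields):
    """Serialize a {dc-element: [values]} mapping as an oai_dc XML string."""
    ET.register_namespace("dc", DC_NS)
    ET.register_namespace("oai_dc", OAI_DC_NS)
    root = ET.Element(f"{{{OAI_DC_NS}}}dc")
    for name, values in fields.items():
        for value in values:
            el = ET.SubElement(root, f"{{{DC_NS}}}{name}")
            el.text = value
    return ET.tostring(root, encoding="unicode")

print(to_oai_dc(record))
```

Repeated elements (e.g. one `dc:creator` per author) are the normal Dublin Core convention for multi-valued fields, matching the repeated `dc.contributor.author` rows in the record.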
Appears in Collections:EEE Conference Papers
Files in This Item:
Mining Actionlet Ensemble for Action Recognition with Depth Cameras.pdf (1.35 MB, Adobe PDF)

SCOPUS Citations: 1,037 (updated on Mar 6, 2021)
Publons Citations: 724 (updated on Mar 7, 2021)
Page view(s): 812 (updated on Apr 18, 2021)
Download(s): 5,073 (updated on Apr 18, 2021)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.