
      Mining actionlet ensemble for action recognition with depth cameras

      Mining Actionlet Ensemble for Action Recognition with Depth Cameras.pdf (1.320Mb)
      Authors
      Wang, Jiang
      Liu, Zicheng
      Wu, Ying
      Yuan, Junsong
      Date of Issue
      2012
      Conference Name
      IEEE Conference on Computer Vision and Pattern Recognition (2012 : Providence, Rhode Island, US)
      School
      School of Electrical and Electronic Engineering
      Version
      Accepted version
      Abstract
      Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities for dealing with this problem, but they also present some unique challenges. The depth maps captured by depth cameras are very noisy, and the 3D positions of the tracked joints may be completely wrong under serious occlusion, which increases the intra-class variation in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture this intra-class variance. In addition, novel features suited to depth data are proposed: they are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both human motion and human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras and on another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to state-of-the-art algorithms.
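      The abstract describes the actionlet-ensemble idea only at a high level. The following minimal sketch illustrates one plausible reading of it, assuming an "actionlet" is a small subset of tracked skeleton joints and an action is scored by a weighted combination of per-actionlet scores; the function names, joint subsets, and weights are illustrative assumptions, not the authors' implementation.

      # Toy sketch of an actionlet-ensemble score (illustrative only,
      # not the authors' code).
      import numpy as np

      def score_actionlet(features, joints):
          # Mean feature response over the selected joints; a stand-in
          # for a per-actionlet base classifier.
          return float(np.mean(features[list(joints)]))

      def ensemble_score(features, actionlets, weights):
          # Weighted combination of per-actionlet scores; a stand-in
          # for the ensemble combination described in the abstract.
          return sum(w * score_actionlet(features, a)
                     for a, w in zip(actionlets, weights))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          features = rng.normal(size=20)                      # hypothetical per-joint responses (20 joints)
          actionlets = [(3, 4, 5), (10, 11), (0, 1, 18, 19)]  # hypothetical mined joint subsets
          weights = [0.5, 0.3, 0.2]                           # hypothetical learned weights
          print("ensemble score:", ensemble_score(features, actionlets, weights))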
      Subject
      DRNTU::Engineering::Electrical and electronic engineering
      Type
      Conference Paper
      Rights
      © 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: [http://dx.doi.org/10.1109/CVPR.2012.6247813].
      Collections
      • EEE Conference Papers
      http://dx.doi.org/10.1109/CVPR.2012.6247813
      Get published version (via Digital Object Identifier)
