Please use this identifier to cite or link to this item:
Title: Middle-level representation for human activities recognition: the role of spatio-temporal relationships
Authors: Yuan, Fei
Prinet, Véronique
Yuan, Junsong
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2012
Publisher: Springer Berlin Heidelberg
Source: Yuan, F., Prinet, V., & Yuan, J. (2012). Middle-Level Representation for Human Activities Recognition: The Role of Spatio-Temporal Relationships. In K.N. Kutulakos (Ed.), Trends and Topics in Computer Vision, ECCV 2010 Workshops, Part I, LNCS 6553, (pp.168–180). Springer-Verlag Berlin Heidelberg.
Abstract: We tackle the challenging problem of human activity recognition in realistic video sequences. Unlike local features-based methods or global template-based methods, we propose to represent a video sequence by a set of middle-level parts. A part, or component, has consistent spatial structure and consistent motion. We first segment the visual motion patterns and generate a set of middle-level components by clustering keypoint-based trajectories extracted from the video. To further exploit the interdependencies of the moving parts, we then define spatio-temporal relationships between pairwise components. The resulting descriptive middle-level components and pairwise components thereby capture the essential motion characteristics of human activities. They also give a very compact representation of the video. We apply our framework to popular and challenging video datasets: the Weizmann dataset and the UT-Interaction dataset. We demonstrate experimentally that our middle-level representation, combined with a χ²-SVM classifier, equals or outperforms the state-of-the-art results on these datasets.
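The classification step mentioned in the abstract uses a χ² kernel over the middle-level feature representation. As a minimal sketch (the histogram values below are hypothetical stand-ins for the paper's component descriptors, and the exponential form and `gamma` parameter are common conventions, not details taken from this record), the kernel can be computed as:

```python
import math

def chi2_kernel(x, y, gamma=1.0):
    """Exponential chi-squared kernel between two non-negative histograms:
    k(x, y) = exp(-gamma * sum_i (x_i - y_i)^2 / (x_i + y_i))."""
    d = 0.0
    for xi, yi in zip(x, y):
        s = xi + yi
        if s > 0:  # skip bins that are empty in both histograms
            d += (xi - yi) ** 2 / s
    return math.exp(-gamma * d)

# toy histograms standing in for two videos' middle-level descriptors
a = [0.5, 0.5, 0.0]
b = [0.0, 0.2, 0.8]
k_aa = chi2_kernel(a, a)  # identical histograms give the maximum value 1.0
k_ab = chi2_kernel(a, b)  # differing histograms give a value in (0, 1)
```

In practice the resulting kernel matrix would be passed to a precomputed-kernel SVM; the χ² kernel is a standard choice for histogram-like features because it weights per-bin differences by bin mass.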
ISBN: 978-3-642-35748-0; 978-3-642-35749-7
DOI: 10.1007/978-3-642-35749-7
Rights: © 2012 Springer-Verlag Berlin Heidelberg.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:EEE Books & Book Chapters

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.