Title: Multiple human action recognition from video streams
Authors: Koh, Khai Huat
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2014
Abstract: In recent times, researchers have focused on understanding human behavior under different circumstances. An important aspect of this research is understanding how a person would act in a given scenario, and human action recognition forms an integral part of such analysis. The objective of this project is to build a framework to recognize multiple human actions in different scenarios. First, the author examines several possible methods of human detection and action recognition. Each method is then evaluated based on the results obtained, and the viability of each framework is discussed. In this project, the author combines two main methods to form the Multiple Human Action Recognition system. In the first technique, the human detection method, a Histogram of Oriented Gradients (HOG) is used to extract the human features of the image. Next, the features are sent to an Extreme Learning Machine classifier, which predicts whether the image contains a human. Sampled images with positive human detection are then compiled and passed to the second step. The second step involves preprocessing the compiled images, namely extracting the human subject and removing the background. Subsequently, the image is converted into an image mask for action feature extraction. In the action recognition process, features that represent an action are extracted. This is carried out by computing a power spectrum feature from the image volume and passing it to a Weighted Euclidean Distance matcher for possible match retrieval. A comparison is also made with the pose-based feature to determine which method produces better results. The results detailed in this report consist of two sections: human detection and action recognition. Using a test time of 0.25 seconds, the reported results are 94% accuracy for human detection with HOG features, 65% accuracy for power spectrum features, and 90% for pose-based action features. These results are tested on the MVU dataset.
URI: http://hdl.handle.net/10356/60181
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
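The detection step outlined in the abstract (HOG features fed to an Extreme Learning Machine) can be sketched as below. This is a minimal illustration, not the report's implementation: `hog_features` is a simplified stand-in for the full HOG descriptor (a 4x4 grid of gradient-orientation histograms instead of overlapping blocks), and the ELM follows the standard formulation with random hidden weights and a closed-form least-squares output layer. All names and parameters are illustrative.

```python
import numpy as np

def hog_features(img, bins=9):
    """Simplified HOG-style descriptor: per-cell gradient-orientation histograms."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    ch, cw = img.shape[0] // 4, img.shape[1] // 4  # 4x4 grid of cells
    feats = []
    for i in range(4):
        for j in range(4):
            m = mag[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            a = ang[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # normalise cell
    return np.concatenate(feats)

class ELM:
    """Single-hidden-layer ELM: random input weights, least-squares output weights."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y  # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta > 0.5).astype(int)  # 1 = human detected
```

The appeal of the ELM here is speed: training is a single pseudo-inverse rather than iterative optimisation, which fits the short per-frame test time the report targets.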
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
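The action-recognition step (a power spectrum feature computed over the image-mask volume, matched by weighted Euclidean distance) can likewise be sketched under assumptions, since the abstract gives no formulas. Here the feature is the low-frequency block of the 3-D FFT power spectrum of a stack of binary masks, and matching returns the gallery template with the smallest weighted distance; the crop size and weighting are illustrative choices, not the report's.

```python
import numpy as np

def power_spectrum_feature(volume, crop=4):
    """Low-frequency block of the 3-D power spectrum of a mask volume
    (frames x height x width), flattened and L2-normalised."""
    spec = np.fft.fftshift(np.abs(np.fft.fftn(volume)) ** 2)
    c = [s // 2 for s in spec.shape]  # zero-frequency component after shift
    block = spec[c[0]-crop:c[0]+crop, c[1]-crop:c[1]+crop, c[2]-crop:c[2]+crop]
    v = block.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def weighted_euclidean_match(query, gallery, weights):
    """Index of the gallery feature minimising sqrt(sum_i w_i * (q_i - g_i)^2)."""
    dists = [np.sqrt(np.sum(weights * (query - g) ** 2)) for g in gallery]
    return int(np.argmin(dists))
```

Using the spectrum magnitude makes the feature insensitive to where the subject sits in the frame, which is convenient after the background-removal and masking step described above.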
Page view(s): 1169 (checked on Oct 19, 2020)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.