Title: Point cloud-based action recognition
Authors: Zhou, Chenhang
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Zhou, C. (2022). Point cloud-based action recognition. Final Year Project (FYP), Nanyang Technological University, Singapore.
Project: A3096-211
Abstract: Action recognition has received a lot of attention in computer vision. It aims to capture and classify the action in a given input, such as a video. In this project, we first thoroughly review two major families of methods for action recognition, i.e., skeleton-based methods and point cloud-based methods. Second, to enhance video understanding, we select the LSTM architecture, which has proven competent at sequential understanding. We design sampling mechanisms for video frames and point clouds to efficiently represent the raw point cloud data for training the LSTM network on the NTU RGB+D 60 Dataset, contributed by the NTU ROSE Lab. To investigate the relationships among training parameters, we run multiple experiments on the PointLSTM architecture and evaluate model performance with respect to batch size, frame rate, and the number of points. We also identify discrepancies in model performance across action classes based on the confusion matrix. Further, we compare the effect of applying the LSTM at different stages on model accuracy. Last, considering the nature of point cloud data, we conclude the project and make recommendations for future work.
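The sampling mechanism summarized in the abstract (a fixed number of frames per video and a fixed number of points per frame, so that variable-size point clouds become a regular tensor an LSTM can consume) can be sketched as follows. This is a minimal NumPy illustration under our own assumptions; the function names, parameters, and random-sampling strategy are ours, not taken from the report.

```python
import numpy as np

def sample_points(frame, n_points, rng):
    """Randomly sample a fixed number of points from one frame's point cloud.

    Resamples with replacement when the frame has fewer points than
    requested, so every frame yields the same output shape.
    """
    replace = frame.shape[0] < n_points
    idx = rng.choice(frame.shape[0], size=n_points, replace=replace)
    return frame[idx]

def sample_sequence(frames, n_frames, n_points, seed=0):
    """Uniformly pick n_frames from a video, then n_points per frame.

    Returns an array of shape (n_frames, n_points, 3), a regular tensor
    suitable as input to a recurrent model such as an LSTM.
    """
    rng = np.random.default_rng(seed)
    frame_idx = np.linspace(0, len(frames) - 1, n_frames).astype(int)
    return np.stack([sample_points(frames[i], n_points, rng)
                     for i in frame_idx])

# Toy example: 40 frames, each a point cloud with a varying number of
# 3-D points (as depth-sensor frames typically are).
rng = np.random.default_rng(42)
video = [rng.normal(size=(rng.integers(64, 256), 3)) for _ in range(40)]
clip = sample_sequence(video, n_frames=8, n_points=128)
print(clip.shape)  # (8, 128, 3)
```

In a training pipeline, `n_frames` and `n_points` would correspond to the frame rate and points-number parameters varied in the experiments; more sophisticated schemes (e.g., farthest point sampling) could replace the uniform random choice.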
Schools: School of Electrical and Electronic Engineering 
Research Centres: Rapid-Rich Object Search (ROSE) Lab 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Final Year Project Report (Restricted Access)
Size: 1.34 MB
Format: Adobe PDF

Updated on Dec 10, 2023


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.