Title: Visual event recognition in videos
Authors: Chan, Kerlina Pei Min.
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2012
Abstract: This report documents the methods implemented and the evaluations carried out in this project. The project aims to create a framework with an efficient classifier for visual event recognition in videos. First, a dataset of videos spanning six event classes was obtained from the Kodak database. Next, the videos were divided manually into training and testing sets. Space-time interest point feature extraction was then applied to every video, and K-means clustering was used to determine the optimal visual-word clusters. For classification, a histogram was formed for each video over these clusters. K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) were the two classification methods implemented in this project. Finally, the performance of the classifiers was evaluated, and the better-performing classifier was selected for the framework. A user-friendly graphical user interface (GUI) was created for the framework for visual event recognition in videos.
URI: http://hdl.handle.net/10356/48724
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
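The abstract outlines a standard bag-of-visual-words pipeline: cluster local interest-point descriptors into a visual vocabulary with K-means, represent each video as a histogram over that vocabulary, then classify the histograms. The report's own code is not available here, so the following is only a minimal self-contained sketch of that general technique; the synthetic descriptors, the vocabulary size, and the 1-Nearest-Neighbor classifier (the KNN branch with K=1) are all illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain Lloyd's K-means: returns cluster centers (the visual vocabulary)."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize one video's descriptors against the vocabulary, L1-normalize."""
    labels = np.argmin(((descriptors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Stand-in for STIP extraction: synthetic 8-D descriptors whose statistics
# differ by event class (two classes here, six in the actual project).
def fake_video(cls):
    return rng.normal(loc=3.0 * cls, scale=1.0, size=(50, 8))

train = [(fake_video(c), c) for c in (0, 1) for _ in range(5)]
test = [(fake_video(c), c) for c in (0, 1) for _ in range(3)]

# Build the vocabulary from all training descriptors, then histogram each video.
vocab = kmeans(np.vstack([d for d, _ in train]), k=10)
X_train = np.array([bow_histogram(d, vocab) for d, _ in train])
y_train = np.array([c for _, c in train])

# Classify each test histogram by its nearest training histogram (1-NN).
correct = 0
for d, c in test:
    h = bow_histogram(d, vocab)
    pred = y_train[np.argmin(((X_train - h) ** 2).sum(-1))]
    correct += int(pred == c)
print(f"accuracy: {correct}/{len(test)}")
```

Swapping the 1-NN step for an SVM trained on the same histograms gives the project's other classifier branch; the vocabulary and histogram stages are unchanged either way.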
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.