Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/139262
Title: Video analytics based on deep learning and information fusion technologies
Authors: Lee, Zheng Han
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2020
Publisher: Nanyang Technological University
Project: A1121-191
Abstract: In recent years, video analytics has become a prominent topic in the field of Artificial Intelligence. With advances in high-speed connectivity, machine learning algorithms and IoT technologies, applications of video analytics using multiple modalities and information fusion technologies are becoming widely accessible in the Information Age and beyond. Most previous studies on this topic focused on pushing the boundaries of algorithms for information fusion applications, such as the audio-visual correspondence (AVC) task and video-scene segmentation. This study aims to explore the optimization of video analytics based on information fusion technologies, using a C3D-based action recognition model as the benchmark for video analytics performance. By scrutinizing and testing the mechanisms and architectures of the C3D-based action model, the best-performing elements and the reasons behind their performance are identified. The types of pooling, optimizer and scheduler, and their respective accuracies on the dataset used, are recorded. Different methods of fusing visual and audio information, and ways of introducing them into the action recognition model, are explored; their execution and respective accuracies are studied to gain insight into how they affect the model's performance. Feature extraction methods for the audio modality and their respective performance are also studied. Different self-attention mechanisms involving the modalities and channels are implemented in the model and the resulting accuracies are studied. These explorations provide understanding of how these choices affect the performance of video analytics based on information fusion, and subsequently help to unleash its full potential.
URI: https://hdl.handle.net/10356/139262
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
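The abstract mentions fusing visual and audio information at the model level. One common approach (not specified in the report itself) is decision-level "late" fusion, where each modality produces its own class probabilities and the two are combined by a weighted average. The sketch below is a minimal, hypothetical illustration of that idea; the class count, logits, and fusion weight are all made-up values, not results from the study.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def late_fuse(visual_logits, audio_logits, w_visual=0.5):
    """Decision-level fusion: weighted average of per-modality class probabilities."""
    p_visual = softmax(visual_logits)
    p_audio = softmax(audio_logits)
    return [w_visual * v + (1.0 - w_visual) * a
            for v, a in zip(p_visual, p_audio)]

# Hypothetical 3-class logits from a visual (C3D) head and an audio head
fused = late_fuse([2.0, 0.5, -1.0], [1.0, 1.5, -0.5])
pred = max(range(len(fused)), key=fused.__getitem__)
```

Because each modality is trained and scored independently, late fusion is easy to bolt onto an existing action recognition model; feature-level (early) fusion or cross-modal self-attention, also discussed in the abstract, instead combine the streams inside the network before classification.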
|Appears in Collections:||EEE Student Reports (FYP/IA/PA/PI)|
Updated on Jul 4, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.