Title: Driver state monitoring of intelligent vehicles part I: in-cabin activity identification
Authors: Foo, Weng Keat
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Foo, W. K. (2022). Driver state monitoring of intelligent vehicles part I: in-cabin activity identification. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/157519
Abstract: With growing interest in intelligent vehicles (IVs) worldwide, IVs are set to replace conventional vehicles in the near future. Although IVs will bring convenience to the driver, they may also aggravate the problem of distracted driving. To combat distracted driving, driver state monitoring has therefore been extensively researched. Past research has focused narrowly on model accuracy, using sensors to capture features such as brain waves and heart signals, among others. However, the proposed systems typically disregard computational and equipment costs, which hinders adoption. This project therefore aims to propose a system that balances computational cost and accuracy so that it is commercially viable and can be readily adopted to reduce cases of distracted driving. The project experiments with different types of neural networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), on two datasets: an image dataset and a video dataset. Four overarching techniques were used: a 2D-CNN end-to-end model and a 2D-CNN transfer learning model were applied to the image dataset, while a naïve 2D-CNN model and an RNN model were applied to the video dataset. The 2D-CNN end-to-end model performed best on the image classification task with an accuracy of 0.9946, while the 3Bi-LSTM-BN-DP-H model performed best on the video dataset with an accuracy of 0.6595. Real-time data from 10 subjects were collected in 2 different types of vehicles. The data were used to verify only the video classification models, such as the 3Bi-LSTM-BN-DP-H and 1BiGRU-BN-DP-H models, as the 2D-CNN end-to-end models produce flickering results.
URI: https://hdl.handle.net/10356/157519
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: MAE Student Reports (FYP/IA/PA/PI)
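The abstract notes that the per-frame 2D-CNN models produce flickering results on video. The full text is restricted, so the project's own remedy is not shown here; as a hedged illustration only (an assumption, not the report's method), one common way to suppress such flicker is a sliding-window majority vote over the per-frame class labels:

```python
from collections import Counter, deque

def smooth_predictions(frame_labels, window=5):
    """Temporally smooth per-frame class labels with a sliding-window
    majority vote, suppressing single-frame flicker.

    This is an illustrative sketch, not code from the cited project.
    """
    smoothed = []
    buf = deque(maxlen=window)  # holds the most recent `window` labels
    for label in frame_labels:
        buf.append(label)
        # emit the most frequent label in the current window
        smoothed.append(Counter(buf).most_common(1)[0][0])
    return smoothed

# A one-frame flicker to "phone" amid "safe" frames is voted away:
raw = ["safe", "safe", "phone", "safe", "safe", "safe"]
print(smooth_predictions(raw, window=3))
# → ['safe', 'safe', 'safe', 'safe', 'safe', 'safe']
```

A video-level model such as the Bi-LSTM variants named in the abstract avoids this post-processing by consuming the frame sequence directly, which is presumably why the real-time data were used to verify only those models.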
Updated on Dec 1, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.