Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/157519
Full metadata record
DC Field | Value | Language
dc.contributor.author | Foo, Weng Keat | en_US
dc.date.accessioned | 2022-05-25T02:09:42Z | -
dc.date.available | 2022-05-25T02:09:42Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Foo, W. K. (2022). Driver state monitoring of intelligent vehicles part I: in-cabin activity identification. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/157519 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/157519 | -
dc.description.abstract | With growing interest in intelligent vehicles (IVs) worldwide, IVs are set to replace conventional vehicles soon. Although IVs will bring convenience to the driver, they may also aggravate the problem of distracted driving. To combat distracted driving, driver state monitoring has therefore been researched extensively. Past research has focused narrowly on model accuracy, using sensors to capture features such as brain waves and heart signals, among others. However, the proposed systems typically disregard computational and equipment costs, which hinders adoption. This project therefore aims to propose a system that balances computational cost and accuracy so that it is commercially viable and can be easily adopted to reduce cases of distracted driving. The project experiments with different types of neural networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), on 2 datasets: an image dataset and a video dataset. 4 overarching techniques were used: a 2D-CNN end-to-end model and a 2D-CNN transfer learning model were applied to the image dataset, while a naïve 2D-CNN model and an RNN model were applied to the video dataset. The 2D-CNN end-to-end model performed best on the image classification task with an accuracy of 0.9946, while the 3Bi-LSTM-BN-DP-H model performed best on the video dataset with an accuracy of 0.6595. Real-time data from 10 subjects were collected from 2 different types of vehicles. These data were used to verify only the video classification models, such as the 3Bi-LSTM-BN-DP-H and 1BiGRU-BN-DP-H models, as the 2D-CNN end-to-end models produce flickering results. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence | en_US
dc.title | Driver state monitoring of intelligent vehicles part I: in-cabin activity identification | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Lyu Chen | en_US
dc.contributor.school | School of Mechanical and Aerospace Engineering | en_US
dc.description.degree | Bachelor of Engineering (Aerospace Engineering) | en_US
dc.contributor.supervisoremail | lyuchen@ntu.edu.sg | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | restricted | -
Appears in Collections:MAE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
FYP report_FOO WENG KEAT_FINAL.pdf | Restricted Access | 3.1 MB | Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.