Title: Modelling temporal contextual information in eye movement data with application to gaze gesture recognition
Authors: Du, Weiwei
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2015
Abstract: Over the past 20 years, technology for in-car human-machine interaction (HMI) has expanded, but driver distraction has become a growing safety concern. Researchers have built systems that monitor the driver's state by tracking eye movements, traditionally using eye data such as gaze position, fixations, or saccades as features. A more robust approach is to use temporal contextual information, which is extracted from the scan path and preserves more of the eye movement information. However, there has been little systematic research into the different ways of modelling temporal contextual information in eye movement data. The author therefore investigates three methods of modelling temporal contextual information and, to gain a better understanding, uses the application of eye gaze gesture recognition to compare the methods and algorithms. Furthermore, the author implemented a gaze gesture recognition application as a pilot study to examine whether it could be applied while driving. As a result, this work provides insights into the different methods, and the application itself can serve as a prototype for further driving-related applications.
Schools: School of Electrical and Electronic Engineering
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
FYP report (Adobe PDF, 1.44 MB), Restricted Access


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.