Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/73328
Title: A study on algorithms and applications of eye gaze tracking
Authors: Liu, Yi
Keywords: DRNTU::Engineering::Computer science and engineering
Issue Date: 2018
Source: Liu, Y. (2018). A study on algorithms and applications of eye gaze tracking. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: The ability to express oneself quickly and precisely is fundamental to quality of life. However, some physically challenged people, such as those with motor neurone disease (MND), have difficulty communicating by common means. People with such severe conditions may be unable even to speak, and control of the eyes may be their only means of communication. With the development of modern assistive technology, a number of eye-based typing systems have been proposed to help these physically challenged people improve their communication ability, which is of tremendous benefit. However, because of the dwell-time requirement, text entry in early eye-typing systems tends to be relatively slow. Recently, dwell-free systems have been proposed that are much faster than existing dwell-based typing systems, but they can be vulnerable to common text entry problems, such as selection of the wrong letters. Therefore, in this thesis, we first propose a dwell-free system, GazeTry, with two robust recognition algorithms for inferring the word the user intends to type. The proposed algorithms determine the intended word from the sequence of letters along the gaze trace rather than from pauses on individual letters, which makes them more robust to missing letters and to cases where a neighboring letter on the keyboard is incorrectly selected. Simulation and experimental results suggest that the algorithms are more accurate and more resilient to common text entry errors than other currently proposed dwell-free systems. However, such typing systems still require a dedicated eye-tracking device, which is inconvenient in some cases. Therefore, we propose a prototype eye-based typing system that uses an off-the-shelf webcam without an extra eye tracker. In this system, an appearance-based algorithm is proposed to estimate the person's gaze coordinates on the screen from frontal face images captured by the webcam.
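The abstract does not detail GazeTry's recognition algorithms. As a generic illustration of trace-based word inference (not the thesis's method), the idea of matching the letter sequence along a gaze trace against a vocabulary can be sketched with a keyboard-aware edit distance, where substituting a neighboring key costs less than substituting a distant one; all names here are hypothetical:

```python
# Hypothetical sketch of dwell-free word inference: score candidate words
# against the sequence of letters a gaze trace passed over, using an edit
# distance that charges less for keyboard-adjacent substitutions.
# Illustrates the general idea only, not the thesis's GazeTry algorithms.

# Rows of a QWERTY layout, used to decide which letters are neighbors.
QWERTY = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_pos(ch):
    """(row, column) of a letter on the QWERTY grid."""
    for r, row in enumerate(QWERTY):
        if ch in row:
            return r, row.index(ch)
    raise ValueError(ch)

def sub_cost(a, b):
    """Substitution cost: 0 for a match, 0.5 for adjacent keys, 1 otherwise."""
    if a == b:
        return 0.0
    (r1, c1), (r2, c2) = key_pos(a), key_pos(b)
    return 0.5 if abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1 else 1.0

def trace_distance(trace, word):
    """Weighted edit distance between a gaze-trace letter sequence and a word."""
    n, m = len(trace), len(word)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = float(i)          # cost of spurious trace letters
    for j in range(1, m + 1):
        d[0][j] = float(j)          # cost of letters the trace missed
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + 1.0,  # spurious letter in the trace
                d[i][j - 1] + 1.0,  # letter the trace skipped
                d[i - 1][j - 1] + sub_cost(trace[i - 1], word[j - 1]),
            )
    return d[n][m]

def infer_word(trace, vocabulary):
    """Return the vocabulary word closest to the observed gaze trace."""
    return min(vocabulary, key=lambda w: trace_distance(trace, w))

vocab = ["hello", "help", "hollow", "world"]
print(infer_word("hwllo", vocab))  # neighboring-key error: 'e' read as 'w'
```

Because a neighboring-key substitution costs only 0.5, the noisy trace "hwllo" still resolves to "hello" rather than a more distant vocabulary word.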
In addition, some critical issues of the appearance-based method are investigated, which helps to improve estimation accuracy and reduce computational complexity in practice. The experimental results show the feasibility of eye typing using a webcam, and the proposed method is comparable to a dedicated eye tracker under small head movements. Gaze estimation under natural head movement remains a major research challenge. Although researchers have made considerable efforts to address it, one fundamental problem has yet to be studied extensively in existing appearance-based algorithms: how to integrate the left and right eye images. In this thesis, we develop a data-collection application that records pairs of face images and the corresponding gaze points of subjects. Unlike other existing datasets, our data are collected while subjects do their daily work under natural head movement, without any constraint. We then propose a new reconstruction method that aligns the two appearance spaces of both eyes into the same local structure, and we also investigate the impact of the degree of head pose variation. The experimental results show that the proposed method outperforms existing combination methods under natural head movement.
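The abstract does not specify the appearance-based estimator. As a minimal illustration of the general appearance-based approach (not the thesis's reconstruction method), gaze can be regressed from eye-image feature vectors by interpolating among the nearest calibration samples; the data and names below are synthetic:

```python
# Hypothetical sketch of appearance-based gaze estimation: treat each eye
# image as a feature vector and estimate screen coordinates from the k
# nearest calibration appearances. Generic illustration only; the thesis's
# method that aligns left- and right-eye appearance spaces is more involved.
import math

def knn_gaze(query, samples, k=3):
    """Estimate (x, y) gaze as the mean target of the k nearest appearances.

    samples: list of (feature_vector, (x, y)) calibration pairs.
    query:   feature vector of the current eye image.
    """
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    nearest = sorted(samples, key=lambda s: dist(s[0], query))[:k]
    xs = [p[0] for _, p in nearest]
    ys = [p[1] for _, p in nearest]
    return sum(xs) / k, sum(ys) / k

# Tiny synthetic calibration set: appearance features -> screen points.
calib = [
    ([0.0, 0.0], (100, 100)),
    ([0.1, 0.0], (110, 100)),
    ([0.0, 0.1], (100, 110)),
    ([1.0, 1.0], (800, 600)),
]
x, y = knn_gaze([0.05, 0.05], calib, k=3)
```

In practice the feature vectors would come from cropped, normalized eye images, and the calibration set from the on-screen targets a subject fixates during setup; head movement changes eye appearance, which is why integrating the two eyes' appearance spaces matters.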
URI: http://hdl.handle.net/10356/73328
DOI: 10.32657/10356/73328
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:IGS Theses

Files in This Item:
File: Final-LiuYi.pdf | Description: PhD Thesis | Size: 7.46 MB | Format: Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.