Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/154171
Title: Enhancing multimodal interactions with eye-tracking for virtual reality applications
Authors: Chia, Wen Han
Keywords: Engineering::Industrial engineering::Human factors engineering
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Chia, W. H. (2021). Enhancing multimodal interactions with eye-tracking for virtual reality applications. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/154171
Project: A271
Abstract: The motion of dragging is a common yet imperative action in many forms of human-computer interaction, including Virtual Reality. With the growing availability of commercial eye-tracking devices, researchers have begun to investigate the performance of eye-based multimodal interactions in dragging tasks in desktop settings. However, little is known about the performance of eye-based multimodal interactions in 3D dragging tasks with Virtual Reality head-mounted displays. Thirty-one participants volunteered for a study that compared the usability of eye-gaze with button click, eye-gaze with dwell time, and the default Vive controller for 3D dragging tasks in Virtual Reality head-mounted displays. Based on the ISO 9241-9 standard, a novel immersive 3D dragging task was designed and implemented for the experiment. Task difficulty was varied by adjusting three variables: target width, target-destination angular distance, and direction of path curvature. An additional selection task was implemented alongside the dragging task to investigate multitasking performance. Contrary to our hypothesis, the controller was the fastest, achieved the highest throughput, and was the most preferred of the three modalities. It also offered the highest precision and accuracy in the dragging task. Notably, gaze with click achieved speed and accuracy comparable to the controller. Although both gaze with click and gaze with dwell were highly imprecise in the dragging task, they were still well preferred by participants. Furthermore, design guidelines were recommended for visual targets' position in the horizontal field of view and for visual target size in the immersive 3D dragging task. In conclusion, the controller is the most usable modality for an immersive 3D dragging task; gaze with click could still suffice as a usable modality when low precision is required.
URI: https://hdl.handle.net/10356/154171
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:MAE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP Report - Chia Wen Han.pdf (Restricted Access)
Size: 1.64 MB
Format: Adobe PDF



Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.