Title: A new interaction framework for human and robot
Authors: Dinh, Quang Huy
Keywords: DRNTU::Engineering::Mechanical engineering::Robots
Issue Date: 2018
Source: Dinh, Q. H. (2018). A new interaction framework for human and robot. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: Despite technological advancements in robotics, designing a user-friendly robot that interacts with humans remains one of the most technically challenging problems for researchers. One possible solution is to design a robot that mimics human appearance and behavior. Although the idea is promising, current technologies still fail to create such a robot because of the complexity of the human body and brain. This leaves users frustrated and annoyed when interacting with these types of robots. In fact, users normally expect humanoids to be more human-like than they really are, which ultimately leads to disappointment and discomfort when the robot fails to be as intelligent as expected. As a result, the future of humanoids remains uncertain. The indirect interaction approach, on the other hand, offers another way to address the problem. This framework uses a mediating object, normally a standard projector such as a video or LED projector, to facilitate the interaction process. However, the framework also has limitations that make it difficult to accept as a standardized human-robot interaction framework applicable across different contexts. First, current indirect interaction interfaces provide only one channel for communicating with the robot, namely the standard projector, which prevents the robot from communicating with multiple users who have different access levels. The current setup also cannot guarantee the security of system information, since the information is projected on the floor and can be modified by any user. Next, the framework applies only to a limited number of applications, because a normal projector cannot generate bright, high-contrast images in bright or outdoor environments. Finally, the current framework provides only a single-modal input device for the user to send commands or interact with the mediating system.
This not only limits the number of applications the interface can support but also reduces the reliability of the interaction process, because the user has no other input modality with which to validate the robot's responses when they are inappropriate. This thesis proposes a framework for an indirect interaction interface that addresses the deficiencies of current systems. To overcome these challenges, the new framework must support multiple users with different levels of interaction. The mediating channels must be reconfigured to increase the security of the exchanged information while allowing the robot to send feedback to the surrounding environment, informing people who share its working environment of its operational status. Furthermore, the input device must support multimodal input to enhance the robustness of transmitting commands from human to robot, which in turn allows the system to be deployed in different environments. To do so, the system combines two augmented reality techniques: see-through augmented reality and spatial augmented reality. In addition, a laser writer and a wearable handheld device are specially designed for the interface to facilitate the interaction process. Finally, a dialog framework between human and robot is introduced that allows "the human to say less and the robot to do more" during interaction.
DOI: 10.32657/10356/75865
Schools: Interdisciplinary Graduate School (IGS) 
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: IGS Theses

Files in This Item:
File: Thesis_Official.pdf | Description: PhD Thesis | Size: 6.18 MB | Format: Adobe PDF

Page view(s): 50
Download(s): 20
Updated on Jun 21, 2024

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.