Title: A framework for human-robot interaction with a mobile and telepresence enabled anthropomorphic robot
Authors: Pang, Wee Ching
Keywords: Engineering::Mechanical engineering::Robots
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Pang, W. C. (2021). A framework for human-robot interaction with a mobile and telepresence enabled anthropomorphic robot. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/152630

Abstract:
Effectiveness in both personal and professional relationships lies in one's ability to communicate. In addition to words, nonverbal cues, or "body language," are recognized to enhance interpersonal communication. Body language is the use of physical behavior, expressions, and mannerisms to communicate non-verbally. Telepresence robots, or robotic avatars, can be viewed as tools that embody gestures within a medium for distance interpersonal communication. Early designs of telepresence robots addressed the limitation of a static display by integrating a mobile robotic base with the screen that displays the robot's user. This permits the user to remotely drive the platform from point to point, extending its interaction space, and to interact with another moving human. Such robots, although not resembling the human form, offer the user an opportunity to exercise presence remotely. A trend towards anthropomorphic telepresence robots is emerging. In addition to the basic functions, such as navigation and conversation, that traditional telepresence robots are expected to perform, the humanoid form enables a user to gesture and manipulate objects remotely. This increase in functionality, however, can impose an overwhelming cognitive workload: the robot's user must simultaneously drive the robot, manipulate its arms for gesturing, and hold a meaningful conversation. This level of cognitive load increases the possibility of accidents.
In addition, the secondary tasks relating to motion and gesturing hamper the primary task of effective interpersonal communication. Attempts to mitigate this high human cognitive workload have taken the approach of providing autonomy for secondary functions. Autonomy is, however, not a panacea for all situations. Inappropriate autonomy can cause frustration, diminished situation awareness, and unintended outcomes. The level of autonomy must therefore be selected appropriately for each episode of robot deployment and adjusted dynamically to support the interaction tasks. When navigating the robot along a cluttered hallway, for example, automated obstacle avoidance is desirable; when moving the robot close to an object for a closer look, however, the same automation may trigger an unwanted avoidance response. Current telepresence robots lack a generic structure for implementing autonomy dynamically, and little consideration has been given to which aspects of telepresence robot deployments should be automated. To address this, the thesis identifies three aspects that characterize the autonomy needs of telepresence robots: navigation, gesticulation, and conversation. With the goal of addressing these concerns, a framework called "Autonomy Levels for Social and Telepresence Robots," or ALSTER, is proposed. It is a set of core software modules categorized according to the identified aspects of autonomy required for telepresence robots. These software modules are cohesively arranged in a hybrid six-layered framework to incorporate an appropriate level of autonomous behavior during the various stages of interaction. Past software frameworks for similar platforms either focused only on the navigation aspect, neglecting gesticulation and conversation, or were simply multiple iterations of the same robot.
This thesis describes the conceptualization of a framework that unifies the navigation, gesticulation, and conversation aspects with adjustable autonomy. The framework has been implemented on platforms that support platform motion, hand gestures, verbal communication, and video interaction. These implementations demonstrate both the primary and secondary interactions between remote humans. Across four evaluation studies, the software modules were demonstrated as a tool for realizing human-robot systems in communication tasks. In the limited trials, the framework was shown to support enhanced interpersonal interaction at lower cognitive loads and to mitigate the issues of inappropriate autonomy. The addition of autonomous navigation behavior to a telepresence robotic avatar improved collision-free navigation across a variety of scenarios, including one involving young children in a learning engagement. Without the varying levels of autonomy, the robot would have been less able to adjust to the needs of physical interaction and the tasks of teaching and learning.

URI: https://hdl.handle.net/10356/152630
DOI: 10.32657/10356/152630
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: MAE Theses
Updated on May 24, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.