Please use this identifier to cite or link to this item:
|Title:||An automated dialogue management system for a museum robotic guide||Authors:||Yu, Eugene GuangQian||Keywords:||DRNTU::Engineering::Mechanical engineering::Robots||Issue Date:||2015||Abstract:||Tour guiding is carried out in museums and many places of interest, but the quality of human tour guides is difficult to keep uniform, and the nature of the job can be repetitive and mundane. A natural next step is therefore to replace human tour guides with robots, both to achieve uniform tour quality and to remove the need for humans to work such jobs. Since 1997, when the first robotic tour guide, Rhino, was placed in a museum exhibition, research on robotic tour guides has shifted away from map planning and localization; current work focuses more on content generation, physical gestures and facial expressions. These changes give users an engaging and interactive experience with the robots. In future, when artificial intelligence is added, a seamless, life-like interaction with a robot is expected. For this project, a museum robotic guide is built on an existing platform, MAVEN. To mimic a human tour guide, the fully autonomous robotic guide gives a basic tour by guiding users around a given floor area, educating them on the displayed items and, at the end of the tour, conducting a question and answer (QnA) session. To achieve a fully autonomous robotic guide, we focused on the Automated Docking System, the Automated Navigation System and the Automated Dialogue Management System (ADMS). The ADMS uses the VHToolKit from the University of Southern California (USC) Institute for Creative Technologies (ICT). To use the VHToolKit as the ADMS, a QnA database is created along with scripts for the displayed items and a program that communicates over the Transmission Control Protocol/Internet Protocol (TCP/IP), allowing information to be shared among the three FYP students' subsystems so that the robot functions fully autonomously and in synchronization.
To establish an engaging and interactive experience, the robot generates responses matched with hand gestures, facial expressions and a lip-synced voice. Based on a survey conducted, this project established its scope and goals: the robot should operate fully autonomously, deliver scripts at precise locations, and satisfy the survey participants. In future, gesture sensing, human detection, and a better Automatic Speech Recognizer (ASR) and Text-To-Speech (TTS) engine should be implemented to make the user experience more awe-inspiring.||URI:||http://hdl.handle.net/10356/64920||Rights:||Nanyang Technological University||Fulltext Permission:||restricted||Fulltext Availability:||With Fulltext|
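The abstract describes the subsystems sharing information over TCP/IP so that docking, navigation, and dialogue stay synchronized. The report's actual code is not available here, so the following is only a minimal Python sketch of the general pattern: one subsystem (a navigation stand-in) listens on a socket, and the ADMS side queries it for the robot's position before delivering a script. The host, port, and `STATUS?`/`ARRIVED:` message format are illustrative assumptions, not the project's protocol.

```python
# Minimal sketch (NOT the project's actual code) of TCP/IP message passing
# between a dialogue manager and a navigation subsystem.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050  # assumed local endpoint


def navigation_stub(ready):
    """Stand-in for the navigation subsystem: reports arrival at an exhibit."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()  # signal that the server is accepting connections
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            if request == "STATUS?":
                # Hypothetical message format: "ARRIVED:<exhibit id>"
                conn.sendall(b"ARRIVED:exhibit_3")


def adms_query():
    """ADMS side: ask the navigation subsystem where the robot is."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"STATUS?")
        return cli.recv(1024).decode()


ready = threading.Event()
server = threading.Thread(target=navigation_stub, args=(ready,))
server.start()
ready.wait()          # wait until the stub is listening before connecting
reply = adms_query()  # e.g. "ARRIVED:exhibit_3"
server.join()
print(reply)
```

In a setup like this, the ADMS would parse the exhibit identifier out of the reply and trigger the corresponding script and QnA entries; the same request/response pattern could link all three FYP subsystems.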
|Appears in Collections:||MAE Student Reports (FYP/IA/PA/PI)|
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.