Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/141400
Title: Knowledge-based multimodal information fusion for role recognition and situation assessment by using mobile robot
Authors: Yang, Chule; Wang, Danwei; Zeng, Yijie; Yue, Yufeng; Siritanawan, Prarinya
Keywords: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Issue Date: 2018
Source: Yang, C., Wang, D., Zeng, Y., Yue, Y., & Siritanawan, P. (2019). Knowledge-based multimodal information fusion for role recognition and situation assessment by using mobile robot. Information Fusion, 50, 126-138. doi:10.1016/j.inffus.2018.10.007
Journal: Information Fusion
Abstract: Decision-making is key for autonomous systems to achieve real intelligence and autonomy. This paper presents an integrated probabilistic decision framework for a robot to infer the roles that humans fulfill in specific missions. The framework also enables assessment of the situation and of the necessity of interacting with the person fulfilling the target role. The target role is the person who is distinctive in movement or holds a mission-critical object, where the object is pre-specified in the corresponding mission. The proposed framework associates prior knowledge with spatial relationships between humans and objects, as well as with their temporal changes. Two modules support the recognition of human roles: Distance-Based Inference (DBI), which deduces the role from the relative distance between humans and the specified objects, and Knowledge-Based Inference (KBI), which focuses on human actions and object existence. The role is estimated using a weighted fusion scheme based on information entropy. The situation is assessed by analyzing the action of the person fulfilling the target role and the position of this person relative to mission-related entities, where an entity is something that has a particular function in the corresponding mission. This assessment determines the robot's decision on what actions to take. A series of experiments has shown that the proposed framework provides a reasonable assessment of the situation; moreover, it outperforms other approaches in accuracy, efficiency, and robustness.
URI: https://hdl.handle.net/10356/141400
ISSN: 1566-2535
DOI: 10.1016/j.inffus.2018.10.007
Rights: © 2018 Elsevier B.V. All rights reserved. This paper was published in Information Fusion and is made available with permission of Elsevier B.V.
Fulltext Permission: open
Fulltext Availability: With Fulltext
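The abstract describes an entropy-weighted fusion of the DBI and KBI role estimates but does not spell out the weighting rule. The following is a minimal Python sketch of one plausible reading, in which each source's weight is inversely proportional to the Shannon entropy of its role distribution, so a more confident (lower-entropy) source dominates the fused estimate. The function names and the inverse-entropy rule are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in nats) of a discrete role distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability roles contribute nothing
    return -np.sum(p * np.log(p))

def fuse_role_estimates(p_dbi, p_kbi):
    """Fuse DBI and KBI role distributions with entropy-derived weights.

    Assumption: each source is weighted by the inverse of its entropy,
    so the more certain source contributes more to the fused estimate.
    """
    w_dbi = 1.0 / (shannon_entropy(p_dbi) + 1e-9)  # guard against H = 0
    w_kbi = 1.0 / (shannon_entropy(p_kbi) + 1e-9)
    fused = w_dbi * np.asarray(p_dbi) + w_kbi * np.asarray(p_kbi)
    return fused / fused.sum()  # renormalize to a valid distribution

# Example with three candidate roles: DBI is peaked (low entropy) while
# KBI is spread out, so the fused estimate follows DBI more closely.
p_dbi = [0.8, 0.1, 0.1]  # distance-based inference
p_kbi = [0.4, 0.4, 0.2]  # knowledge-based inference
print(fuse_role_estimates(p_dbi, p_kbi))
```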
Appears in Collections: EEE Journal Articles
Files in This Item:
File | Description | Size | Format
---|---|---|---
Knowledge-based multimodal information fusion for role recognition and situation.pdf | | 4.91 MB | Adobe PDF