Computational methods for modeling human-like agents in virtual environments
Date of Issue: 2014
School of Computer Engineering
Centre for Computational Intelligence
Virtual environments, also known as virtual worlds, are computer-simulated spaces that enable players or participants to engage in long-term, jointly coordinated actions, offering novel possibilities that may be impossible or too costly in a real-world setting. Hence, they have become a popular platform in a variety of contexts, including classroom teaching, informal learning, distance learning, business, and e-commerce. In a virtual world, the primary population consists of real people, represented as human-controlled agents or avatars. In addition, computer-controlled agents, also known as NPCs (non-player characters), play an important role. Given the popularity of NPCs in virtual worlds, many researchers have investigated approaches that incorporate intelligent learning agents to improve interactivity and playability. Three key requirements of realistic characters in virtual worlds have been identified, namely (1) Autonomy: agents should be able to function autonomously and make decisions proactively in response to events happening in the virtual world around them; (2) Interactivity: the interaction between virtual humans and real humans should be as natural as possible, integrating both verbal and nonverbal communication such as facial expressions and body gestures; and (3) Personality: virtual humans should exhibit human-like traits and characteristics, such as personalities and emotions. To the best of our knowledge, existing work focuses mainly on individual building blocks of the virtual human, such as animation, body language, natural language processing, emotion, decision making, and learning. We propose an architecture that integrates autonomy, interactivity and personification, identified above as the three most important features of virtual humans; no prior model emphasizes all three features together, and this gap motivates our work.
The first part of the dissertation presents a self-organizing neural model for creating intelligent learning agents in virtual worlds. As agents in a virtual world roam, interact and socialize with users and other agents as in the real world, without explicit goals or teachers, learning in virtual worlds presents many challenges not found in typical machine learning benchmarks. In this dissertation, we highlight the unique issues and challenges of building learning agents in virtual worlds using reinforcement learning. Specifically, we deploy a self-organizing neural model named TD-FALCON (Temporal Difference - Fusion Architecture for Learning and Cognition), which enables an autonomous agent to adapt and function in a dynamic environment with both immediate and delayed evaluative feedback signals. TD-FALCON integrates temporal difference methods and self-organizing neural networks for reinforcement learning with delayed evaluative feedback. By incorporating TD-FALCON, an agent is able to learn from the sensory and evaluative feedback signals received from the virtual environment without human supervision or intervention. In this way, the agent needs neither an explicit teacher nor a perfect model to learn from. Performing reinforcement learning in real time, it is also able to adapt itself to variations in the virtual environment and changes in user behavior patterns. Furthermore, by incorporating temporal difference learning, TD-FALCON agents can overcome issues such as the absence of immediate reward (or penalty) signals in virtual worlds by estimating the credit of an action based on its eventual outcome. We have implemented and evaluated TD-FALCON agents as tour guides in a virtual world environment. Our experimental results show that the agents are able to adapt and improve their performance in real time.
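The temporal difference credit assignment described above can be illustrated with a minimal sketch. The grid task, function names and parameter values below are illustrative assumptions, not taken from the dissertation or from the FALCON architecture itself; the sketch only shows how a TD (Q-learning) update propagates a delayed reward back to earlier actions.

```python
# Minimal sketch of the temporal difference update underlying credit
# assignment with delayed rewards. The corridor task and all names here
# are hypothetical, chosen only to illustrate the mechanism.
import random

random.seed(0)  # deterministic run for the sketch

def td_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One Q-learning (temporal difference) step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]

# Tiny corridor task: states 0..3, reward only on reaching state 3, so
# earlier actions receive credit only through bootstrapped estimates.
actions = ["left", "right"]
Q = {}
for episode in range(200):
    s = 0
    while s != 3:
        a = random.choice(actions)  # exploratory policy for the sketch
        s_next = min(s + 1, 3) if a == "right" else max(s - 1, 0)
        r = 1.0 if s_next == 3 else 0.0  # delayed evaluative feedback
        td_update(Q, s, a, r, s_next, actions)
        s = s_next

# After training, moving toward the reward is valued more highly even at
# the start state, although the reward itself arrives only at the end.
print(Q[(0, "right")] > Q[(0, "left")])
```

The key point for virtual-world agents is that no immediate reward is needed at every step: the bootstrapped target `r + gamma * max Q(s', a')` carries the eventual outcome backwards through the state sequence.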
To enable a more dynamic and personalized experience, we propose to incorporate adaptive user modeling into our learning agents. Specifically, a two-channel fusion ART [1, 2] is employed to learn player models through interaction with the players. By formulating cognitive codes associating the agent's recommendations with the user's feedback, fusion ART learns explicit player models of the users' likes and dislikes. We have developed personal agents with adaptive player models as tour guides in a virtual world environment. Our experimental results show that the agents are able to learn user models that evolve and adapt with players in real time. Furthermore, the virtual tour guides with player models outperform those without adaptive player modeling.

Working towards the challenges of creating realistic characters, this dissertation further proposes a brain-inspired agent architecture that integrates goal-directed autonomy, natural language interaction and human-like personification. Based on self-organizing neural models, the agent architecture maintains explicit mental representations of desires, personalities, situation awareness and user awareness. Autonomous behaviors are generated by evaluating the current situation against active goals and learning the most appropriate social or goal-directed rule from the available knowledge, in accordance with the personality of each individual agent. We have built and deployed realistic agents in an interactive 3D virtual environment. An empirical user study shows that the agents are able to exhibit realistic human-like behavior, in terms of actions and interactions with users, and improve the user experience in virtual environments.
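The two-channel player-modeling mechanism described above can be sketched in code. The class below is a simplified illustration assuming standard fuzzy ART operations (complement coding, fuzzy AND, choice and match functions) applied per channel; the encoding of recommendations and feedback, and the parameter values alpha, beta and rho, are assumptions for the sketch, not the dissertation's actual configuration.

```python
# Illustrative two-channel fusion ART learner. Channel 1 encodes the
# agent's recommendation; channel 2 the user's feedback. A resonating
# category node plays the role of a "cognitive code" pairing the two.
import numpy as np

class TwoChannelFusionART:
    def __init__(self, alpha=0.01, beta=1.0, rho=(0.7, 0.7)):
        self.alpha, self.beta, self.rho = alpha, beta, rho
        self.w = []  # one (complement-coded) weight pair per category

    @staticmethod
    def _cc(x):
        """Complement coding: concatenate x with 1 - x."""
        x = np.asarray(x, float)
        return np.concatenate([x, 1.0 - x])

    def learn(self, ch1, ch2):
        x = (self._cc(ch1), self._cc(ch2))
        # Choice function summed over both channels.
        scores = [sum(np.minimum(xk, wk).sum() / (self.alpha + wk.sum())
                      for xk, wk in zip(x, w)) for w in self.w]
        for j in np.argsort(scores)[::-1]:
            w = self.w[j]
            # Resonance requires the match criterion in every channel.
            if all(np.minimum(xk, wk).sum() / xk.sum() >= r
                   for xk, wk, r in zip(x, w, self.rho)):
                # Template learning: move weights toward fuzzy AND of x, w.
                self.w[j] = tuple((1 - self.beta) * wk
                                  + self.beta * np.minimum(xk, wk)
                                  for xk, wk in zip(x, w))
                return j
        self.w.append(x)  # no resonance: recruit a new category node
        return len(self.w) - 1

net = TwoChannelFusionART()
# Hypothetical encoding: recommendation [museum, park], feedback [liked].
a = net.learn([1.0, 0.0], [1.0])  # museum -> positive feedback
b = net.learn([0.0, 1.0], [0.0])  # park   -> negative feedback
print(a != b)  # distinct recommendation/feedback pairs get distinct codes
```

Reading out a learned code for a candidate recommendation would then predict the associated feedback, which is how such a model can capture a player's likes and dislikes.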
DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence