|Title:||Self-organizing neural architectures and multi-agent cooperative reinforcement learning|
|Authors:||Xiao, Dan|
|Keywords:||DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence|
|Issue Date:||2010|
|Source:||Xiao, D. (2010). Self-organizing neural architectures and multi-agent cooperative reinforcement learning. Doctoral thesis, Nanyang Technological University, Singapore.|
|Abstract:||Multi-agent systems, in which multiple agents work jointly to perform tasks through interaction, are a well-studied problem. Many approaches to multi-agent learning exist; among them, reinforcement learning is widely used because it does not require an explicit model of the environment. However, current multi-agent reinforcement learning approaches remain limited in adaptability and scalability in complex and specialized multi-agent domains. In any multi-agent reinforcement learning system, two major considerations are the reinforcement learning method used and the cooperative strategy among agents. In this research work, we propose to adopt a self-organizing neural network model, named Temporal Difference - Fusion Architecture for Learning, COgnition, and Navigation (TD-FALCON), for multi-agent reinforcement learning. TD-FALCON performs online, incremental learning in real time, with or without immediate reward signals, and thus enables an agent to learn effectively in a dynamic environment.|
|URI:||https://hdl.handle.net/10356/42406|
|DOI:||10.32657/10356/42406|
|Fulltext Permission:||open|
|Fulltext Availability:||With Fulltext|
|Appears in Collections:||SCSE Theses|
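The abstract above refers to model-free reinforcement learning based on temporal difference (TD) signals. As a point of reference, the sketch below shows a standard tabular TD (Q-learning) update; it is not the thesis's TD-FALCON self-organizing neural architecture, and the environment, action set, and parameter values are hypothetical illustrations only.

```python
# Minimal sketch of a model-free temporal-difference (Q-learning) update.
# NOT the TD-FALCON architecture from the thesis; a generic tabular illustration.
# The action set and hyperparameters below are hypothetical.
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration probability

ACTIONS = ["left", "right", "stay"]   # hypothetical action set
Q = defaultdict(float)                # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection: explore occasionally, otherwise exploit."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def td_update(state, action, reward, next_state):
    """One temporal-difference step. No explicit model of the environment is
    needed; only the sampled transition (state, action, reward, next_state)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    td_target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])
```

In a simple loop, an agent would call choose_action on the current state, observe the reward and next state from the environment, and then call td_update on that transition, repeating online as new experience arrives.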
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.