Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/69031
Title: A study of multi-agent reinforcement learning with swarm intelligence
Authors: Chen, Caishun
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2016
Abstract: Cooperative multi-agent systems (MASs) are ones in which several agents attempt, through their interaction, to jointly solve tasks or to maximize utility [25]. Cooperative multi-agent learning involves constructing the learning system so as to encourage cooperation among the agents in either concurrent learning or teamwork [25]. In most MASs, cooperative learning is realized by reinforcement learning (RL) because of the online and interactive behavior of agents. However, while multi-agent learning has been reported across a wide range of application domains, multi-agent reinforcement learning based on principles from swarm intelligence (SI) remains under-explored. To this end, this dissertation explores the design of cooperative multi-agent reinforcement learning frameworks that leverage the emergent behaviors of swarm intelligence. The research presented in this dissertation examines the role of swarm intelligence algorithms in multi-agent cooperative reinforcement learning. It applies SI-inspired approaches on top of the RL agent learning framework to provide both principles for constructing complex systems involving multiple agents and mechanisms for coordinating the behaviors of independent agents. In this dissertation, a self-organizing neural model called Temporal Difference-Fusion Architecture for Learning and Cognition (TD-FALCON) [33, 34] is adopted as the RL agent. The synergy between TD-FALCON and swarm intelligence is studied using multiple simultaneous learners, each associated with one or more agents (concurrent learning), coordinated via rules derived from swarm intelligence. The main contribution of this thesis is a set of proposed learning approaches in which agents cooperatively solve tasks by communicating through SI-derived mechanisms, using different forms of distributed sensing and communication content inspired by swarm behaviors such as flocking formation and ant colonies. In particular, two SI-inspired multi-agent reinforcement learning approaches are proposed: a flocking-based cooperative learning MAS and a pheromone-guided ant colony TD-FALCON network. The effectiveness of these approaches is demonstrated on pursuit game and resource gathering problems.
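As a rough, non-authoritative illustration of how such SI-guided reinforcement learning can be organized, the sketch below pairs a standard temporal-difference (Q-learning style) value update with a pheromone-biased action rule. It is not the thesis's TD-FALCON implementation (TD-FALCON is a self-organizing fusion-ART neural model, not a tabular learner); the q_table, pheromone dictionary, ALPHA/GAMMA values, and neighbour_of helper are assumed stand-ins for illustration only.

from collections import defaultdict

# Hypothetical sketch of the two ideas named in the abstract: a per-agent
# temporal-difference value update, and a pheromone field laid by teammates
# that biases action selection (ant-colony-style coordination).

ALPHA, GAMMA = 0.5, 0.9                 # assumed learning rate and discount factor
ACTIONS = ["up", "down", "left", "right"]

q_table = defaultdict(float)            # (state, action) -> learned value estimate
pheromone = defaultdict(float)          # state -> pheromone deposited by teammates


def td_update(state, action, reward, next_state):
    """One temporal-difference update toward reward + discounted best next value."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    td_error = reward + GAMMA * best_next - q_table[(state, action)]
    q_table[(state, action)] += ALPHA * td_error


def choose_action(state, neighbour_of):
    """Greedy action choice over learned value plus the pheromone of the next cell."""
    return max(ACTIONS,
               key=lambda a: q_table[(state, a)] + pheromone[neighbour_of(state, a)])


def deposit_pheromone(state, amount=1.0, evaporation=0.1):
    """Lay pheromone at a visited state and let the rest of the field evaporate."""
    for s in list(pheromone):
        pheromone[s] *= (1.0 - evaporation)
    pheromone[state] += amount

In this simplified view, each agent learns its own value estimates with td_update, while deposit_pheromone and choose_action provide the indirect, stigmergy-like communication channel through which the agents coordinate.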
URI: http://hdl.handle.net/10356/69031
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Theses

Files in This Item:
File: Master-thesis-Final (ChenCaishun G1402576J).pdf (Restricted Access)
Description: Main article
Size: 1.56 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.