Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/172040
Title: Multi-agent dueling Q-learning with mean field and value decomposition
Authors: Ding, Shifei; Du, Wei; Ding, Ling; Guo, Lili; Zhang, Jian; An, Bo
Keywords: Engineering::Computer science and engineering
Issue Date: 2023
Source: Ding, S., Du, W., Ding, L., Guo, L., Zhang, J. & An, B. (2023). Multi-agent dueling Q-learning with mean field and value decomposition. Pattern Recognition, 139, 109436. https://dx.doi.org/10.1016/j.patcog.2023.109436
Journal: Pattern Recognition
Abstract: A great deal of multi-agent reinforcement learning (MARL) work has investigated how multiple agents can effectively accomplish cooperative tasks using value function decomposition methods. However, existing value decomposition methods can only handle cooperative tasks with a shared reward, because they factorize the value function from a global perspective. To tackle competitive tasks and mixed cooperative-competitive tasks with differing individual rewards, we design the Multi-agent Dueling Q-learning (MDQ) method based on mean-field theory and individual value decomposition. Specifically, we integrate mean-field theory with value decomposition to factorize the value function at the individual level, which makes it possible to handle mixed cooperative-competitive tasks. In addition, we adopt a dueling network architecture that distinguishes which states are valuable, eliminating the need to learn the effect of each action in each state, thereby enabling efficient learning and better policy evaluation. The proposed MDQ method is applicable not only to cooperative tasks with a shared-reward setting but also to mixed cooperative-competitive tasks with individualized reward settings, making it flexible and general enough for most multi-agent tasks. Empirical experiments on various mixed cooperative-competitive tasks demonstrate that MDQ significantly outperforms existing multi-agent reinforcement learning methods.
URI: https://hdl.handle.net/10356/172040
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2023.109436
Schools: School of Computer Science and Engineering
Rights: © 2023 Elsevier Ltd. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: SCSE Journal Articles
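The dueling and mean-field ingredients named in the abstract are well-known building blocks (dueling networks, Wang et al., 2016; mean-field MARL, Yang et al., 2018). Since the full text is not available here, the sketch below is only a minimal, assumption-laden illustration of how a per-agent dueling Q-head can be conditioned on the mean action of neighbouring agents; it is not the authors' MDQ implementation, and all class, function, and parameter names are hypothetical.

```python
# Illustrative sketch only, NOT the paper's MDQ code: a per-agent dueling
# Q-network whose input includes the mean action of neighbouring agents.
import torch
import torch.nn as nn

class DuelingMeanFieldQ(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # Input: the agent's own observation concatenated with the mean
        # action of its neighbours (a probability vector over actions),
        # as in mean-field MARL.
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Dueling heads: a scalar state value V and per-action advantages A.
        self.value_head = nn.Linear(hidden, 1)
        self.adv_head = nn.Linear(hidden, n_actions)

    def forward(self, obs: torch.Tensor, mean_action: torch.Tensor) -> torch.Tensor:
        h = self.trunk(torch.cat([obs, mean_action], dim=-1))
        v = self.value_head(h)   # how valuable the state is, regardless of action
        a = self.adv_head(h)     # relative quality of each action in this state
        # Standard dueling aggregation: subtract the mean advantage so that
        # V and A are identifiable (Wang et al., 2016).
        return v + a - a.mean(dim=-1, keepdim=True)

# Hypothetical usage: one such network per agent; neighbours' behaviour
# enters only through their mean action.
q_net = DuelingMeanFieldQ(obs_dim=16, n_actions=5)
obs = torch.randn(32, 16)              # batch of 32 observations
mean_a = torch.full((32, 5), 0.2)      # uniform mean action over 5 actions
q_values = q_net(obs, mean_a)          # shape: (32, 5)
```

The separation into value and advantage streams lets the network judge state quality without evaluating every action, which is the efficiency argument the abstract makes; the mean-action input is what keeps the per-agent factorization tractable as the number of agents grows.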