Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/148510
Title: End-to-end deep reinforcement learning for multi-agent collaborative exploration
Authors: Chen, Zichen; Subagdja, Budhitama; Tan, Ah-Hwee
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2019
Source: Chen, Z., Subagdja, B. & Tan, A. (2019). End-to-end deep reinforcement learning for multi-agent collaborative exploration. 2019 IEEE International Conference on Agents (ICA), 99-102. https://dx.doi.org/10.1109/AGENTS.2019.8929192
Conference: 2019 IEEE International Conference on Agents (ICA)
Abstract: Exploring an unknown environment with multiple autonomous robots is a major challenge in robotics. As multiple robots are assigned to explore different locations, they may interfere with each other, making the overall task less efficient. In this paper, we present a new model for multi-agent exploration called CNN-based Multi-agent Proximal Policy Optimization (CMAPPO), in which the agents learn an effective strategy for allocating and exploring the environment using a new deep reinforcement learning architecture. The model combines a convolutional neural network for processing multi-channel visual inputs, curriculum-based learning, and the PPO algorithm for motivation-based reinforcement learning. Evaluations show that the proposed method learns a more efficient exploration strategy for multiple agents than the conventional frontier-based method.
URI: https://hdl.handle.net/10356/148510
ISBN: 9781728140261
DOI: 10.1109/AGENTS.2019.8929192
Schools: School of Electrical and Electronic Engineering
Research Centres: ST Engineering-NTU Corporate Lab
Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/AGENTS.2019.8929192
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Conference Papers
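The CMAPPO model described in the abstract builds on PPO's clipped surrogate objective, which limits how far a policy update can move from the policy that collected the data. A minimal sketch of that objective for a single sample (illustrative only, not the authors' code; the function name and default clip range ε = 0.2 are assumptions based on the standard PPO formulation):

```python
import math

def ppo_clip_objective(new_logp, old_logp, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate objective.

    new_logp / old_logp: log-probability of the taken action under the
    current and the data-collecting policy; advantage: estimated advantage.
    """
    # Probability ratio r = pi_new(a|s) / pi_old(a|s), computed in log space.
    ratio = math.exp(new_logp - old_logp)
    # Clip the ratio to [1 - eps, 1 + eps] to bound the update size.
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    # Pessimistic (lower) bound of the two surrogate terms.
    return min(ratio * advantage, clipped * advantage)
```

With an unchanged policy the ratio is 1 and the objective equals the advantage; for a ratio of 2 and a positive advantage the clipped term 1.2 · A dominates, while for a negative advantage the unclipped term 2 · A is kept, so the penalty is not clipped away. In the paper's setting this objective would be averaged over the agents' sampled transitions, with the policy network fed by the CNN over multi-channel visual inputs.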
Files in This Item:
File | Description | Size | Format
---|---|---|---
Observation_based_Deep_Reinforcement_Learning_for_Multi_agent_Collaborative_Exploration.pdf | | 515.77 kB | Adobe PDF
Scopus™ Citations: 20 (updated Mar 21, 2025)
Web of Science™ Citations: 20 (updated Oct 30, 2023)
Page view(s): 327 (updated Mar 24, 2025)
Download(s): 240 (updated Mar 24, 2025)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.