Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/148510
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Zichen | en_US |
dc.contributor.author | Subagdja, Budhitama | en_US |
dc.contributor.author | Tan, Ah-Hwee | en_US |
dc.date.accessioned | 2021-05-25T09:09:51Z | - |
dc.date.available | 2021-05-25T09:09:51Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | Chen, Z., Subagdja, B. & Tan, A. (2019). End-to-end deep reinforcement learning for multi-agent collaborative exploration. 2019 IEEE International Conference on Agents (ICA), 99-102. https://dx.doi.org/10.1109/AGENTS.2019.8929192 | en_US |
dc.identifier.isbn | 9781728140261 | - |
dc.identifier.uri | https://hdl.handle.net/10356/148510 | - |
dc.description.abstract | Exploring an unknown environment with multiple autonomous robots is a major challenge in robotics. As multiple robots are assigned to explore different locations, they may interfere with each other, making the overall task less efficient. In this paper, we present a new model called CNN-based Multi-agent Proximal Policy Optimization (CMAPPO) for multi-agent exploration, wherein the agents learn an effective strategy to allocate and explore the environment using a new deep reinforcement learning architecture. The model combines a convolutional neural network to process multi-channel visual inputs, curriculum-based learning, and the PPO algorithm for motivation-based reinforcement learning. Evaluations show that the proposed method learns a more efficient strategy for multiple agents to explore the environment than the conventional frontier-based method. | en_US |
dc.description.sponsorship | National Research Foundation (NRF) | en_US |
dc.language.iso | en | en_US |
dc.rights | © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/AGENTS.2019.8929192 | en_US |
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence | en_US |
dc.title | End-to-end deep reinforcement learning for multi-agent collaborative exploration | en_US |
dc.type | Conference Paper | en |
dc.contributor.school | School of Electrical and Electronic Engineering | en_US |
dc.contributor.conference | 2019 IEEE International Conference on Agents (ICA) | en_US |
dc.contributor.research | ST Engineering-NTU Corporate Lab | en_US |
dc.identifier.doi | 10.1109/AGENTS.2019.8929192 | - |
dc.description.version | Accepted version | en_US |
dc.identifier.scopus | 2-s2.0-85077815398 | - |
dc.identifier.spage | 99 | en_US |
dc.identifier.epage | 102 | en_US |
dc.subject.keywords | Multi-agent Exploration | en_US |
dc.subject.keywords | Deep Learning | en_US |
dc.citation.conferencelocation | Jinan, China | en_US |
item.grantfulltext | open | - |
item.fulltext | With Fulltext | - |
Appears in Collections: | EEE Conference Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Observation_based_Deep_Reinforcement_Learning_for_Multi_agent_Collaborative_Exploration.pdf | | 515.77 kB | Adobe PDF | View/Open |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.