Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/148510
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chen, Zichen | en_US
dc.contributor.author | Subagdja, Bhuditama | en_US
dc.contributor.author | Tan, Ah-Hwee | en_US
dc.date.accessioned | 2021-05-25T09:09:51Z | -
dc.date.available | 2021-05-25T09:09:51Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | Chen, Z., Subagdja, B. & Tan, A. (2019). End-to-end deep reinforcement learning for multi-agent collaborative exploration. 2019 IEEE International Conference on Agents (ICA), 99-102. https://dx.doi.org/10.1109/AGENTS.2019.8929192 | en_US
dc.identifier.isbn | 9781728140261 | -
dc.identifier.uri | https://hdl.handle.net/10356/148510 | -
dc.description.abstract | Exploring an unknown environment with multiple autonomous robots is a major challenge in robotics. When multiple robots are assigned to explore different locations, they may interfere with each other, making the overall task less efficient. In this paper, we present a new model called CNN-based Multi-agent Proximal Policy Optimization (CMAPPO) for multi-agent exploration, wherein the agents learn an effective strategy to allocate and explore the environment using a new deep reinforcement learning architecture. The model combines a convolutional neural network to process multi-channel visual inputs, curriculum-based learning, and the PPO algorithm for motivation-based reinforcement learning. Evaluations show that the proposed method learns a more efficient exploration strategy for multiple agents than the conventional frontier-based method. | en_US
dc.description.sponsorship | National Research Foundation (NRF) | en_US
dc.language.iso | en | en_US
dc.rights | © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/AGENTS.2019.8929192 | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence | en_US
dc.title | End-to-end deep reinforcement learning for multi-agent collaborative exploration | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.contributor.conference | 2019 IEEE International Conference on Agents (ICA) | en_US
dc.contributor.research | ST Engineering-NTU Corporate Lab | en_US
dc.identifier.doi | 10.1109/AGENTS.2019.8929192 | -
dc.description.version | Accepted version | en_US
dc.identifier.scopus | 2-s2.0-85077815398 | -
dc.identifier.spage | 99 | en_US
dc.identifier.epage | 102 | en_US
dc.subject.keywords | Multi-agent Exploration | en_US
dc.subject.keywords | Deep Learning | en_US
dc.citation.conferencelocation | Jinan, China | en_US
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
Appears in Collections: EEE Conference Papers
Files in This Item:
File | Description | Size | Format
Observation_based_Deep_Reinforcement_Learning_for_Multi_agent_Collaborative_Exploration.pdf |  | 515.77 kB | Adobe PDF
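The abstract above describes CMAPPO as combining a CNN over multi-channel visual inputs with the PPO algorithm. The paper's own code is not part of this record, so the following is only a minimal PyTorch sketch of those two ingredients, not the authors' implementation: a small CNN actor-critic over grid-map observations, plus PPO's standard clipped surrogate loss. The layer sizes, the four assumed observation channels, and the four-action move set are illustrative assumptions; the curriculum-learning and motivation components are omitted.

# Minimal sketch (NOT the authors' CMAPPO code): CNN actor-critic + PPO clipped loss.
# All sizes, channel meanings, and action counts below are illustrative assumptions.
import torch
import torch.nn as nn

class CNNPolicy(nn.Module):
    """Shared CNN encoder with an actor head (action logits) and a critic head (state value)."""
    def __init__(self, in_channels=4, n_actions=4, grid=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 32 * (grid // 4) * (grid // 4)  # two stride-2 convs halve the grid twice
        self.actor = nn.Linear(feat, n_actions)
        self.critic = nn.Linear(feat, 1)

    def forward(self, obs):
        h = self.encoder(obs)
        return self.actor(h), self.critic(h).squeeze(-1)

def ppo_loss(new_logp, old_logp, advantage, clip_eps=0.2):
    """PPO clipped surrogate objective (Schulman et al., 2017), to be minimized."""
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return -torch.min(unclipped, clipped).mean()

# Toy usage: a batch of 8 observations with 4 hypothetical map channels
# (e.g. obstacles, explored area, own position, other agents).
policy = CNNPolicy()
obs = torch.randn(8, 4, 32, 32)
logits, value = policy(obs)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()
print(logits.shape, value.shape, action.shape)

Sharing one encoder between the actor and critic heads is a common PPO design choice that keeps the network small; whether CMAPPO shares its encoder this way is not stated in this record.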

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.