Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/149369
Title: Optimal persistent monitoring using reinforcement learning
Authors: Hu, Litao
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Hu, L. (2021). Optimal persistent monitoring using reinforcement learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/149369
Abstract: A persistent monitoring problem (PMP) arises when a dynamically changing environment must be monitored but cannot be fully covered by a stationary group of agents. In contrast to constant monitoring, where every target is observed simultaneously, persistent monitoring requires fewer agents and still provides effective and reliable predictions while minimizing an uncertainty metric. This project applies Reinforcement Learning (RL) to a simulation in which a single agent monitors multiple targets. The paper presents a comparative analysis of five RL implementations: Deep Q Network (DQN), Double Deep Q Network (DDQN), Dueling Deep Q Network (Dueling DQN), Multi-Objective Deep Reinforcement Learning (MODRL) and Hierarchical Deep Q Network (HDQN). Different designs of the reward function and stopping condition are tested and evaluated to improve the models' decision-making. The paper also reports experience with goal decomposition, a new feature-extension approach that solves the persistent monitoring problem without modifying input images, and an improved method for highly dynamic environments. These proposed approaches significantly enhance the model's performance and stability.
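Note: the full text is restricted, so the thesis implementations are not reproduced here. As a rough illustration of one of the listed methods, the sketch below shows a generic Double DQN target computation in PyTorch; all names and parameters (online_net, target_net, gamma, etc.) are assumptions for illustration only, not taken from the thesis.

    import torch

    def ddqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
        """Double DQN bootstrapped targets: the online network selects the greedy
        next action and the target network evaluates it, which reduces the
        overestimation bias of plain DQN."""
        with torch.no_grad():
            # Action selection with the online network
            next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
            # Action evaluation with the target network
            next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
            # Zero out the bootstrap term at terminal states
            return rewards + gamma * (1.0 - dones) * next_q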
URI: https://hdl.handle.net/10356/149369
Schools: School of Electrical and Electronic Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Optimal Persistent Monitoring Using Reinforcement Learning.pdf (Restricted Access)
Size: 2.5 MB
Format: Adobe PDF

Page view(s): 198 (updated on Oct 3, 2023)
Download(s): 9 (updated on Oct 3, 2023)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.