Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/182324
Title: Reinforcement learning for collaborative multi-airport slot re-allocation under reduced capacity scenarios
Authors: Nguyen-Duy, Anh
Pham, Duc-Thinh
Keywords: Computer and Information Science; Other
Issue Date: 2024
Source: Nguyen-Duy, A. & Pham, D.-T. (2024). Reinforcement learning for collaborative multi-airport slot re-allocation under reduced capacity scenarios. 2024 International Workshop on ATM/CNS (IWAC).
Conference: 2024 International Workshop on ATM/CNS (IWAC)
Abstract: Airport Collaborative Decision Making (A-CDM) is currently implemented to foster collaboration for efficient airport slot allocation. In the ASEAN region, where no central decision-making authority is available, each airport retains autonomy over its own resources, which leads to differing slot allocation policies. An effective collaborative slot allocation approach must therefore demonstrate that it can cooperate with airports following different policies. Reinforcement Learning, a learning-based approach, can exploit interactions between airports to capture the underlying policies of the other airports. In this paper, we consider a multi-airport system with heterogeneous slot allocation policies, consisting of one Reinforcement Learning airport agent interacting with fixed-policy airport agents. We validate whether the Reinforcement Learning agent can use these interactions to learn to re-allocate slots efficiently under reduced capacity scenarios. Validation is performed on the Hong Kong-Singapore-Bangkok hub system using 2018 OAG data. The performance of the Reinforcement Learning agent is compared with a Nearest Heuristic baseline, which assigns delays based on the nearest available slots. Results show that the Reinforcement Learning agent performs significantly better than the Nearest Heuristic under the heavily reduced capacity scenario, with total delays of 84 and 107, respectively. Under the moderately reduced capacity scenario, the Reinforcement Learning agent closely matches the Nearest Heuristic, with total delays of 45 and 41, respectively.
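Baseline sketch: The Nearest Heuristic mentioned in the abstract ("assigns delays based on the nearest available slots") can be illustrated with a short sketch. The Python snippet below is a hypothetical, simplified illustration and not the authors' implementation: it assumes flights hold discrete slot indices, a per-slot capacity map describes the reduced-capacity scenario, and delayed flights are pushed to the nearest later slot with spare capacity, with total delay measured in slot units.

    from typing import Dict, List

    def nearest_heuristic(requested_slots: List[int],
                          capacity: Dict[int, int]) -> int:
        """Return the total delay (in slot units) after re-allocation.

        requested_slots: originally allocated slot index for each flight.
        capacity: remaining movements allowed per slot under the
                  reduced-capacity scenario (hypothetical representation).
        """
        horizon = max(capacity)  # last slot considered for re-allocation
        total_delay = 0
        for slot in sorted(requested_slots):
            candidate = slot
            # Push the flight to the nearest later slot with spare capacity;
            # the shift in slot indices is the delay charged to this flight.
            while capacity.get(candidate, 0) == 0:
                candidate += 1
                if candidate > horizon:
                    raise ValueError("insufficient capacity within the horizon")
            capacity[candidate] -= 1
            total_delay += candidate - slot
        return total_delay

    # Toy example: three flights request slot 2, but each slot admits one movement.
    print(nearest_heuristic([2, 2, 2], {s: 1 for s in range(6)}))  # 0 + 1 + 2 = 3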
URI: https://hdl.handle.net/10356/182324
ISSN: https://iwac2024.org/docs/IWAC2024_ProgramBooklet.pdf
Schools: School of Mechanical and Aerospace Engineering 
Research Centres: Air Traffic Management Research Institute 
Rights: © 2024 Electronic Navigation Research Institute (ENRI). All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:MAE Conference Papers

Files in This Item:
File: IWAC_Camera-ready.pdf (660 kB, Adobe PDF)
