Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/161934
Title: A deep reinforcement learning approach for airport departure metering under spatial-temporal airside interactions
Authors: Ali, Hasnain
Pham, Duc-Thinh
Schultz, Michael
Alam, Sameer
Keywords: Engineering::Computer science and engineering::Data::Coding and information theory
Issue Date: 2022
Source: Ali, H., Pham, D.-T., Schultz, M. & Alam, S. (2022). A deep reinforcement learning approach for airport departure metering under spatial-temporal airside interactions. IEEE Transactions on Intelligent Transportation Systems. https://dx.doi.org/10.1109/TITS.2022.3209397
Journal: IEEE Transactions on Intelligent Transportation Systems 
Abstract: Airport taxi delays adversely affect airports and airlines around the world, leading to airside congestion, increased Air Traffic Controller and pilot workload, and adverse environmental impact due to excessive fuel burn. Airport Departure Metering (DM) is an effective approach to contain taxi delays by controlling departure pushback timings. The key idea behind DM is to transfer aircraft waiting time from taxiways to gates. State-of-the-art DM methods use model-based control policies that rely on airside departure modeling to obtain simplified analytical equations. Consequently, these models fail to capture non-stationarity in airside operations, leading to poor performance of control policies under uncertainties. This work proposes a model-free, learning-based DM approach using Deep Reinforcement Learning (DRL) to reduce taxi delays while meeting flight schedule constraints. This paper casts the DM problem in a Markov decision process framework and develops a representative airport-airside simulator to simulate airside operations and evaluate the learnt DM policy. For effective state representation, this work introduces taxiway hotspot features to account for the spatial-temporal evolution of airside congestion levels. This significantly improves the DM policy convergence rate during training. The performance of the learnt policy is evaluated under different traffic densities, showing a reduction of approximately 44% in taxi-out delays in medium-density traffic scenarios, which corresponds to a 2-minute saving in taxi-out time per aircraft. Furthermore, benchmarking DRL against an evolutionary method and another state-of-the-art simulation-based heuristic demonstrates the superior performance of our method, especially in high traffic density scenarios. With increased traffic density, taxi-time savings achieved by the learnt DM policy increase without a significant decrease in runway throughput.
Results, on a typical day of simulated operations at Singapore Changi Airport, demonstrate that DRL can learn an effective DM policy to contain congestion on the taxiways, reduce total fuel consumption by approximately 22% and better manage the airside traffic.
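The abstract's core formulation — departure metering as a Markov decision process whose state reflects taxiway congestion, whose action is a gate-hold or pushback-release decision, and whose reward penalizes taxi delay — can be illustrated with a toy sketch. This is NOT the authors' simulator or policy: every class name, dynamic, and parameter below (e.g. `ToyDepartureMeteringEnv`, the queue/hotspot state, the threshold policy) is a hypothetical assumption for illustration only.

```python
# Illustrative sketch only: a toy gym-style MDP for departure metering.
# All names, dynamics, and parameters are hypothetical assumptions,
# not the paper's airport-airside simulator or learnt DRL policy.
import random


class ToyDepartureMeteringEnv:
    """Toy MDP for gate-hold decisions.

    State  : (queue_len, hotspot_level) -- crude stand-ins for the
             paper's spatial-temporal taxiway hotspot features.
    Action : 0 = hold aircraft at gate, 1 = release for pushback.
    Reward : negative taxi delay, so less taxiway waiting is better.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.queue_len = self.rng.randint(0, 5)      # aircraft on taxiway
        self.hotspot_level = self.rng.randint(0, 3)  # congestion intensity
        self.t = 0
        return (self.queue_len, self.hotspot_level)

    def step(self, action):
        if action == 1:
            # Release: aircraft joins the taxiway queue and incurs a
            # delay that grows with current congestion.
            taxi_delay = self.queue_len + 2 * self.hotspot_level
            self.queue_len += 1
        else:
            # Hold at gate: small schedule-pressure penalty while the
            # taxiway queue drains.
            taxi_delay = 1
            self.queue_len = max(0, self.queue_len - 1)
        # Hotspot congestion drifts stochastically over time.
        drift = self.rng.choice([-1, 0, 1])
        self.hotspot_level = min(3, max(0, self.hotspot_level + drift))
        self.t += 1
        done = self.t >= 10
        return (self.queue_len, self.hotspot_level), -taxi_delay, done


def threshold_policy(state, hold_if_above=3):
    """Trivial baseline: hold pushback when the taxiway looks congested."""
    queue_len, hotspot_level = state
    return 0 if queue_len + hotspot_level > hold_if_above else 1


if __name__ == "__main__":
    env = ToyDepartureMeteringEnv(seed=42)
    state, total_reward, done = env.reset(), 0, False
    while not done:
        state, reward, done = env.step(threshold_policy(state))
        total_reward += reward
    print(f"episode return (negative total taxi delay): {total_reward}")
```

In the paper this hand-tuned threshold rule is replaced by a DRL policy trained against a representative airside simulator, with hotspot features enriching the state so the policy can anticipate spatial-temporal congestion rather than react to queue length alone.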
URI: https://hdl.handle.net/10356/161934
ISSN: 1524-9050
DOI: 10.1109/TITS.2022.3209397
Rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TITS.2022.3209397.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:ATMRI Journal Articles
MAE Journal Articles

Files in This Item:
DRL_DM_author_copy AV..pdf (Accepted version, 1.29 MB, Adobe PDF)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.