Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/153396
Full metadata record
DC Field | Value | Language
dc.contributor.author | Pham, Duc-Thinh | en_US
dc.contributor.author | Tran, Phu N. | en_US
dc.contributor.author | Alam, Sameer | en_US
dc.contributor.author | Duong, Vu | en_US
dc.contributor.author | Delahaye, Daniel | en_US
dc.date.accessioned | 2022-01-03T05:01:32Z | -
dc.date.available | 2022-01-03T05:01:32Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Pham, D., Tran, P. N., Alam, S., Duong, V. & Delahaye, D. (2022). Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties. Transportation Research Part C: Emerging Technologies, 135, 103463-. https://dx.doi.org/10.1016/j.trc.2021.103463 | en_US
dc.identifier.issn | 0968-090X | en_US
dc.identifier.uri | https://hdl.handle.net/10356/153396 | -
dc.description.abstract | With the continuous growth in air transportation demand, air traffic controllers will have to handle increased traffic and, consequently, more potential conflicts. This gives rise to the need for conflict resolution advisory tools that can perform well in high-density traffic scenarios in a noisy environment. Unlike model-based approaches, learning-based approaches can take advantage of historical traffic data and flexibly encapsulate environmental uncertainty. In this study, we propose a reinforcement learning approach that is capable of resolving conflicts, in the presence of traffic and inherent uncertainties in conflict resolution maneuvers, without the need for prior knowledge of a set of rules mapping conflict scenarios to expected actions. The conflict resolution task is formulated as a decision-making problem in a large and complex action space. The research also includes the development of a learning environment, a scenario state representation, a reward function, and a reinforcement learning algorithm inspired by the Q-learning and Deep Deterministic Policy Gradient algorithms. The proposed algorithm, with a two-stage decision-making process, is used to train an agent that can serve as an advisory tool for air traffic controllers in resolving air traffic conflicts, learning from historical data and evolving over time. Our findings show that the proposed model gives the agent the capability to suggest high-quality conflict resolutions under different environmental conditions, and it outperforms two baseline algorithms. The trained model achieves high performance under a low uncertainty level (success rate >= 95%) and a medium uncertainty level (success rate >= 87%) with high traffic density. The impact of different factors, such as the environment's uncertainty and traffic density, on learning performance is analyzed and discussed in detail. The environment's uncertainty is the factor that most affects performance. Moreover, the combination of high-density traffic and high uncertainty remains a challenge for any learning model. | en_US
dc.description.sponsorship | Civil Aviation Authority of Singapore (CAAS) | en_US
dc.description.sponsorship | National Research Foundation (NRF) | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Transportation Research Part C: Emerging Technologies | en_US
dc.rights | © 2021 Elsevier Ltd. All rights reserved. This paper was published in Transportation Research Part C: Emerging Technologies and is made available with permission of Elsevier Ltd. | en_US
dc.subject | Engineering::Aeronautical engineering::Aviation | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence | en_US
dc.title | Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Mechanical and Aerospace Engineering | en_US
dc.contributor.research | Air Traffic Management Research Institute | en_US
dc.identifier.doi | 10.1016/j.trc.2021.103463 | -
dc.description.version | Accepted version | en_US
dc.identifier.volume | 135 | en_US
dc.identifier.spage | 103463 | en_US
dc.subject.keywords | Reinforcement Learning | en_US
dc.subject.keywords | Air Traffic Control | en_US
dc.description.acknowledgement | This research/project is supported by the National Research Foundation, Singapore, and the Civil Aviation Authority of Singapore, under the Aviation Transformation Programme. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore, and the Civil Aviation Authority of Singapore. | en_US
item.grantfulltext | embargo_20231216 | -
item.fulltext | With Fulltext | -
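Note on the abstract above: the resolution agent is trained with a reinforcement learning algorithm described as inspired by Q-learning and Deep Deterministic Policy Gradient (DDPG). The sketch below is only a generic illustration of a DDPG-style actor-critic update in PyTorch; the state/action dimensions, network sizes, and the omission of target networks, exploration noise, and the paper's two-stage decision process are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a DDPG-style actor-critic update (illustration only,
# not the paper's implementation). Target networks and exploration noise are
# omitted for brevity; dimensions and layer sizes are assumed.
import torch
import torch.nn as nn

state_dim, action_dim = 12, 2   # assumed sizes for a conflict-scenario encoding

class Actor(nn.Module):
    """Maps a scenario state to a continuous resolution maneuver."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, action_dim), nn.Tanh())
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Estimates Q(s, a): the expected return of maneuver a in state s."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

def update(batch):
    """One gradient step on a batch of (s, a, r, s_next, done) transitions."""
    s, a, r, s_next, done = batch
    with torch.no_grad():                        # bootstrapped Q-learning target
        q_target = r + gamma * (1 - done) * critic(s_next, actor(s_next)).squeeze(-1)
    critic_loss = nn.functional.mse_loss(critic(s, a).squeeze(-1), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()     # ascend the critic's value estimate
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

In the paper's setting, the actor's continuous output would correspond to a path stretch resolution maneuver and the reward would encode resolution quality under uncertainty, but those specifics are only described at a high level in the abstract.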
Appears in Collections:ATMRI Journal Articles
MAE Journal Articles
Files in This Item:
File | Description | Size | Format
TRC_DRL_Conflict_Resolution_Final_Submission.pdf | Under embargo until Dec 16, 2023 | 2.74 MB | Adobe PDF
