Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/182435
Title: Safety-aware human-in-the-loop reinforcement learning with shared control for autonomous driving
Authors: Huang, Wenhui; Liu, Haochen; Huang, Zhiyu; Lv, Chen
Keywords: Engineering
Issue Date: 2024
Source: Huang, W., Liu, H., Huang, Z. & Lv, C. (2024). Safety-aware human-in-the-loop reinforcement learning with shared control for autonomous driving. IEEE Transactions on Intelligent Transportation Systems, 25(11), 16181-16192. https://dx.doi.org/10.1109/TITS.2024.3420959
Project: M22K2c0079; NRF2021-NRF-ANR003 HM Science; MOE-T2EP50222-0002
Journal: IEEE Transactions on Intelligent Transportation Systems
Abstract: The learning-from-intervention (LfI) approach has proven effective in improving the performance of reinforcement learning (RL) algorithms; nevertheless, existing methodologies in this domain tend to assume that human guidance is invariably devoid of risk, which can cause oscillation or even divergence in RL training when demonstrations are improper. In this paper, we propose a safety-aware human-in-the-loop reinforcement learning (SafeHIL-RL) approach to bridge this gap. We first present a safety assessment module based on the artificial potential field (APF) model that incorporates dynamic information of the environment under the Frenet coordinate system, which we call the Frenet-based dynamic potential field (FDPF), for evaluating real-time safety throughout the intervention process. We then propose a curriculum guidance mechanism inspired by the pedagogical whole-to-part pattern in human education: the RL agent first acquires comprehensive global information through continual guidance, and later fine-tunes local behavior through intermittent human guidance delivered via a human-AI shared control strategy. Consequently, our approach enables a safe, robust, and efficient reinforcement learning process regardless of the quality of the guidance that human participants provide. The proposed method is validated in two highway autonomous driving scenarios under highly dynamic traffic flows (https://github.com/OscarHuangWind/Safe-Human-in-the-Loop-RL). The experimental results confirm the superiority and generalization capability of our approach compared to other state-of-the-art (SOTA) baselines, as well as the effectiveness of the curriculum guidance.
URI: https://hdl.handle.net/10356/182435
ISSN: 1524-9050
DOI: 10.1109/TITS.2024.3420959
Schools: School of Mechanical and Aerospace Engineering
Rights: © 2024 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: MAE Journal Articles
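To make the abstract's first idea concrete, below is a minimal Python sketch of a Frenet-frame dynamic potential field of the kind the FDPF module describes: each surrounding vehicle contributes an anisotropic Gaussian field, stretched along the longitudinal (s) axis and amplified by the closing speed, and the aggregate field value is thresholded to judge safety. The field shape, the parameter values, and all function names here are illustrative assumptions, not the authors' implementation (their code is at the GitHub link above).

    import numpy as np

    # Hypothetical parameters -- the paper's actual field shape and gains
    # are not given in this abstract, so these values are assumptions.
    SIGMA_S = 8.0         # longitudinal spread of the potential (m)
    SIGMA_D = 1.5         # lateral spread of the potential (m)
    K_VEL = 0.5           # scaling of the field with closing speed
    RISK_THRESHOLD = 0.6  # aggregate field value above which we flag risk

    def dynamic_potential(ego, obstacle):
        """Risk contribution of one obstacle in the Frenet frame.

        ego, obstacle: dicts with Frenet position (s, d) and speed v.
        The field is an anisotropic Gaussian stretched along s and
        amplified when the two vehicles are closing on each other.
        """
        ds = obstacle["s"] - ego["s"]
        dd = obstacle["d"] - ego["d"]
        if ds > 0:  # obstacle ahead: risk grows if ego is faster
            closing = max(0.0, ego["v"] - obstacle["v"])
        else:       # obstacle behind: risk grows if it is faster
            closing = max(0.0, obstacle["v"] - ego["v"])
        amplitude = 1.0 + K_VEL * closing
        return amplitude * np.exp(-(ds**2 / (2 * SIGMA_S**2)
                                    + dd**2 / (2 * SIGMA_D**2)))

    def assess_safety(ego, obstacles):
        """True if the aggregate field value stays below the threshold."""
        risk = sum(dynamic_potential(ego, ob) for ob in obstacles)
        return risk < RISK_THRESHOLD

    ego = {"s": 0.0, "d": 0.0, "v": 25.0}
    obstacles = [{"s": 20.0, "d": 0.3, "v": 18.0},
                 {"s": -15.0, "d": 3.5, "v": 27.0}]
    print(assess_safety(ego, obstacles))  # True for this configuration

The shared-control side of the abstract can be sketched the same way: agent and human commands are blended under a curriculum schedule, with continual high-authority human guidance early in training and intermittent low-authority guidance later, and with human input that the safety module flags as unsafe rejected rather than learned from. The linear authority decay and the rejection rule below are assumptions for illustration, not the paper's exact mechanism.

    def shared_control_action(agent_action, human_action, human_engaged,
                              progress, is_safe):
        """Blend agent and human commands under a curriculum schedule.

        progress: training progress in [0, 1]. Early on ("whole" phase)
        the human authority weight is high and guidance is continual;
        later ("part" phase) guidance is intermittent and the agent
        dominates. Unsafe human input is discarded.
        """
        if not human_engaged or human_action is None:
            return agent_action
        if not is_safe(human_action):
            return agent_action                  # reject risky demonstration
        authority = max(0.0, 1.0 - progress)     # assumed linear decay
        return authority * human_action + (1.0 - authority) * agent_action

    # Early in training (progress=0.25), the human steering command of -0.5
    # dominates the agent's 0.2, yielding -0.325.
    print(shared_control_action(0.2, -0.5, True, 0.25, lambda a: True))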