Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/169074
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wu, Jingda | en_US
dc.contributor.author | Huang, Zhiyu | en_US
dc.contributor.author | Hu, Zhongxu | en_US
dc.contributor.author | Lv, Chen | en_US
dc.date.accessioned | 2023-06-28T04:56:52Z | -
dc.date.available | 2023-06-28T04:56:52Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Wu, J., Huang, Z., Hu, Z. & Lv, C. (2023). Toward human-in-the-loop AI: enhancing deep reinforcement learning via real-time human guidance for autonomous driving. Engineering, 21, 75-91. https://dx.doi.org/10.1016/j.eng.2022.05.017 | en_US
dc.identifier.issn | 2095-8099 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/169074 | -
dc.description.abstract | Due to its limited intelligence and abilities, machine learning is currently unable to handle various real-world situations and thus cannot completely replace humans in practical applications. Because humans exhibit robustness and adaptability in complex scenarios, it is crucial to introduce humans into the training loop of artificial intelligence (AI), leveraging human intelligence to further advance machine learning algorithms. In this study, a real-time human-guidance-based (Hug) deep reinforcement learning (DRL) method is developed for policy training in an end-to-end autonomous driving case. With our newly designed mechanism for control transfer between humans and automation, humans are able to intervene and correct the agent's unreasonable actions in real time when necessary during the model training process. Based on this human-in-the-loop guidance mechanism, an improved actor-critic architecture with modified policy and value networks is developed. The fast convergence of the proposed Hug-DRL allows real-time human guidance actions to be fused into the agent's training loop, further improving the efficiency and performance of DRL. The developed method is validated by human-in-the-loop experiments with 40 subjects and compared with other state-of-the-art learning approaches. The results suggest that the proposed method can effectively enhance the training efficiency and performance of the DRL algorithm under human guidance without imposing specific requirements on participants' expertise or experience. | en_US
dc.description.sponsorship | Agency for Science, Technology and Research (A*STAR) | en_US
dc.description.sponsorship | Nanyang Technological University | en_US
dc.language.iso | en | en_US
dc.relation | NAP-SUG | en_US
dc.relation | W1925d0046 | en_US
dc.relation.ispartof | Engineering | en_US
dc.rights | © 2022 The Authors. Published by Elsevier Ltd on behalf of Chinese Academy of Engineering and Higher Education Press Limited Company. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). | en_US
dc.subject | Engineering::Mechanical engineering | en_US
dc.title | Toward human-in-the-loop AI: enhancing deep reinforcement learning via real-time human guidance for autonomous driving | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Mechanical and Aerospace Engineering | en_US
dc.identifier.doi | 10.1016/j.eng.2022.05.017 | -
dc.description.version | Published version | en_US
dc.identifier.scopus | 2-s2.0-85146715634 | -
dc.identifier.volume | 21 | en_US
dc.identifier.spage | 75 | en_US
dc.identifier.epage | 91 | en_US
dc.subject.keywords | Deep Reinforcement Learning | en_US
dc.subject.keywords | Human Guidance | en_US
dc.description.acknowledgement | This work was supported in part by the SUG-NAP Grant of Nanyang Technological University and the A*STAR Grant (W1925d0046), Singapore. | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
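The abstract above describes a control-transfer mechanism in which a human supervisor can intervene and override the DRL agent's action in real time, with the human-guided transitions then fused into the training loop. As an illustrative sketch only (the function and variable names here are hypothetical, not from the paper, and the real method additionally modifies the actor-critic update), the intervention rule can be outlined as:

```python
import random

def hug_drl_step(agent_action, human_action=None):
    """Select the executed action: human guidance overrides the agent
    whenever the human intervenes (a simplified control-transfer rule).
    Returns the executed action and a flag marking human-guided steps."""
    if human_action is not None:
        return human_action, True   # human takes over this step
    return agent_action, False      # agent acts autonomously

# Usage: over an episode, record which transitions were human-guided,
# so a later policy update could weight those samples differently.
transitions = []
for step in range(5):
    agent_a = random.uniform(-1, 1)         # hypothetical steering command
    human_a = 0.0 if step == 2 else None    # human intervenes once, at step 2
    executed, guided = hug_drl_step(agent_a, human_a)
    transitions.append((executed, guided))
```

This captures only the switching logic; how the guided transitions reshape the policy and value networks is the substance of the paper itself.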
Appears in Collections:MAE Journal Articles
Files in This Item:
File | Description | Size | Format
1-s2.0-S2095809922004878-main.pdf | | 4.5 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.