Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/169372
Title: Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems
Authors: Xing, Qiang
Chen, Zhong
Wang, Ruisheng
Zhang, Ziqi
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2023
Source: Xing, Q., Chen, Z., Wang, R. & Zhang, Z. (2023). Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems. Frontiers in Energy Research, 10. https://dx.doi.org/10.3389/fenrg.2022.944313
Journal: Frontiers in Energy Research 
Abstract: The random charging and dynamic traveling behaviors of massive plug-in electric vehicles (PEVs) pose challenges to the efficient and safe operation of transportation-electrification coupled systems (TECSs). To realize real-time scheduling of urban PEV fleet charging demand, this paper proposes a PEV decision-making guidance (PEVDG) strategy based on bi-level deep reinforcement learning, which reduces user charging costs while ensuring the stable operation of distribution networks (DNs). Given the discrete time-series characteristics and the heterogeneity of decision actions, the PEVDG problem is decoupled into a bi-level finite Markov decision process, in which the upper and lower levels handle charging station (CS) recommendation and path navigation, respectively. Specifically, the upper-level agent learns the mapping between the environment state and the optimal CS by perceiving the PEV charging requirements, CS equipment resources, and DN operating conditions; its action decision is then embedded into the state space of the lower-level agent. Meanwhile, the lower-level agent determines the optimal road segment for path navigation by capturing the real-time PEV state and transportation network information. Further, two elaborate reward mechanisms are developed to motivate and penalize the decision-making learning of the dual agents. Two extension mechanisms (i.e., dynamic adjustment of learning rates and adaptive selection of neural network units) are then embedded into the Rainbow algorithm, which builds on the DQN architecture, yielding a modified Rainbow algorithm as the solution to the bi-level decision-making problem. Case studies are conducted within a practical urban zone with the TECS. The average rewards for the upper and lower levels are ¥-90.64 and ¥13.24, respectively; the average equilibrium degree of the charging service and the average charging cost are 0.96 and ¥42.45, respectively. Extensive experimental results show that the proposed methodology improves the generalization and learning ability of the dual agents and facilitates the collaborative operation of the traffic and electrical networks.
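The bi-level coupling described in the abstract, where the upper-level agent's CS choice is embedded into the lower-level agent's state before road-segment selection, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: plain DQN heads stand in for the modified Rainbow networks, and all dimensions, action-space sizes, and placeholder observations are hypothetical.

# Minimal sketch (not the paper's code) of the bi-level decision loop:
# the upper-level agent recommends a charging station (CS), and its action
# is one-hot embedded into the lower-level agent's state for path navigation.

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Plain DQN head; the paper's modified Rainbow adds further
    components on top of this basic DQN architecture."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )
    def forward(self, s):
        return self.net(s)

N_CS, N_ROADS = 10, 8        # hypothetical action-space sizes
UPPER_DIM = 12               # PEV charging demand + CS resources + DN conditions
LOWER_BASE_DIM = 6           # real-time PEV state + traffic information

upper_q = QNet(UPPER_DIM, N_CS)
# Lower-level state = own observation + one-hot embedding of the upper action.
lower_q = QNet(LOWER_BASE_DIM + N_CS, N_ROADS)

def act(qnet, state, eps=0.1):
    """Epsilon-greedy action selection over the Q-network's outputs."""
    if torch.rand(1).item() < eps:
        return torch.randint(qnet.net[-1].out_features, (1,)).item()
    with torch.no_grad():
        return qnet(state).argmax().item()

# One decision step of the coupled loop (random placeholder observations).
upper_state = torch.randn(UPPER_DIM)
cs_choice = act(upper_q, upper_state)         # upper level: recommend a CS

cs_onehot = torch.zeros(N_CS)
cs_onehot[cs_choice] = 1.0
lower_obs = torch.randn(LOWER_BASE_DIM)
lower_state = torch.cat([lower_obs, cs_onehot])
road_choice = act(lower_q, lower_state)       # lower level: next road segment

In training, each agent would be driven by its own reward signal (the paper describes separate motivating and penalizing reward mechanisms for the dual agents), with the upper action recomputed whenever the guidance decision is revisited.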
URI: https://hdl.handle.net/10356/169372
ISSN: 2296-598X
DOI: 10.3389/fenrg.2022.944313
Schools: School of Electrical and Electronic Engineering 
Rights: © 2023 Xing, Chen, Wang and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:EEE Journal Articles

Files in This Item:
File: fenrg-10-944313.pdf (3.12 MB, Adobe PDF)

Page view(s): 137 (updated on Mar 17, 2025)
Download(s): 56 (updated on Mar 17, 2025)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.