Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/161731
Title: Data-driven operation and control for power systems with high-level renewable energy resources
Authors: Yan, Ziming
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Yan, Z. (2021). Data-driven operation and control for power systems with high-level renewable energy resources. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/161731

Abstract:
Environmental benefits promote the expansion of renewable energy sources (RESs) worldwide, which in turn imposes more challenges and uncertainties on modern power systems. Specifically, owing to the intermittent power supply of RESs and the low inertia of power-converter-interfaced RESs, power systems face increasing real-time power imbalances, dynamic frequency deviations, and real-time variations of optimal operating points.

To minimize the system's frequency deviation and reduce the real-time power imbalance, a data-driven load frequency control (LFC) method for stochastic power systems is proposed, based on deep reinforcement learning (DRL) in the continuous action domain. The proposed method nonlinearly derives control strategies that minimize frequency deviation with faster response and stronger adaptability to unmodeled system dynamics. It consists of (i) offline optimization of LFC strategies with DRL and continuous action search, and (ii) online control with a policy network whose features are extracted by stacked denoising auto-encoders (SDAE). A physical-model-assisted deep deterministic policy gradient is derived during the offline optimization of LFC strategies to update the parameters of the deep neural networks. Then, for load frequency control of multi-area power systems, a data-driven cooperative method based on multi-agent deep reinforcement learning (MA-DRL) in the continuous action domain is proposed.
The optimal coordinated control strategies for multiple LFC controllers are derived nonlinearly and adaptively through centralized learning and decentralized implementation. Centralized learning is achieved by MA-DRL based on a global action-value function that quantifies the overall LFC performance of the power system. To solve the MA-DRL problem, a multi-agent deep deterministic policy gradient (DDPG) is derived to adjust the control agents' parameters while accounting for nonlinear generator behaviors. For implementation, each individual controller needs only local information from its own control area to deliver optimal control signals. Thirdly, a DRL-based data-driven approach is proposed for the optimal control of battery energy storage systems (BESS) for frequency support, considering battery lifetime degradation. A cost model comprising battery cycle-aging cost, unscheduled interchange price, and generation cost is proposed to estimate the total operational cost of BESS for power system frequency support. An actor-critic model is utilized to optimize the BESS controller's performance. Next, to respond rapidly and economically to changes in the power system operating state under RES variability, a real-time optimal power flow (RT-OPF) approach based on Lagrangian DRL in the continuous action domain is proposed. The algorithm constructs a DRL agent that provides RT-OPF decisions and optimizes the agent with deep deterministic policy gradient. The operating constraints are incorporated into the DRL action-value function via the Lagrangian approach. By combining the DRL algorithm with power system constraint models, the proposed approach provides non-iterative RT-OPF solutions with theoretical optimality and constraint compliance. To further consider security constraints, a hybrid data-driven method is proposed for fast solutions of preventive security-constrained optimal power flow (SCOPF).
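The Lagrangian constraint-handling idea — folding operating constraints into the objective via a multiplier updated by dual ascent — can be sketched on a toy problem. The surrogate objective, constraint, and step sizes below are illustrative assumptions, not the thesis's OPF formulation.

```python
# Toy primal-dual sketch: maximize reward(a) subject to g(a) = a - 1 <= 0
# by ascending the Lagrangian L(a, lam) = reward(a) - lam * g(a) in the
# primal variable and updating lam by dual ascent on the violation.

def reward(a):
    # surrogate "action-value": unconstrained optimum sits at a = 2,
    # outside the feasible region a <= 1
    return -(a - 2.0) ** 2

a, lam = 0.0, 0.0
for _ in range(2000):
    grad_a = -2.0 * (a - 2.0) - lam         # d/da [reward(a) - lam*(a - 1)]
    a += 0.01 * grad_a                       # primal gradient ascent
    lam = max(0.0, lam + 0.01 * (a - 1.0))   # projected dual ascent
# converges to the constrained optimum a = 1 with multiplier lam = 2
```

In the RT-OPF setting, `a` would be the agent's continuous control action, `reward` the negative generation cost, and `g` the vector of operating-limit violations; the same primal-dual update shapes the action-value function so the learned policy respects the constraints.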
The proposed method formulates the SCOPF problem as constraint-satisfying training of DRL agents, where the DRL action-value function is augmented with security constraints that account for contingencies. In the training process, the method hybridizes the primal-dual deep deterministic policy gradient with the classic SCOPF model. Instead of building reward critic networks and cost critic networks through interaction with the environment (i.e., the power flow equations), the actor gradients are approximated by solving the KKT conditions of the Lagrangian. Then, with the formulated Jacobians of the constraints and Hessians of the Lagrangians, the interior point method is incorporated into primal-dual DDPG to derive the parameter-updating rule of the DRL agent. Finally, to evaluate and mitigate the security risks of DRL models in power systems, a vulnerability assessment method is developed for DRL models under noisy data and cyber-attack. The vulnerability of a DRL model is assessed by constructing perturbations that minimize the model's performance. In addition, several vulnerability indices are proposed to identify the characteristics of perturbations that may cause DRL malfunctions.

URI: https://hdl.handle.net/10356/161731
DOI: 10.32657/10356/161731
Schools: School of Electrical and Electronic Engineering
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
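The vulnerability-assessment idea — constructing bounded observation perturbations that degrade the trained policy's closed-loop performance the most, and summarizing the degradation as an index — can be sketched as follows. The plant model, the fixed stand-in policy, and the simple random search are illustrative assumptions; the thesis's indices and attack construction are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(obs_bias, steps=200, dt=0.1, load_step=0.1):
    """Accumulated |frequency deviation| when the policy sees biased observations."""
    M, D, df, cost = 10.0, 1.0, 0.0, 0.0
    for _ in range(steps):
        # the controller acts on the perturbed measurement df + obs_bias
        u = float(np.clip(-20.0 * (df + obs_bias), -0.5, 0.5))
        df += dt / M * (-D * df + u - load_step)
        cost += abs(df)
    return cost

clean = rollout_cost(0.0)                    # performance without attack
eps = 0.05                                   # perturbation budget on the sensor
# keep the worst perturbation found within the budget (random search here;
# gradient-based attacks would be used against a differentiable policy)
worst = max(rollout_cost(float(b)) for b in rng.uniform(-eps, eps, 50))
vulnerability_index = worst / clean          # > 1: the attack degrades control
```

A large index flags observations whose corruption the controller cannot tolerate, which is exactly the information needed to prioritize sensor hardening and bad-data detection.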
Appears in Collections: EEE Theses
Files in This Item:
YAN ZIMING G1702944F PhD Thesis - Final Thesis.pdf (12.1 MB, Adobe PDF)
Updated on Dec 1, 2023
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.