Uncertainty-Aware Transient Stability-Constrained Preventive Redispatch: A Distributional Reinforcement Learning Approach

14 Feb 2024  ·  Zhengcheng Wang, Fei Teng, Yanzhen Zhou, Qinglai Guo, Hongbin Sun

Transient stability-constrained preventive redispatch plays a crucial role in ensuring power system security and stability. Because redispatch strategies must simultaneously satisfy complex transient constraints and economic objectives, model-based formulation and optimization become extremely challenging. In addition, the growing uncertainty and variability introduced by renewable sources are shifting stability assessment from deterministic to probabilistic, which further exacerbates this complexity. In this paper, a Graph neural network guided Distributional Deep Reinforcement Learning (GD2RL) method is proposed, for the first time, to solve the uncertainty-aware transient stability-constrained preventive redispatch problem. First, a graph neural network-based transient simulator is trained by supervised learning to efficiently generate post-contingency rotor angle curves from the steady-state operating condition and the contingency; it serves both as a feature extractor for operating states and as a surrogate time-domain simulator during environment interaction for reinforcement learning. Distributional deep reinforcement learning, with an explicit uncertainty distribution over system operating conditions, is then applied to generate redispatch strategies that balance user-specified probabilistic stability performance against economy preferences. The full distribution of the post-control transient stability index is provided directly as the output. Case studies on the modified New England 39-bus system validate the proposed method.
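The abstract does not give implementation details, but the first learning stage can be illustrated concretely. Below is a minimal sketch of a GNN surrogate transient simulator, assuming dense-adjacency message passing with illustrative names and sizes (`GNNSurrogateSimulator`, 8 node features, a 100-step rotor-angle horizon); the paper's actual architecture is not specified here and may differ.

```python
# Illustrative sketch only: layer sizes, names, and the dense-adjacency
# message passing are assumptions, not the paper's reported design.
import torch
import torch.nn as nn

class GNNSurrogateSimulator(nn.Module):
    """Maps a steady-state snapshot plus a contingency indicator to
    post-contingency rotor-angle trajectories at each bus."""

    def __init__(self, n_node_feats: int, hidden: int = 64, horizon: int = 100):
        super().__init__()
        self.encode = nn.Linear(n_node_feats, hidden)
        self.msg1 = nn.Linear(hidden, hidden)      # first message-passing round
        self.msg2 = nn.Linear(hidden, hidden)      # second message-passing round
        self.decode = nn.Linear(hidden, horizon)   # rotor-angle curve per node

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (n_buses, n_node_feats) steady state + contingency indicator
        # adj: (n_buses, n_buses) normalized adjacency of the grid topology
        h = torch.relu(self.encode(x))
        h = torch.relu(adj @ self.msg1(h))   # aggregate neighbor features
        h = torch.relu(adj @ self.msg2(h))
        return self.decode(h)                # (n_buses, horizon) angles over time

# Supervised training against time-domain simulation labels (sketch):
model = GNNSurrogateSimulator(n_node_feats=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(39, 8)          # e.g. the 39 buses of the New England system
adj = torch.eye(39)             # placeholder topology
target = torch.randn(39, 100)   # rotor-angle curves from a time-domain solver
opt.zero_grad()
loss = nn.functional.mse_loss(model(x, adj), target)
loss.backward()
opt.step()
```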

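The distributional stage can likewise be sketched with a quantile-regression critic in the style of QR-DQN; the quantile parameterization, the discrete action space, and the CVaR-based risk-sensitive action rule below are assumptions standing in for the paper's unspecified design. During training, the critic would interact with the GNN surrogate above as its environment, so each candidate redispatch action yields a full return distribution reflecting both the stability index and the economic cost.

```python
# Sketch of the distributional-RL piece (QR-DQN style). The quantile
# parameterization and CVaR action rule are illustrative assumptions.
import torch
import torch.nn as nn

N_QUANTILES = 32

class QuantileCritic(nn.Module):
    """Outputs N_QUANTILES quantiles of the return (stability index plus
    economic cost) for each discrete redispatch action."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions * N_QUANTILES),
        )
        self.n_actions = n_actions

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        # (batch, n_actions, N_QUANTILES): full return distribution per action
        return self.net(s).view(-1, self.n_actions, N_QUANTILES)

def risk_sensitive_action(critic, s, alpha: float = 0.25) -> torch.Tensor:
    """Pick the action maximizing CVaR_alpha: the mean of the worst
    alpha-fraction of quantiles, encoding a user risk preference."""
    q = critic(s)                                   # (1, n_actions, N_QUANTILES)
    k = max(1, int(alpha * N_QUANTILES))
    worst = torch.sort(q, dim=-1).values[..., :k]   # lowest-k quantiles
    return worst.mean(dim=-1).argmax(dim=-1)

critic = QuantileCritic(state_dim=64, n_actions=10)
state = torch.randn(1, 64)    # e.g. GNN-extracted operating-state features
action = risk_sensitive_action(critic, state, alpha=0.25)
```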