
GCN – Topology embedded DRL for Voltage Stability Control

Reference:

R. R. Hossain, Q. Huang and R. Huang, “Graph Convolutional Network-Based Topology Embedded Deep Reinforcement Learning for Voltage Stability Control,” in IEEE Transactions on Power Systems, vol. 36, no. 5, pp. 4848-4851, Sept. 2021, doi: 10.1109/TPWRS.2021.3084469.

https://doi.org/10.1109/TPWRS.2021.3084469

Overview:

In order to effectively capture topological fluctuations and spatial correlations, the authors of this research propose a GCN (Graph Convolutional Network)-based DRL (Deep Reinforcement Learning) approach. This approach differs from existing models such as FCN (Fully Connected Network)-based DRL. When tested on the IEEE 39-bus system, the proposed model outperforms the FCN-DRL models in terms of both performance and convergence.

Limitations of previous methods: 

  • Modern power grids face disruptions from renewable energy integration and dynamic demands.
  • Model Predictive Control (MPC) approaches have computational limitations.
  • Convolutional techniques in prior work overlook topological variations.
  • The authors propose a GCN-based DRL framework for voltage stability control.
  • The GC-DDQN method accounts for power system topology when applying load shedding.
  • It addresses short-term voltage stability issues resulting from fault-induced delayed voltage recovery (FIDVR).
  • The DRL agent embeds grid topology information using a GCN.
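As background for how a GCN embeds grid topology, here is a minimal numpy sketch of the symmetric normalization Â = D^(−1/2)(A + I)D^(−1/2) used by standard GCN layers. The 4-bus chain graph is a toy illustration, not the IEEE 39-bus system:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalize an adjacency matrix with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, the propagation rule used by
    standard GCN layers to embed graph topology."""
    A_tilde = A + np.eye(A.shape[0])        # add self-loops
    d = A_tilde.sum(axis=1)                 # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

# Toy 4-bus chain: lines 0-1, 1-2, 2-3 (illustrative only).
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
A_hat = normalized_adjacency(A)
```

When a line switches in or out, only A changes; the same GCN weights can then be reapplied to the new normalized adjacency, which is what lets the agent track topological variations.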

 

Framework of GCN-DRL for voltage control, from the study by R. R. Hossain et al. (2021)

Methodology: 

Based on the reward signal rt received from the environment and the underlying state representation (the observation st), the agent network (figure above) generates actions at. The agent's goal is to find a policy πθ(st, at) that maximizes the cumulative reward. The policy network is divided into two phases: 1) feature extraction, performed by the GCN, and 2) policy approximation, performed by the FCN. The state, action, and reward of the MDP formulation are defined as: 

  • State at time t: st = {Xt, At}
  • Action at: either 0 or 1
  • Reward rt at time t:

rt = −10000, if Vi(t) < 0.95 and t > Tpf + 4;

rt = c1 ∑i ΔVi(t) − c2 ∑j ΔPj (p.u.) − c3, otherwise.
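The piecewise reward above can be sketched as follows. The coefficients c1, c2, c3 and the example inputs are illustrative placeholders, not the paper's tuned values:

```python
def reward(V, delta_V, delta_P, t, T_pf, c1=1.0, c2=1.0, c3=0.1):
    """Illustrative reward following the paper's piecewise form:
    a large penalty if any bus voltage is still below 0.95 p.u.
    more than 4 s after the fault-clearing time T_pf; otherwise a
    weighted sum of voltage deviations, minus the load shed and a
    constant per-action penalty."""
    if t > T_pf + 4 and min(V) < 0.95:
        return -10000.0
    return c1 * sum(delta_V) - c2 * sum(delta_P) - c3

# Healthy case: voltages recovered, small amount of load shed.
r = reward(V=[0.98, 0.99], delta_V=[0.02, 0.01], delta_P=[0.05],
           t=5.0, T_pf=0.13)
```

The large negative penalty dominates the other terms, so the agent learns first to avoid unrecovered voltages and only then to minimize the amount of load shed.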

A GCN layer on its own is only a feature-extraction model. To integrate it with DRL, the authors extend the classical Q-network to a GCN-based Q-network Qgcn(s, a) by adding the GCN model to the optimization loop.
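A minimal numpy sketch of such a GCN-based Q-network: one graph-convolution layer for feature extraction, followed by a fully connected head that outputs a Q-value per action (shed / no-shed). All weights, shapes, and the identity adjacency are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_q_forward(A_hat, X, W_gcn, W_fc):
    """Q_gcn(s, a): GCN feature extraction, then an FCN policy head.
    A_hat : (n, n) normalized adjacency (the topology embedding)
    X     : (n, f) node features (e.g. bus voltages, loads)
    Returns one Q-value per action (here 2: shed / no-shed)."""
    H = np.maximum(A_hat @ X @ W_gcn, 0.0)   # graph conv + ReLU
    h = H.mean(axis=0)                       # pool node embeddings
    return h @ W_fc                          # Q-values, shape (2,)

n, f, hidden = 4, 3, 8
A_hat = np.eye(n)                            # placeholder topology
X = rng.normal(size=(n, f))
W_gcn = rng.normal(size=(f, hidden))
W_fc = rng.normal(size=(hidden, 2))
q = gcn_q_forward(A_hat, X, W_gcn, W_fc)
action = int(np.argmax(q))                   # greedy action: 0 or 1
```

Because the adjacency A_hat enters the forward pass directly, a topology change alters the extracted features without retraining a new network from scratch, which is the advantage over a purely fully connected Q-network.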

The GC-DDQN algorithm is applied to the IEEE 39-bus system. During the training phase, a short-circuit fault is introduced at one of five distinct locations (buses 4, 7, 15, 21, and 26) at time 0.05 s, with a fault duration of 0.08 s. For these training scenarios, the effectiveness of GC-DDQN is compared with the traditional FCN-based DDQN (FC-DDQN) method. 
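GC-DDQN and FC-DDQN differ only in the Q-network architecture; both optimize the same double-DQN target, which can be sketched as follows (the networks are replaced here by precomputed Q-value arrays):

```python
import numpy as np

def ddqn_target(r, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double DQN target: select the next action with the online
    network, but evaluate it with the target network. Decoupling
    selection from evaluation reduces Q-value overestimation."""
    if done:
        return r
    a_star = int(np.argmax(q_online_next))   # online net picks action
    return r + gamma * q_target_next[a_star] # target net evaluates it

# Example: the online net prefers action 1; the target net
# scores that action 2.0, so y = 1.0 + 0.99 * 2.0.
y = ddqn_target(r=1.0,
                q_online_next=np.array([0.5, 0.9]),
                q_target_next=np.array([3.0, 2.0]))
```

Note how plain DQN would have used max(q_target_next) = 3.0 here; the double-DQN decoupling instead evaluates the online net's choice, giving the smaller, less biased target.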

 

Conclusion: 

  • For a single action-step decision, the GC-DDQN and FC-DDQN approaches have average computation times of 0.90 and 0.88 milliseconds, respectively.

 

  • As the figure below shows, GC-DDQN achieves better voltage recovery than FC-DDQN with around 30% less total load shedding.

Fig: Testing results of trained GC-DDQN and FC-DDQN models. (a) Voltage of bus 7; the dashed line denotes the performance requirement for voltage recovery. (b) Total load shedding amount. From the study by R. R. Hossain et al. (2021)

 

  • GC-DDQN outperforms FC-DDQN, with fewer failed cases.


Sakthivel R

I am a first-year M.Sc. AIML student at SASTRA University.

I am proficient in a variety of programming languages and frameworks, including Python, and have experience with cloud and database technologies, including SQL, Excel, Pandas, Scikit-learn, TensorFlow, Git, and Power BI.
