
GRL for inverter-based active voltage control

Reference:

Mu, C., Liu, Z., Yan, J., Jia, H., & Zhang, X. (2023). Graph multi-agent reinforcement learning for inverter-based active voltage control. IEEE Transactions on Smart Grid. https://doi.org/10.1109/TSG.2023.3298807

Overview:

Voltage fluctuations caused by distributed generation such as photovoltaics (PVs) in active distribution networks (ADNs) motivate this paper, which formulates active voltage control (AVC) as minimizing voltage deviation and network loss by adjusting the reactive power of PV inverters. The AVC problem aims to hold bus voltages at specified levels through reactive power injection while minimizing network loss. Multi-agent cooperation is modeled with the decentralized partially observable Markov decision process (Dec-POMDP) framework, which treats the problem as a tuple consisting of the state space, joint action and observation spaces, transition probabilities, a shared reward function, and a discount factor. Decentralized execution is coupled with centralized training within a multi-agent actor-critic framework, enabling efficient exploration and convergence to optimal policies.
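
As a reading aid, the sketch below names the components of that Dec-POMDP tuple in code form. The field names, dimensions, and the default discount factor are illustrative placeholders, not the paper's notation, and the callables simply stand in for the grid simulator.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class DecPOMDP:
    """Illustrative container for the Dec-POMDP tuple described above."""
    n_agents: int            # one agent per PV inverter
    state_dim: int           # dimension of the global grid state
    obs_dims: Sequence[int]  # local observation dimension per agent
    act_dims: Sequence[int]  # action dimension per agent (reactive power set-point)
    transition: Callable     # (state, joint action) -> next state, e.g. a power-flow solver
    reward: Callable         # (state, joint action) -> shared scalar reward
    gamma: float = 0.99      # discount factor
```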

Key challenges and contributions:

  • Renewable Energy Challenges: Although the use of renewable energy is increasing, distributed generation such as photovoltaics (PVs) introduces voltage stability issues and additional network loss.
  • Innovative Voltage Control: The MAGRL algorithm combines a voltage barrier function with a graph convolutional network (GCN) to regulate voltage effectively.
  • Advanced Decision-Making: MAGRL outperforms conventional techniques in handling voltage variations caused by rapid changes in PV output.
  • Enhanced Safety Measures: The exponential voltage barrier function limits voltage deviations and keeps the power system in safe operating conditions.
  • Robust Performance: Comparative experiments on the IEEE 33-bus and 141-bus systems demonstrate MAGRL's effectiveness across a range of network topologies.

Problem statement:

To address voltage fluctuations caused by distributed generation such as PVs in ADNs, the paper formulates the problem as minimizing voltage deviation and network loss by adjusting the reactive power of PV inverters. Specifically, the AVC problem aims to stabilize bus voltages at specified levels through reactive power injection while simultaneously minimizing network loss. The Dec-POMDP framework is adopted for multi-agent cooperation, defining the problem as a tuple of state space, joint action and observation spaces, transition probabilities, a shared reward function, and a discount factor. Within the multi-agent actor-critic framework, decentralized execution is coupled with centralized training, thereby enabling efficient exploration of the environment and convergence to optimal policies.
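
To make the shared objective concrete, here is a minimal sketch of a team reward that combines voltage deviation and network loss. The mean absolute deviation from a 1.0 p.u. reference and the linear weight `alpha` on network loss are illustrative assumptions, not the paper's exact reward.

```python
import numpy as np

def shared_reward(bus_voltages, network_loss, alpha=0.1, v_ref=1.0):
    """Shared team reward: penalize voltage deviation from the reference
    (per-unit) and the total active network loss.

    bus_voltages : per-unit voltage magnitudes at the monitored buses
    network_loss : total active power loss of the feeder
    alpha        : weight trading off loss against voltage deviation (illustrative)
    """
    voltage_deviation = np.mean(np.abs(np.asarray(bus_voltages) - v_ref))
    return -(voltage_deviation + alpha * network_loss)
```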

Methodology:

Formulation of the AVC Problem and Safety Mechanisms:

In the proposed method, the AVC problem is formulated as a Markov game, and an improved MARL algorithm called graph multi-agent reinforcement learning (MAGRL) is introduced to solve it. Each PV inverter is treated as an agent, and the problem is cast as a Dec-POMDP in which shared observations guarantee regional coordination. The formulation specifies the state, observation, and action spaces, together with a reward function that accounts for voltage deviation and network loss reduction. To ensure safety, a voltage barrier function is designed that shapes the reward so that voltages are kept within a safe range.
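
The paper's exact barrier is not reproduced here; the sketch below only illustrates the idea of an exponential voltage barrier, which penalizes deviation softly near the nominal voltage and steeply as the voltage approaches the edge of the safe band. The constants `safe_band` and `scale` are assumptions chosen for illustration.

```python
import numpy as np

def exponential_voltage_barrier(v, v_ref=1.0, safe_band=0.05, scale=2.0):
    """Barrier-style penalty on a per-unit bus voltage v.

    The penalty is small near the reference voltage and grows exponentially
    as v approaches or leaves the band [v_ref - safe_band, v_ref + safe_band].
    """
    deviation = np.abs(v - v_ref)
    return -(np.exp(scale * deviation / safe_band) - 1.0)
```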

Integration of GCN and Training Methodology in MAGRL:

A graph convolutional network (GCN) is incorporated to improve feature extraction from the distribution network topology and, in turn, agent performance. Through the integration of MADDPG and the GCN, the MAGRL algorithm enables centralized training with decentralized execution. During training, the actor-critic networks are updated using experience replay and target networks; during testing, only the actor network generates control actions. This keeps learning smooth, reduces the computational load, and makes real-time voltage control within practical limits possible.
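
As a rough illustration of how a GCN can feed the decentralized actor, the PyTorch sketch below stacks two graph-convolution layers over a normalized adjacency matrix of the feeder and maps an agent's node embedding to a bounded reactive-power set-point. The two-layer depth, layer sizes, and tanh-bounded one-dimensional action are assumptions; the paper's exact architecture and its MADDPG-style centralized critic are not reproduced here.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W), where A_hat is
    the normalized adjacency of the distribution network with self-loops."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return torch.relu(a_hat @ self.linear(h))

class GraphActor(nn.Module):
    """Decentralized actor: extracts topology-aware features with a GCN and
    maps the agent's own node embedding to a reactive-power set-point."""
    def __init__(self, obs_dim, hidden_dim, act_dim=1):
        super().__init__()
        self.gcn1 = GCNLayer(obs_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, act_dim)

    def forward(self, a_hat, node_obs, agent_idx):
        h = self.gcn2(a_hat, self.gcn1(a_hat, node_obs))
        # tanh keeps the normalized reactive-power action in [-1, 1]
        return torch.tanh(self.head(h[agent_idx]))
```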

Figure: The overall structure of the proposed multi-agent graph reinforcement learning (MAGRL) framework, from Mu, C., Liu, Z., Yan, J., Jia, H., & Zhang, X. (2023).

Conclusion:

  • The proposed multi-agent graph reinforcement learning (MAGRL) approach improves voltage quality in systems with renewable energy.
  • The graph convolutional network (GCN) provides a better representation of the distribution network topology.
  • The exponential voltage barrier function stabilizes voltage for safe distribution network operation.
  • Simulations on the IEEE 33-bus and 141-bus cases show that MAGRL outperforms standard MARL and conventional techniques.
  • Future work may examine attention mechanisms to further improve the effectiveness of the MARL algorithm.

Sakthivel R

I am a first-year M.Sc. AIML student at SASTRA University.

I am proficient in a variety of programming languages and frameworks, including Python, and I have experience with data and database tools including SQL, Excel, Pandas, Scikit-learn, TensorFlow, Git, and Power BI.
