
Decentralized Volt-VAR control using Multi-agent GRL

Reference: 

Hu, D., Li, Z., Ye, Z., Peng, Y., Xi, W., & Cai, T. (2024). Multi-agent graph reinforcement learning for decentralized Volt-VAR control in power distribution systems. International Journal of Electrical Power & Energy Systems, 155, 109531. https://doi.org/10.1016/j.ijepes.2023.109531

Overview: 

Volt/Var control (VVC) is crucial to power distribution systems for reducing power loss and keeping voltages within acceptable ranges. However, the lack of complete network knowledge poses problems for standard model-based approaches. To handle VVC under partial-observability constraints, the study presents MASAC-HGRN, a novel multi-agent graph-based deep reinforcement learning (DRL) technique. Unlike traditional methods, MASAC-HGRN adopts a decentralized training and execution (DTDE) paradigm, which allows agents to gather information locally and make decisions autonomously. Numerical studies on IEEE test feeders show that MASAC-HGRN outperforms both conventional techniques and state-of-the-art RL algorithms, and comprehensive robustness experiments demonstrate the flexibility and resilience of the decentralized design. 

Limitations of Existing Methods: 

  • Distributed energy resource integration worsens power outages and voltage violations in distribution networks. 
  • Traditional Volt-VAR control techniques struggle with the required rapid reaction times and with uncertainty, which reduces system reliability. 
  • Model-based optimization suffers from incomplete network data and slow computation. 
  • Single-agent reinforcement learning techniques are plagued by scalability problems and heavy communication loads. 
  • Decentralized reinforcement learning techniques try to mitigate the constraints of centralized control, but they may still lack scalability or access to global information. 
  • Partitioning the network for decentralized control may impair optimization efficiency and reliability. 

Methodology: 

Methodology overview:  

  • This research presents a new approach to multi-agent voltage regulation in power distribution networks. 
  • It uses a decentralized training and execution (DTDE) framework together with deep reinforcement learning (DRL) algorithms; a minimal sketch of the idea follows this list. 
  • The method addresses the partial observability and scalability issues that arise in voltage control. 
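
To make the DTDE idea concrete, here is a minimal Python sketch of decentralized execution: each sub-network hosts an agent that maps only its own local measurements to local control set points. All names here (LocalAgent, the droop-style act rule, the 50 kvar gain) are illustrative assumptions, not the paper's implementation; a trained MASAC-HGRN policy would replace the hand-written rule.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LocalAgent:
    """One agent per sub-network; it never sees the global grid state."""
    name: str
    neighbors: List[str] = field(default_factory=list)  # communication links

    def act(self, local_obs: Dict[str, float]) -> Dict[str, float]:
        # Placeholder droop-style rule: push reactive power against the
        # local voltage error. A trained DRL policy would replace this.
        v_err = local_obs["voltage_pu"] - 1.0
        return {"q_setpoint_kvar": -50.0 * v_err}

# Decentralized execution: every agent decides from its own observation.
agents = [LocalAgent("zone_1", ["zone_2"]), LocalAgent("zone_2", ["zone_1"])]
observations = {"zone_1": {"voltage_pu": 1.03}, "zone_2": {"voltage_pu": 0.97}}
actions = {a.name: a.act(observations[a.name]) for a in agents}
print(actions)  # each set point computed without any central controller
```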

 

Figure: Flowchart of the training process of MASAC-HGRN, from the study by Zhao, Z. Y., Che, Y., Luo, S., Wu, K., & Leung, V. C. (2023, August). 

Communication and Network Structure: 

  • A hierarchical graph recurrent network (HGRN) structure is used in the approach. 
  • To facilitate communication between heterogeneous agents, this structure combines graph attention networks (GAT) with deep recurrent Q networks (DRQN).  
  • The communication system enables efficient information flow between agents, improving coordination for voltage control; a rough sketch of such a layer follows.
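
As a rough illustration of how a graph attention layer and a recurrent cell can be combined for inter-agent communication, the sketch below implements a single-head attention step over an agent adjacency matrix, followed by a GRU cell that carries per-agent memory. This is a generic PyTorch stand-in, not the paper's HGRN code; the class name, dimensions, and ring topology are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionComm(nn.Module):
    """Single-head graph attention over an agent adjacency matrix, followed
    by a GRU cell holding per-agent recurrent state (a generic stand-in for
    the GAT + recurrent-network combination described above)."""

    def __init__(self, obs_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(obs_dim, hidden_dim, bias=False)
        self.attn = nn.Linear(2 * hidden_dim, 1, bias=False)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, obs, adj, h):
        # obs: (n, obs_dim) local observations; adj: (n, n) 0/1 neighbor mask
        # (self-loops included); h: (n, hidden_dim) recurrent agent memory.
        z = self.proj(obs)
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)        # receiver features
        zj = z.unsqueeze(0).expand(n, n, -1)        # sender features
        e = F.leaky_relu(self.attn(torch.cat([zi, zj], dim=-1))).squeeze(-1)
        e = e.masked_fill(adj == 0, float("-inf"))  # attend to neighbors only
        alpha = torch.softmax(e, dim=-1)            # attention weights
        msg = alpha @ z                             # aggregated messages
        return self.gru(msg, h)                     # updated agent memory

# Example: 4 agents on a ring, each also attending to itself.
n, obs_dim, hid = 4, 4, 8
eye = torch.eye(n)
adj = eye + torch.roll(eye, 1, dims=0) + torch.roll(eye, -1, dims=0)
layer = GraphAttentionComm(obs_dim, hid)
h = layer(torch.randn(n, obs_dim), adj, torch.zeros(n, hid))
```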

Process of Training and Optimization: 

  • The training process uses multi-agent actor-critic techniques, specifically the multi-agent soft actor-critic hierarchical graph recurrent network (MASAC-HGRN). 
  • The integration of maximum entropy learning ensures stable and robust policy optimization. 
  • Experience replay mechanisms are used during training to improve data efficiency and stability; a minimal replay-buffer sketch follows this list. 
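
Two ingredients named above are standard and easy to sketch. Maximum entropy learning augments the return with a policy-entropy bonus, J(π) = E[ Σₜ r(sₜ, aₜ) + α·H(π(·|sₜ)) ], which encourages exploration and stabilizes training; experience replay stores past transitions so mini-batches can be drawn independently of the current trajectory. Below is a minimal, generic replay buffer in Python, an illustrative sketch rather than the paper's implementation.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size experience replay: sampling past transitions at random
    breaks temporal correlation in the data, which improves data efficiency
    and stabilizes off-policy actor-critic updates."""

    def __init__(self, capacity: int = 100_000):
        self.buffer = deque(maxlen=capacity)  # oldest entries drop out

    def push(self, obs, action, reward, next_obs, done):
        self.buffer.append((obs, action, reward, next_obs, done))

    def sample(self, batch_size: int):
        batch = random.sample(self.buffer, batch_size)
        return list(zip(*batch))  # columns: obs, action, reward, next_obs, done

# Usage: store each transition, then train on random mini-batches.
buf = ReplayBuffer(capacity=10_000)
buf.push([1.02], [0.0], -0.1, [1.01], False)
buf.push([1.01], [-0.5], -0.05, [1.00], False)
obs, act, rew, nxt, done = buf.sample(batch_size=2)
```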

 Conclusion: 

  • The paper proposes the MASAC-HGRN algorithm, built on the DTDE paradigm, for training in power distribution systems. 
  • Each sub-network functions as an agent that optimizes smart inverter and SVC set points based on its own observations. 
  • MASAC-HGRN performs better and more robustly than state-of-the-art RL-based algorithms. 
  • A computation-speed test confirmed its lower training cost. 

 Abbreviations:

  • VVC: Volt/Var Control
  • DRL: Deep Reinforcement Learning
  • MASAC-HGRN: Multi-Agent Soft Actor-Critic Hierarchical Graph Recurrent Network
  • DTDE: Decentralized Training and Execution
  • HGRN: Hierarchical Graph Recurrent Network
  • GAT: Graph Attention Network
  • DRQN: Deep Recurrent Q-Network
  • SVC: Static Var Compensator

Sakthivel R

I am a first-year M.Sc. AIML student at SASTRA University.

I am proficient in a variety of programming languages and frameworks, including Python, and I have experience with data and database tools, including SQL, Excel, Pandas, Scikit-learn, TensorFlow, Git, and Power BI.
