
GRL for service restoration

Reference:

Fan, B., Liu, X., Xiao, G., Kang, Y., Wang, D., & Wang, P. (2023). Attention-Based Multi-Agent Graph Reinforcement Learning for Service Restoration. IEEE Transactions on Artificial Intelligence. https://doi.org/10.1109/TAI.2023.3314395

Overview:

Distributed energy resources (DERs) are transforming power restoration because they enable recovery without transmission support. Conventional techniques require accurate system models, whereas deep reinforcement learning (DRL) offers a viable model-free alternative. This paper proposes an attention-based multi-agent graph reinforcement learning strategy for service restoration, modeled as a partially observable Markov decision process. Agents extract features with graph convolutional networks to improve strategy formulation, and inter-agent collaboration is strengthened through attention-based centralized training. The method, which coordinates energy storage, photovoltaics, and dispatchable generators, is validated on the IEEE-118 system, demonstrating how DRL can significantly improve power distribution resilience.

Issues in existing methodologies:

  • Inadequate Handling of High-Dimensional Continuous Domains: The use of traditional Q-learning in large-scale power systems is limited due to its inability to handle high-dimensional continuous domains.
  • Simplistic Application of DRL Algorithms: Many studies employ basic DRL algorithms that do not improve scalability or performance.
  • Single-Agent Policy Learning: Current methods rely on a single agent, which is unsuitable for large-scale active distribution networks because of the high cost and long duration of centralized data collection.
  • Ignoring Network Topology: Prior research ignores the topology of the distribution network, omitting important spatial connections and information.
  • Overestimation Problems with DQN: In complex environments, DQN paired with graph learning overestimates Q-values and struggles to extract features (see the short demo after this list).
  • Instability in Multi-Agent Systems (MAS): Because agents update independently, standard DRL algorithms such as DDPG suffer from instability, making Nash equilibrium and convergence difficult to reach.
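
The overestimation issue mentioned above has a simple statistical root: taking a maximum over noisy value estimates is biased upward, which is what DQN's target computation does. A tiny NumPy demo with synthetic numbers (unrelated to the paper's experiments) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
true_q = np.zeros(5)                                  # five actions, all truly worth 0
noisy_q = true_q + rng.normal(0.0, 1.0, (10000, 5))   # noisy Q-value estimates
print(noisy_q.max(axis=1).mean())                     # ~1.16, although the true max is 0
```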

Methodology:

This paper presents a multi-agent graph reinforcement learning approach to service restoration (SR) in active distribution networks (ADNs). After a power outage, an ADN switches into island mode, forming microgrids for service restoration. The system operator manages publicly maintained DERs using real-time ADN states.

The SR process is modeled as a partially observable Markov decision process (POMDP) with three agents: one minimizes DG generation costs, another lowers ES degradation costs, and a third maximizes load restoration. Agents control DGs, ESs, and loads by observing local conditions and generating actions, while PYPOWER models state transitions under load and DER uncertainty, as sketched below.
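
To make the state-transition step concrete, here is a minimal sketch using PYPOWER's case118 test case. The action layout (DG setpoints plus a per-bus load pickup mask), the voltage-only observation, and the restored-load reward proxy are simplifying assumptions for illustration, not the paper's exact POMDP definition.

```python
import numpy as np
from pypower.api import case118, runpf, ppoption

PD, PG, VM = 2, 1, 7  # PYPOWER column indices: bus demand, gen output, bus voltage

def step(ppc, dg_setpoints, load_pickup_mask):
    """Apply agent actions, run a power flow, and return an observation.

    dg_setpoints: new active-power outputs (MW) for the first dispatchable DGs.
    load_pickup_mask: 0/1 per bus, selecting which de-energized loads to restore.
    """
    # Copy the case so each step is a fresh state transition.
    ppc = {k: (v.copy() if isinstance(v, np.ndarray) else v) for k, v in ppc.items()}
    ppc["gen"][:len(dg_setpoints), PG] = dg_setpoints   # DG agent's action
    ppc["bus"][:, PD] *= load_pickup_mask               # load agent's action
    result, success = runpf(ppc, ppoption(VERBOSE=0, OUT_ALL=0))
    # The paper's agents see only local observations; the full voltage
    # profile is returned here just for brevity.
    obs = result["bus"][:, VM]
    reward = result["bus"][:, PD].sum() if success else -1.0  # restored-load proxy
    return obs, reward, success

obs, reward, ok = step(case118(),
                       dg_setpoints=np.array([50.0, 30.0]),
                       load_pickup_mask=np.ones(118))
```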

Agents use graph convolutional networks (GCNs) to extract features from the network state, while self-attention-enhanced critic networks support centralized training. By combining graph learning with reinforcement learning in this way, the strategy improves the robustness and efficiency of SR; a rough sketch of both components follows.
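
The following PyTorch sketch shows the two building blocks named above: a one-layer GCN encoder over bus features, and a centralized critic that applies self-attention across the three agents' embeddings. The layer sizes, the single attention head, and the toy identity adjacency are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GCNEncoder(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (num_buses, in_dim) node features; a_hat: normalized adjacency.
        return torch.relu(a_hat @ self.lin(x))

class AttentionCritic(nn.Module):
    """Centralized critic: self-attention mixes the agents' embeddings."""
    def __init__(self, emb_dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(emb_dim, num_heads=1, batch_first=True)
        self.q_head = nn.Linear(emb_dim, 1)

    def forward(self, agent_embs):
        # agent_embs: (batch, n_agents, emb_dim); each agent attends to all others.
        mixed, _ = self.attn(agent_embs, agent_embs, agent_embs)
        return self.q_head(mixed)  # one Q-value per agent: (batch, n_agents, 1)

# Toy usage: 118 buses with 4 features each; the identity matrix stands in
# for the real normalized bus-connectivity matrix.
encoder = GCNEncoder(in_dim=4, out_dim=16)
node_embeddings = encoder(torch.randn(118, 4), torch.eye(118))  # (118, 16)
critic = AttentionCritic(emb_dim=16)
q_values = critic(torch.randn(2, 3, 16))  # batch of three agent embeddings
```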

Figure: Structure of the actor-critic-based multi-agent graph reinforcement learning method for service restoration, from Fan et al. (2023).

Conclusion:

  • Proposed attention-based multi-agent graph reinforcement learning for service restoration.
  • Defined power network state using graph data, integrating topology and power flow information.
  • Enabled state perception with graph learning using graph convolutional networks.
  • Introduced self-attention into centralized training, enhancing agent collaboration.
  • Experimental results showed agents with graph learning achieved higher rewards.
  • Attention-based training improved efficiency and convergence of multi-agent deep reinforcement learning.
  • Future work will focus on enhancing deep reinforcement learning’s generalizability to various network scenarios.

Abbreviations:

  • DQN: Deep Q-Network
  • DDPG: Deep Deterministic Policy Gradient
  • MAS: Multi-Agent Systems
  • ADN: Active Distribution Network
  • DER: Distributed Energy Resource
  • DG: Distributed Generator
  • ES: Energy Storage
  • POMDP: Partially Observable Markov Decision Process
  • SR: Service Restoration
  • DRL: Deep Reinforcement Learning
  • GCN: Graph Convolutional Network

 

Sakthivel R

I am a first-year M.Sc. AIML student at SASTRA University.

I am proficient in Python and a range of related frameworks, with experience in data and database technologies including SQL, Excel, Pandas, scikit-learn, TensorFlow, Git, and Power BI.
