
Adaptive Dispatch for Renewable Energy

Reference:

Bai, Y., Chen, S., Zhang, J., Xu, J., Gao, T., Wang, X., & Gao, D. W. (2023). An adaptive active power rolling dispatch strategy for high proportion of renewable energy based on distributed deep reinforcement learning. Applied Energy, 330, 120294.

https://doi.org/10.1016/j.apenergy.2022.120294

Overview:

This paper addresses the uncertainties associated with high-proportion renewable energy by presenting an adaptive active power rolling dispatch approach based on distributed deep reinforcement learning. To improve the generalization capability of multiple agents in active power flow regulation, the authors integrate graph attention layers and recurrent neural network (RNN) layers into each agent's network structure. They also propose a regional graph attention network (R-GAT) technique that helps agents efficiently aggregate local information from their neighborhoods, enhancing their information-gathering ability. By adopting a "centralized training, distributed execution" structure, the agents can adapt effectively to dynamic environments. Case studies show good generalization across temporal granularities and network topologies, allowing the multi-agent system to learn efficient active power control strategies. In the authors' view, this strategy will improve the applicability and flexibility of distributed AI techniques for power system control problems.

Limitations of existing methods:

  • The uncertainty of dispersed renewable energy challenges current approaches.
  • Centralized dispatch techniques fail to coordinate flexible resources effectively.
  • Traditional optimization requires accurate mathematical models.
  • Traditional active power rolling dispatch (APRD) approaches require complete topological information.
  • Changes in grid topology force the mathematical models to be rebuilt.
  • Power systems exhibit a high degree of unpredictability (N-1 failures, load variations).
  • In complicated power systems, DRL encounters dimensionality problems.
  • Although distributed deep reinforcement learning (DDRL) tackles distributed dispatch, it suffers from computational issues.
  • RL-based techniques struggle to extract state features effectively.
  • GAT models need high-order neighbor information to perform well.
  • In large grids, traditional methods cannot guarantee the effectiveness of control actions.

Methodology:

Improved GAT for Power Systems

The proposed approach addresses the sparse topological connections of power systems by extending the aggregation range of the Graph Attention Network (GAT) to K-order neighbors. This is achieved by adding a spatial discount factor that scales each neighbor's contribution according to its hop distance. A multi-head attention mechanism is also used to stabilize learning and capture richer feature interactions.
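The K-order aggregation with a spatial discount can be sketched as follows. This is a minimal single-head numpy illustration, not the authors' implementation: the function names (`r_gat_layer`, `k_hop_distances`), the specific discount form `gamma**d` applied to the attention logits, and the toy parameters are all assumptions for demonstration; the paper additionally uses multi-head attention, omitted here for brevity.

```python
import numpy as np

def k_hop_distances(adj, K):
    """Shortest hop distance between nodes, capped at K (inf beyond)."""
    N = adj.shape[0]
    dist = np.full((N, N), np.inf)
    np.fill_diagonal(dist, 0.0)
    power = np.eye(N)
    for d in range(1, K + 1):
        power = power @ adj                      # reachability within d hops
        mask = (power > 0) & np.isinf(dist)
        dist[mask] = d
    return dist

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def r_gat_layer(h, adj, W, a, gamma=0.8, K=2):
    """Single-head attention aggregation over K-order neighbors.
    A neighbor at hop distance d has gamma**d folded into its attention
    logit, so distant nodes contribute less."""
    dist = k_hop_distances(adj, K)
    z = h @ W                                    # (N, F') projected features
    out = np.zeros_like(z)
    for i in range(z.shape[0]):
        nbrs = np.where(dist[i] <= K)[0]         # K-order neighborhood (incl. self)
        logits = np.array([leaky_relu(a @ np.concatenate([z[i], z[j]]))
                           for j in nbrs])
        logits = logits + dist[i, nbrs] * np.log(gamma)  # spatial discount
        alpha = np.exp(logits - logits.max())
        alpha /= alpha.sum()                     # attention coefficients
        out[i] = alpha @ z[nbrs]
    return out

# Toy 3-bus path network: with K=2, bus 0 also attends to bus 2
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
rng = np.random.default_rng(1)
h = rng.normal(size=(3, 4))        # 3 nodes, 4 input features
W = rng.normal(size=(4, 2))        # project to 2 output features
a = rng.normal(size=4)             # attention vector over [z_i || z_j]
out = r_gat_layer(h, adj, W, a, gamma=0.8, K=2)
```

In a sparse grid, first-order neighborhoods are often tiny, so widening the receptive field to K hops while discounting by distance is what lets each node gather enough context without drowning in far-away signals.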

Structure and Training of Neural Networks

Agents estimate Q-values with a fully connected neural network, choose actions that maximize these values, and store their experiences in a replay buffer for later training. To avoid overestimating suboptimal actions, the method maintains a target network alongside the primary Q-network. Training uses stochastic gradient descent to minimize a temporal-difference loss, updating parameters iteratively.
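The replay-buffer and target-network loop can be sketched as below. This is a deliberately tiny illustration, assuming a tabular Q-function in place of the paper's fully connected network and a made-up two-state MDP; class and function names are hypothetical.

```python
import random
from collections import deque
import numpy as np

random.seed(0)  # deterministic toy run

class ReplayBuffer:
    """Fixed-size experience buffer sampled uniformly at random."""
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)
    def push(self, s, a, r, s2, done):
        self.buf.append((s, a, r, s2, done))
    def sample(self, k):
        return random.sample(self.buf, k)

class TabularQ:
    """Tabular stand-in for the paper's fully connected Q-network."""
    def __init__(self, n_states, n_actions):
        self.q = np.zeros((n_states, n_actions))
    def copy_from(self, other):
        self.q = other.q.copy()

def train_step(qnet, target, buffer, batch_size=8, gamma=0.99, lr=0.1):
    """One SGD-style update on the TD loss; bootstrap targets come
    from the frozen target network, not the online Q-network."""
    for s, a, r, s2, done in buffer.sample(batch_size):
        y = r if done else r + gamma * target.q[s2].max()
        qnet.q[s, a] -= lr * (qnet.q[s, a] - y)

# Toy two-state MDP: state 0 always leads to state 1; in state 1,
# action 0 pays reward 1 and the episode ends.
buf = ReplayBuffer()
for _ in range(100):
    buf.push(0, 0, 0.0, 1, False)
    buf.push(0, 1, 0.0, 1, False)
    buf.push(1, 0, 1.0, 1, True)
    buf.push(1, 1, 0.0, 1, True)

qnet, tgt = TabularQ(2, 2), TabularQ(2, 2)
for step in range(300):
    train_step(qnet, tgt, buf)
    if step % 20 == 0:          # periodic target-network sync
        tgt.copy_from(qnet)
```

Because the bootstrap target is computed from the periodically synced copy rather than the network being updated, the moving-target feedback loop that causes overestimation is damped.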

R-GAT layer schematic diagram from the study by Bai, Y., Chen, S., Zhang, J., Xu, J., Gao, T., Wang, X., & Gao, D. W. (2023)

Distributed Execution and Multi-Agent Scenarios

In multi-agent scenarios, QMIX factorizes the joint value function, merging local value functions to improve performance in decentralized settings. Centralized training exploits global state information, while distributed execution relies on partial observations to improve computational efficiency. Combining these processes, the Distributed Active Power Rolling Dispatching Algorithm (DAPRDA) enables centralized training with distributed execution, supporting effective decision-making in power grid management.
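The key property of QMIX's factorization is monotonicity: the joint value Q_tot must never decrease when any agent's local Q-value increases, so each agent's greedy action is consistent with the centralized optimum. The sketch below is a simplified stand-in, assuming fixed matrices in place of QMIX's state-conditioned hypernetworks; `qmix_total` and all shapes are illustrative, not the paper's architecture.

```python
import numpy as np

def qmix_total(agent_qs, state, W1, b1, W2, b2):
    """Mix per-agent Q-values into a joint Q_tot. Mixing weights are
    produced from the global state and passed through abs(), so
    dQ_tot/dQ_i >= 0 -- the monotonicity constraint that keeps each
    agent's decentralized argmax consistent with the joint value."""
    n = len(agent_qs)
    w1 = np.abs(state @ W1).reshape(n, -1)               # non-negative layer-1 weights
    hidden = np.maximum(agent_qs @ w1 + state @ b1, 0.0)  # ReLU hidden layer
    w2 = np.abs(state @ W2)                              # non-negative layer-2 weights
    return float(hidden @ w2 + state @ b2)

# Toy check: raising any single agent's Q never lowers Q_tot.
rng = np.random.default_rng(0)
n_agents, state_dim, hdim = 3, 5, 4
W1 = rng.normal(size=(state_dim, n_agents * hdim))
b1 = rng.normal(size=(state_dim, hdim))
W2 = rng.normal(size=(state_dim, hdim))
b2 = rng.normal(size=state_dim)
state = rng.normal(size=state_dim)

q = np.array([0.1, -0.2, 0.3])
q_up = q.copy()
q_up[1] += 1.0                  # improve agent 1's local estimate
base = qmix_total(q, state, W1, b1, W2, b2)
raised = qmix_total(q_up, state, W1, b1, W2, b2)
```

Conditioning the weights on the global state is what makes the scheme "centralized training": the mixer sees everything during learning, yet at execution time each agent only needs its own local Q-values.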

Structure of R-GAT – QMIX algorithm from the study by Bai, Y., Chen, S., Zhang, J., Xu, J., Gao, T., Wang, X., & Gao, D. W. (2023)

Conclusions:

  • An adaptive multi-agent RL technique for active power dispatch is proposed.
  • Learns economical regulation strategies.
  • Generalizes well across network topologies and time granularities.
  • The RNN layer improves generalization across temporal granularities.
  • The GAT network increases resilience to topology changes.
  • Captures network feature data efficiently and adaptively.
  • QMIX struggles as the number of discrete actions grows.
  • Future research will examine continuous control strategies.

Sakthivel R

I am a first-year M.Sc. AIML student at SASTRA University.

I am proficient in several programming languages and frameworks, including Python, and have experience with data and analytics tools such as SQL, Excel, Pandas, Scikit-learn, TensorFlow, Git, and Power BI.
