
A GRL for Dynamic Renewable Energy Dispatch

Reference:

Chen, Junbin, et al. "A scalable graph reinforcement learning algorithm based stochastic dynamic dispatch of power system under high penetration of renewable energy." International Journal of Electrical Power & Energy Systems 152 (2023): 109212. https://doi.org/10.1016/j.ijepes.2023.109212

Overview:

The study examines how uncertainty from increasing renewable energy integration affects power systems, posing challenges for secure and economical operation. Dynamic economic dispatch (DED) is particularly difficult under these uncertainties. Reinforcement learning (RL) techniques can learn dispatch rules, but they typically rely on Euclidean data representations. To address scalability and computational efficiency, the authors propose a novel graph reinforcement learning (GRL) algorithm that enhances scalability and generalization, yielding higher-quality solutions online.

Issues in existing methods:

  • Determining the best dispatch strategy in dynamic economic dispatch (DED) situations is challenging due to the vast state and action space.
  • Existing model predictive control (MPC) methods are limited by their dependence on accurate models and sufficient data, which are difficult to obtain and maintain.
  • DED problems involve nonstationary variations in load and renewable energy sources (RES), making prediction challenging.
  • Harvesting latent connections between system topology and system state remains a challenge in DED.
  • Deep neural network (DNN) approaches face scalability issues when applied to large-scale systems.
  • Current methods that use matrix or vector representations fail to capture the topological information of the system.
  • Low sample efficiency and poor scalability in existing methods lead to high sample-data requirements. To address these limitations, the authors propose a novel GRL algorithm tailored for DED in power systems.

Problem statement:

The authors formulate the DED problem as a multistage stochastic sequential decision-making problem and propose a GRL technique to solve it. The goal is to find the optimal strategy that minimizes the expected cumulative cost over the dispatch horizon:

min E(∑[t=1 to T] F(t)) = E(∑[t=1 to T] (FG(t) + FRES(t) + FESS(t)))

Explanation:
`min`: Minimize the expression.
`E`: Denotes the expectation operator.
`∑[t=1 to T] `: Summation over time from 1 to T.
`F(t)`: Total operation cost at stage t.
`FG(t)`: Operation cost function of conventional generation cost at stage t.
`FRES(t)`: Operation cost function of renewable energy curtailment at stage t.
`FESS(t)`: Operation cost function of grid-level energy storage system at stage t.
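The objective above can be sketched in code as a Monte Carlo estimate of the expected cumulative cost over sampled scenarios. The per-stage cost functions below (quadratic fuel cost, curtailment penalty, storage wear cost) are illustrative assumptions, not the paper's exact formulations:

```python
import numpy as np

def F_G(p_gen, a=0.01, b=20.0):
    """Quadratic fuel cost for conventional generation (illustrative)."""
    return a * p_gen**2 + b * p_gen

def F_RES(p_curtailed, penalty=50.0):
    """Penalty on curtailed renewable output (illustrative)."""
    return penalty * p_curtailed

def F_ESS(p_charge, wear_cost=5.0):
    """Wear cost proportional to storage throughput (illustrative)."""
    return wear_cost * abs(p_charge)

def expected_dispatch_cost(scenarios):
    """Monte Carlo estimate of E[sum_{t=1..T} F(t)].

    Each scenario is a list of (p_gen, p_curtailed, p_charge) tuples,
    one per stage t = 1..T.
    """
    totals = [
        sum(F_G(g) + F_RES(r) + F_ESS(e) for g, r, e in traj)
        for traj in scenarios
    ]
    return float(np.mean(totals))

# One deterministic two-stage scenario:
cost = expected_dispatch_cost([[(100.0, 0.0, 0.0), (100.0, 10.0, -20.0)]])
```

In the stochastic setting, the scenarios would be sampled trajectories of load and RES output, so the average approximates the expectation operator E in the objective.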

 

Methodology:

First, they frame DED as a dynamic sequential decision-making problem and formulate it as a Markov decision process (MDP) so that it can be solved with RL algorithms; the soft actor-critic (SAC) algorithm is chosen and improved upon. Second, they provide a graph-based representation of the system state that integrates the implicit correlations present in the system topology while capturing the non-Euclidean features of dispatch operation data. Third, they design a GRL algorithm to learn the optimal policy that maps the graph-represented system state to DED decisions. To optimize dispatch rules, the SAC algorithm is trained iteratively on historical data: the actor network learns to choose actions based on the graph-represented system state, while the critic network assesses the quality of those actions.
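A minimal sketch of the graph-based state-to-action mapping is given below. The grid is treated as a graph (buses as nodes, lines as edges), node features are aggregated with one normalized graph-convolution step, and a pooled readout produces a bounded action per generator. The network sizes, the single message-passing layer, and the random weights are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bus, n_feat, n_gen = 6, 4, 3

# Adjacency of a small ring network, plus self-loops.
A = np.eye(n_bus)
for i in range(n_bus):
    A[i, (i + 1) % n_bus] = A[(i + 1) % n_bus, i] = 1.0
A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalized propagation matrix

X = rng.normal(size=(n_bus, n_feat))       # node features: load, RES, storage state...
W = rng.normal(size=(n_feat, n_feat))      # learnable weights (random stand-ins here)

# One graph-convolution step: aggregate neighbor features, then project.
H = np.tanh(A_hat @ X @ W)

# Readout: pool node embeddings and map to a bounded dispatch action per generator.
W_out = rng.normal(size=(n_feat, n_gen))
action = np.tanh(H.mean(axis=0) @ W_out)   # in [-1, 1], to be scaled to unit limits
```

In the actual method, such a graph encoder would sit inside the SAC actor and critic, and the weights would be trained against the dispatch-cost reward rather than sampled randomly.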

Testing:

Test data is used to evaluate the trained model's performance; the model can generate dispatch decisions that optimize costs while satisfying operational requirements and maintaining system stability.
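The evaluation step can be sketched as rolling the trained policy over held-out test episodes and accumulating cost and constraint violations. The environment interface and the `DummyEnv` stand-in below are assumptions for illustration, not the paper's simulator:

```python
def evaluate(policy, env, n_episodes=10):
    """Average episode cost and total constraint violations on test data."""
    total_cost, violations = 0.0, 0
    for _ in range(n_episodes):
        state, done = env.reset(), False
        while not done:
            action = policy(state)
            state, cost, violated, done = env.step(action)
            total_cost += cost
            violations += int(violated)
    return total_cost / n_episodes, violations

class DummyEnv:
    """Stand-in environment: 24 hourly steps, unit cost per step, no violations."""
    def __init__(self, horizon=24):
        self.horizon = horizon
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        return 0.0, 1.0, False, self.t >= self.horizon

avg_cost, violations = evaluate(lambda s: 0.0, DummyEnv())
```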

[Figure: framework of the GRL-based DED from the study by Chen et al. (2023)]

Major evaluation results of the case study conducted on the IEEE 39-bus system:

Conclusion:

  • GRL outperformed conventional reinforcement learning methods on DED problems.
  • GRL has strong scalability, adapting well to changes in the state space through its graph representation of states.
  • GRL is valuable for large systems and extended optimization horizons because it produces computationally efficient, near-optimal solutions compared with algorithms such as MPC.
  • Future scope: as energy systems grow more complex and more controllable resources are integrated, future work will benefit from more effective ways of representing the input to GRL in order to reduce the computational burden.

Sakthivel R

I am a first-year M.Sc. AIML student at SASTRA University.

I am proficient in a variety of programming languages and frameworks, including Python, and have experience with cloud and database technologies, including SQL, Excel, Pandas, Scikit-learn, TensorFlow, Git, and Power BI.
