
Optimal energy management strategies for the Energy Internet

Reference:

Hua, H., Qin, Y., Hao, C., & Cao, J. (2019). Optimal energy management strategies for energy Internet via deep reinforcement learning approach. Applied Energy, 239, 598-609.

https://doi.org/10.1016/j.apenergy.2019.01.145

Overview:

This study addresses a critical energy management problem within the Energy Internet (EI) by exploring interdisciplinary methods. Although the EI framework is well established, many fundamental and technological challenges persist. The study formulates a novel energy control problem, grounded in economic intuition and operational concepts, as a constrained optimal control problem that does not rely on explicit mathematical models of renewable power generation and loads. Because the problem’s complexity puts it beyond standard techniques, a model-free deep reinforcement learning approach is employed to obtain the desired control scheme, offering an innovative and practical solution. Numerical simulations demonstrate the efficiency and viability of this approach, marking a significant advance in energy management within the dynamic context of the Energy Internet.

Limitations in the existing methods:

  • The integration of distributed energy resources aggravates power outages and voltage violations in distribution networks.
  • Conventional Volt-Var control methods struggle with unpredictable behavior and fast response requirements, which lowers system reliability.
  • Slow computation and missing network data pose serious problems for model-based optimization.
  • Single-agent reinforcement learning systems run into scalability issues and heavy communication loads.
  • Decentralized reinforcement learning techniques may neither guarantee global data access nor scale well.
  • In decentralized control systems, network partitioning can reduce the effectiveness and reliability of optimization.
  • Heuristic methods suffer from low exploration efficiency in large search spaces.
  • Grid-based dynamic programming techniques require heavy computation to obtain exact results.
  • Conventional methods require detailed system modeling, which increases complexity.
  • Many heuristic algorithms struggle to explore high-dimensional spaces directly.

Methodology:

System description:

The Energy Internet (EI) network comprises interconnected sub-grids, each managed by an Energy Router (ER). ERs standardize operations and exchange information with cloud data centers and with other ERs, enabling unified control of devices such as Distributed Generators (DGs) and Battery Energy Storages (BESs). Each sub-grid contains components such as photovoltaic units (PVs), wind turbine generators (WTGs), micro-turbines (MTs), fuel cells (FCs), diesel engine generators (DEGs), BESs, and loads. The power outputs of PVs, WTGs, and loads are uncontrollable and are modeled from historical data, while DGs and BESs maintain the energy balance, with control signals adjusting their power outputs. All control inputs are bounded to prevent over-control. Power flows in the EI network are calculated using the pandapower package.

A typical sub-grid, from the study by Hua, H., Qin, Y., Hao, C., & Cao, J. (2019)
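
The paper computes network power flows with the pandapower package. The following is a minimal sketch of that kind of calculation: a two-bus toy network whose bus voltages, line type, and load/generation values are illustrative assumptions, not the paper’s actual test system.

```python
import pandapower as pp

# Toy network: an external-grid bus connected to one sub-grid bus.
net = pp.create_empty_network()
b0 = pp.create_bus(net, vn_kv=20.0, name="grid bus")
b1 = pp.create_bus(net, vn_kv=20.0, name="sub-grid bus")

pp.create_ext_grid(net, bus=b0, vm_pu=1.0)  # slack node / external grid
pp.create_line(net, from_bus=b0, to_bus=b1, length_km=1.0,
               std_type="NAYY 4x50 SE")

# An uncontrollable load and a controllable DG (e.g., a micro-turbine).
pp.create_load(net, bus=b1, p_mw=0.4, q_mvar=0.05)
pp.create_sgen(net, bus=b1, p_mw=0.25, name="MT")

pp.runpp(net)  # run an AC power flow
print(net.res_line[["p_from_mw", "loading_percent"]])
print(net.res_ext_grid)  # power exchanged with the external grid
```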

Process:

This paper optimizes an Energy Internet (EI) system composed of interconnected sub-grids, balancing power generation and consumption while minimizing cost and satisfying operational constraints. The approach manages power flows, keeps battery energy storage systems (BESs) within acceptable charge/discharge limits, and accounts for the operational costs of distributed generators (DGs) such as micro-turbines, diesel generators, and fuel cells. The total cost function combines electricity transmission, DG operation, and BES management with weighted priorities, as sketched below. The Asynchronous Advantage Actor-Critic (A3C) reinforcement learning method solves the optimal control problem by adjusting the power outputs of the energy resources to reduce cost. Simulations validate the method’s ability to achieve balanced power flows, cost efficiency, and system reliability.
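
As a concrete illustration of such a weighted cost, the sketch below combines a grid-exchange term, a DG operating-cost term, and a BES penalty term. The weights, the quadratic fuel-cost form, and the state-of-charge band are assumptions chosen for illustration; the paper’s exact cost terms and coefficients may differ.

```python
import numpy as np

# Assumed priority weights; the paper's values are not reproduced here.
W_TIE, W_DG, W_BES = 1.0, 0.5, 0.3

def total_cost(p_tie, p_dg, soc, soc_min=0.2, soc_max=0.8):
    """p_tie: power exchanged with the external grid (MW);
    p_dg: array of DG power outputs (MW);
    soc: BES state of charge, in [0, 1]."""
    tie_cost = W_TIE * p_tie ** 2  # penalize exchange with the external grid
    # Assumed quadratic fuel-cost curve for the DGs (MTs, DEGs, FCs).
    dg_cost = W_DG * np.sum(0.10 * p_dg ** 2 + 0.05 * p_dg)
    # Penalize the BES only when it leaves its preferred charge band.
    bes_cost = W_BES * (max(0.0, soc_min - soc) + max(0.0, soc - soc_max)) ** 2
    return tie_cost + dg_cost + bes_cost

print(total_cost(p_tie=0.1, p_dg=np.array([0.3, 0.2]), soc=0.5))
```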

Flow of the simulation process, from the study by Hua, H., Qin, Y., Hao, C., & Cao, J. (2019)
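
To make the A3C step concrete, here is a minimal single-worker actor-critic update in PyTorch. In full A3C, several asynchronous workers compute such gradients against a shared global network; the state and action dimensions, network sizes, and the dummy transition below are assumptions, not the paper’s configuration.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 3  # assumed sizes, not taken from the paper

class ActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU())
        self.mu = nn.Linear(64, ACTION_DIM)           # actor: mean of Gaussian policy
        self.log_std = nn.Parameter(torch.zeros(ACTION_DIM))
        self.value = nn.Linear(64, 1)                 # critic: state-value estimate

    def forward(self, s):
        h = self.shared(s)
        return self.mu(h), self.log_std.exp(), self.value(h)

net = ActorCritic()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# One illustrative update on a dummy transition standing in for an EI state.
state = torch.randn(1, STATE_DIM)
mu, std, value = net(state)
dist = torch.distributions.Normal(mu, std)
action = dist.sample()           # continuous control signals (bounded in the paper)
reward, next_value = -0.7, 0.0   # placeholder negative cost and bootstrap value
advantage = reward + 0.99 * next_value - value  # A = r + gamma * V(s') - V(s)

policy_loss = -dist.log_prob(action).sum() * advantage.detach()
value_loss = advantage.pow(2)
entropy = dist.entropy().sum()   # exploration bonus, as in A3C

loss = (policy_loss + 0.5 * value_loss - 0.01 * entropy).mean()
opt.zero_grad()
loss.backward()
opt.step()
```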

Conclusions:

  • Studied energy management for a broader EI system.
  • Applied deep reinforcement learning for optimal control.
  • Outperformed the optimal power flow approach.
  • Strictly limited power exchange with the external grid.
  • Improved local power sharing among sub-grids.
  • Achieved reasonable use of BESs under the proposed management strategy.
  • Future work will incorporate power flows and spatial linkages for improved control.

Sakthivel R

I am a first-year M.Sc. (AIML) student at SASTRA University.

I am proficient in a variety of programming languages and frameworks, including Python, and I have experience with data, cloud, and database technologies including SQL, Excel, Pandas, scikit-learn, TensorFlow, Git, and Power BI.
