
GCRL for Advanced Energy-Aware Process Planning


Xiao, Q., Niu, B., Xue, B., & Hu, L. (2023). Graph convolutional reinforcement learning for advanced energy-aware process planning. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 53(5), 2802-2814.


This paper discusses the challenges of executing advanced energy-aware process planning (AEPP), emphasizing how advanced machining systems are susceptible to disruptions. These issues are addressed with a novel approach based on graph convolutional reinforcement learning (GCRL). A graph convolutional policy network, trained across a variety of jobs, lets AEPP adapt to different machines, processes, and cutting tools. To capture the dynamic character of process-plan development, the problem is reformulated as a Markov decision process (MDP). A graph convolutional network (GCN) encodes the input graph topology for process planning, and reinforcement learning (RL) provides robust policy learning. GCRL's versatility is further improved by a two-phase multitask training method that accounts for both task-specific rules and inter-task commonalities.
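To make the MDP reformulation concrete, here is a minimal sketch of process planning as a Markov decision process: the state is the set of features already machined, an action selects the next feature whose precedence constraints are satisfied, and the reward penalizes energy use. All names, costs, and the precedence structure below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (hypothetical names and values): an AEPP episode as an MDP.
class ProcessPlanningMDP:
    def __init__(self, energy_cost, precedence):
        # energy_cost: per-feature machining energy (illustrative units)
        # precedence[j] = set of features that must be machined before feature j
        self.energy_cost = energy_cost
        self.precedence = precedence
        self.n = len(energy_cost)
        self.reset()

    def reset(self):
        self.done_features = set()
        return frozenset()

    def valid_actions(self):
        # A feature is selectable if not yet machined and all its
        # predecessors are finished.
        return [j for j in range(self.n)
                if j not in self.done_features
                and self.precedence.get(j, set()) <= self.done_features]

    def step(self, action):
        assert action in self.valid_actions()
        self.done_features.add(action)
        reward = -self.energy_cost[action]   # lower energy -> higher reward
        done = len(self.done_features) == self.n
        return frozenset(self.done_features), reward, done

# Tiny example: 3 features, feature 2 requires features 0 and 1 first.
mdp = ProcessPlanningMDP([5.0, 3.0, 2.0], {2: {0, 1}})
mdp.reset()
print(mdp.valid_actions())  # feature 2 is blocked until 0 and 1 are done
```

In the paper's framework the RL agent's policy network, rather than an enumeration, scores these valid actions at each step.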

Issues in the existing methodologies:

  • Conventional approaches, such as expert systems, lack flexibility and struggle with changing production conditions.
  • Metaheuristics such as simulated annealing and evolutionary algorithms are limited by their static problem formulations.
  • Reliance on expert availability and case-specific background knowledge hinders conventional methods.
  • These approaches often fail to reduce energy and time consumption in manufacturing operations.
  • Machine learning (ML) techniques encounter challenges in obtaining optimal operation-sequence labels.
  • ML techniques, while promising, face difficulties in real-world application.


The paper presents a new approach to AEPP called graph convolutional reinforcement learning (GCRL). By transforming the process-plan formulation into a graph-based Markov decision process (MDP), this method improves flexibility and scalability, combining planning and reinforcement learning.


This article presents graph convolutional reinforcement learning (GCRL) as a technique for advanced energy-aware process planning (AEPP), aiming to reduce both energy consumption and production time. Constraints on feature sequencing and resource selection guarantee logically valid process plans. The GCRL framework integrates graph embedding via graph convolutional networks, reinforcement learning, and multitask training. AEPP is represented as a Markov decision process (MDP) in which an RL agent selects nodes and actions based on rewards and transition dynamics, iteratively improving the process plan. Multitask training improves generalization across planning tasks, increasing the effectiveness of energy-efficient process planning.
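The graph-embedding step can be sketched with a single graph-convolution layer in the standard Kipf-and-Welling form, H' = ReLU(D̂⁻¹ᐟ² (A + I) D̂⁻¹ᐟ² H W), applied to a small precedence graph. The adjacency matrix, feature dimensions, and weights below are illustrative assumptions; the paper's actual network architecture is not reproduced here.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Hypothetical precedence graph of 4 features, 2-d node features,
# embedded into 3 dimensions.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 2))
W = rng.normal(size=(2, 3))
Z = gcn_layer(A, H, W)
print(Z.shape)  # one embedding row per graph node
```

In a GCRL policy network, embeddings like `Z` would feed the layers that score candidate actions for the RL agent.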

GCN-RL framework, from the study by Xiao, Q., Niu, B., Xue, B., & Hu, L. (2023).

The case study uses a Python 3 simulation framework for Internet of Things-based energy monitoring. The key evaluation results are shown in the figure below.

Comparison results of the case study, from the study by Xiao, Q., Niu, B., Xue, B., & Hu, L. (2023).


  • GCRL, a graph-network approach, efficiently generates graphs for the stochastic AEPP problem.
  • GCRL combines GCN, RL, and multitask training for a comprehensive solution.
  • Comparative analysis favors GCRL over metaheuristics in convergence, stability, and solution quality.
  • By prioritizing low-power processes, GCRL resolves goal conflicts for energy-efficient manufacturing.
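The goal conflict mentioned in the last point, saving energy versus saving time, is commonly handled by scalarizing the reward. Here is a hypothetical weighted-sum reward; the weights and scale factors are illustrative assumptions, not values from the paper.

```python
def reward(energy_j, time_s, w_energy=0.7, w_time=0.3,
           energy_scale=1000.0, time_scale=60.0):
    """Hypothetical scalarized reward trading energy against time.
    Each term is normalized by an illustrative scale, weighted, and negated
    so that cheaper, faster plans score higher."""
    return -(w_energy * energy_j / energy_scale + w_time * time_s / time_scale)

# With an energy-dominant weighting, a slower but frugal operation can
# still outscore a fast but energy-hungry one:
fast_hungry = reward(energy_j=900.0, time_s=30.0)
slow_frugal = reward(energy_j=300.0, time_s=50.0)
print(slow_frugal > fast_hungry)  # True
```

Tilting `w_energy` upward is one simple way to realize the "prioritize low-power processes" behavior described above.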



Sakthivel R

I am a first-year M.Sc. (AIML) student at SASTRA University.

I am proficient in a range of programming languages and frameworks, including Python, and have experience with data, cloud, and database tools such as SQL, Excel, Pandas, scikit-learn, TensorFlow, Git, and Power BI.
