
Resource management in dense cellular networks using graph reinforcement learning (GRL)

Reference:

Shao, Y., Li, R., Hu, B., Wu, Y., Zhao, Z., & Zhang, H. (2021). Graph attention network-based multi-agent reinforcement learning for slicing resource management in dense cellular network. IEEE Transactions on Vehicular Technology, 70(10), 10792–10803. https://doi.org/10.1109/TVT.2021.3103416

Overview:

This blog examines the challenge of managing network slicing (NS) in dense cellular networks with multiple base stations (BSs). The paper proposes an approach in which each BS acts as an agent within a multi-agent reinforcement learning (MARL) framework, and integrates a graph attention network (GAT) into deep reinforcement learning (DRL) to enhance collaboration among agents. Because BS handovers are frequent and service requirements fluctuate, the approach manages inter-slice resources in real time. A value-based technique, deep Q-network (DQN), and a hybrid policy- and value-based technique, advantage actor-critic (A2C), are used to assess GAT's impact on DRL effectiveness. Extensive simulations confirm the superiority of GAT-based MARL algorithms in addressing complex network management problems.
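To make the per-agent view concrete, here is a minimal sketch (not the paper's exact architecture) of a per-BS Q-network mapping a local slice state to Q-values over a discrete set of inter-slice bandwidth allocations. The SliceQNetwork name, state and action dimensions, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical per-BS agent: a small Q-network mapping the local slice state
# (e.g. per-slice demand and current allocation) to Q-values over a discrete
# set of candidate bandwidth allocations. All sizes are illustrative.
class SliceQNetwork(nn.Module):
    def __init__(self, state_dim=6, n_actions=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),  # one Q-value per candidate allocation
        )

    def forward(self, state):
        return self.net(state)

# Each BS (agent) greedily picks the allocation with the highest Q-value.
q_net = SliceQNetwork()
state = torch.randn(1, 6)             # toy local observation for one BS
action = q_net(state).argmax(dim=-1)  # index into the discrete allocation set
print(action)
```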

Limitations of existing methods for resource management in dense 5G networks:

  • Overlooking collaboration among neighboring BSs in dense 5G networks forgoes crucial efficiency gains.
  • Traditional resource allocation fails to adapt to real-time changes in subscriber mobility.
  • Erratic 5G RAN demands exceed the capabilities of traditional resource management techniques.
  • Scalability and complexity concerns limit certain RL-based solutions in dense 5G networks.
  • In contrast, the proposed MARL model optimizes system efficiency and service-specific reliability.
  • Integrating GAT into the MARL framework further enhances BS cooperation and performance.

Problem formulation:

System utility J is a weighted sum of the successful service ratio (SSR) and the spectral efficiency (SE). For each BS m, the optimization is formulated as:

maximize J_m = α · SE_m(d_m, w_m) + Σ_n β_n · SSR_{m,n}(d_m, w_m)

where:

– J_m: utility of BS m, the objective to maximize

– SE_m: spectral efficiency achieved by BS m

– SSR_{m,n}: successful service ratio of slice n at BS m

– d_m: traffic demand arriving at BS m

– w_m: inter-slice bandwidth allocation of BS m

– α, β_n: weights balancing SE against the per-slice SSRs
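As a quick numeric illustration of this utility, here is a minimal sketch; the function name and all values are hypothetical, not taken from the paper.

```python
import numpy as np

def system_utility(se_m, ssr_m, alpha, betas):
    # J_m = alpha * SE_m + sum_n beta_n * SSR_{m,n}
    # se_m  : spectral efficiency SE_m under demand d_m and allocation w_m
    # ssr_m : per-slice successful service ratios SSR_{m,n}
    # alpha : weight on spectral efficiency
    # betas : per-slice weights beta_n
    return alpha * se_m + np.dot(betas, ssr_m)

# Illustrative values (not from the paper): three slices, e.g. VoLTE, eMBB, URLLC
J_m = system_utility(se_m=3.2,
                     ssr_m=np.array([0.99, 0.95, 0.97]),
                     alpha=0.4,
                     betas=np.array([0.2, 0.2, 0.2]))
print(J_m)  # 1.862
```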

Methodology:

The case study simulates 19 BSs covering a 160 m × 160 m area with 2,000 subscribers. Two bandwidth granularities (coarse and fine) are considered, and the setup adheres to 3GPP specifications, simulating VoLTE, eMBB, and URLLC services, each with its own requirements. Bandwidth is reallocated every second, with round-robin scheduling every 0.5 ms within each service slice. Hyper-parameters c1, c2, and c3 are set in the reward definition. GATs enhance BS collaboration, and GAT-DQN and GAT-A2C are contrasted with conventional DRL algorithms, with DQN and A2C serving as baselines. These algorithms aim to guarantee QoS while optimizing resource allocation for greater system utility. Performance is assessed on convergence speed, stability after convergence, SE, and SSR. Results show that GAT-based DRL algorithms outperform conventional ones, improving both stability and system utility. A minimal sketch of the graph-attention step appears after this paragraph.
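The following is a minimal single-head graph attention layer in the spirit of the GAT literature, showing how each BS agent could weight and aggregate its neighbors' embeddings. It is a sketch under assumed dimensions and adjacency, not the paper's exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    # Single-head graph attention: each BS embeds its observation, scores its
    # neighbors, and aggregates their embeddings with softmax attention.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared linear map
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention scorer
        self.leaky_relu = nn.LeakyReLU(0.2)

    def forward(self, h, adj):
        # h: (N, in_dim) agent observations; adj: (N, N) adjacency with self-loops
        z = self.W(h)                                    # (N, out_dim)
        N = z.size(0)
        zi = z.unsqueeze(1).expand(N, N, -1)             # z_i broadcast over j
        zj = z.unsqueeze(0).expand(N, N, -1)             # z_j broadcast over i
        e = self.leaky_relu(self.a(torch.cat([zi, zj], dim=-1))).squeeze(-1)
        e = e.masked_fill(adj == 0, float('-inf'))       # attend only to neighbors
        attn = F.softmax(e, dim=-1)                      # (N, N) attention weights
        return F.elu(attn @ z)                           # aggregated embeddings

# Toy example (illustrative, not the paper's topology): 4 BS agents in a line
obs = torch.randn(4, 8)
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
layer = GATLayer(in_dim=8, out_dim=16)
print(layer(obs, adj).shape)  # torch.Size([4, 16])
```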

GAT-DRL network from the study by Shao et al. (2021)

Conclusion:

    • GAT is combined with A2C and DQN to manage inter-slice resources.
    • GAT-DRL algorithms effectively satisfy SLA criteria.
    • They attain effective policies with increased system utility.
    • They strengthen cooperation among base stations (BSs).
    • Future work: refine neural network topologies and handle intricate movement patterns and interference.

 

Sakthivel R

I am a first-year M.Sc. AIML student at SASTRA University.

I am proficient in a variety of programming languages and frameworks, including Python, and I have experience with data and analytics tools such as SQL, Excel, Pandas, Scikit, TensorFlow, Git, and Power BI.
