
PPO-EPO: Energy and Performance Optimization for O-RAN Using Reinforcement Learning


Source: arXiv
Abstract

Energy consumption in mobile communication networks has become a significant challenge due to its direct impact on Capital Expenditure (CAPEX) and Operational Expenditure (OPEX). The introduction of Open RAN (O-RAN) enables telecommunication providers to leverage network intelligence to optimize energy efficiency while maintaining Quality of Service (QoS). One promising approach involves traffic-aware cell shutdown strategies, where underutilized cells are selectively deactivated without compromising overall network performance. However, achieving this balance requires precise traffic steering mechanisms that account for throughput performance, power efficiency, and network interference constraints. This work proposes a reinforcement learning (RL) model based on the Proximal Policy Optimization (PPO) algorithm to optimize traffic steering and energy efficiency. The objective is to maximize energy efficiency and performance gains while strategically shutting down underutilized cells. The proposed RL model learns adaptive policies to make optimal shutdown decisions by considering throughput degradation constraints, interference thresholds, and PRB utilization balance. Experimental validation using TeraVM Viavi RIC tester data demonstrates that our method significantly improves the network's energy efficiency and downlink throughput.
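The abstract describes a reward signal that trades energy efficiency off against throughput-degradation, interference, and PRB-utilization-balance constraints, but does not give the exact formulation here. The Python sketch below is only an illustration of how such a constrained reward could be assembled; all variable names, thresholds, and weights are assumptions, not values from the paper.

import numpy as np

def shutdown_reward(throughput_mbps, baseline_throughput_mbps, power_w,
                    interference_db, prb_utilization,
                    max_throughput_drop=0.05, interference_limit_db=-100.0,
                    w_energy=1.0, w_penalty=10.0):
    """Illustrative reward: energy efficiency minus penalties for constraint violations."""
    # Energy-efficiency proxy: delivered throughput per watt of consumed power.
    energy_efficiency = throughput_mbps / max(power_w, 1e-6)

    # Penalize throughput degradation beyond the allowed fraction of the baseline.
    throughput_drop = 1.0 - throughput_mbps / max(baseline_throughput_mbps, 1e-6)
    qos_violation = max(0.0, throughput_drop - max_throughput_drop)

    # Penalize interference above the assumed threshold (higher dB = worse).
    interference_violation = max(0.0, interference_db - interference_limit_db)

    # Penalize unbalanced PRB utilization across the cells that remain active.
    prb_imbalance = float(np.std(prb_utilization))

    return (w_energy * energy_efficiency
            - w_penalty * (qos_violation + interference_violation + prb_imbalance))

# Example: reward after shutting one cell down and steering its traffic elsewhere.
r = shutdown_reward(throughput_mbps=180.0, baseline_throughput_mbps=190.0,
                    power_w=120.0, interference_db=-105.0,
                    prb_utilization=[0.55, 0.60, 0.58])

A reward of this shape could then be fed to an off-the-shelf PPO implementation (for example, stable-baselines3) acting on an environment whose actions are per-cell shutdown and traffic-steering decisions; the paper's actual state, action, and reward definitions may differ.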

Rawlings Ntassah, Gian Michele Dell'Aera, Fabrizio Granelli

Wireless Communication

Rawlings Ntassah, Gian Michele Dell'Aera, Fabrizio Granelli. PPO-EPO: Energy and Performance Optimization for O-RAN Using Reinforcement Learning [EB/OL]. (2025-04-20) [2025-04-30]. https://arxiv.org/abs/2504.14749.
