
Generalization in Reinforcement Learning for Radio Access Networks

Source: arXiv
Abstract

Modern radio access networks (RANs) operate in highly dynamic and heterogeneous environments, where hand-tuned, rule-based radio resource management (RRM) algorithms often underperform. While reinforcement learning (RL) can surpass such heuristics in constrained settings, the diversity of deployments and unpredictable radio conditions introduce major generalization challenges. Data-driven policies frequently overfit to training conditions, degrading performance in unseen scenarios. To address this, we propose a generalization-centered RL framework for RAN control that: (i) encodes cell topology and node attributes via attention-based graph representations; (ii) applies domain randomization to broaden the training distribution; and (iii) distributes data generation across multiple actors while centralizing training in a cloud-compatible architecture aligned with O-RAN principles. Although generalization increases computational and data-management complexity, our distributed design mitigates this by scaling data collection and training across diverse network conditions. Applied to downlink link adaptation in five 5G benchmarks, our policy improves average throughput and spectral efficiency by ~10% over an outer-loop link adaptation (OLLA) baseline targeting a 10% block error rate (BLER) in full-buffer MIMO/mMIMO scenarios, and by more than 20% under high mobility. It matches specialized RL agents in full-buffer traffic and achieves up to 4-fold and 2-fold gains in enhanced mobile broadband (eMBB) and mixed-traffic benchmarks, respectively. In nine-cell deployments, graph attention network (GAT) models deliver 30% higher throughput than multilayer perceptron (MLP) baselines. These results, combined with our scalable architecture, offer a path toward an AI-native 6G RAN using a single, generalizable RL agent.
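For context on the baseline mentioned in the abstract, the sketch below illustrates how a conventional OLLA outer loop is typically driven toward a target BLER with asymmetric up/down steps. This is a minimal illustration of the standard technique, not the authors' code; the variable names, the 0.01 dB up-step, and the update wiring are assumptions made for illustration.

```python
# Illustrative sketch of a conventional outer-loop link adaptation (OLLA) loop,
# the kind of rule-based baseline the abstract compares against. Names, the
# up-step size, and the wiring of the 10% BLER target are assumptions for
# illustration, not the paper's implementation.

BLER_TARGET = 0.10   # target block error rate (10%), as stated in the abstract
DELTA_UP = 0.01      # dB added to the SINR offset after each ACK (assumed step size)

# Choose the down-step so the offset is stationary exactly at the target BLER:
# (1 - BLER_TARGET) * DELTA_UP = BLER_TARGET * DELTA_DOWN
DELTA_DOWN = DELTA_UP * (1.0 - BLER_TARGET) / BLER_TARGET  # = 0.09 dB here


def olla_update(offset_db: float, ack: bool) -> float:
    """Update the SINR offset after one HARQ feedback (ACK/NACK)."""
    if ack:
        return offset_db + DELTA_UP    # success: nudge the MCS selection upward
    return offset_db - DELTA_DOWN      # failure: back off more strongly


def effective_sinr(reported_sinr_db: float, offset_db: float) -> float:
    """SINR fed to the MCS lookup: the CQI/SINR report corrected by the OLLA offset."""
    return reported_sinr_db + offset_db
```

An RL link-adaptation policy of the kind described in the abstract replaces this fixed-step outer loop with a learned mapping from a richer state (for example, the attention-based graph encoding of cell topology and node attributes) to modulation-and-coding decisions.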

Burak Demirel, Yu Wang, Cristian Tatino, Pablo Soldati

Subject areas: Radio and telecommunication equipment; Wireless communication; Computing and computer technology

Burak Demirel, Yu Wang, Cristian Tatino, Pablo Soldati. Generalization in Reinforcement Learning for Radio Access Networks[EB/OL]. (2025-07-09)[2025-07-18]. https://arxiv.org/abs/2507.06602.
