
Energy-Efficient Dynamic Training and Inference for GNN-Based Network Modeling


Source: arXiv
Abstract

Efficient network modeling is essential for resource optimization and network planning in next-generation large-scale complex networks. Traditional approaches, such as queuing-theory-based modeling and packet-based simulators, can be inefficient due to the simplifying assumptions made and the computational expense, respectively. To address these challenges, we propose an innovative, energy-efficient, dynamic orchestration framework for Graph Neural Network (GNN)-based model training and inference for context-aware network modeling and prediction. We have developed a low-complexity solution framework, QAG, which is a Quantum Approximation Optimization (QAO) algorithm for Adaptive orchestration of GNN-based network modeling. We leverage a tripartite graph model to represent a multi-application system with many compute nodes. Thereafter, we apply constrained graph-cutting using QAO to find feasible, energy-efficient configurations of the GNN-based model and deploy them on the available compute nodes to meet the requirements of the network modeling applications. The proposed QAG scheme closely matches the optimum and offers at least a 50% energy saving while meeting the application requirements with a 60% lower churn rate.
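To make the tripartite-graph idea in the abstract concrete, the toy sketch below models the three partitions (applications, GNN model configurations, compute nodes) and enumerates feasible application-to-configuration-to-node assignments. All names, accuracy figures, and energy numbers are hypothetical, and the brute-force search is only a stand-in for the constrained graph-cutting that the paper performs with its QAO-based method.

```python
# Illustrative sketch only: a toy tripartite structure linking applications,
# candidate GNN model configurations, and compute nodes. Values are invented
# for illustration and are not taken from the paper.

# Partition 1: network-modeling applications with an accuracy requirement.
applications = {"app_delay_pred": 0.90, "app_load_pred": 0.85}

# Partition 2: GNN model configurations (e.g. depth/width variants) with
# assumed accuracy and per-inference energy cost (arbitrary units).
gnn_configs = {
    "gnn_small": {"accuracy": 0.86, "energy": 1.0},
    "gnn_large": {"accuracy": 0.93, "energy": 2.5},
}

# Partition 3: compute nodes with an energy budget (arbitrary units).
compute_nodes = {"edge_node": 2.0, "cloud_node": 6.0}


def feasible_assignments():
    """Yield (application, config, node, energy) triples that satisfy the
    application's accuracy requirement and the node's energy budget.
    Brute force here; the paper solves this selection as constrained
    graph-cutting with a QAO-based algorithm."""
    for app, req_acc in applications.items():
        for cfg, props in gnn_configs.items():
            if props["accuracy"] < req_acc:
                continue
            for node, budget in compute_nodes.items():
                if props["energy"] <= budget:
                    yield app, cfg, node, props["energy"]


# Pick the lowest-energy feasible assignment per application.
best = {}
for app, cfg, node, energy in feasible_assignments():
    if app not in best or energy < best[app][2]:
        best[app] = (cfg, node, energy)

for app, (cfg, node, energy) in best.items():
    print(f"{app}: {cfg} on {node} (energy {energy})")
```

Running this prints one configuration-to-node choice per application, e.g. the stricter-accuracy application being forced onto the larger model and the node whose budget can absorb it; in the paper this selection is made adaptively and at scale by the QAG scheme rather than by exhaustive enumeration.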

Chetna Singhal, Yassine Hadjadj-Aoul

Subject: Electronic Technology Applications

Chetna Singhal, Yassine Hadjadj-Aoul. Energy-Efficient Dynamic Training and Inference for GNN-Based Network Modeling [EB/OL]. (2025-03-24) [2025-07-20]. https://arxiv.org/abs/2503.18706.
