RL-based Adaptive Task Offloading in Mobile-Edge Computing for Future IoT Networks
The Internet of Things (IoT) is increasingly used in everyday life as well as in numerous industrial applications. However, due to their limited computing and power capabilities, IoT devices must send their tasks to cloud service stations that are usually located far away. Transmitting data over such distances introduces challenges for services that require low latency, such as industrial control in factories and plants and artificial-intelligence-assisted autonomous driving. To address this issue, mobile edge computing (MEC) is deployed at the network's edge to reduce transmission time. In this regard, this study proposes a new offloading scheme for MEC-assisted ultra-dense cellular networks using reinforcement learning (RL) techniques. The proposed scheme enables efficient resource allocation and dynamic offloading decisions based on varying network conditions and user demands. The RL algorithm learns from the network's historical data and adapts the offloading decisions to optimize the network's overall performance. Non-orthogonal multiple access (NOMA) is also adopted to improve resource utilization among the IoT devices. Simulation results demonstrate that the proposed scheme outperforms other state-of-the-art offloading algorithms in terms of energy efficiency, network throughput, and user satisfaction.
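To illustrate the kind of RL-driven offloading decision the abstract describes, the sketch below trains a minimal tabular Q-learning agent to choose between local execution and edge offloading. This is not the paper's actual algorithm or system model: the discretized channel states, the cost function, and all hyperparameters are hypothetical placeholders chosen only to make the idea concrete.

```python
import random

# Actions and states are illustrative assumptions, not the paper's model:
# 0 = execute the task locally, 1 = offload it to the MEC server.
ACTIONS = [0, 1]
STATES = range(4)            # discretized channel-quality levels (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def cost(state, action):
    """Hypothetical latency/energy cost: offloading gets cheaper as the
    channel quality (state index) improves; local execution is fixed."""
    if action == 1:
        return 4.0 - state   # offload cost falls with channel quality
    return 2.5               # local execution cost

# Q-table initialized to zero for every (state, action) pair.
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
rng = random.Random(0)

for episode in range(10000):
    s = rng.choice(list(STATES))          # sample a channel condition
    # Epsilon-greedy selection over the (negative-cost) rewards.
    if rng.random() < EPS:
        a = rng.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: q[(s, x)])
    r = -cost(s, a)                       # reward = negative cost
    s2 = rng.choice(list(STATES))         # next channel state (i.i.d. here)
    best_next = max(q[(s2, x)] for x in ACTIONS)
    # Standard Q-learning temporal-difference update.
    q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])

def policy(s):
    """Greedy offloading decision learned for channel state s."""
    return max(ACTIONS, key=lambda a: q[(s, a)])
```

Under these assumed costs, the learned policy keeps tasks local when the channel is poor (`policy(0)` returns `0`) and offloads when it is good (`policy(3)` returns `1`), mirroring the adaptive behavior the scheme is designed to achieve at far larger state spaces.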
Ziad Qais Al Abbasi, Khaled M. Rabie, Senior Member, Xingwang Li, Senior Member, Wali Ullah Khan, Asma Abu Samah
Subjects: Communications; Wireless Communications
Ziad Qais Al Abbasi, Khaled M. Rabie, Xingwang Li, Wali Ullah Khan, Asma Abu Samah. RL-based Adaptive Task Offloading in Mobile-Edge Computing for Future IoT Networks [EB/OL]. (2025-06-20) [2025-07-16]. https://arxiv.org/abs/2506.22474.