SensorDrop: A Reinforcement Learning Framework for Communication Overhead Reduction on the Edge
In IoT solutions, it is usually desirable to collect data from a large number of distributed IoT sensors at a central node in the cloud for further processing. One of the main design challenges of such solutions is the high communication overhead between the sensors and the central node (especially for multimedia data). In this paper, we aim to reduce the communication overhead and propose a method that determines which sensors should send their data to the central node and which should drop theirs. The idea is that some sensors may have data that are correlated with those of others, and some may have data that are not essential for the task performed at the central node. Since such decisions are application dependent and may change over time, they should be learned during the operation of the system. To this end, we propose a method based on Advantage Actor-Critic (A2C) reinforcement learning that gradually learns which sensors' data are cost-effective to send to the central node. The proposed approach has been evaluated on a multi-view multi-camera dataset, and we observe a significant reduction in communication overhead with marginal degradation in object classification accuracy.
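The abstract describes an agent that, at each step, selects a subset of sensors to transmit and is rewarded for preserving task accuracy while cutting transmission cost. The following minimal sketch illustrates that decision loop under assumptions of ours, not the paper's: the actor network is replaced by a uniform placeholder policy, the cost weight `LAMBDA` and sensor count are invented, and the classifier accuracy is supplied as a constant rather than computed by a cloud model.

```python
import numpy as np

# Illustrative sketch of the per-step send/drop decision described in
# the abstract. All names, the cost weight LAMBDA, and the placeholder
# policy are assumptions for illustration, not the paper's actual design.
N_SENSORS = 6
LAMBDA = 0.1  # assumed weight on communication cost in the reward

rng = np.random.default_rng(0)

def actor_policy(observation):
    # Stand-in for the trained A2C actor: returns per-sensor
    # send probabilities (here, uniform 0.5).
    return np.full(N_SENSORS, 0.5)

def reward(mask, accuracy):
    # Reward trades off task accuracy achieved with the selected
    # subset against the fraction of sensors that transmitted.
    comm_cost = mask.sum() / N_SENSORS
    return accuracy - LAMBDA * comm_cost

probs = actor_policy(None)
mask = (rng.random(N_SENSORS) < probs).astype(int)  # sample binary send/drop actions
r = reward(mask, accuracy=0.9)  # accuracy would come from the central classifier
print("mask:", mask.tolist(), "reward:", round(r, 3))
```

In a full A2C setup, the gradient of this reward with respect to the actor's parameters would push the policy toward dropping sensors whose data are redundant or irrelevant to the classifier.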
Rong Zheng, Amir Hossein Rassafi, Saeed Sharifian, Pooya Khandel, Vahid Pourahmadi
Subjects: Communications, Wireless Communications; Computing Technology, Computer Technology
Rong Zheng, Amir Hossein Rassafi, Saeed Sharifian, Pooya Khandel, Vahid Pourahmadi. SensorDrop: A Reinforcement Learning Framework for Communication Overhead Reduction on the Edge [EB/OL]. (2019-10-03) [2025-06-30]. https://arxiv.org/abs/1910.01601.