
P$^2$U: Progressive Precision Update For Efficient Model Distribution

Source: arXiv
Abstract

Efficient model distribution is becoming increasingly critical in bandwidth-constrained environments. In this paper, we propose a simple yet effective approach called Progressive Precision Update (P$^2$U) to address this problem. Instead of transmitting the original high-precision model, P$^2$U transmits a lower-bit precision model, coupled with a model update representing the difference between the original high-precision model and the transmitted low-precision version. With extensive experiments on various model architectures, ranging from small models ($1 - 6$ million parameters) to a large model (more than $100$ million parameters), and using three different datasets, namely chest X-Ray, PASCAL-VOC, and CIFAR-100, we demonstrate that P$^2$U consistently achieves a better tradeoff among accuracy, bandwidth usage, and latency. Moreover, we show that when bandwidth or startup time is the priority, aggressive quantization (e.g., 4-bit) can be used without severely compromising performance. These results establish P$^2$U as an effective and practical solution for scalable and efficient model distribution in low-resource settings, including federated learning, edge computing, and IoT deployments. Given that P$^2$U complements existing compression techniques and can be implemented alongside any compression method, such as sparsification, quantization, or pruning, the potential for improvement is even greater.
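
To make the core idea concrete, below is a minimal NumPy sketch of a precision update as described in the abstract: a low-bit model is sent first and a residual is sent later to recover the full-precision weights. It assumes uniform symmetric per-tensor quantization and a dense float residual; the paper's actual quantization scheme, update encoding, and function names (quantize, dequantize) are not specified here and are illustrative only.

```python
import numpy as np

def quantize(weights, bits):
    # Uniform symmetric quantization of a weight tensor to the given bit width.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Map quantized integers back to floating-point weights.
    return q.astype(np.float32) * scale

# Sender side: transmit the low-bit model first, then the residual update.
w_full = np.random.randn(1000).astype(np.float32)   # stand-in for one layer's weights
q, scale = quantize(w_full, bits=4)                  # aggressive 4-bit quantization
residual = w_full - dequantize(q, scale)             # the "precision update"

# Receiver side: the low-precision model is usable immediately;
# applying the residual later restores the original high-precision weights.
w_early = dequantize(q, scale)
w_restored = w_early + residual
assert np.allclose(w_restored, w_full)
```

In this sketch the low-bit payload (q plus scale) is what enables a fast startup under tight bandwidth, while the residual carries the remaining precision and can be delivered later or compressed further.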

Homayun Afrabandpey, Hamed Rezazadegan Tavakoli

Subject: Computing Technology; Computer Technology

Homayun Afrabandpey, Hamed Rezazadegan Tavakoli. P$^2$U: Progressive Precision Update For Efficient Model Distribution [EB/OL]. (2025-06-28) [2025-07-20]. https://arxiv.org/abs/2506.22871.
