
Cloud-Device Collaboration Framework based on Knowledge Distillation

Abstract


With the rapid development of mobile devices, using deep learning models to provide users with high-quality services is no longer exclusive to servers. Previous research focused on training high-performance models on cloud servers to serve users, or on preprocessing data at an edge server before sending it to the cloud, where the cloud server completes the training task. However, these methods have two main limitations. First, the network link between the user and the cloud server is long, so low network latency is difficult to guarantee. Second, users must upload personal data such as the photos they have taken, which poses a potential risk of privacy leakage. To address this problem, this paper proposes a cloud-device collaboration framework based on knowledge distillation, which provides users with fast, high-performance services while protecting their privacy. The framework delivers a teacher model trained on the cloud server to the mobile device, where it guides the training of a student model deployed on the device to produce the final model. The final model performs better than the original student model and has a shorter inference time than the teacher model. The final model is also updated dynamically: as the data collected by the user grows, its performance gradually improves and approaches that of the teacher model. Experimental results show that the final model improves the accuracy of the service by 17.01%, and its inference time is 31.67% shorter than that of the teacher model. In the update stage, the final model uses only 16.67% of the training samples yet achieves an accuracy only 3.18% lower than that of the teacher model.
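As a rough illustration of the distillation step described above, the following is a minimal PyTorch-style sketch of how a frozen, cloud-trained teacher could guide an on-device student on locally collected data that never leaves the device. The model architectures, the temperature T, the weight alpha, and the function names are illustrative assumptions; the abstract does not specify the paper's actual loss or training schedule.

```python
# Minimal sketch of on-device knowledge distillation (assumed setup,
# not the paper's exact method). T and alpha are illustrative values.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and soft-target KL divergence."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-target gradients match the hard-label term
    return alpha * hard + (1.0 - alpha) * soft

def update_student(student, teacher, loader, epochs=1, lr=1e-3):
    """One on-device update round: the frozen teacher, downloaded from the
    cloud, guides the student on locally collected (x, y) batches."""
    teacher.eval()  # teacher weights stay fixed on the device
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                t_logits = teacher(x)  # soft targets from the teacher
            s_logits = student(x)
            loss = distillation_loss(s_logits, t_logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

In this sketch, repeated calls to update_student as the user collects more data correspond to the dynamic update stage: the student gradually approaches the teacher's accuracy while keeping its smaller, faster architecture.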

杜国铭

Computing Technology, Computer Technology

knowledge distillation; cloud-device collaboration; personalization

杜国铭. Cloud-Device Collaboration Framework based on Knowledge Distillation [EB/OL]. (2021-01-07) [2025-08-05]. http://www.paper.edu.cn/releasepaper/content/202101-12.
