
FedDTG: Federated Data-Free Knowledge Distillation via Three-Player Generative Adversarial Networks

Source: arXiv

English Abstract

While existing federated learning approaches primarily focus on aggregating local models to construct a global model, in realistic settings some clients may be reluctant to share their private models due to the inclusion of privacy-sensitive information. Knowledge distillation, which can extract model knowledge without accessing model parameters, is well-suited to this federated scenario. However, most distillation methods in federated learning (federated distillation) require a proxy dataset, which is difficult to obtain in the real world. Therefore, in this paper, we introduce a distributed three-player Generative Adversarial Network (GAN) to implement data-free mutual distillation and propose an effective method called FedDTG. We confirm that the fake samples generated by the GAN make federated distillation more efficient and robust. Additionally, the distillation process between clients can deliver good individual client performance while simultaneously acquiring global knowledge and protecting data privacy. Our extensive experiments on benchmark vision datasets demonstrate that our method outperforms other federated distillation algorithms in terms of generalization.
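Since the abstract describes the approach only at a high level, the following is a minimal sketch of what data-free mutual distillation over GAN-generated samples might look like. The three players (a generator, a discriminator, and a per-client classifier), the network sizes, the temperature, and the update schedule are all illustrative assumptions for this sketch, not the paper's actual FedDTG implementation.

```python
# Minimal sketch of data-free mutual distillation with GAN-generated samples.
# All architectural and training details below are assumptions for
# illustration; the paper's actual FedDTG design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, NUM_CLASSES, IMG_DIM = 64, 10, 28 * 28  # assumed sizes

class Generator(nn.Module):
    """Player 1: maps class-conditioned noise to fake samples."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )
    def forward(self, z, y):
        y_onehot = F.one_hot(y, NUM_CLASSES).float()
        return self.net(torch.cat([z, y_onehot], dim=1))

class Discriminator(nn.Module):
    """Player 2: real/fake critic. Its adversarial game with the
    generator is omitted here for brevity."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, 256),
                                 nn.LeakyReLU(0.2), nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    """Player 3: a client's task model, trained on local data and
    distilled on fake samples."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, NUM_CLASSES))
    def forward(self, x):
        return self.net(x)

def mutual_distillation_step(classifiers, generator, optimizers,
                             batch=128, temp=3.0):
    """Each client matches the averaged soft predictions of its peers on
    a shared batch of GAN-generated samples, so neither raw data nor
    model parameters are exchanged."""
    z = torch.randn(batch, LATENT_DIM)
    y = torch.randint(0, NUM_CLASSES, (batch,))
    fake = generator(z, y).detach()  # shared synthetic batch
    with torch.no_grad():            # peers act as fixed teachers
        all_logits = [c(fake) for c in classifiers]
    for i, (clf, opt) in enumerate(zip(classifiers, optimizers)):
        peers = torch.stack([l for j, l in enumerate(all_logits) if j != i])
        teacher = F.softmax(peers.mean(dim=0) / temp, dim=1)
        student = F.log_softmax(clf(fake) / temp, dim=1)
        loss = F.kl_div(student, teacher, reduction="batchmean") * temp ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()

if __name__ == "__main__":
    gen = Generator()
    clfs = [Classifier() for _ in range(3)]  # three clients, for example
    opts = [torch.optim.Adam(c.parameters(), lr=1e-3) for c in clfs]
    mutual_distillation_step(clfs, gen, opts)
```

The design point this sketch illustrates is the one the abstract emphasizes: clients exchange only soft predictions computed on synthetic samples, so the method needs no proxy dataset and never moves private data or model parameters across clients.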

Chao Wu, Lingzhi Gao, Zhenyuan Zhang

Computing Technology; Computer Technology

Chao Wu, Lingzhi Gao, Zhenyuan Zhang. FedDTG: Federated Data-Free Knowledge Distillation via Three-Player Generative Adversarial Networks [EB/OL]. (2025-06-28) [2025-07-18]. https://arxiv.org/abs/2201.03169.
