Differentially Private Federated Quantum Learning via Quantum Noise
Quantum federated learning (QFL) enables collaborative training of quantum machine learning (QML) models across distributed quantum devices without raw data exchange. However, QFL remains vulnerable to adversarial attacks, where shared QML model updates can be exploited to undermine information privacy. In the context of noisy intermediate-scale quantum (NISQ) devices, a key question arises: how can inherent quantum noise be leveraged to enforce differential privacy (DP) and protect model information during training and communication? This paper explores a novel DP mechanism that harnesses quantum noise to safeguard quantum models throughout the QFL process. By tuning the noise variance through the number of measurement shots and the depolarizing channel strength, our approach achieves desired DP levels tailored to NISQ constraints. Simulations demonstrate the framework's effectiveness by examining the relationship between the differential privacy budget and the noise parameters, as well as the trade-off between security and training accuracy. We further demonstrate the framework's robustness against an attack that uses adversarial examples to degrade model performance, evaluated with key metrics such as accuracy on adversarial examples, confidence scores for correct predictions, and attack success rates. The results reveal a tunable trade-off between privacy and robustness, offering an efficient solution for secure QFL on NISQ devices with significant potential for reliable quantum computing applications.
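The key lever described above, tuning noise variance via measurement shots and depolarizing strength, can be illustrated concretely. The following is a minimal sketch, not the paper's implementation: it assumes PennyLane's default.mixed simulator, a single-qubit toy circuit, a hypothetical update sensitivity Delta, and the classical Gaussian-mechanism calibration between noise standard deviation and the privacy budget epsilon (the finite-shot estimator is approximately Gaussian by the central limit theorem).

import numpy as np
import pennylane as qml

shots = 200      # fewer measurement shots -> larger shot noise
p_depol = 0.05   # depolarizing channel strength

dev = qml.device("default.mixed", wires=1, shots=shots)

@qml.qnode(dev)
def noisy_expval(theta):
    # Toy single-qubit model: rotation followed by depolarizing noise.
    qml.RY(theta, wires=0)
    qml.DepolarizingChannel(p_depol, wires=0)
    return qml.expval(qml.PauliZ(0))

# Empirically estimate the standard deviation of the noisy estimator.
samples = np.array([noisy_expval(0.7) for _ in range(500)])
sigma = samples.std()

# Classical Gaussian-mechanism calibration (illustrative; the closed form
# is strictly valid for epsilon <= 1): sigma = Delta * sqrt(2 ln(1.25/delta)) / epsilon.
Delta = 0.05   # hypothetical sensitivity of a shared model update
delta = 1e-5   # DP failure probability
epsilon = Delta * np.sqrt(2 * np.log(1.25 / delta)) / sigma
print(f"sigma ~ {sigma:.4f}, implied epsilon ~ {epsilon:.2f}")

Under these assumptions, raising shots or lowering p_depol shrinks sigma and so raises the implied epsilon (weaker privacy, better utility), while fewer shots or stronger depolarization does the opposite; this is the tunable privacy-accuracy trade-off the abstract refers to.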
Atit Pokharel, Ratun Rahman, Shaba Shaon, Thomas Morris, Dinh C. Nguyen
Physics
Atit Pokharel, Ratun Rahman, Shaba Shaon, Thomas Morris, Dinh C. Nguyen. Differentially Private Federated Quantum Learning via Quantum Noise [EB/OL]. (2025-08-27) [2025-09-06]. https://arxiv.org/abs/2508.20310