
Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models

Source: arXiv
Abstract

Federated Learning enables collaborative learning among clients via a coordinating server while avoiding direct data sharing, offering a perceived solution to preserve privacy. However, recent studies on Membership Inference Attacks (MIAs) have challenged this notion, showing high success rates against unprotected training data. While local differential privacy (LDP) is widely regarded as a gold standard for privacy protection in data analysis, most studies on MIAs either neglect LDP or fail to provide theoretical guarantees for attack success rates against LDP-protected data. To address this gap, we derive theoretical lower bounds for the success rates of low-polynomial time MIAs that exploit vulnerabilities in fully connected or self-attention layers. We establish that even when data are protected by LDP, privacy risks persist, depending on the privacy budget. Practical evaluations on federated vision models confirm considerable privacy risks, revealing that the noise required to mitigate these attacks significantly degrades models' utility.
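The trade-off the abstract describes, namely that noise calibrated to the privacy budget protects clients but degrades utility, can be illustrated with a client-side LDP perturbation step. The sketch below assumes a Laplace mechanism applied to clipped client updates; the function name `ldp_perturb_update`, the clipping bound, and the budget values are illustrative assumptions, since the abstract does not specify the paper's exact construction.

```python
# Minimal sketch (assumed, not the paper's implementation): each client clips
# its model update and adds Laplace noise before releasing it to the server.
import numpy as np

def ldp_perturb_update(update, epsilon, clip_norm=1.0, rng=None):
    """Clip an update to L1 norm `clip_norm`, then add Laplace noise.

    After clipping, any two possible updates differ by at most 2 * clip_norm
    in L1 norm, so Laplace noise with scale 2 * clip_norm / epsilon yields
    an epsilon-LDP release of the vector.
    """
    if rng is None:
        rng = np.random.default_rng()
    l1 = np.abs(update).sum()
    clipped = update * min(1.0, clip_norm / max(l1, 1e-12))
    scale = 2.0 * clip_norm / epsilon  # smaller budget -> larger noise scale
    return clipped + rng.laplace(loc=0.0, scale=scale, size=update.shape)

# Smaller epsilon means more distortion of the shared update, which is the
# utility degradation the abstract refers to.
update = np.random.default_rng(0).normal(scale=0.01, size=10_000)  # stand-in gradient
for eps in (0.5, 2.0, 8.0):
    noisy = ldp_perturb_update(update, epsilon=eps)
    err = np.linalg.norm(noisy - update) / np.linalg.norm(update)
    print(f"epsilon={eps}: relative distortion ~ {err:.1f}x")
```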

Quan Nguyen, Minh N. Vu, Truc Nguyen, My T. Thai

Subjects: Computing Technology; Computer Technology

Quan Nguyen, Minh N. Vu, Truc Nguyen, My T. Thai. Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models[EB/OL]. (2025-06-16)[2025-07-16]. https://arxiv.org/abs/2506.17292.
