Differential Privacy in Machine Learning: From Symbolic AI to LLMs
Machine learning models should not reveal particular information that is not otherwise accessible. Differential privacy (DP) provides a formal framework to mitigate privacy risks by ensuring that the inclusion or exclusion of any single data point does not significantly alter the output of an algorithm, thus limiting the exposure of private information. This survey paper explores the foundational definitions of differential privacy, reviewing its original formulations and tracing its evolution through key research contributions. It then provides an in-depth examination of how DP has been integrated into machine learning models, analyzing existing proposals and methods for preserving privacy when training ML models. Finally, it describes how DP-based ML techniques can be evaluated in practice. By offering a comprehensive overview of differential privacy in machine learning, this work aims to contribute to the ongoing development of secure and responsible AI systems.
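For reference, the guarantee sketched in the abstract can be stated formally. The following is the standard ε-differential privacy definition due to Dwork et al., not text quoted from the survey itself: a randomized mechanism \(\mathcal{M}\) satisfies ε-DP if, for all neighboring datasets \(D\) and \(D'\) differing in a single record and all measurable output sets \(S\),

\[
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[\mathcal{M}(D') \in S].
\]

Smaller ε makes the two output distributions harder to distinguish, so an observer learns correspondingly little about whether any individual record was included.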
Francisco Aguilera-Martínez, Fernando Berzal
Subject classification: Computing Technology, Computer Technology
Francisco Aguilera-Martínez, Fernando Berzal. Differential Privacy in Machine Learning: From Symbolic AI to LLMs [EB/OL]. (2025-06-13) [2025-06-21]. https://arxiv.org/abs/2506.11687.