
Clustering and Median Aggregation Improve Differentially Private Inference

Source: arXiv

Abstract

Differentially private (DP) language model inference is an approach for generating private synthetic text. A sensitive input example is used to prompt an off-the-shelf large language model (LLM) to produce a similar example. Multiple examples can be aggregated together to formally satisfy the DP guarantee. Prior work creates inference batches by sampling sensitive inputs uniformly at random. We show that uniform sampling degrades the quality of privately generated text, especially when the sensitive examples concern heterogeneous topics. We remedy this problem by clustering the input data before selecting inference batches. Next, we observe that clustering also leads to more similar next-token predictions across inferences. We use this insight to introduce a new algorithm that aggregates next-token statistics by privately computing medians instead of averages. This approach leverages the fact that the median has decreased local sensitivity when next-token predictions are similar, allowing us to state a data-dependent and ex-post DP guarantee about the privacy properties of this algorithm. Finally, we demonstrate improvements in terms of representativeness metrics (e.g., MAUVE) as well as downstream task performance. We show that our method produces high-quality synthetic data at significantly lower privacy cost than a previous state-of-the-art method.
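The two ingredients described in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration, not the paper's algorithm: it clusters sensitive examples by their embeddings so that each inference batch is topically homogeneous, then aggregates each batch's next-token distributions with a coordinate-wise median plus Laplace noise. The embedding step, the fixed noise scale, and all function names are assumptions made for illustration; in particular, the sketch does not reproduce the paper's data-dependent, ex-post DP guarantee derived from the median's local sensitivity.

```python
# Illustrative sketch (hypothetical, simplified): cluster sensitive
# inputs into homogeneous batches, then aggregate each batch's
# next-token distributions with a noisy coordinate-wise median.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def cluster_batches(embeddings: np.ndarray, n_clusters: int) -> list[np.ndarray]:
    """Group sensitive examples into topically similar batches via k-means."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)
    return [np.flatnonzero(labels == c) for c in range(n_clusters)]

def noisy_median_aggregate(next_token_probs: np.ndarray, noise_scale: float) -> np.ndarray:
    """Aggregate a batch of next-token distributions (one per row) by a
    coordinate-wise median with additive Laplace noise.

    `noise_scale` is a stand-in for a properly calibrated sensitivity
    bound; the paper instead accounts for the median's (lower)
    local sensitivity when the rows are similar."""
    med = np.median(next_token_probs, axis=0)
    noisy = med + rng.laplace(scale=noise_scale, size=med.shape)
    noisy = np.clip(noisy, 1e-12, None)
    return noisy / noisy.sum()  # renormalize to a valid distribution

# Toy usage: 8 sensitive examples, 16-dim embeddings, a 5-token vocabulary.
embeddings = rng.normal(size=(8, 16))
probs = rng.dirichlet(np.ones(5), size=8)  # per-example next-token distributions
for batch in cluster_batches(embeddings, n_clusters=2):
    agg = noisy_median_aggregate(probs[batch], noise_scale=0.01)
    print(batch, np.round(agg, 3))
```

The design intuition the sketch gestures at: when the rows of `next_token_probs` are similar, as clustering encourages, changing any single row barely moves the coordinate-wise median, which is what the paper formalizes into a tighter, data-dependent privacy accounting.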

Kareem Amin, Salman Avestimehr, Sara Babakniya, Alex Bie, Weiwei Kong, Natalia Ponomareva, Umar Syed

Subject: Computing Technology; Computer Technology

Kareem Amin, Salman Avestimehr, Sara Babakniya, Alex Bie, Weiwei Kong, Natalia Ponomareva, Umar Syed. Clustering and Median Aggregation Improve Differentially Private Inference [EB/OL]. (2025-06-04) [2025-06-15]. https://arxiv.org/abs/2506.04566.
