
1-2-3 Check: Enhancing Contextual Privacy in LLM via Multi-Agent Reasoning


Source: arXiv
English Abstract

Addressing contextual privacy concerns remains challenging in interactive settings where large language models (LLMs) process information from multiple sources (e.g., summarizing meetings that mix private and public information). We introduce a multi-agent framework that decomposes privacy reasoning into specialized subtasks (extraction, classification), reducing the information load on any single agent while enabling iterative validation and more reliable adherence to contextual privacy norms. To understand how privacy errors emerge and propagate, we conduct a systematic ablation over information-flow topologies, revealing when and why upstream detection mistakes cascade into downstream leakage. Experiments on the ConfAIde and PrivacyLens benchmarks with several open-source and closed-source LLMs demonstrate that our best multi-agent configuration substantially reduces private information leakage (18% on ConfAIde and 19% on PrivacyLens with GPT-4o) while preserving the fidelity of public content, outperforming single-agent baselines. These results highlight the promise of principled information-flow design in multi-agent systems for contextual privacy with LLMs.
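
For readers who want a concrete picture of the decomposition described in the abstract, the minimal Python sketch below illustrates one possible extraction → classification → validation pipeline over a multi-source context. The agent prompts, the `call_llm` helper, and the `InfoItem` structure are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a multi-agent contextual-privacy pipeline:
# an extraction agent lists information items, a classification agent labels
# each item against a contextual privacy norm, and a validation agent
# re-checks labels so upstream mistakes do not cascade into leakage.
from dataclasses import dataclass


@dataclass
class InfoItem:
    text: str
    label: str = "unlabeled"  # becomes "private" or "public"


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call (e.g., GPT-4o); returns the model's text reply."""
    raise NotImplementedError


def extraction_agent(context: str) -> list[InfoItem]:
    """Agent 1: extract atomic information items from the multi-source context."""
    reply = call_llm(f"List each distinct piece of information in:\n{context}")
    return [InfoItem(line.strip()) for line in reply.splitlines() if line.strip()]


def classification_agent(items: list[InfoItem], norm: str) -> list[InfoItem]:
    """Agent 2: label each item as private or public under the given norm."""
    for item in items:
        reply = call_llm(
            f"Norm: {norm}\nInformation: {item.text}\nAnswer 'private' or 'public':"
        )
        item.label = "private" if "private" in reply.lower() else "public"
    return items


def validation_agent(items: list[InfoItem], norm: str) -> list[InfoItem]:
    """Agent 3: iteratively re-check labels; default to 'private' when in doubt."""
    for item in items:
        reply = call_llm(
            f"Norm: {norm}\nItem: {item.text}\nCurrent label: {item.label}\n"
            "Is this label correct? Answer 'yes' or give the corrected label:"
        )
        if "private" in reply.lower():
            item.label = "private"
    return items


def compose_response(items: list[InfoItem], task: str) -> str:
    """Final step: answer the task using only items labeled public."""
    public = "\n".join(i.text for i in items if i.label == "public")
    return call_llm(f"{task}\nUse only this information:\n{public}")
```

The point of the decomposition is that each agent handles a narrow subtask, and the validation pass gives the system a chance to catch upstream detection errors before any content reaches the final response.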

Wenkai Li, Liwen Sun, Zhenxiang Guan, Xuhui Zhou, Maarten Sap

Computing Technology, Computer Technology

Wenkai Li, Liwen Sun, Zhenxiang Guan, Xuhui Zhou, Maarten Sap. 1-2-3 Check: Enhancing Contextual Privacy in LLM via Multi-Agent Reasoning [EB/OL]. (2025-08-11) [2025-08-24]. https://arxiv.org/abs/2508.07667.
