Investigating User Perspectives on Differentially Private Text Privatization
Recent literature has seen a considerable uptick in $\textit{Differentially Private Natural Language Processing}$ (DP NLP). This includes DP text privatization, where potentially sensitive input texts are transformed under DP to achieve privatized output texts that ideally mask sensitive information $\textit{and}$ maintain original semantics. Despite continued work to address the open challenges in DP text privatization, there remains a scarcity of work addressing user perceptions of this technology, a crucial aspect which serves as the final barrier to practical adoption. In this work, we conduct a survey study with 721 laypersons around the globe, investigating how the factors of $\textit{scenario}$, $\textit{data sensitivity}$, $\textit{mechanism type}$, and $\textit{reason for data collection}$ impact user preferences for text privatization. We learn that while all these factors play a role in influencing privacy decisions, users are highly sensitive to the utility and coherence of the private output texts. Our findings highlight the socio-technical factors that must be considered in the study of DP NLP, opening the door to further user-based investigations going forward.
Stephen Meisenbacher, Alexandra Klymenko, Alexander Karpp, Florian Matthes
Computing Technology, Computer Technology
Stephen Meisenbacher, Alexandra Klymenko, Alexander Karpp, Florian Matthes. Investigating User Perspectives on Differentially Private Text Privatization [EB/OL]. (2025-03-12) [2025-04-29]. https://arxiv.org/abs/2503.09338.