Evaluating the Efficacy of Large Language Models for Generating Fine-Grained Visual Privacy Policies in Homes
The proliferation of visual sensors in smart home environments, particularly through wearable devices like smart glasses, introduces profound privacy challenges. Existing privacy controls are often static and coarse-grained, failing to accommodate the dynamic and socially nuanced nature of home environments. This paper investigates the viability of using Large Language Models (LLMs) as the core of a dynamic and adaptive privacy policy engine. We propose a conceptual framework in which visual data is classified using a multi-dimensional schema that considers data sensitivity, spatial context, and social presence. An LLM then reasons over this contextual information to enforce fine-grained privacy rules, such as selective object obfuscation, in real time. Through a comparative evaluation of state-of-the-art Vision Language Models (including GPT-4o and the Qwen-VL series) in simulated home settings, our findings demonstrate the feasibility of this approach. The LLM-based engine achieved a top machine-evaluated appropriateness score of 3.99 out of 5, and the policies generated by the models received a top human-evaluated score of 4.00 out of 5.
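To make the conceptual framework concrete, the sketch below (a hypothetical illustration, not the paper's implementation) shows how a frame's context along the three schema dimensions (data sensitivity, spatial context, and social presence) might be encoded and turned into a prompt that asks an LLM for per-object decisions such as selective obfuscation. The VisualContext fields, prompt wording, and ALLOW/BLUR/REMOVE action labels are assumptions introduced for illustration.

```python
# Minimal sketch of an LLM-driven privacy policy engine for home visual data.
# The schema fields, prompt text, and action labels are illustrative assumptions,
# not the authors' actual implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class VisualContext:
    """Multi-dimensional context schema: sensitivity, spatial context, social presence."""
    detected_objects: List[str]   # e.g. ["medication bottle", "laptop screen"]
    data_sensitivity: str         # e.g. "high" | "medium" | "low"
    spatial_context: str          # e.g. "bedroom", "kitchen", "living room"
    social_presence: List[str]    # e.g. ["resident", "guest", "child"]


def build_policy_prompt(ctx: VisualContext) -> str:
    """Serialize the context into a prompt requesting a per-object privacy decision."""
    return (
        "You are a home privacy policy engine. Given the scene context, decide for "
        "each detected object whether to ALLOW, BLUR, or REMOVE it from the frame.\n"
        f"Objects: {', '.join(ctx.detected_objects)}\n"
        f"Sensitivity: {ctx.data_sensitivity}\n"
        f"Location: {ctx.spatial_context}\n"
        f"People present: {', '.join(ctx.social_presence)}\n"
        "Answer as lines of the form '<object>: <ALLOW|BLUR|REMOVE> - <reason>'."
    )


if __name__ == "__main__":
    ctx = VisualContext(
        detected_objects=["medication bottle", "family photo"],
        data_sensitivity="high",
        spatial_context="bedroom",
        social_presence=["resident", "guest"],
    )
    # The prompt would be sent, together with the frame, to a vision-language
    # model such as GPT-4o or Qwen-VL; the returned per-object decisions would
    # then drive selective obfuscation in the rendering pipeline.
    print(build_policy_prompt(ctx))
```

Encoding the context as an explicit schema rather than free-form text keeps the generated policies auditable and lets the same prompt template be reused across rooms and social configurations.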
Shuning Zhang, Ying Ma, Xin Yi, Hewu Li
Subject areas: Electronic Technology (General); Microelectronics; Integrated Circuits
Shuning Zhang, Ying Ma, Xin Yi, Hewu Li. Evaluating the Efficacy of Large Language Models for Generating Fine-Grained Visual Privacy Policies in Homes [EB/OL]. (2025-08-01) [2025-08-11]. https://arxiv.org/abs/2508.00321.