
Membership Inference Attacks on LLM-based Recommender Systems

Source: arXiv
Abstract

Large language model (LLM)-based recommender systems (RecSys) can flexibly adapt recommendation functions to different domains. They use in-context learning (ICL), i.e., prompts, to customize the recommendation function, and these prompts include sensitive historical user-item interactions, e.g., implicit feedback such as clicked items or explicit product reviews. Such private information may be exposed to novel privacy attacks, yet no study has addressed this important issue. We design four membership inference attacks (MIAs) that aim to reveal whether a victim's historical interactions have been used in the system prompt: direct inquiry, hallucination, similarity, and poisoning attacks, each of which exploits unique features of LLMs or RecSys. We carefully evaluate them on three LLMs that have been used to develop ICL-LLM RecSys and on two well-known RecSys benchmark datasets. The results confirm that the MIA threat to LLM RecSys is realistic: the direct inquiry and poisoning attacks show significantly high attack advantages. We also analyze the factors affecting these attacks, such as the number of shots in the system prompt and the position of the victim in the shots.
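
To make the attack setting concrete, the sketch below illustrates the simplest of the four attacks, the direct inquiry MIA, against an ICL-style recommendation prompt. It is a minimal illustration of the idea only, not the authors' implementation: the prompt template, the `query_llm` interface, and the yes/no decision rule are all assumptions introduced here for demonstration.

```python
# Minimal sketch of a direct-inquiry MIA against an ICL-based LLM RecSys.
# The prompt format, query_llm interface, and decision rule are
# illustrative assumptions, not the paper's actual implementation.

from typing import Callable, List


def build_recsys_prompt(shots: List[List[str]]) -> str:
    """Build an ICL system prompt whose few-shot examples embed
    (potentially private) user interaction histories."""
    lines = ["You are a recommender. Examples of user histories:"]
    for i, history in enumerate(shots, 1):
        lines.append(f"Example {i}: user clicked {', '.join(history)}")
    return "\n".join(lines)


def direct_inquiry_attack(system_prompt: str,
                          victim_history: List[str],
                          query_llm: Callable[[str, str], str]) -> bool:
    """Ask the model outright whether the victim's interactions appear
    among its in-context examples; infer membership from the answer.

    query_llm(system_prompt, user_message) -> str is an assumed
    interface to the deployed LLM RecSys.
    """
    question = (
        "Do your examples include a user who clicked "
        f"{', '.join(victim_history)}? Answer yes or no."
    )
    answer = query_llm(system_prompt, question)
    return answer.strip().lower().startswith("yes")


if __name__ == "__main__":
    # Toy demonstration with a stub LLM that naively leaks its prompt.
    shots = [["item_12", "item_7"], ["item_3", "item_44"]]
    prompt = build_recsys_prompt(shots)

    def stub_llm(system_prompt: str, user_message: str) -> str:
        # Stand-in for a real model: answers based on verbatim leakage.
        victim = user_message.split("clicked ")[1].split("?")[0]
        return "yes" if victim in system_prompt else "no"

    print(direct_inquiry_attack(prompt, ["item_12", "item_7"], stub_llm))  # member
    print(direct_inquiry_attack(prompt, ["item_99"], stub_llm))            # non-member
```

A real model would not leak its prompt this mechanically, which is why the paper's stronger variants (hallucination, similarity, and poisoning attacks) probe indirect signals rather than relying on an honest answer.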

Jiajie He, Yuechun Gu, Min-Chun Chen, Keke Chen

Subject: Computing Technology, Computer Technology

Jiajie He, Yuechun Gu, Min-Chun Chen, Keke Chen. Membership Inference Attacks on LLM-based Recommender Systems [EB/OL]. (2025-08-26) [2025-09-06]. https://arxiv.org/abs/2508.18665.
