National Preprint Platform

LLM-Based User Simulation for Low-Knowledge Shilling Attacks on Recommender Systems


Source: arXiv
English Abstract

Recommender systems (RS) are increasingly vulnerable to shilling attacks, where adversaries inject fake user profiles to manipulate system outputs. Traditional attack strategies often rely on simplistic heuristics, require access to internal RS data, and overlook the manipulation potential of textual reviews. In this work, we introduce Agent4SR, a novel framework that leverages Large Language Model (LLM)-based agents to perform low-knowledge, high-impact shilling attacks through both rating and review generation. Agent4SR simulates realistic user behavior by orchestrating adversarial interactions, selecting items, assigning ratings, and crafting reviews, while maintaining behavioral plausibility. Our design includes targeted profile construction, hybrid memory retrieval, and a review attack strategy that propagates target item features across unrelated reviews to amplify manipulation. Extensive experiments on multiple datasets and RS architectures demonstrate that Agent4SR outperforms existing low-knowledge baselines in both effectiveness and stealth. Our findings reveal a new class of emergent threats posed by LLM-driven agents, underscoring the urgent need for enhanced defenses in modern recommender systems.

Shengkang Gu, Jiahao Liu, Dongsheng Li, Guangping Zhang, Mingzhe Han, Hansu Gu, Peng Zhang, Ning Gu, Li Shang, Tun Lu

Subject: Computing Technology, Computer Technology

Shengkang Gu, Jiahao Liu, Dongsheng Li, Guangping Zhang, Mingzhe Han, Hansu Gu, Peng Zhang, Ning Gu, Li Shang, Tun Lu. LLM-Based User Simulation for Low-Knowledge Shilling Attacks on Recommender Systems [EB/OL]. (2025-05-18) [2025-06-19]. https://arxiv.org/abs/2505.13528.
