|国家预印本平台
首页|Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review

Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review

Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review

来源:Arxiv_logoArxiv
英文摘要

Large Language Models (LLMs) are increasingly being integrated into the scientific peer-review process, raising new questions about their reliability and resilience to manipulation. In this work, we investigate the potential for hidden prompt injection attacks, where authors embed adversarial text within a paper's PDF to influence the LLM-generated review. We begin by formalising three distinct threat models that envision attackers with different motivations -- not all of which implying malicious intent. For each threat model, we design adversarial prompts that remain invisible to human readers yet can steer an LLM's output toward the author's desired outcome. Using a user study with domain scholars, we derive four representative reviewing prompts used to elicit peer reviews from LLMs. We then evaluate the robustness of our adversarial prompts across (i) different reviewing prompts, (ii) different commercial LLM-based systems, and (iii) different peer-reviewed papers. Our results show that adversarial prompts can reliably mislead the LLM, sometimes in ways that adversely affect a "honest-but-lazy" reviewer. Finally, we propose and empirically assess methods to reduce detectability of adversarial prompts under automated content checks.

Matteo Gioele Collu、Umberto Salviati、Roberto Confalonieri、Mauro Conti、Giovanni Apruzzese

计算技术、计算机技术

Matteo Gioele Collu,Umberto Salviati,Roberto Confalonieri,Mauro Conti,Giovanni Apruzzese.Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review[EB/OL].(2025-08-29)[2025-09-04].https://arxiv.org/abs/2508.20863.点此复制

评论