PRISON: Unmasking the Criminal Potential of Large Language Models
As large language models (LLMs) advance, concerns about their misconduct in complex social contexts intensify. Existing research has overlooked systematic understanding and assessment of their criminal capabilities in realistic interactions. We propose PRISON, a unified framework to quantify LLMs' criminal potential across five dimensions: False Statements, Frame-Up, Psychological Manipulation, Emotional Disguise, and Moral Disengagement. Using structured crime scenarios adapted from classic films, we evaluate both the criminal potential and the anti-crime ability of LLMs via role-play. Results show that state-of-the-art LLMs frequently exhibit emergent criminal tendencies, such as proposing misleading statements or evasion tactics, even without explicit instructions. Moreover, when placed in a detective role, models recognize deceptive behavior with only 41% accuracy on average, revealing a striking mismatch between conducting and detecting criminal behavior. These findings underscore the urgent need for adversarial robustness, behavioral alignment, and safety mechanisms before broader LLM deployment.
Xinyi Wu, Geng Hong, Pei Chen, Yueyue Chen, Xudong Pan, Min Yang
Law
Xinyi Wu, Geng Hong, Pei Chen, Yueyue Chen, Xudong Pan, Min Yang. PRISON: Unmasking the Criminal Potential of Large Language Models [EB/OL]. (2025-06-19) [2025-07-23]. https://arxiv.org/abs/2506.16150.