PapersPlease: A Benchmark for Evaluating Motivational Values of Large Language Models Based on ERG Theory
Evaluating the performance and biases of large language models (LLMs) through role-playing scenarios is becoming increasingly common, as LLMs often exhibit biased behaviors in these contexts. Building on this line of research, we introduce PapersPlease, a benchmark consisting of 3,700 moral dilemmas designed to investigate LLMs' decision-making in prioritizing various levels of human needs. In our setup, LLMs act as immigration inspectors deciding whether to approve or deny entry based on the short narratives of people. These narratives are constructed using the Existence, Relatedness, and Growth (ERG) theory, which categorizes human needs into three hierarchical levels. Our analysis of six LLMs reveals statistically significant patterns in decision-making, suggesting that LLMs encode implicit preferences. Additionally, our evaluation of the impact of incorporating social identities into the narratives shows varying responsiveness based on both motivational needs and identity cues, with some models exhibiting higher denial rates for marginalized identities. All data is publicly available at https://github.com/yeonsuuuu28/papers-please.
Junho Myung, Yeon Su Park, Sunwoo Kim, Shin Yoo, Alice Oh
Computing Technology, Computer Science
Junho Myung, Yeon Su Park, Sunwoo Kim, Shin Yoo, Alice Oh. PapersPlease: A Benchmark for Evaluating Motivational Values of Large Language Models Based on ERG Theory [EB/OL]. (2025-06-27) [2025-07-16]. https://arxiv.org/abs/2506.21961.