Source framing triggers systematic evaluation bias in Large Language Models
Large Language Models (LLMs) are increasingly used not only to generate text but also to evaluate it, raising urgent questions about whether their judgments are consistent, unbiased, and robust to framing effects. In this study, we systematically examine inter- and intra-model agreement across four state-of-the-art LLMs (OpenAI o3-mini, Deepseek Reasoner, xAI Grok 2, and Mistral) tasked with evaluating 4,800 narrative statements on 24 topics of social, political, and public health relevance, for a total of 192,000 assessments. We manipulate the disclosed source of each statement to assess how attribution to either another LLM or a human author of a specified nationality affects evaluation outcomes. We find that, in the blind condition, the models display a remarkably high degree of inter- and intra-model agreement across topics. However, this alignment breaks down when source framing is introduced. Here we show that attributing statements to Chinese individuals systematically lowers agreement scores across all models, most markedly for Deepseek Reasoner. Our findings reveal that framing effects can deeply distort text evaluation, with significant implications for the integrity, neutrality, and fairness of LLM-mediated information systems.
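As a quick sanity check on the stated totals, here is a minimal sketch. Only the totals (4,800 statements, 24 topics, 4 models, 192,000 assessments) come from the abstract; the even per-statement split and the resulting ten framing conditions per model are our assumptions for illustration, not the authors' stated design.

```python
# Sanity check on the study's reported totals (illustrative sketch only).
# Stated in the abstract: 4,800 statements, 24 topics, 4 models,
# 192,000 assessments. The per-statement and per-model breakdown
# derived below is an assumption, not the authors' stated design.

N_STATEMENTS = 4_800
N_TOPICS = 24
N_ASSESSMENTS = 192_000
MODELS = ["OpenAI o3-mini", "Deepseek Reasoner", "xAI Grok 2", "Mistral"]

statements_per_topic = N_STATEMENTS // N_TOPICS                # 200
ratings_per_statement = N_ASSESSMENTS // N_STATEMENTS          # 40
conditions_per_model = ratings_per_statement // len(MODELS)    # 10 (assumed even split)

assert statements_per_topic * N_TOPICS == N_STATEMENTS
assert ratings_per_statement * N_STATEMENTS == N_ASSESSMENTS

print(f"{statements_per_topic} statements per topic")
print(f"{ratings_per_statement} assessments per statement")
print(f"{conditions_per_model} framing conditions per model (assumed)")
```

Under these assumptions the numbers are internally consistent: 4,800 × 4 × 10 = 192,000, which matches the reported total.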
Federico Germani, Giovanni Spitale
Computing technology, computer technology; world politics; diplomacy, international relations
Federico Germani, Giovanni Spitale. Source framing triggers systematic evaluation bias in Large Language Models [EB/OL]. (2025-05-14) [2025-07-03]. https://arxiv.org/abs/2505.13488.