How Much Content Do LLMs Generate That Induces Cognitive Bias in Users?
Large language models (LLMs) are increasingly integrated into applications ranging from review summarization to medical diagnosis support, where they affect human decisions. Even though LLMs perform well on many tasks, they may also inherit societal or cognitive biases, which can inadvertently transfer to humans. We investigate when and how LLMs expose users to biased content and quantify its severity. Specifically, we assess three LLM families on summarization and news fact-checking tasks, evaluating the extent to which LLMs stay consistent with their context and/or hallucinate. Our findings show that LLMs expose users to content that changes the sentiment of the context in 21.86% of cases, hallucinate in response to questions about post-knowledge-cutoff data in 57.33% of cases, and exhibit primacy bias in 5.94% of cases. We evaluate 18 distinct mitigation methods across three LLM families and find that targeted interventions can be effective. Given the prevalent use of LLMs in high-stakes domains, such as healthcare or legal analysis, our results highlight the need for robust technical safeguards and for developing user-centered interventions that address LLM limitations.
Abeer Alessa, Akshaya Lakshminarasimhan, Param Somane, Julian Skirzynski, Julian McAuley, Jessica Echterhoff
Information and Knowledge Dissemination; Computing and Computer Technology; Science and Scientific Research
Abeer Alessa, Akshaya Lakshminarasimhan, Param Somane, Julian Skirzynski, Julian McAuley, Jessica Echterhoff. How Much Content Do LLMs Generate That Induces Cognitive Bias in Users? [EB/OL]. (2025-07-03) [2025-07-16]. https://arxiv.org/abs/2507.03194