National Preprint Platform

Cognitive Debiasing Large Language Models for Decision-Making

Source: arXiv
Abstract

Large language models (LLMs) have shown potential in supporting decision-making applications, particularly as personal conversational assistants in the financial, healthcare, and legal domains. While prompt engineering strategies have enhanced the capabilities of LLMs in decision-making, cognitive biases inherent to LLMs present significant challenges. Cognitive biases are systematic patterns of deviation from norms or rationality in decision-making that can lead to inaccurate outputs. Existing cognitive bias mitigation strategies assume that input prompts contain exactly one type of cognitive bias, and therefore fail to perform well in realistic settings where there may be any number of biases. To fill this gap, we propose a cognitive debiasing approach, called self-debiasing, that enhances the reliability of LLMs by iteratively refining prompts. Our method follows three sequential steps -- bias determination, bias analysis, and cognitive debiasing -- to iteratively mitigate potential cognitive biases in prompts. Experimental results on finance, healthcare, and legal decision-making tasks, using both closed-source and open-source LLMs, demonstrate that the proposed self-debiasing method outperforms both advanced prompt engineering methods and existing cognitive debiasing techniques in average accuracy under no-bias, single-bias, and multi-bias settings.
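The three-step iterative loop described in the abstract can be sketched as follows. This is a minimal illustration only: the function names, the prompt wordings, and the `toy_llm` stub are all assumptions for demonstration, not the paper's actual implementation.

```python
# Hedged sketch of the iterative self-debiasing loop from the abstract:
# bias determination -> bias analysis -> cognitive debiasing, repeated
# until no bias is detected. All names and prompts here are illustrative
# assumptions; a real setup would call an actual LLM in place of the stub.

def self_debias(prompt, llm, max_iters=3):
    for _ in range(max_iters):
        # Step 1: bias determination -- does the prompt contain a cognitive bias?
        verdict = llm("Does this prompt contain a cognitive bias? "
                      "Answer 'yes' or 'no'.\n" + prompt)
        if verdict.strip().lower().startswith("no"):
            break  # no remaining bias detected; stop refining
        # Step 2: bias analysis -- name and explain the detected bias.
        analysis = llm("Identify and explain the cognitive bias in:\n" + prompt)
        # Step 3: cognitive debiasing -- rewrite the prompt to remove the bias.
        prompt = llm("Rewrite the prompt to remove this bias.\n"
                     "Bias analysis: " + analysis + "\nPrompt: " + prompt)
    return prompt

# Toy stub standing in for a real LLM, purely for illustration:
# it treats the leading word "obviously" as an anchoring cue.
def toy_llm(query):
    if query.startswith("Does this prompt"):
        return "yes" if "obviously" in query else "no"
    if query.startswith("Identify"):
        return "Anchoring: the word 'obviously' presupposes the answer."
    # Rewrite request: strip the biased wording from the original prompt.
    return query.rsplit("Prompt: ", 1)[-1].replace("obviously ", "")

print(self_debias("Is option A obviously the best choice?", toy_llm))
```

With the stub, the loop detects the anchoring cue on the first pass, rewrites the prompt, finds no bias on the second pass, and returns the refined prompt.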

Yougang Lyu, Shijie Ren, Yue Feng, Zihan Wang, Zhumin Chen, Zhaochun Ren, Maarten de Rijke

Subject: Computing Technology; Computer Technology

Yougang Lyu, Shijie Ren, Yue Feng, Zihan Wang, Zhumin Chen, Zhaochun Ren, Maarten de Rijke. Cognitive Debiasing Large Language Models for Decision-Making [EB/OL]. (2025-04-05) [2025-04-28]. https://arxiv.org/abs/2504.04141.
