
Conservative Bias in Large Language Models: Measuring Relation Predictions

Source: arXiv
Abstract

Large language models (LLMs) exhibit a pronounced conservative bias in relation extraction tasks, frequently defaulting to the No_Relation label when an appropriate option is unavailable. While this behavior helps prevent incorrect relation assignments, our analysis reveals that it also leads to significant information loss when reasoning is not explicitly included in the output. We systematically evaluate this trade-off across multiple prompts, datasets, and relation types, introducing the concept of Hobson's choice to capture scenarios where models opt for safe but uninformative labels over hallucinated ones. Our findings suggest that conservative bias occurs twice as often as hallucination. To quantify this effect, we use SBERT and LLM prompts to capture the semantic similarity between conservative bias behaviors in constrained prompts and labels generated from semi-constrained and open-ended prompts.
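The abstract describes measuring, with SBERT, how semantically close a conservative fallback label from a constrained prompt is to the label the model produces under semi-constrained or open-ended prompting. A minimal sketch of that kind of comparison, assuming the sentence-transformers library, an illustrative checkpoint, and example labels not taken from the paper:

# Minimal sketch (not the authors' code): score semantic similarity between a
# constrained-prompt fallback label and a label from an open-ended prompt,
# using SBERT via sentence-transformers. Model name and labels are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed SBERT checkpoint

constrained_label = "No_Relation"   # conservative fallback under a constrained prompt
open_ended_label = "employee of"    # relation surfaced by an open-ended prompt

# Encode both labels and compare them with cosine similarity.
embeddings = model.encode([constrained_label, open_ended_label], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

# A low score suggests information lost to the conservative fallback.
print(f"cosine similarity: {similarity:.3f}")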

Toyin Aguda, Erik Wilson, Allan Anzagira, Simerjot Kaur, Charese Smiley

Computing technology, computer technology

Toyin Aguda, Erik Wilson, Allan Anzagira, Simerjot Kaur, Charese Smiley. Conservative Bias in Large Language Models: Measuring Relation Predictions [EB/OL]. (2025-06-09) [2025-07-09]. https://arxiv.org/abs/2506.08120.
