
Developing A Framework to Support Human Evaluation of Bias in Generated Free Response Text

Source: arXiv

Abstract

LLM evaluation is challenging even in the case of base models. In real-world deployments, evaluation is further complicated by the interplay of task-specific prompts and experiential context. At scale, bias evaluation is often based on short-context, fixed-choice benchmarks that can be evaluated rapidly; however, these can lose validity when the LLM's deployed context differs. Large-scale human evaluation is often seen as intractable and costly. Here we present our journey towards developing a semi-automated bias evaluation framework for free-text responses that has human insights at its core. We discuss how we developed an operational definition of bias that helped us automate our pipeline, as well as a methodology for classifying bias beyond multiple choice. We additionally comment on how human evaluation helped us uncover problematic templates in a bias benchmark.

Surabhi Bhargava, Moumita Sinha, Md Nadeem Akhtar, Jennifer Healey, Laurie Byrum

Subjects: Computing Technology, Computer Technology

Surabhi Bhargava, Moumita Sinha, Md Nadeem Akhtar, Jennifer Healey, Laurie Byrum. Developing A Framework to Support Human Evaluation of Bias in Generated Free Response Text [EB/OL]. (2025-05-05) [2025-08-02]. https://arxiv.org/abs/2505.03053